Continuous Bayesian Model Selection for Multivariate Causal Discovery
Accept (poster)
Summary: This paper studies structure learning for observational data using Bayesian model selection. It falls into the category of score-based learning and uses model evidence as the score to select a DAG. It shows that the existing work on the bivariate case [Dhir et al., 2024] can be extended to the multivariate case, and applies a flexible model called the Causal Gaussian Process Conditional Density Estimator (CGP-CDE) with continuous reparametrization and variational inference to learn the DAG. Experiments are conducted to show the effectiveness.

Claims And Evidence: The claims on improvement are supported via experiments.

Methods And Evaluation Criteria: The evaluation criteria of SHD, SID, and F1 are commonly used.

Theoretical Claims: One of the proofs of the main results -- Theorem B.6 -- concerns me. One of the main contributions of this paper is to extend the existing identifiability result to the multivariate case. However, the proof of this theorem only says it follows directly from [Dhir et al., 2024] without any details.

Experimental Designs Or Analyses: The experimental design makes sense.

Supplementary Material: I checked the proof of Theorem B.6, which is actually not in the main paper.

Relation To Broader Scientific Literature: Causal discovery is widely used in scientific research for exploratory analysis. Developing identifiability results and scalable methods is important for the broader scientific literature.

Essential References Not Discussed: No

Other Strengths And Weaknesses: - The paper derives identifiability from the ICM assumption and allows for a much more flexible class of identifiable Bayesian networks. - The proposed method leverages powerful tools for modelling and learning nonparametric DAGs in a scalable manner. - The experiments show competitive performance against many benchmarks.

Other Comments Or Suggestions: - Many mathematical details are hidden in the appendix.
Especially for Section 3 on the theoretical extension, there is not a single theoretical statement in this section, while everything is buried in the appendix.

Questions For Authors: - The definition of Bayesian equivalence (identifiability and distinguishability) involves data $X$ and also relates to the definition of the probability of error. I wonder, for two models that are not Bayesian equivalent, why can they have a positive probability of error? Should it not be consistent to use the model evidence to find the true DAG? I guess the question is: does the sample size in the definition of Bayesian equivalence go to infinity? Or why do we have both identifiability and distinguishability, and what are their differences?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
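As an editorial aside for readers unfamiliar with the metrics mentioned above: SHD (structural Hamming distance) counts the edge additions, deletions, and reversals needed to turn a predicted graph into the true one. A minimal sketch (illustrative only; not the paper's evaluation code, and conventions for counting reversals vary between implementations) might look like:

```python
import numpy as np

def shd(true_adj, pred_adj):
    """Structural Hamming Distance between two DAG adjacency matrices.

    Counts edge additions, deletions, and reversals needed to turn the
    predicted graph into the true one; a reversed edge counts as one error.
    """
    diff = np.abs(np.asarray(true_adj, int) - np.asarray(pred_adj, int))
    # A reversed edge produces two mismatched entries, (i, j) and (j, i);
    # symmetrise and clip so it is only counted once.
    diff = np.clip(diff + diff.T, 0, 1)
    return int(np.triu(diff).sum())

# Example: true graph 0 -> 1 -> 2; prediction reverses 0 -> 1 and adds 0 -> 2.
true_adj = [[0, 1, 0], [0, 0, 1], [0, 0, 0]]
pred_adj = [[0, 0, 1], [1, 0, 1], [0, 0, 0]]
print(shd(true_adj, pred_adj))  # one reversal + one extra edge -> 2
```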
Rebuttal 1: Rebuttal: Thank you for your positive and encouraging feedback on our work. We appreciate your acknowledgement that the proposed method **"allows for learning nonparametric DAGs in a scalable manner"** and that our **"experiments show competitive performance with the benchmarks"**. We address your comments in the following.

> However, the proof of this theorem only says it follows directly from [Dhir et al., 2024] without any details.

Our construction and proofs earlier in the sections lead us to state the first part of our theorem exactly as Prop. 4.7 in Dhir et al., 2024. Hence, we can refer directly to that proof. We will make this clearer by restating the theorem from Dhir et al. (2024) and explicitly stating where we invoke the result.

> Many mathematical details are hidden in the appendix. Especially for Section 3 on the theoretical extension, there is not a single theoretical statement in this section, while everything is buried in the appendix.

We would love to put more details in the main paper. Our theory requires us to define several concepts not relevant to the rest of the paper, but that allow us to state accurate and precise theorems. As this takes up a lot of space, we thought it best to do this in the Appendix. **We do provide a general intuition for our theoretical results in the main paper** (L174 LHS). We hope you agree with this approach.

> I wonder for two models that are not Bayesian equivalent, why can they have a positive probability of error

The posteriors may overlap even in the infinite-data setting. We can show this with a very simple example. Normalised linear Gaussian models are not identifiable (in the population setting) [1, Appendix D.2]. If a chosen model can approximate a normalised linear Gaussian model, and we sample from that model, there is a non-zero probability that we sample a normalised linear Gaussian dataset. Thus the **chosen model must have a non-zero probability of error**.
Hence, while non-linear additive noise models are identifiable in general, additive noise models (which also contain linear functions) will have a positive probability of error (but are not completely unidentifiable).

[1] Dhir et al., "Bivariate Causal Discovery using Bayesian Model Selection." ICML, 2024.
[2] Hoyer et al., "Nonlinear causal discovery with additive noise models." Advances in Neural Information Processing Systems 21 (2008).

> Should it not be consistent to use the model evidence to find the true DAG? I guess the question is: does the sample size in the definition of Bayesian equivalence go to infinity?

The definition of Bayesian distribution-equivalence holds for any sample size. However, our theorems (B.3 and B.6) hold in the population setting. We do state that in B.3, and we will also state it in Theorem B.6. Thank you for pointing this out.

> Or why do we have both identifiability and distinguishability, and what are their differences?

Identifiability is where the probability of error is zero, whereas distinguishability is where the probability of error is less than random uniform. We will make it clear that these are defined in the population setting where we introduce these concepts (L200 LHS).

---

Rebuttal Comment 1.1: Comment: I thank the authors for their response, which has addressed some of my concerns. I keep the score unchanged to reflect my lack of familiarity.
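The probability of error discussed in this exchange can be estimated by sampling, as the paper's approach suggests: draw datasets from the assumed true causal model and count how often an alternative graph attains the higher evidence. A hedged, schematic version of such a Monte Carlo estimate (the `sample_fn` and `log_evidence_*` callables here are placeholders, not the paper's actual models):

```python
import numpy as np

def estimate_error_probability(sample_fn, log_evidence_true, log_evidence_alt,
                               n_datasets=200, seed=0):
    """Fraction of datasets, drawn from the assumed true causal model,
    on which the alternative graph receives the higher model evidence."""
    rng = np.random.default_rng(seed)
    errors = sum(
        log_evidence_alt(data) > log_evidence_true(data)
        for data in (sample_fn(rng) for _ in range(n_datasets))
    )
    return errors / n_datasets

# Toy usage: datasets from N(0, 1); the "true" model centres its fit at 0,
# the alternative at 3, so the two models are easy to tell apart.
sample = lambda rng: rng.normal(0.0, 1.0, size=50)
ev_true = lambda x: -np.sum((x - 0.0) ** 2)
ev_alt = lambda x: -np.sum((x - 3.0) ** 2)
print(estimate_error_probability(sample, ev_true, ev_alt))  # 0.0
```

When the two models' data distributions overlap (as in the normalised linear Gaussian example above), this estimate is strictly positive even in the large-sample limit.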
Summary: This paper presents a multivariate causal discovery approach based on Bayesian model selection. It builds on the work of Dhir et al. (2024), who proposed to use Bayesian model selection to identify causal direction in the bivariate case. The Bayesian model selection framework allows for a trade-off between a model's goodness of fit and complexity. The original work showed that Bayesian Model Selection can discriminate causal directions even when Maximum Likelihood (ML) methods fail due to distribution equivalence, thanks to the marginal likelihood, which, unlike the likelihood, is not symmetric (they consider the whole function space instead of the best-fitting function). They use the Causal Gaussian Process Conditional Density Estimator (CGP-CDE) to impose no restrictions on the model (e.g. linear model, additive noise...), which is a significant contribution of this work. This makes it possible to relax the assumptions, one of the most critical challenges in causal discovery. On the other hand, their method does not come with a strict identifiability guarantee due to potential overlaps in the posterior of different causal models. They propose a way to estimate the error probability using a sampling strategy. The causal identifiability proof is based on the Independent Causal Mechanism (ICM) assumption, which the authors use to claim that well-chosen Bayesian priors for the distribution of a cause and the distribution of an effect given the cause should be independent. In practice, to avoid comparing all possible DAGs and their posteriors, the model is identified by continuously optimizing the hyperparameters of the Gaussian process priors using a variational autoencoder (VAE) with a Bayesian model selection-based loss and acyclicity penalty. They also define priors on the resulting directed acyclic graphs (DAGs) to favor sparse models. 
As with most existing methods, their approach only works if all confounders are observed (the causal sufficiency assumption).

Claims And Evidence: Not all claims are theoretically substantiated. The authors claim to find a causal model. Their continuous optimization method yields a directed acyclic graph, but it is not made clear whether the found model respects the Markov condition (each node is conditionally independent of its non-descendants, given its parents), which is required for a DAG to be causal. It is unclear whether the continuous optimization method finds well-chosen priors, which is required for identifiability.

Methods And Evaluation Criteria: The synthetic data experiment is performed with different generation settings (neural network as function, noise additive or not, ...). Syntren is a simulated dataset. There is no experiment on real data. SHD and SID are commonly used metrics for causal discovery.

Theoretical Claims: I read the proofs in the appendix and found no errors.

Experimental Designs Or Analyses: The experimental design includes several settings. Results on synthetic data generated with a neural network show that the proposed method largely outperforms competitors, which was expected since their assumptions (e.g. additive noise) are violated. However, the authors also provide results for data generated with additive noise, and their performance is still competitive with other methods, showing that the relaxation of the assumptions does not come at the expense of overall performance.

Supplementary Material: I read the appendix of the paper carefully, but did not look at the rest of the supplementary material.

Relation To Broader Scientific Literature: This paper contributes to the field of causal discovery and, more specifically, to the score-based and continuous-optimisation-based lines of research. It extends the work of Dhir et al. (2024), who proposed using Bayesian model selection for causal edge orientation.
This paper's novelty is its adaptation and proof for the multivariate case, the use of CGP-CDE for flexible modeling, and the continuous optimization of model parameters to find a DAG.

Essential References Not Discussed: The authors do not discuss the literature on the information-theoretic view of causality (Janzing, Schölkopf, 2010) in the related work, although it is conceptually very close -- instantiating Occam's razor to distinguish between models. There is no comparison with other score-based approaches that also aim to balance complexity and goodness of fit, such as GES (Chickering, 2002), which uses the Bayesian Information Criterion (BIC), and GLOBE (Mian et al., 2021), which uses a two-part Minimum Description Length (MDL) score.

References:
- Chickering, David Maxwell. "Optimal structure identification with greedy search." Journal of Machine Learning Research 3.Nov (2002): 507-554.
- Mian, Osman A., Alexander Marx, and Jilles Vreeken. "Discovering fully oriented causal networks." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 35. No. 10. 2021.
- Janzing, Dominik, and Bernhard Schölkopf. "Causal inference using the algorithmic Markov condition." IEEE Transactions on Information Theory 56.10 (2010): 5168-5194.

Other Strengths And Weaknesses:

Strengths:
- The paper is well written and well motivated.
- The use of a flexible model and VAE allows for fewer assumptions than usual and therefore makes the model applicable to a wider range of applications.
- The theory is well covered and convincing.
- The extensive experiments show that the proposed methods significantly outperform the evaluated competitors.

Weaknesses:
- The causal model is not strictly identifiable.
- Continuous optimisation may not be efficient (see questions 1, 2 and 3).
- It is unclear whether the continuous optimisation approach maintains the theoretical guarantees (see questions 4 and 5).
Other Comments Or Suggestions: NOTEARS is a structure learning approach, not a causal discovery algorithm. That is, it finds a DAG, but there are no guarantees that this is a causal one. This should be mentioned. I have the same concern about the proposed method.

### update after rebuttal ###

I thank the authors for their answers, but I'm afraid they did not take away my concerns regarding the identifiability of the approach, and therewith whether this is a method that is guaranteed to return causal structures.

Questions For Authors:
1) How do you interpret the poor performance of CGP-CDE in the 3-variable experiment?
2) In the continuous optimization, the warm-up phase seems computationally expensive (about 25,000 iterations for 1,000 samples?). Is this a typical number of iterations for an initialization phase? Could another initialization reduce the number of iterations needed? What about the number of iterations for the cooling phase (also 25,000 iterations for 1,000 samples)?
3) In the 3-variable experiment, where the data are generated using a Bayesian causal model, DGP-CDE acts as a sanity check, as it considers all models and thus includes the true one used to generate the data. However, the poor performance of the continuous optimization is worrying given the small search space (25 graphs) and the fact that the data was generated using a Bayesian causal model (unlike the competitors, whose assumptions were violated). How do you explain this?

Code Of Conduct: Affirmed.

Overall Recommendation: 2
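For context on the continuous-optimisation formulation that this review questions: NOTEARS-style methods enforce acyclicity through a differentiable penalty on the weighted adjacency matrix. A generic sketch of one common variant, the polynomial characterisation h(W) = tr((I + W∘W/d)^d) − d, which is zero exactly when W encodes a DAG (illustrative; not necessarily the paper's exact penalty):

```python
import numpy as np

def acyclicity_penalty(W):
    """h(W) = tr((I + W*W/d)^d) - d: non-negative, and zero exactly
    when the weighted adjacency matrix W corresponds to a DAG."""
    W = np.asarray(W, dtype=float)
    d = W.shape[0]
    M = np.eye(d) + (W * W) / d  # W*W is the elementwise (Hadamard) square
    return float(np.trace(np.linalg.matrix_power(M, d)) - d)

print(acyclicity_penalty([[0.0, 1.0], [0.0, 0.0]]))  # DAG -> 0.0
print(acyclicity_penalty([[0.0, 1.0], [1.0, 0.0]]))  # 2-cycle -> 0.5
```

Because h is differentiable in W, it can be added to a score-based objective and driven to zero during training, which is what makes the graph search continuous rather than combinatorial.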
Rebuttal 1: Rebuttal: Thank you for your insightful review. We appreciate your recognition of our contribution to the field of causal discovery. We are glad you think the **"theory was well covered and convincing"**, the paper is **"well written and well motivated"** and our **"extensive experiments significantly outperform the evaluated competitors"**.

> whether the found model respects the Markov condition

**Our model itself satisfies the causal Markov condition at the end of training by construction**. This happens in the acyclicity thresholding phase of the training (L935). The causal Markov assumption follows trivially from the assumption of no hidden confounders (L82 LHS) and that the noise terms are independent (eq 8). We are aware of the larger discussion of whether the causal Markov condition implies conditional independences in all generality [1]. However, the same work states that the added assumption of "Modularity" (Property 7.3 in [1]) provides a "fully quantitative solution to the problem of inferring causality from observational data". The ICM assumption we make (L145 LHS) implies a form of modularity [2]. We note that works like NOTEARS do not make this assumption, and the only similarity to our work is that we also use a continuous optimisation scheme.

[1] Dawid et al., "Beware of the DAG!." Causality: Objectives and Assessment. PMLR, 2010.
[2] Janzing et al., "Causal inference using the algorithmic Markov condition." IEEE Transactions on Information Theory (2010).

> It is unclear whether the continuous optimization method finds well-chosen priors

Our continuous optimization does not find the prior, but the *posterior* given a prior (L255 RHS).

> literature on the information-theoretic view of causality (Janzing, Schölkopf, 2010)

The basic principle that allows for distinguishing causal structure is very related to the work you have mentioned [1, Appendix B.2]. We will include a discussion on this.
[1] Dhir et al., "Bivariate Causal Discovery using Bayesian Model Selection." ICML, 2024.

> There is no comparison with other score-based approaches

We have run GES and GLOBE for all experiments; results can be viewed here: https://anonymous.4open.science/r/Additional-Results/. GES performs poorly compared to CGP-CDE and the other baselines, except for competitive performance on the 50-variable ER1 dataset. This is because GES performs well on sparse graphs, but struggles with denser graphs as it gets stuck in local optima. GLOBE didn't perform well on most of the datasets, except getting the best SHD (but poor SID and F1) for Syntren.

> The causal model is not strictly identifiable.

This is true, but we show that **for our model the probability of error is small**. Our motivation is that the restrictions made to gain strict identifiability can be unrealistic. When the data is generated from a different model, identifiability does not hold anyway. We argue that tolerating a small probability of error in exchange for more flexibility can be beneficial in these cases. We show exactly this in our experiments.

> How do you interpret the poor performance of CGP-CDE in the 3-variable experiment?

We discuss this in Line 368 RHS, although we note the wrong figure was referenced, which we will fix. The good performance of the discrete case **shows that our objective empirically identifies the correct causal graph in the multivariate case**. The continuous approximation uses (noisy) gradients to find the most likely causal structure, which introduces errors. The comparison to the continuous case thus quantifies the error introduced by the continuous approximation. We believe this shows that the continuous approximation scheme can be improved on. Nevertheless, we emphasise that we clearly outperform the baselines on datasets with more variables (Appendix I).

> Could another initialization reduce the number of iterations needed?
We were overly conservative with the number of iterations for the warm-up and cool-down phases. You are correct, better initialisation will reduce the number of iterations required. This can be monitored simply by looking at the training curve.

> In the 3-variable experiment... How do you explain this?

The point of this experiment was to **show the price we pay for scalability**. The difference in performance is due to the continuous optimisation, which introduces errors (see above). These results show that while our Bayesian principle is correct, large improvements can be made in the continuous approximation scheme. As we experiment on **all possible** 3-variable graphs, this experiment also contains denser graphs (relative to the number of variables), which may be harder to find. Note we also provide other experiments with more variables where the data isn't generated from our model, and the ANM experiments where the baseline models' assumptions aren't violated. In both cases, we outperform the baselines.

We note that the review refers to questions 4 and 5, but the listed questions only go up to 3.
Summary: Recent work shows that in the bivariate case, Bayesian model selection can be used for structure identification under more flexible assumptions, at the cost of a small probability of error. This paper extends the previous result to the multivariate case. The authors empirically validate the method by comparing to multiple baselines under extensive experimental settings.

## update after rebuttal

Thank you for the authors' response. After reading all the review comments, I have decided to keep my rating unchanged.

Claims And Evidence: Yes.

Methods And Evaluation Criteria: It makes sense to me that for two distribution-equivalent graphs we can rely on independent priors to distinguish them. Yet, it is unclear to me whether we need to know the correct prior in advance. If yes, what will happen under misspecification of the prior?

Theoretical Claims: No.

Experimental Designs Or Analyses: Yes. The experimental designs look sensible to me. The setting is extensive, including both synthetic settings with multiple variables and real-life experiments.

Supplementary Material: Yes, I briefly went through Appendix E, F, G, I.

Relation To Broader Scientific Literature: Key contributions of the paper compared to prior work are properly discussed.

Essential References Not Discussed: Related works are properly discussed.

Other Strengths And Weaknesses:

Strengths
1. The proposed method and theory of using Bayesian model selection for multivariate structure learning look novel to me.
2. The experimental setting is extensive, including both synthetic and real-life data.
3. It is interesting to see methods that can distinguish between graphs that are distribution-equivalent.

Weaknesses
1. I am not very familiar with Bayesian model selection, so I am not sure about the significance of extending from the bivariate case (Dhir et al., 2024) to the multivariate scenario. I will defer to other reviewers regarding this point.

Other Comments Or Suggestions: N.A.
Questions For Authors: Please refer to my question in the Methods And Evaluation Criteria part.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your detailed feedback. We appreciate your acknowledgement of the **novelty of our approach in applying Bayesian model selection for multivariate structure learning** and your recognition of the **thorough discussion of our contributions relative to prior work**. We are also pleased you found the **experimental design sensible and extensive**. Additionally, we are glad you highlighted the importance of our method’s ability to distinguish between distribution-equivalent graphs, which is a key strength of our approach. We address your specific comments in the following. > do we need to know the correct prior in advance? It is not necessary to know the correct prior in advance. **A key reason that allows for distinguishing causal graphs is the ICM assumption that is encoded in the prior** (L153 LHS, "separable compatible" in Theorem B.6). The prior over functional mechanisms is also important. Our approach for this was to try and choose a model/prior that ensures as much as possible we do not put zero probability on any dataset (see for example [1, Section 1.2]). [1] Hjort et al., eds. Bayesian nonparametrics. Vol. 28. Cambridge University Press, 2010. > what will happen under misspecification of prior? We discuss this in L183 RHS. The prior defines what datasets are likely under our model (data distribution). This data distribution can be used to define a probability of error, which is a measure of how distinguishable causal graphs under our model are (eq 7). Given datasets from a *separate* distribution (data generating process), the accuracy of the estimated probability of error depends on how far the model's data distribution is from the data generating process. Small differences in the model and the data generating process do not result in complete invalidity of the estimated probability of error. 
As with any method, large variations will mean the estimated probability of error differs from the true probability of error (of the data generating process). For more discussion on this, see [1, Section 4.4]. We note that the a-priori functional restrictions imposed by previous methods are also unverifiable assumptions. As with any unverifiable assumption, it is necessary to empirically validate how well it works in practice. We test exactly this in our experiments (Sections 6.2, 6.3), **where the data generating processes don't match our model prior and our model outperforms the baseline methods**.

[1] Dhir et al., "Bivariate Causal Discovery using Bayesian Model Selection." ICML, 2024.

> I am not very familiar with Bayesian model selection so I am not sure about the significance of extending from bivariate (Dhir et al., 2024) to the multivariate scenario. I will defer to other reviewers regarding this point.

Dhir et al. (2024) consider the bivariate case, but we answer the questions: does the theory hold in the multivariate case, and how does the performance scale with the number of variables? We theoretically show Bayesian model selection works for multivariate datasets and propose a method to effectively scale to large numbers of variables, overcoming costs that would otherwise be super-exponential in the number of variables. We then show that this approach outperforms competing methods in the multivariate case.
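The super-exponential cost mentioned in this rebuttal is easy to make concrete: the number of labeled DAGs grows super-exponentially in the number of nodes, which is why enumerating all graphs (as a discrete approach must) quickly becomes infeasible. A small sketch using Robinson's recurrence (illustrative; note that the "25 graphs" for 3 variables cited in an earlier review matches this count):

```python
from math import comb

def num_dags(n):
    """Number of labeled DAGs on n nodes, via Robinson's recurrence:
    a(m) = sum_{k=1}^{m} (-1)^(k+1) * C(m, k) * 2^(k*(m-k)) * a(m-k)."""
    a = [1]  # a(0) = 1 (the empty graph)
    for m in range(1, n + 1):
        a.append(sum((-1) ** (k + 1) * comb(m, k) * 2 ** (k * (m - k)) * a[m - k]
                     for k in range(1, m + 1)))
    return a[n]

print([num_dags(n) for n in range(1, 6)])  # [1, 3, 25, 543, 29281]
```

Already at 10 nodes the count exceeds 10^18, so any multivariate method must avoid explicit enumeration, which is the motivation for the continuous relaxation.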
Summary: The paper proposes a new method called CGP-CDE for (Bayesian) causal model discovery that allows for less restrictive model assumptions and can be applied to higher-dimensional systems as well. It is based on a GP approach to obtain a nonparametric conditional density estimator for each node given its parents in the causal DAG, ultimately capturing the likelihood of a graph. The search for the best-fitting model is turned into a continuous optimization problem by adding a familiar acyclicity constraint with a penalty on the weight matrix representing the graph. The method is evaluated on synthetic/realistic data and found to compare equally or favourably to other alternatives.

## update after rebuttal

I thank the authors for their reply, and, having read the other reviews & rebuttals as well, I am happy to leave my score at '4: accept'. One final remark: I understand the temptation to go for maximal informativeness, but the authors will know that typically the score differences within a MEC are much smaller than between MECs, and therefore the extra information from orienting the full graph tends to be much less reliable, in turn making the entire output less trustworthy and hence less useful in practice. Given that any directed graph has a unique mapping to a MEC, one does not need to 'remove' the preference within a MEC. Instead, we can easily show both outputs, but with a much higher reliability for anything implied by the MEC representation, giving practitioners an intuitive way to distinguish between more and less reliable conclusions in the output.

Claims And Evidence: The paper initially suggests that it will solve the problem of restrictive/unrealistic model assumptions encountered when tackling real-world data, but this is of course nonsense. It still starts from the causal sufficiency and acyclicity assumptions (which is not how the world works), and relies on a rather arbitrary prior to go from a Markov equivalence class (MEC) to a unique model.
Yes, it ensures identifiability, but you are essentially finding back what you put in, and assume/hope that it matches reality. However, I do really like the GP approach to modelling the likelihood, which indeed provides a significant improvement over existing parametric approaches to Bayesian/likelihood-based inference.

Methods And Evaluation Criteria: Experimental evaluation is limited but OK, and demonstrates the potential of the core contribution.

Theoretical Claims: No explicit theoretical claims in the main paper, but the overall approach is sound.

Experimental Designs Or Analyses: As mentioned, experimental evaluation is limited but OK. One of the most striking findings is that the discrete version (DGP-CDE) is vastly superior (Fig. 1), but the authors still choose to focus on the (currently in vogue) continuous approximation, even though it suffers from the exact same problems w.r.t. finding the global optimum in anything with higher dimensions. But I will not hold that against the authors :)

Supplementary Material: No.

Relation To Broader Scientific Literature: Relevant (recent) work is discussed.

Essential References Not Discussed: Cooper & Herskovits (1992): this old but seminal work already contains a Bayesian score-based method that relies on the prior to select an optimal posterior causal DAG. The subsequent work by Heckerman showed how to combat this (rather undesirable) behaviour by introducing a score that ensures MEC-equivalence, but the current paper now seems to suggest that they came up with the idea of selecting models *within* a MEC by using a non-equivalent score …

Other Strengths And Weaknesses: As stated: the paper suggests more than it does, and some aspects (like aiming for unique identifiability from an in-practice hard to justify/assess prior) actually weaken the output; I would have preferred a modest but more robust aim for e.g. the MEC.
Also, starting from the causal sufficiency assumption (and to some degree also the acyclicity assumption) is 'unforgivable' when the aim is to make a causal discovery method more suitable to real-world applications. Yes, it makes everything easier, but that is not a good enough reason to do / keep doing it. On the plus side: the paper is clear and well written, and the GP approach at the heart of the method is a significant and promising contribution, and for that reason I will recommend accept.

Other Comments Or Suggestions: No

Questions For Authors: No

Code Of Conduct: Affirmed.

Overall Recommendation: 4
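This reviewer singles out the GP-based likelihood as the core contribution. To give a flavour of why a GP marginal likelihood trades off fit against complexity (the Bayesian Occam's razor that the model-selection framework relies on), here is a minimal, self-contained sketch of the log evidence of a zero-mean GP regression with an RBF kernel and fixed hyperparameters; this is purely illustrative and not the paper's CGP-CDE:

```python
import numpy as np

def gp_log_evidence(x, y, lengthscale=1.0, signal=1.0, noise=0.1):
    """Log marginal likelihood of y under a zero-mean GP prior on f(x)
    with an RBF kernel plus i.i.d. Gaussian observation noise."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    sqdist = (x[:, None] - x[None, :]) ** 2
    K = signal**2 * np.exp(-0.5 * sqdist / lengthscale**2) + noise**2 * np.eye(n)
    L = np.linalg.cholesky(K)                            # K = L @ L.T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))  # K^{-1} y
    return float(-0.5 * y @ alpha - np.log(np.diag(L)).sum()
                 - 0.5 * n * np.log(2.0 * np.pi))

# Smooth data receives far higher evidence than the same values shuffled,
# i.e. the score rewards a simple functional relationship.
x = np.linspace(-3, 3, 40)
y = np.sin(x)
rng = np.random.default_rng(0)
print(gp_log_evidence(x, y) > gp_log_evidence(x, rng.permutation(y)))  # True
```

In evidence-based causal discovery, scores of this kind (one per node given its candidate parents, summed over the graph) are what get compared across candidate structures.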
Rebuttal 1: Rebuttal: Thank you for your positive and encouraging feedback. We are glad you found the paper **"clear and well written"**, and appreciate your comment that our method is a **"significant and promising contribution"**.

> The paper initially suggests that it will solve the problem of restrictive / unrealistic model assumptions encountered when tackling real world data

Previous methods regularly make restrictive functional assumptions that may not hold in practice. In this paper, **we address relaxing these restrictive functional modelling assumptions** (L12 RHS, L172 LHS). We wholeheartedly agree causal sufficiency and acyclicity can be unrealistic assumptions and are worth relaxing. These are very common in causal discovery algorithms, and the baselines we compare against also make these assumptions. We hope we didn't overclaim on this point and will make clear that this paper is only a step towards more realistic causal discovery (L70 LHS).

> Yes it ensures identifiability, but you are essentially finding back what you put in and assume/hope that it matches reality.
> ...using an in practice hard to justify/assess prior

Our work shows that a relatively simple assumption - independent causal mechanisms (see L148 LHS) - can allow for distinguishing causal structure within a Markov equivalence class (L129 RHS, Theorem B.6). This assumption can be encoded in the prior by ensuring the priors factorise appropriately (L153 LHS). The specific priors over functional mechanisms are also important. Our approach for this was to ensure, as much as possible, that the prior does not put zero probability on any dataset (see for example [1, Section 1.2]). The model/prior that we chose is not arbitrary, but close to known identifiable models (non-linear additive noise models), as shown by the good performance on additive noise datasets (Appendix I.1).
The difference to previous methods is that we do not make hard restrictions - our model allows us to approximate more than just additive noise datasets (L232 LHS). We note that assumptions made in previous methods that a-priori restrict functional form are also unverifiable. Any unverifiable assumption requires empirical verification. This is exactly what we do in Sections 6.2 and 6.3. Here, the data is generated from mechanisms that are different to all the models. We vary other factors as well (graph types and density). Our method outperforms previous methods (full results in Appendix I). We believe this shows the usefulness of the Bayesian approach.

[1] Hjort et al., eds. Bayesian Nonparametrics. Vol. 28. Cambridge University Press, 2010.

> One of the most striking findings is that the discrete version (DGP-CDE) is vastly superior (Fig. 1), but that the authors still choose to focus on the (currently in vogue) continuous approximation, even though it suffers from the exact same problems w.r.t. finding the global optimum in anything with higher dimensions.

This is indeed very interesting. The cost of the discrete version, which requires enumerating over all possible graphs, is too high for more than a few variables. We included this result because the difference in performance between the CGP-CDE and the DGP-CDE clearly shows that, although our principle is correct, there is room for improvement in the continuous relaxation. However, CGP-CDE allows us to scale to larger numbers of variables.

> Cooper & Herskovits (1992): this old but seminal work already contains a Bayesian score-based method that relies on the prior to select an optimal posterior causal DAG...

The papers you mention are important but only consider simple linear models. However, it has been known that with more complicated models, Bayesian models tend to have an opinion within an MEC [1,2,3].
The main reason (as shown in Appendix B) is because to create equivalent models, the ICM assumption has to hold in multiple factorisations (L129 RHS, Theorem B.6). While it is relatively simple to construct models that do this with linear models [1, Appendix D.1, D.2], it is not clear whether this is possible with more complex models [1, Appendix D.3]. [1] Dhir et al., "Bivariate Causal Discovery using Bayesian Model Selection." ICML, 2024. [2] Friedman et al., "Gaussian process networks." Proceedings of the Sixteenth conference on Uncertainty in artificial intelligence. 2000. [3] Stegle et al. "Probabilistic latent variable models for distinguishing between cause and effect." Advances in neural information processing systems 23 (2010). > I would have preferred a modest but more robust aim for e.g. the MEC. This would obscure the fact that the model (due to the ICM assumption in the prior) **has a preference** for certain causal structures within an MEC over others (Line 138 RHS and line 720). **This preference within an MEC can only be removed by breaking the ICM assumption** (Theorem B.6). We thus make use of this preference.
MP-Nav: Enhancing Data Poisoning Attacks against Multimodal Learning
Accept (poster)
Summary: 1. The author analyzed the shortcomings of existing attack methods: they create erroneous associations by randomly selecting concepts and randomly poisoning instances, which usually makes it difficult to achieve a good attack effect. 2. The authors proposed a plug-and-play module, MP-Nav. MP-Nav effectively solves the problems of the random selection strategy in existing methods by identifying semantically similar concepts at both the concept and instance levels and selecting robust instances, which effectively improves the attack effect. 3. Experiments have confirmed that MP-Nav can significantly enhance the effectiveness of the most advanced data poisoning attacks, i.e., AtoB and ShadowCast, in multimodal tasks while maintaining the practicality of the model across various datasets. Claims And Evidence: Although the author pointed out that existing methods can hardly achieve good attack effects by randomly selecting concepts, the author only showed statistical experimental results and did not provide specific visualization experiments to support this viewpoint. Methods And Evaluation Criteria: Yes. Theoretical Claims: The author conducted experimental verification of the proposed method, but lacked a detailed experimental analysis of the shortcomings of existing methods. Experimental Designs Or Analyses: Although the author conducted a large number of attack experiments to verify the effectiveness of the proposed method, they used too few baseline methods. There are only two baseline methods in the experimental part: AtoB and ShadowCast, which makes it difficult to fully prove the applicability of the proposed plug-and-play MP-Nav. The author should consider using some other methods as baselines, e.g., [1-3]. [1] Data Poisoning Attacks Against Multimodal Encoders, Ziqing Yang, Xinlei He, Zheng Li, M. Backes, Mathias Humbert, Pascal Berrang, Yang Zhang, International Conference on Machine Learning, 2022.
[2] CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive Learning, Hritik Bansal, Nishad Singhi, Yu Yang, Fan Yin, Aditya Grover, Kai-Wei Chang, IEEE International Conference on Computer Vision, 2023. [3] Backdooring Multimodal Learning, Xingshuo Han, Yutong Wu, Qingjie Zhang, Yuan Zhou, Yuan Xu, Han Qiu, Guowen Xu, Tianwei Zhang, IEEE Symposium on Security and Privacy, 2024. Supplementary Material: Yes, I reviewed the expanded experimental part in the supplementary materials. Relation To Broader Scientific Literature: The method proposed in this paper is a plug-and-play module, and the author has verified its effectiveness on the baseline methods, i.e., AtoB and ShadowCast. Essential References Not Discussed: There are only two baseline methods in the experimental part: AtoB and ShadowCast. This makes it difficult to fully prove the applicability of the proposed plug-and-play MP-Nav. The authors should consider using some other methods as baselines, such as [1][2][3]. [1] Data Poisoning Attacks Against Multimodal Encoders, Ziqing Yang, Xinlei He, Zheng Li, M. Backes, Mathias Humbert, Pascal Berrang, Yang Zhang, International Conference on Machine Learning, 2022. [2] CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive Learning, Hritik Bansal, Nishad Singhi, Yu Yang, Fan Yin, Aditya Grover, Kai-Wei Chang, IEEE International Conference on Computer Vision, 2023. [3] Backdooring Multimodal Learning, Xingshuo Han, Yutong Wu, Qingjie Zhang, Yuan Zhou, Yuan Xu, Han Qiu, Guowen Xu, Tianwei Zhang, IEEE Symposium on Security and Privacy, 2024. Other Strengths And Weaknesses: Strengths: 1. This paper summarizes the key problems of existing multi-modal poisoning attack methods, namely that they only associate concepts randomly and poison instances randomly, which usually makes it difficult to achieve a good attack effect. 2.
The authors addressed the above issues and proposed a plug-and-play module, MP-Nav, which includes two components: Concept-level Selection and Instance-level Selection. 3. The authors conducted a large number of experiments to verify the effectiveness of their method. Weaknesses: 1. The authors did not conduct a detailed experimental analysis or explanation of the essential reasons behind the problems of existing methods, so readers may find it difficult to deeply understand the starting point of this article. 2. Table 2 and Figure 2 represent the effectiveness of the authors' method on two different datasets. However, on the PASCAL dataset, the effect of Instance-level Selection is negligible, and the authors should explain why the results are inconsistent with those on the COCO dataset. 3. Since the proposed method is a plug-and-play module, the authors should choose more methods as baselines. Other Comments Or Suggestions: Please answer all my doubts carefully, and I will adjust my final score based on the answers. Questions For Authors: 1. How is the number of concept centers determined, and is it related to the number of instance categories? 2. In lines 232 to 234, are the images and texts of the original concept pairs (I_A, T_A) and the target concept pairs (I_B, T_B) being swapped? After the swap, does the original concept pair become (I_A, T_B), and the target concept pair become (I_B, T_A)? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your positive score. Please find our responses below. 1 [Essential References Not Discussed]: “The author should consider using some other methods as baselines [1-3]”\ **Response** 1: We have indeed used [1] as one of the baseline methods that our paper has made comparisons with. [2] focuses on the backdoor defense (and not poisoning attack) of CLIP, and [3] focuses on backdoor attacks for multimodal learning. Table 1 in our paper has highlighted the setting differences of data poisoning and backdoor triggers. Nevertheless, MP-Nav is a plug-and-play module, which can potentially enhance other types of attacks, including backdoor attacks, adversarial evasion attacks, model inversion attacks, etc. In revision, we will cite the relevant papers [1-3], and in the future, we will compare MP-Nav with [3] and produce more results. 2 [Other Strengths And Weaknesses]: “The author didn't conduct a detailed experimental analysis and explanation of the essential reasons for the existing method problems, so readers may find it difficult to deeply understand the starting point of this article.”\ **Response** 2: The two methods (the existing random selection method and our MP-Nav method) fundamentally differ in strategy: random selection relies on stochastic trials, but MP-Nav relies on principled guidance to enhance poisoning efficacy. Specifically, random selections frequently choose concept pairs that are semantically distant or instances that poorly represent the targeted concepts, resulting in poisoning effects that are easily diluted by benign instances. In revision, we plan to add a visualization experiment showing the above fact. Since the rebuttal is character-limited, kindly allow us to describe this visualization experiment: We will use PCA/t-SNE to reduce high-dimensional features into 2-D vectors and visualize and compare differences between embeddings of poisoned and benign instances.
We will also plot the embedding evolutions over the training epochs and compare how MP-Nav selected poisoning instances differ from randomly selected poisoning instances, w.r.t. the counterpart benign instances. 3 [Other Strengths And Weaknesses]: “Table 2 and Figure 2 represent the effectiveness of the author's method on two different datasets. However, on the PASCAL dataset, the effect of Instance-level Selection is negligible, and the author should explain why the results are inconsistent with those on the COCO dataset.”\ **Response** 3: In Table 2 (COCO dataset), we fixed the number of poisoned instances at 284 (out of 119,387 training instances) and made fair comparisons with the baseline A2B attack, the same setting as the baseline paper. However, in Figure 2 (PASCAL-Flickr combined dataset), the scenario differs. Flickr is a large set of 29,000 image-text pairs (without ground-truth concepts), which the attacker does not touch. The PASCAL dataset contains only 500 labeled images (for training) divided equally across 20 concepts, leaving a maximum of 25 instances per concept available for poisoning. In the original baseline paper, all 25 instances per concept were poisoned—an impractical scenario in real-world settings. To reflect more realistic conditions, we reduced the attacker’s budget (number of allowed poisoned instances), poisoning fewer than the maximum number of available instances per concept, leaving the remainder as benign data. As a consequence, when the attacker’s budget reaches 25 poisoned instances, the MP-Nav instance-level selection and random selection both utilize all available instances, naturally leading to **identical** performance. Thus, the negligible difference at instance-level selection arises directly from the dataset limitation rather than inconsistency in MP-Nav’s effectiveness. 4 [Questions For Authors]: “How is the number of concept centers determined, and is it related to the number of instance categories?”\ **Response** 4: Yes.
Each category has one concept center. The concept center considers both image and text embeddings of the same category (concept). 5 [Questions For Authors]: “In lines 232 to 234, are the images and texts of the original concept pairs (I_A, T_A) and the target concept pairs (I_B, T_B) being swapped? After the swap, does the original concept pair become (I_A, T_B), and the target concept pair become (I_B, T_A)?”\ **Response** 5: No, it is not being swapped. To make the fair comparison with baseline A2B attack [1], we follow the setting of [1], i.e., original concept pairs (I_A, T_A) and the target concept pairs (I_B, T_B) -> (I_A, T_B) and (I_B, T_B) . Attackers only change part of texts of original concept instances, and do not touch target concept instances. Kindly refer to Section 2 Preliminary for details. \ [1] Data poisoning attacks against multimodal encoders, in ICML 2023 --- Rebuttal Comment 1.1: Comment: I confirm that I have read the author's response, and maintain my original score. --- Reply to Comment 1.1.1: Comment: Thank you for carefully reading our response.
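The 2-D embedding visualization proposed in Response 2 of the rebuttal above could be sketched roughly as follows. This is a minimal numpy-only PCA illustration (rather than the t-SNE variant also mentioned); all data here is synthetic and the separation between poisoned and benign clusters is assumed for demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical high-dimensional embeddings: benign instances cluster around
# one center, while poisoned instances drift toward a target concept.
benign = rng.normal(loc=0.0, scale=1.0, size=(200, 512))
poisoned = rng.normal(loc=2.0, scale=1.0, size=(20, 512))

X = np.vstack([benign, poisoned])

# PCA via SVD: center the data, then project onto the top-2 components.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
coords = Xc @ Vt[:2].T  # (220, 2) points ready for a scatter plot

benign_2d, poisoned_2d = coords[:200], coords[200:]
# A scatter plot of benign_2d vs. poisoned_2d would visualize how selected
# poisoned instances separate from (or blend into) the benign data.
print(coords.shape)  # (220, 2)
```

Plotting `benign_2d` and `poisoned_2d` in different colors, one panel per training epoch, would reproduce the kind of comparison the rebuttal describes between MP-Nav-selected and randomly selected poisoning instances.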
Summary: This paper introduces the Multimodal Poison Navigator (MP-Nav), a plug-and-play module designed to improve data poisoning attacks on multi-modal models. The authors propose a two-step approach: (1) concept-level selection, which identifies semantically similar concepts for misassociation, and (2) instance-level selection, which ranks and selects robust instances to maximize attack efficacy. The proposed method enhances existing data poisoning attacks such as AtoB and ShadowCast and is evaluated on Text-Image Retrieval (TIR) and Visual Question Answering (VQA) tasks. Experimental results show that MP-Nav significantly improves attack effectiveness while preserving model utility. ## update after rebuttal The authors address most of my concerns. So I raised my score. Claims And Evidence: Yes, it is clear and convincing. The paper claims that MP-Nav enhances the efficacy of data poisoning attacks against multimodal models while maintaining model utility. The evidence provided includes empirical results on benchmark datasets (such as COCO, Flickr, PASCAL, and Food101), and a customized “Biden-Trump” dataset. The evaluations demonstrate that MP-Nav improves attack success rates while keeping retrieval and classification performance intact. Methods And Evaluation Criteria: Yes, the proposed method makes sense. The evaluation criteria (such as Hit@K and MinRank for TIR and attack success rate for VQA) are appropriate. Theoretical Claims: Not applicable. This paper does not have theoretical claims. Experimental Designs Or Analyses: The experimental design follows the standard benchmarks and datasets. The comparison between baseline attacks and MP-Nav-enhanced attacks demonstrates meaningful improvements. Furthermore, in Figure 5 & Table 5 of the appendix, the paper has provided comprehensive evaluations of differently-similar concepts on the attack efficacy. This could confirm the observed performance gains.
Supplementary Material: I have read all the supplementary material. It mainly focuses on extended experimental results. Relation To Broader Scientific Literature: The paper is well-positioned in the field of data poisoning attacks and multimodal learning. It builds upon prior work in data poisoning but distinguishes itself by proposing a structured poisoning strategy tailored to different multimodal models. Essential References Not Discussed: All highly related works are discussed as far as I know. Other Strengths And Weaknesses: This paper presents an effective enhancement to existing multimodal poisoning attacks. The modularity of MP-Nav allows for easy integration with various multimodal tasks. However, a discussion on potential countermeasures would add more depth. Other Comments Or Suggestions: No further comments. Please see my questions below. Questions For Authors: - Would MP-Nav induce computational overhead for the attacker? Could you clarify how the attacker (using MP-Nav) differs from the attacker (using random selections)? - Can MP-Nav be adapted to other attack paradigms beyond data poisoning? - In Section 4.2, the authors poisoned the LLaVA-1.5 model by fine-tuning it for 1 epoch. I understand this setting is originally from the ShadowCast paper. How about fine-tuning LLaVA for several epochs? Would the poison effect still stay under the presence of benign data? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for your positive score. Please find our responses below. 1 [Other Strengths And Weaknesses]: “a discussion on potential countermeasures would add more depth.”\ **Response** 1: This is a similar question to one raised by reviewer cLuX. Kindly refer to "Response 3" for reviewer cLuX. 2 [Questions For Authors]: “Would MP-Nav induce computational overhead for the attacker? Could you clarify how the attacker (using MP-Nav) differs from the attacker (using random selections)?”\ **Response** 2: MP-Nav indeed introduces some computational overhead compared to a purely random selection strategy due to the computation of semantic embeddings of instances. MP-Nav requires O(nW) computation overhead for the model inference, where n refers to the size of the training set and W to the number of parameters of the open-sourced model. In our experiment, a single 4090 GPU is sufficiently powerful for MP-Nav computations. Kindly note that the two methods (i.e., MP-Nav and random selections) fundamentally differ in strategy: random selection relies on stochastic trials, but MP-Nav relies on principled guidance to enhance poisoning efficacy. 3 [Questions For Authors]: “Can MP-Nav be adapted to other attack paradigms beyond data poisoning?”\ **Response** 3: Yes. MP-Nav is a plug-and-play module, and we presume it can also enhance adversarial evasion attacks and model extraction attacks. In evasion attacks, attackers could identify vulnerable pairs of concepts where decision boundaries are naturally close, thus focusing adversarial perturbations on the most vulnerable concept pairs. In model extraction attacks, MP-Nav’s embedding-based selection could assist attackers in choosing query samples that maximize information gain about internal model decision boundaries, facilitating faster or more effective extraction of the victim models. We will make a thorough exploration of the above attacks in the future.
4 [Questions For Authors]: “In Section 4.2, authors poisoned the LLaVA-1.5 model by fine-tuning it for 1 epoch. I understand this setting is originally from the ShadowCast paper. How about fine-tuning LLaVA for several epochs? Would the poison effect still stay under the presence of benign data?”\ **Response** 4: Yes. The poison attack is still effective under the presence of benign data. We have conducted additional experiments by extending fine-tuning over 4 epochs, and we report the attack success rate (SR) in the table below. Interestingly, we have observed that benign data does not dilute the poisoning effect over epochs; the poisoning effect is further enhanced, especially under a small poisoning ratio and when employing MP-Nav’s robust instance selection. One plausible explanation is that large models tend to overfit specific associations over prolonged fine-tuning, thereby reinforcing the malicious associations introduced by carefully selected poisoned instances. We will add the additional results in revision. |Attack |Poison ratio | SR(Epoch #1) | SR(Epoch #2)| SR(Epoch #3)| SR(Epoch #4)| | --------| -------- | ------- |------- |------- |------- | | MP-Nav| 1% | 0.01 | 0.08| 0.58| 0.56| | Random| 1% | 0.01 | 0.02| 0.17| 0.23| | MP-Nav| 3% | 0.02 | 0.98| 0.97| 0.97| | Random| 3% | 0.01 | 0.62| 0.90| 0.89| --- Rebuttal Comment 1.1: Comment: Thanks for the clear responses and solid additional experiments on LLaVA-1.5 fine-tuning. The results demonstrating MP-Nav’s robustness and its ability to enhance poisoning efficacy even under the presence of benign data are compelling, and these findings significantly strengthen the paper. MP-Nav’s effectiveness and adaptability stand out. I’m happy to increase my score—great work! --- Reply to Comment 1.1.1: Comment: Thank you for carefully reading our response.
Summary: This paper presents MP-Nav that optimizes data poisoning attacks for vision-language models. The approach strategically selects concept pairs and robust instances to maximize poisoning efficiency while maintaining overall model utility. The authors evaluate MP-Nav on benchmark datasets and demonstrate improvements over existing poisoning attacks. Claims And Evidence: The claims regarding improved attack success are well-supported by extensive experiments. However, the claim that MP-Nav preserves model utility requires further clarification, as the poisoned data could have long-term effects on model behavior. Methods And Evaluation Criteria: This paper mathematically formulates the proposed MP-Nav., which greatly helps readability. The proposed algorithm is clear and well correlated to Figure 1, so I can understand the method easily. Theoretical Claims: The methods are clearly described, and the evaluation criteria align well with the research objectives. However, the paper lacks an explicit discussion on the limitations of MP-Nav, particularly regarding scenarios where poisoning may not be effective. Experimental Designs Or Analyses: The experimental setup is rigorous and the comparisons with baseline methods are fair. Thus, the experimental results are trustworthy. Supplementary Material: Yes. I have read all the supplementary material. Relation To Broader Scientific Literature: How are the key contributions of the paper related to the broader scientific literature? Be specific in terms of prior related findings/results/ideas/etc. The paper contributes to adversarial machine learning research. The prior studies neglect the importance of concepts and instance-level selections for effectively evaluating the threat of data poisoning attacks. With this regard, this paper has made the remedy by proposing an MP-Nav method that can comprehensively enhance multi-modal data poisoning attacks. 
Essential References Not Discussed: To the best of my knowledge, all relevant related works that should be mentioned have been included. Other Strengths And Weaknesses: +Strengths: Clear motivation, comprehensive experimental results, and a well-structured poisoning approach. -Weaknesses: Limited exploration of countermeasures and lack of explicit discussion of the limitation of the proposed MP-Nav. Other Comments Or Suggestions: 1 Please explicitly discuss the limitations of MP-Nav, particularly regarding scenarios where poisoning may not be effective. Questions For Authors: 1 Have authors observed any situations where MP-Nav fails to improve poisoning efficacy, or where it inadvertently lowers the effectiveness of an attack compared to baseline methods? 2 From my understanding, MP-Nav currently assumes access to embeddings for similarity computation. How well does it perform if the attacker does not have access to the model’s internal representations? 3 Could existing outlier detection or data sanitization methods be used to identify and filter poisoned samples before model training? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for your positive review. Please find our responses below. 1 [Other Comments Or Suggestions]: “Please explicitly discuss the limitations of MP-Nav, particularly regarding scenarios where poisoning may not be effective.”\ **Response** 1: There are potentially two limitations. First, MP-Nav's success depends on the availability and quality of open-sourced models (such as CLIP, Deepseek, etc). When embeddings are noisy or misaligned due to insufficient training data, MP-Nav’s guidance may become suboptimal. \ Second, the limitation (not specific to MP-Nav but to poisoning attacks in general) arises in datasets where the benign samples outnumber poisoned instances that can dilute the poisoning effects. Kindly note that our MP-Nav requires less number of poisoned instances than existing attack methods. 2 [Claims And Evidence]: “The claim that MP-Nav preserves model utility requires further clarification, as the poisoned data could have long-term effects on model behavior.”\ **Response** 2: In our experiments, we primarily assessed immediate model performance post-poisoning. Indeed, it is possible that the cumulative influence of poisoning could manifest more over extended fine-tuning or continuous learning. In future work, we will investigate the poisoning effect in continuous learning with a presumed research question on how to enhance the cumulative poisoning effect under catastrophic forgetting in the setting of continuous learning. 3 [Other Strengths And Weaknesses, Questions For Authors]: “Limited exploration of countermeasures.” “ Could existing outlier detection or data sanitization methods be used to identify and filter poisoned samples before model training”\ **Response** 3: We have implemented a simple defense method that is also used as the pre-training defense in [1]. Since the AtoB is a dirty-label poisoning attack, the input sanitization is an effective countermeasure. 
We used an open-sourced model to calculate cosine similarities of the embeddings of images and their corresponding texts. A lower cosine similarity means an image and its text are less relevant. We set the threshold to 0.8 to avoid filtering out benign instances. We report the results below. |Poison setting | Defense | Poison data Number| | -------- | ------- |------- | | boat2dog | No input sanitization | 284 (out of 119387) | | boat2dog | pre-training defense [1] | 39 | | boat2kit (MP-Nav) | No input sanitization | 284 (out of 119387) | | boat2kit (MP-Nav) | pre-training defense [1] | 140 | As we can see above, input sanitization is quite effective against A2B attacks [1]. Kindly note that MP-Nav gives A2B stronger resistance against input sanitization. [1] Data Poisoning Attacks Against Multimodal Encoders, in ICML 2023. In terms of the ShadowCast attack (a clean-label attack), our paper has revealed that clean and benign examples can largely mitigate the poisoning effect. Nevertheless, MP-Nav can still enhance the poisoning effect under the presence of benign examples. 4 [Questions For Authors]: “Have authors observed any situations where MP-Nav fails to improve poisoning efficacy, or where it inadvertently lowers the effectiveness of an attack compared to baseline methods?”\ **Response** 4: Indeed, we have observed scenarios where MP-Nav does not outperform baseline methods in some cases. This is due to inherent noise occurring in the training data (such as irrelevant features in the images, noisy captions, data shortage). As visualized in the top two panels of Figure 5 in the appendix, the poisoning effect is noisy w.r.t. similarity scores. Despite the noise, the bottom two panels of Figure 5 unveil the positive correlation between similarity scores and the poisoning effect. 5 [Questions For Authors]: From my understanding, MP-Nav currently assumes access to embeddings for similarity computation.
How well does it perform if the attacker does not have access to the model’s internal representations?\ **Response** 5: Kindly allow us to clarify the assumptions in the paper. The attacker does not have access to the learner’s model (and therefore embeddings) but can get access to an open-sourced surrogate model that computes and compares data’s similarities. Thus, all the reported results reflect scenarios where the attacker does not have access to the model’s internal representation. --- Rebuttal Comment 1.1: Comment: Thank you for your comprehensive response. The authors have provided effective clarifications to my questions, and I have raised my score to 4. --- Reply to Comment 1.1.1: Comment: Thank you for carefully reading our response.
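The input-sanitization defense described in Response 3 of the thread above can be sketched as follows. This is a minimal illustration with random vectors standing in for CLIP-style image/text embeddings; the 0.8 threshold follows the rebuttal, but the exact filtering rule (keeping pairs whose cosine similarity is at or above the threshold) is one plausible reading, and all names and data are hypothetical:

```python
import numpy as np

def cosine_sim(a, b):
    """Row-wise cosine similarity between two (n, d) embedding matrices."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return np.sum(a * b, axis=1)

def sanitize(img_emb, txt_emb, threshold=0.8):
    """Boolean mask of image-text pairs whose embeddings agree; pairs below
    the threshold (likely mislabeled/poisoned) would be filtered out."""
    return cosine_sim(img_emb, txt_emb) >= threshold

# Toy data: aligned pairs share a direction; a dirty-label pair does not.
rng = np.random.default_rng(1)
base = rng.normal(size=(5, 64))
img = base + 0.05 * rng.normal(size=(5, 64))  # images close to their captions
txt = base.copy()
txt[4] = rng.normal(size=64)                  # mismatched (poisoned) pair

mask = sanitize(img, txt)
print(mask)
```

Under this reading, the four aligned pairs survive the filter while the mismatched pair is dropped, mirroring how the rebuttal's pre-training defense reduced the surviving poison count in the table above.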
Summary: This paper addresses the vulnerability of large-scale multimodal learning models to data poisoning attacks, where adversaries subtly inject malicious instances into training data to misalign concepts. It proposes MP-Nav (Multimodal Poison Navigator), a module that strategically selects semantically similar concept pairs and robust instances to enhance the effectiveness of poisoning attacks. Experimental results demonstrate that MP-Nav improves attack success rates while preserving model utility, highlighting the security risks of multimodal models and emphasizing the need for stronger defenses. Claims And Evidence: This paper has 4 major claims. C1. Not all concepts are equally vulnerable to disassociation. - I did not see direct evidence for the claim. However, based on an ablation study, the claim may be true. C2. Not all instances contribute equally. - I did not see direct evidence for the claim. However, based on the ablation study, the claim may be true. C3. MP-Nav Enhances Attack Effectiveness: The proposed MP-Nav module systematically selects semantically similar concept pairs and robust instances, significantly improving the success rate of multimodal data poisoning attacks. - In Section 4.1, the empirical results show that selecting instances and concepts can improve attack effectiveness. C4. Resilience Against Benign Data Dilution: By selecting robust instances within chosen concepts, MP-Nav ensures that poisoned instances remain effective even when mixed with a large number of benign samples, maintaining attack efficacy while preserving model utility. - Table 3 shows the utility of the method, which has a marginal difference as the baselines. I doubt if the testing is hard enough, as all methods share similar utility. It is not clear if this attack method will balance utility with attack effectiveness. Since the number of poisoned examples is not specified, it is hard to associate the attack result in Fig 2 with Table 3. 
Methods And Evaluation Criteria: The paper proposes MP-Nav (Multimodal Poison Navigator), a module that strategically selects semantically similar concept pairs and robust instances to enhance the effectiveness of poisoning attacks. The method enhanced the existing attacks AtoB and ShadowCast while maintaining utility. The paper used the Flickr-PASCAL and COCO datasets for evaluation. The metrics are 1. Model Utility – This checks how well the model retrieves the correct results. It uses Recall at K (R@K), which tells us how often the correct answer appears in the top K results when searching for images or captions. 2. Poisoning Efficacy – This measures how well an attacker can trick the model into linking the wrong concepts. It uses Hit@K, which shows how often the wrong (targeted) image appears in the top K results, and MinRank, which tells how early the wrong image appears in the ranked list (lower is worse). Both metrics are standard in the literature. Theoretical Claims: No theoretical claims. Experimental Designs Or Analyses: It is hard to find where claims 1 and 2 are validated. I can only infer that the ablation study in Sec 3.1 may imply claims 1 and 2. Supplementary Material: Fig. 5 and 6. The two figures are hard to follow as they are referred to and used in the main body but placed in the supplementary material. That makes the paper hard to read. Relation To Broader Scientific Literature: The method enhanced existing methods like AtoB and ShadowCast. This method enforces the embeddings’ misalignment in the learned space to achieve the misassociation while preserving the correct alignments for other concept pairs. By selecting data and concepts, the AtoB and ShadowCast methods are strengthened. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strength: * The proposed method improves the existing attacks by selecting concepts and instances. The method is simple and effective. Weakness * The effectiveness of the method lies in a small range of poisoned-sample counts (10-25).
This makes the result less significant. I doubt whether the method is really necessary. An attacker could simply increase the number of poisoned instances to enhance the attack. I did not see a reason not to scale up the poisoned instances, especially when 25 samples could be very effective. Other Comments Or Suggestions: "image captaining" in Line 12. "Figues 2" should be "Figure 2" in Line 331. Questions For Authors: * What is the intuition for claim 1 and claim 2? Any theoretical insights for the two claims? Code Of Conduct: Affirmed. Overall Recommendation: 3
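The poisoning-efficacy metrics summarized in the review above (Hit@K and MinRank) can be sketched as follows. These are hypothetical helper functions for illustration; the paper's exact definitions may differ in details such as tie handling or averaging:

```python
def hit_at_k(ranked_ids, target_ids, k):
    """Fraction of queries whose targeted (wrong) item appears in the top k."""
    hits = sum(1 for ranked, target in zip(ranked_ids, target_ids)
               if target in ranked[:k])
    return hits / len(ranked_ids)

def min_rank(ranked_ids, target_ids):
    """Average 1-based rank of the targeted item per query (lower means the
    attack surfaces the wrong item earlier in the retrieval list)."""
    ranks = [ranked.index(target) + 1
             for ranked, target in zip(ranked_ids, target_ids)]
    return sum(ranks) / len(ranks)

# Toy example: 2 queries over a 4-item gallery.
ranked = [["a", "b", "c", "d"], ["c", "d", "a", "b"]]
targets = ["b", "a"]  # targeted (wrong) items the attacker wants ranked high
print(hit_at_k(ranked, targets, k=2))  # 0.5: "b" is in top-2, "a" is not
print(min_rank(ranked, targets))       # 2.5: ranks 2 and 3 average to 2.5
```

A stronger poisoning attack pushes Hit@K toward 1 and MinRank toward 1, while model utility (Recall@K on clean queries) should remain unchanged.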
Rebuttal 1: Rebuttal: Many thanks for the reviewer’s statement that “Experimental results demonstrate that MP-Nav improves attack success rates while preserving model utility”, and the acknowledgment that “the method is simple and effective”. Kindly find our response below. 1 [Claims and Evidence (First two points)]: “It is hard to find where the claims 1 (C1) and 2 (C2) are validated. What is the intuition for claim 1 and claim 2? Any theoretical insights for the two claims?”\ **Response 1**: In the review, the C1 refers to "Not all concepts are equally vulnerable to disassociation," and the C2 refers to "Not all instances contribute equally." Kindly note that C1 and C2 are not our claims, but facts/observations that are backed by prior literature such as [1, 2, 3]. To choose robust instances to enhance poisons, we are inspired by [4]. Motivated by the facts C1 and C2, we proposed the method of MP-Nav. Thanks for pointing out this; we will revise the paper and further clarify the explanations. \ [1] Geometry-aware instance-dependent adversarial training, in ICLR 2021\ [2] Data-efficient backdoor attack, in IJCAI 2022\ [3] Towards effective clean label backdoor attacks, in Pattern Recognition 2023\ [4] BadLabel: A robust perspective on evaluating and enhancing label-noise learning, in TPAMI 2024 2 [Claims and Evidence (Comment C4)]: “Table 3 shows the utility of the method, which has a marginal difference as the baselines. I doubt if the testing is hard enough, as all methods share similar utility. It is not clear if this attack method will balance utility with attack effectiveness. Since the number of poisoned examples is not specified, it is hard to associate the attack result in Fig 2 with Table 3.” \ **Response 2**: In Table 3, we have chosen the standard benchmark test sets, Flickr-PASCAL has a 1K test set, and COCO has a 3.9K test set. The test set is large and comprehensive for evaluating overall utility. 
The attacker aims to maintain model utility while enhancing attack efficacy. In Flickr-PASCAL, with a training set of around 30K, we allow only 25 poisoned samples, and in the COCO training set (around 120K), we allow only 284 poisoned samples. A marginal or even no difference in Table 3 is exactly what the attacker is aiming for. Table 3 (model utility) should be read together with Table 2 and Figure 2 (attack efficacy), which together justify MP-Nav outperforming the baselines. We will revise the paper and make the explanation clear. 3 [Weakness comment]: “The effectiveness of the method lies in a small range of the number of poisoned samples 10-25. This makes the result less significant. I doubt whether the method is really necessary. An attacker can simply increase the number of poisoned instances to enhance the attack. I did not see a reason not to scale up the poisoned instances. Especially when 25 samples can already be very effective.”\ **Response 3**: Conceptually, the number of poisoned instances is related to the attacker budget: a more effective attack method prefers using less budget. For example, in decentralized learning, the attacker aims to control as few nodes as possible to conduct effective poisoning attacks. Moreover, injecting a larger number of poisoned instances increases the risk of detection (that is even the case for other attacks -- such as backdoors -- in existing literature). We will clarify this point further in the revision. 4 [Clarifications & Typos]: “Fig. 5 and 6 are hard to read” and “image captaining in Line 12” “Figues 2 should be Figure 2 in Line 331.”\ **Response 4**: Figures 5 and 6 are comprehensive results that back Table 2, the main results that justify the efficacy of MP-Nav. Thank you; we will re-work the figures to enhance readability, add more descriptions, and correct the mentioned typos.
CAT Merging: A Training-Free Approach for Resolving Conflicts in Model Merging
Accept (poster)
Summary: ## Summary The paper introduces CAT Merging, a training-free framework for merging multiple expert models while mitigating knowledge conflicts. Existing methods, such as task vectors, merge models by accumulating task vector weights, but conflicting components across tasks can lead to performance degradation. CAT Merging addresses this by formulating conflict resolution as an optimization problem to trim conflict-prone components. Experimental results show that CAT Merging significantly reduces knowledge conflicts and, in most cases, yields accuracy gains of a few percentage points over SoTA. ## Update after rebuttal I appreciate the response by the authors. I maintain my score. Claims And Evidence: Yes. I found the paper very clear in this respect. The presented claims are well supported by theoretical arguments and experiments. Methods And Evaluation Criteria: Yes, the method is thoroughly evaluated across multiple architectures and datasets. Additionally, the paper includes a well-conducted ablation study and sensitivity analysis. Theoretical Claims: I checked the correctness of the claims in the main paper. I didn't look into the proofs provided in the appendix. Experimental Designs Or Analyses: The experimental results seem sound, but the reviewer is concerned that the performance gains are relatively modest. Given that the proposed method does not achieve the best performance across all tested settings despite its solid theoretical foundation, it would be valuable to analyze and discuss the sources of suboptimality. Supplementary Material: Yes. The reviewer checked additional experiments, but not the theoretical proofs. Relation To Broader Scientific Literature: The authors provide a clear presentation of their method and its contributions to the scientific literature. I would like to bring their attention to the following concurrent work: https://openreview.net/forum?id=4wuvmJRAU4. Essential References Not Discussed: The references are sufficient.
Other Strengths And Weaknesses: The paper presents a theoretically grounded approach to conflict resolution in task vectors, making it an interesting read. However, the somewhat low score is primarily due to the method's weak performance compared to the baselines and the lack of clarity on the underlying reasons for this suboptimality. Other Comments Or Suggestions: In Sec. 5.1, it would be helpful to clarify the significance of the removal basis B. It is also unclear if B is a binary matrix. Explicitly stating this would improve clarity. Questions For Authors: What factors contribute to the suboptimal performance of CAT Merging? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Q4.1: Pay attention to the concurrent work.** **A4.1:** Thank you for highlighting the concurrent work, "Interfering with Interference: Blind Shuffling and Superposition for Better Multi-Model Compression," which addresses interference during multi-model merging through random layer shuffling and orthogonal transformations. We appreciate the relevance, as both this concurrent paper and our CAT Merging framework aim to mitigate interference between task vectors. However, there are fundamental differences between their approach and ours: **Motivation and Insight**: The concurrent work attributes interference primarily to task vector similarity and proposes randomization-based techniques (layer shuffling and task vector superposition) to increase orthogonality. In contrast, our work explicitly identifies conflict-prone components within task vectors and resolves them through targeted, parameter-specific trimming strategies. **Methodological Differences**: Their approach depends on randomized transformations, necessitating task-specific decoding (inverse transformations) at inference time. This requirement limits applicability in scenarios involving mixed-task batches. In contrast, CAT Merging enables seamless multi-task inference without task-specific decoding or per-sample routing, making it more suitable for practical deployments involving shared-task batches. We will include a discussion of this concurrent study in the related work section of our revised manuscript to position our contributions relative to this relevant work. **Q4.2: What factors lead to the suboptimal performance of CAT Merging?** **A4.2:** We thank the reviewer for highlighting this important point. While CAT Merging achieves superior **average** performance—improving accuracy by 2.5% (ViT-B/32) and 2.0% (ViT-L/14) compared to state-of-the-art methods—it is true that it does not always yield the highest accuracy on every individual dataset. 
Specifically, we observe that Fisher Merging exhibits better performance on certain datasets (e.g., Cars and SUN397), likely because its weighting mechanism, based on the Fisher information matrix, implicitly prioritizes tasks with weaker performance (larger gradients produce higher Fisher information scores). Conversely, PCB Merging achieves superior performance on datasets like SVHN and MNIST, where masking low-magnitude vector components implicitly favors tasks with stronger fine-tuning outcomes (assuming larger vector magnitudes correlate with greater task specialization). However, both Fisher and PCB merging tend to perform less consistently across other tasks. In contrast, our CAT Merging framework explicitly targets inter-task knowledge conflicts and aims for a balanced integration across tasks. To illustrate this balance quantitatively, we measured the standard deviation of accuracy drops (defined as the accuracy difference between task-specific models and the merged model) across tasks in the following table. It shows that CAT Merging demonstrates significantly lower variance, reflecting its ability to merge multiple tasks more evenly, without undue preference towards any particular task. We will clarify this analysis and discussion in the revised manuscript. | | Fisher Merging | RegMean | Task Arithmetic | PCB Merging | CAT Merging (ours) | | --- | --- | --- | --- | --- | --- | | ViT-B/32 | 13.78 | 8.46 | 8.95 | 6.85 | **6.21** | | ViT-L/14 | 6.79 | 6.86 | 5.11 | 3.49 | **2.51** | **Q4.3: What is the significance of the removal basis? Is it binary?** **A4.3:** Thank you for this helpful suggestion. We will clarify this point explicitly in the revised manuscript. Specifically, the removal basis $B$ is a real-valued matrix (not binary). It is optimized to define a task-specific subspace, within which conflict-prone components are identified and suppressed via orthogonal projection. 
The basis matrix $B$ thus plays a crucial role in effectively mitigating knowledge conflicts during the merging process.
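A minimal numerical sketch of the orthogonal-projection trimming described in A4.3, using the form $T(I - B B^\top)$ that the reviews quote for linear layers. The shapes and the random orthonormal basis below are illustrative assumptions, not the paper's actual construction:

```python
import numpy as np

rng = np.random.default_rng(0)
d, c = 16, 3  # hypothetical layer width and removal-subspace rank

# Real-valued removal basis with orthonormal columns (not a binary mask).
B, _ = np.linalg.qr(rng.standard_normal((d, c)))

T = rng.standard_normal((d, d))  # a task vector for one linear layer

# Suppress conflict-prone components via orthogonal projection: T(I - B B^T).
T_trimmed = T @ (np.eye(d) - B @ B.T)

# The trimmed task vector retains no component along the removal subspace.
print(np.allclose(T_trimmed @ B, 0.0))  # True
```

Since $B^\top B = I_c$, the projected task vector is exactly orthogonal to the span of $B$, which is what "suppressed via orthogonal projection" means here.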
Summary: The paper introduces Conflict-Aware Task Merging, a training-free model merging method that addresses knowledge conflicts in multi-task model merging. By knowledge conflicts, the paper means that existing methods, such as Task Arithmetic, suffer interference when integrating multiple fine-tuned task vectors, often resulting in performance degradation. CAT Merging mitigates these conflicts by selectively trimming conflict-prone components from task vectors using parameter-type-specific strategies: feature projection for linear weights, masking for scaling parameters in normalization layers, and masking for shifting parameters. The method is evaluated on vision and vision-language tasks, demonstrating up to 4.7% accuracy improvement on ViT-B/32 and 2.0% on ViT-L/14 compared to state-of-the-art model merging techniques. Claims And Evidence: S1: The paper is well-written and clearly organized, with detailed algorithmic descriptions (e.g., Algorithm 1) and thorough theoretical derivations (e.g., Theorem 5.1), making it easy for readers to understand the problem, methodology, and experiments. Additionally, the related work section is logically organized and highly readable; Methods And Evaluation Criteria: S3: The method is sound, and the idea of modeling the knowledge conflict through inter-task knowledge conflict and intra-task knowledge deviation is good. Additionally, the proposal and the design of Φ_k are well-designed; Theoretical Claims: S2: The mathematical derivations in the paper are well executed, with sufficient details provided throughout. Additionally, the proofs included in the appendix are clear and well-structured, making it easy for readers to follow and understand; S4: The approach decomposes the global optimization problem into layer-wise sub-problems, which not only simplifies the complex merging process but also provides valuable theoretical insights.
W7: The method’s reliance on theoretical assumptions (e.g., Lipschitz continuity) might not fully capture the complexities encountered in real-world scenarios; a discussion on potential limitations would be valuable. W9: While projection for linear weights and masking for normalization/shift parameters seem reasonable, the paper does not provide a principled justification for why these are the best strategies. Experimental Designs Or Analyses: W1: The paper provides an analysis of previous methods, such as Fisher Merging and Ties-Merging, which are discussed in sections like “Knowledge Conflict” and “Introduction”. However, the sentence in the Introduction --- “As the example shown in the figure 1, magnitude-based methods such as Ties-Merging prioritize task vectors with larger magnitudes (e.g., Task 1) while trimming dimensions with smaller magnitudes, inadvertently discarding critical information from Task 2.” may require further clarification. In contrast, the statement in the Knowledge Conflict section --- “Most existing methods such as Fisher Merging or Ties Merging, implicitly prioritize preserving intra-task knowledge while neglecting the inter-task knowledge conflict” is more direct and could be placed earlier in the paper to better introduce the problem. W6: The experimental evaluation does not include studies on large language models (LLM), which may limit the understanding of the method’s applicability in broader contexts. W8: The paper evaluates CAT Merging on ViT and BLIP models, but does not test its applicability to LLMs. W10: While training-based methods have additional costs, it would be useful to see a comparison against test-time adapted merging techniques to better contextualize the trade-offs. Supplementary Material: Yes, I have reviewed the supplementary material. W4: In Appendix A.1, the paragraphs “Single-layer Expansion” and “Summing Over All Layers” may have some unclear expressions.
For instance, the formula in “Summing Over All Layers”, ∥z(W)-z(W')∥ = ∥f^L(W^L+ΔW^L) - f^L(W^L)∥, could potentially mislead the reader into thinking that the change in the final output z is only related to the change in the W^L of the L-th layer. Additionally, in the formula in “Single-layer Expansion”, based on the assumptions, it seems that the change in W^l on the left-hand side of the equation may not need to be emphasized. Relation To Broader Scientific Literature: S5: The approach is training-free and only relies on a lightweight forward pass with a small number of exemplars. Essential References Not Discussed: N.A. Other Strengths And Weaknesses: The writing is clear, and the logic is well-structured. The method is novel and sound, and I appreciate the modeling of knowledge conflict, the proposal of the Φ_k operation, and the design of the parameter-specific strategies. Additionally, I think the method is reproducible. Considering the strengths and weaknesses discussed above, I find this paper to be a solid contribution. I also hope that the work presented in this paper will see broader applications in real-world settings. Other Comments Or Suggestions: W2: The formatting of Equation (7) appears to have an issue, as there seems to be an extra Δ symbol. W3: There seems to be an issue with Equation (15), where it is likely missing parentheses (( and )). Additionally, in the term ∥T_i^l + T_i^l∘m_k^l∥^2, there should be a - instead of a +. W5: Additional comparisons on inference speed and computational overhead during the merging process would offer a more complete evaluation of the method’s efficiency. Questions For Authors: Please refer to the above comments. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Q3.1: Writing issues (W1-4).** **A3.1:** Thanks for the suggestions. We will revise them and thoroughly double-check the manuscript to avoid similar issues. **Q3.2: Comparisons on inference speed and computational overhead (W5).** **A3.2:** **The inference speed** remains consistent with that of the individual models, as CAT Merging produces a unified model to conduct inference without adding extra layers. **The computational overhead** of CAT Merging is reasonable and practically efficient. Specifically, CAT Merging involves two main steps: 1. **Feature Extraction:** This step is lightweight and efficient, requiring only a small number (2–3 per task) of unlabeled samples. 2. **Eigendecomposition:** While eigendecomposition has theoretically higher computational complexity, in practice, we efficiently mitigate this through GPU parallelization. Moreover, CAT Merging only requires the eigenvectors corresponding to the top-*c* (2-4 in our work) eigenvalues, enabling further acceleration through specialized methods (e.g., `torch.lobpcg`). Empirical results (provided in the table below, measured on a single RTX3090 GPU in seconds) demonstrate that CAT Merging significantly outperforms training-based counterparts (e.g., TA w/ Surgery, AdaMerging) in terms of computational efficiency. | | ViT-B/32 | ViT-L/14 | | --- | --- | --- | | PCB Merging | 43 | 131 | | CAT Merging (ours) | 46 | 150 | | TATR | 176 | 283 | | TA w/ Surgery | 12621 | 36826 | | AdaMerging | 8276 | 16299 | **Q3.3: Comparisons of LLMs (W6 and W8).** **A3.3:** Thanks for your suggestion. To further validate CAT Merging on language tasks, we conducted additional experiments using RoBERTa as the backbone model on the GLUE benchmark, which comprises eight diverse NLP tasks, including classification and regression (STS-B). We report accuracy for classification tasks and the mean of Pearson and Spearman correlations for the regression task. 
As summarized in the table below, CAT Merging consistently achieves superior average performance compared to existing state-of-the-art merging methods, demonstrating its effective generalization and robustness in language model merging scenarios. | **Algorithm** | **cola** | **mnli** | **mrpc** | **qnli** | **qqp** | **rte** | **sst2** | **stsb** | **Average** | #best | | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | | Task Arithmetic | 6.68 | 66.23 | **78.46** | 78.62 | 72.69 | 53.43 | 83.49 | 27.10 | 58.34 | 1 | | Ties-Merging | 9.46 | 59.34 | 74.71 | 65.93 | 41.29 | 47.29 | 72.13 | 9.210 | 47.42 | 0 | | PCB Merging | 11.40 | 50.85 | 77.63 | 78.22 | 55.78 | 60.29 | 75.57 | **67.01** | 59.59 | 1 | | CAT Merging (Ours) | **33.20** | **72.33** | 68.22 | **82.92** | **76.05** | **62.82** | **89.33** | 15.57 | **62.56** | **6** | **Q3.4: The theoretical assumptions do not capture the complexities of real-world scenarios (W7).** **A3.4:** We agree with the reviewers that theoretical assumptions, such as Lipschitz continuity, may not fully capture the complexities encountered in real-world scenarios. Nevertheless, we would like to emphasize that the primary role of Theorem 5.1 is to provide theoretical insight and motivation for our layer-by-layer trimming strategy, rather than deriving tight practical bounds. For further details, please see our response A2.6. In the final version of our manuscript, we will clarify this point further and include an explicit discussion on the potential limitations of our method. **Q3.5: Are the projection and masking the best strategies (W9)?** **A3.5:** Good question. Our proposed strategies—projection for linear weights and masking for normalization parameters—are empirically motivated heuristics chosen for their computational efficiency and practical effectiveness in mitigating parameter conflicts. 
While these methods achieve strong empirical results, we acknowledge they are not theoretically guaranteed to be optimal. In future work, we aim to explore principled approaches, such as weighted averaging or Mixture-of-Experts (MoE), to further refine and theoretically ground our conflict mitigation techniques. **Q3.6: Comparing with training-based methods (W10).** **A3.6:** Thank you for this suggestion. The following table shows that our CAT Merging achieves comparable or superior performance relative to two representative training-based techniques, demonstrating its effectiveness without incurring additional computational costs. | | ViT-B/32 | ViT-L/14 | | --- | --- | --- | | TA w/ Surgery | 80.9 | 89.0 | | AdaMerging | 81.1 | 91.0 | | CAT Merging (ours) | 78.3 | 89.6 |
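As a toy illustration of the masking strategy for normalization parameters discussed in A3.5 (the per-entry conflict scores, shapes, and selection rule here are invented for the sketch, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
d, c = 8, 2  # hypothetical normalization width; number of entries to mask

T = rng.standard_normal(d)  # task vector for a scaling parameter
score = rng.random(d)       # hypothetical per-entry conflict score

# Binary mask m selecting the c most conflict-prone entries.
m = np.zeros(d)
m[np.argsort(score)[-c:]] = 1.0

# Trim by zeroing the masked entries: T - T ∘ m, i.e. T * (1 - m).
T_trimmed = T - T * m

print(np.count_nonzero(T_trimmed * m))  # 0: masked entries are removed
```

Unlike the projection used for linear weights, masking acts elementwise, so the unmasked entries of the task vector pass through unchanged.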
Summary: The paper proposes a novel model training-free model merging algorithm that removes the conflicting components of task vectors. This is done in a round robin fashion; for each task vector, the conflicting components of each other task vector are computed and removed from them. This is done with a projection for linear layers and masking for others. The method requires task-specific data, but authors show that even 1 sample per dataset yields good performance. The paper includes the standard 8-task CLIP benchmark proposed by the original task arithmetic paper as well as a vision-language benchmark. Claims And Evidence: The claims made in the submission are supported by empirical evidence. They would be stronger if LLM benchmarks were included, as in most model merging works. Methods And Evaluation Criteria: The proposed methods and evaluation criteria do make sense. The authors should include all baselines from Tables 2 and 3 in their final experiment presented in Table 4 for completeness. Theoretical Claims: There are some issues with the theoretical claims in the paper that I encourage the authors to fix and/or clarify. The authors present their method for various types of layers in subsections 5.1, 5.2 and 5.3. A derivation is given and the proofs are provided in the appendix. In all cases, the end results refer to “top $c$ eigenvalues or components”. However, in Appendices A.3, A.4, A.5 the “proof” says that this is **"implied"** and does not provide any reasoning about how to actually optimize for $c$. I believe that this makes the derivation too informal and undermines the paper. Furthermore, the bound provided in Theorem 5.1 is vacuous and does not provide any insight. Experimental Designs Or Analyses: The experiments are sound and valid, following the standard practices of the field. Supplementary Material: I reviewed Appendix A, containing the proofs.
Relation To Broader Scientific Literature: The paper adequately reviews the related work and provides multiple recent baselines in the experimental section. Essential References Not Discussed: I think all essential references are discussed. Other Strengths And Weaknesses: ## Strengths * The paper provides an algorithm that can work with as few as 1 sample per task to remove the conflicting components in the task vectors. This is a major strength of the paper and should be highlighted more imo. * The performance of the proposed method is strong across multiple benchmarks and the experimental validation includes multiple baselines. * The analysis of knowledge conflict is illuminating. * Afaik, the idea is original and the result significant. Similar methodologies, such as Ties are referenced in the paper. ## Weaknesses 1. The theoretical claims are not well supported \- see comment on “Theoretical Claims”. 2. The writing of the paper can be improved: 1. Motivation is task vectors with differing magnitudes (see Figure 1). However, the benchmarks are about vision classification with CLIP where the norms are approximately the same for all tasks. Hence, the motivation should be modified to mention the important task directions. 2. The mathematical formulations are too verbose and not enough intuition is provided. For instance, in Section 5: no need to have the superscript $l$, it makes the notation cumbersome. Similarly, Eq. 9 does not need the sum. 3. L149-150: assuming alpha=1 is clearly wrong, since the performance of model merging papers highlights the scaling as a very important parameter. It should not be glossed over for brevity. 4. Theorem 5.1: isn't the upper bound vacuous? 5. The paper claims that the procedure is lightweight but does not offer actual runtimes for comparison. Other Comments Or Suggestions: * L86 (2nd col) fix Ilharco citation with `\citet` * L159 fix Guillermo et al.
* L159 “perfect weight disentanglement” is not a proper term, since weight disentanglement has not been introduced at all at this stage * Include baselines in Figure 2. For figure 2a only present one model because the difference in performance between the two models might make the curves look flatter than they are. Questions For Authors: 1. How are the exemplars selected in Figure 2? 2. What exactly do you mean by “shift” parameters? The biases? 3. Are attention layers treated as linear layers? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: **Q2.1: Results on LLM.** **A2.1:** Thanks for your suggestion. We conducted additional experiments using RoBERTa as the backbone model on the GLUE benchmark. As summarized in A3.3 below, CAT Merging consistently achieves superior average performance compared to existing state-of-the-art merging methods, demonstrating its effective generalization and robustness in language model merging scenarios. **Q2.2: Should all baselines from Tables 2 and 3 be included in Table 4?** **A2.2:** Thank you for this suggestion. Table 4 is intended as an ablation study specifically designed to analyze individual components within our proposed method; therefore, including all baselines from Tables 2 and 3 may not align well with its purpose. Could you please clarify whether you instead suggest adding all baselines to Table 3? If so, we will gladly incorporate the additional baselines to ensure a comprehensive and thorough evaluation. **Q2.3: How do the proofs in Appendices A.3, A.4, A.5 “imply top c eigenvalues or components”, and how is c optimized?** **A2.3:** Thank you for pointing out this issue. Specifically, the parameter c is a practical, data-dependent hyperparameter chosen to balance performance stability and exemplar efficiency. Given a fixed c, the optimality of selecting the top c eigenvectors can be rigorously justified. For instance, Eq. (21) in A.3 leads to the following optimization form: $\mathrm{Tr}(B^\top G B)=\sum_d \lambda_d \|B^\top v_d\|^2$. This is a well-known optimization problem whose solution is directly obtained via the Courant–Fischer theorem, establishing that the optimal choice of $B$ consists precisely of the eigenvectors corresponding to the top c eigenvalues of $G$. We will provide a clearer, step-by-step derivation of this result in the revised appendix. **Q2.4: Modify the motivation to highlight the important task directions instead of magnitude issues.** **A2.4:** Thank you for this insightful suggestion.
We will clarify our motivation to reflect that CAT Merging explicitly identifies and trims conflict-prone components based on their directional contributions to knowledge conflicts rather than relying solely on vector magnitudes. **Q2.5: Should not assume α=1.** **A2.5:** Thank you for raising this point. The assumption of α=1 was introduced solely to simplify the theoretical analysis, as the scaling factor α does not affect the resulting conclusion. For example, in Eqs. (9) & (10), if we explicitly include α, the optimal vector B corresponds to the eigenvector of the matrix: $\sum_{i\neq k} {(\alpha T_i)}^\top( {X_k}^\top X_k - \lambda {X_i}^\top X_i) {(\alpha T_i)}=\alpha^2\sum_{i\neq k} {T_i}^\top( {X_k}^\top X_k - \lambda {X_i}^\top X_i) {T_i}$. Since scaling a matrix by a nonzero constant α² does not alter its eigenvectors, our conclusions remain valid. Nevertheless, we agree with the reviewer that, in practical model merging, α is a critical hyperparameter that must be tuned carefully. We will provide necessary clarifications in the revised manuscript. **Q2.6: Isn't the upper bound vacuous in Theorem 5.1?** **A2.6:** We acknowledge that the bound presented in Theorem 5.1 is relatively loose. However, this bound is primarily intended to provide conceptual insight into the motivation underlying our layer-wise trimming strategy. Specifically, Theorem 5.1 establishes the inequality: $| L(W) - L( W + \Delta W) | < \beta \sum_{l=1}^L \Bigl(\prod_{m=l+1}^L \gamma_m\Bigr) \| f^l(W^l + \Delta W^l) - f^l(W^l) \|$. This inequality highlights that reducing layer-specific conflicts (the right-hand side) directly contributes to controlling the difference in model performance (the left-hand side). Thus, even though the bound itself is not tight, it provides theoretical justification for our subsequent conflict-aware trimming strategies at each layer. **Q2.7: Comparison of runtimes.** **A2.7:** Please see A1.3 in our response to C9mN.
**Q2.8: Figure 2 needs improvement.** **A2.8:** Thanks. We update Figure 2 in [sensitivity-experiment.png](https://postimg.cc/HjPTnbtt), which now more clearly demonstrates that our method remains stable over a reasonable range, with respect to both the number of exemplars and the value of α. **Q2.9: How are the exemplars selected in Figure 2?** **A2.9:** The exemplars are selected randomly, and we report the average performance of three random runs. **Q2.10: Do “shift” parameters mean biases?** **A2.10:** Yes. They include both the bias parameters of linear layers and the shift parameters in normalization layers. Since they share the same trimming form, we treat both of them as “shift” parameters for brevity. **Q2.11: Are attention layers treated as linear layers?** **A2.11:** We decompose each attention layer into several linear layers and treat them separately (e.g., three linear layers for Q/K/V calculations). **Q2.12: Writing issues (verbose formulations and citations).** **A2.12:** We will revise accordingly in the final version.
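The Courant–Fischer argument in A2.3 is easy to check numerically. The sketch below uses an arbitrary symmetric positive semi-definite matrix as a stand-in for $G$ (the actual conflict matrix is not reproduced here) and verifies that the top-$c$ eigenvectors maximize $\mathrm{Tr}(B^\top G B)$ over orthonormal bases:

```python
import numpy as np

rng = np.random.default_rng(0)
d, c = 32, 3  # hypothetical dimension and number of retained components

A = rng.standard_normal((d, d))
G = A @ A.T  # arbitrary symmetric PSD stand-in for the conflict matrix

vals, vecs = np.linalg.eigh(G)  # eigenvalues in ascending order
B_top = vecs[:, -c:]            # eigenvectors of the top-c eigenvalues

# Tr(B^T G B) attains the sum of the top-c eigenvalues at B = B_top ...
print(np.isclose(np.trace(B_top.T @ G @ B_top), vals[-c:].sum()))  # True

# ... and no other orthonormal c-frame does better (Courant–Fischer / Ky Fan).
B_rand, _ = np.linalg.qr(rng.standard_normal((d, c)))
print(np.trace(B_rand.T @ G @ B_rand) <= vals[-c:].sum() + 1e-9)  # True
```

This also matches reviewer 1's later remark that the value of Eq. (21) equals the top-$c$ eigenvalue mass of $G$.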
Summary: This paper proposes Conflict-Aware Task Merging (CAT Merging), a training-free method to combine multiple fine-tuned models while alleviating knowledge conflicts that degrade performance when merging. The core idea is to selectively trim conflict-prone components from each task’s weight update (“task vector”) instead of simply adding them. The approach applies parameter-specific strategies – projecting linear layer weight updates and masking normalization scale/shift parameters. Claims And Evidence: The claims made in the submission are generally supported. Methods And Evaluation Criteria: The proposed methods and/or evaluation criteria (e.g., benchmark datasets) make sense. Theoretical Claims: I have carefully reviewed all the theoretical formulations and proofs, and from my perspective, most of them appear to be correct. However, I have concerns regarding Theorem 5.1. The theorem seems to assume that all linear layers are stacked sequentially, which is not always the case in practical architectures. For example, in Transformer attention layers, the mathematical formulation involves a multiplicative interaction: $$z=\sigma\big(X W_Q (X W_K)^T\big) X W_V = \sigma\big(X W^{KQ} X^T\big) X W_V, \quad W^{KQ} = W_Q W_K^T$$ where three linear layers are multiplied together rather than applied in a purely sequential manner. This multiplicative structure makes the Lipschitz continuity assumption less reliable, as the final output’s dependence on input perturbations is quadratic or even cubic (at least $XW^{KQ}X^T$ is quadratic to the input). Given that the experimental results in this paper mainly focus on Transformer architectures, could the author explain this assumption further? Experimental Designs Or Analyses: The experiments look decent. Supplementary Material: I read all the supplementary material. Relation To Broader Scientific Literature: This work situates itself at the intersection of multi-task learning and model merging, building upon and extending prior research in both domains.
The authors provide a solid literature review (Section 2) that distinguishes traditional multi-task learning from the newer paradigm of model merging. Essential References Not Discussed: I am not familiar with multi-task literature and therefore unaware of important references. Other Strengths And Weaknesses: Strengths: - The paper presents a novel solution to an important problem. This paper takes the concept of knowledge conflicts into consideration in the problem of task merging. - The implementation of training-free conflict-aware merging is elegant. This work explicitly formulated and solved the dual-objective conflict minimization per layer. - The experiment results are impressive and comprehensive. Weakness: - Over-prioritization vs. magnitude. The paper mentions that differences in task vector magnitudes can lead to over-prioritization of certain task vectors, resulting in problematic merging. However, the proposed trimming method does not seem to fully resolve this issue. For example, for linear layers, the trimmed task vector is $T(I - B^l (B^l)^\top)$. The multiplication between the matrix $T$ and $B^l (B^l)^\top$ indicates that a task vector with a large magnitude before trimming will likely still have a large magnitude after trimming. As a result, the same over-prioritization problem mentioned in the Introduction could persist. - High computational complexity. Although computing the trim matrices $B_k^l$ and $m_k^l$ does not require explicit training, it relies on eigenvector computation, which has a computational complexity of $O(n^3)$. This can be a significant bottleneck, especially when dealing with large-scale models where the hidden dimension can be extremely high (e.g., 4096). - Activation and non-linear functions. All the formulas and implementations in this paper only consider the linear matrices. However, activation functions are an important part of neural networks, and this paper doesn’t discuss them at all.
Could these activation functions further simplify the trim matrices? For example, after ReLU, some parts of the outputs are deactivated, and we don’t need to consider the corresponding trim matrices. Other Comments Or Suggestions: More comments. - Figure 4(b) shows that the model performs best when c is relatively small (around 2 to 4). However, the trimmed task vector $TB_i^l (B_i^l)^\top$ lies within the column space of $B$, meaning that only a very low-dimensional subspace (2–4 dimensions) is being trimmed. Does this suggest that knowledge conflicts are not as severe as initially claimed? - Figure 4(a) shows that the model performs stably when lambda is relatively large. Since a small lambda places greater emphasis on inter-task knowledge conflict, we would expect the model’s performance to also be good if knowledge conflict is important. To fully understand the impact of the trade-off between knowledge preservation and conflict suppression, it would be more informative to include experiments where lambda falls in (0, 1). - The value of Eq. (21) is exactly the top c eigenvalues of the matrix $G$. Could the authors provide an energy plot that visualizes the eigenvalues of the matrix $G$? Such a plot would help assess whether the eigenvalues decay sharply or remain relatively large beyond c components. Questions For Authors: Aforementioned in previous sections. Code Of Conduct: Affirmed. Overall Recommendation: 3
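For concreteness, the eigen-energy check requested above can be produced with a few lines of numpy. This is a hypothetical illustration on a random stand-in for $G$, not the paper's actual conflict matrix; all names are ours:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the matrix G in Eq. (21): any symmetric PSD matrix.
X = rng.standard_normal((256, 64))
G = X.T @ X

# Eigenvalues sorted in descending order.
eigvals = np.linalg.eigvalsh(G)[::-1]

def energy_ratio(eigvals, c):
    """Fraction of total eigenvalue 'energy' captured by the top-c eigenvalues."""
    return eigvals[:c].sum() / eigvals.sum()

# Plotting np.cumsum(eigvals) / eigvals.sum() gives the requested energy plot.
ratios = [energy_ratio(eigvals, c) for c in (2, 4, 8, 64)]
print(ratios)
```

If the first few ratios are already close to 1, conflicts concentrate in a low-dimensional subspace, which is what a small choice of c would exploit.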
Rebuttal 1: Rebuttal: **Q1.1: Is the Lipschitz continuity assumption becoming less reliable in Transformer architectures?** **A1.1**: We thank the reviewer for this insightful observation. Indeed, the multiplicative interactions in Transformer architectures complicate the Lipschitz continuity assumption. However, given that both the network parameters and the input data are practically bounded, we can still derive a sufficiently large Lipschitz constant, under which Theorem 5.1 remains valid—albeit with a looser upper bound. Importantly, we would like to emphasize that the primary role of Theorem 5.1 is to provide theoretical insight and motivation for our layer-by-layer trimming strategy, rather than deriving tight practical bounds. Please see response A2.6 for additional details. **Q1.2: Is the trimming method proposed to solve the magnitude issue?** **A1.2**: Not exactly. The proposed trimming method specifically addresses knowledge conflicts during model merging rather than magnitude issues alone. Magnitude-based techniques (e.g., Ties-Merging) attempt to resolve conflicts by masking low-magnitude components. However, as illustrated in Fig. 1, simply masking low-magnitude components does not fully eliminate conflicts, since high-magnitude components can also cause significant interference. In contrast, our CAT Merging explicitly identifies and trims components based on their actual contribution to knowledge conflicts, rather than simply their magnitude. This targeted strategy more effectively mitigates conflicts, leading to improved merging performance. **Q1.3: Does CAT Merging have a high computational complexity?** **A1.3:** The computational overhead of CAT Merging is reasonable and practically efficient. Specifically, CAT Merging involves two main steps: 1. **Feature Extraction:** This step is lightweight and efficient, requiring only a small number (2–3 per task) of unlabeled samples. 2. 
**Eigendecomposition:** While eigendecomposition has theoretically higher computational complexity, in practice, we efficiently mitigate this through GPU parallelization. Moreover, CAT Merging only requires the eigenvectors corresponding to the top-*c* (2-4 in our work) eigenvalues, enabling further acceleration through specialized methods (e.g., `torch.lobpcg`). Empirical results (provided in the table below, measured on a single RTX3090 GPU in seconds) demonstrate that CAT Merging performs much faster than training-based counterparts (e.g., TA w/ Surgery, AdaMerging). | | ViT-B/32 | ViT-L/14 | | --- | --- | --- | | PCB Merging | 43 | 131 | | **CAT Merging (ours)** | 46 | 150 | | TATR | 176 | 283 | | TA w/ Surgery | 12621 | 36826 | | AdaMerging | 8276 | 16299 | **Q1.4: Could activation functions simplify the trimming matrices?** **A1.4:** We agree that a deeper analysis of activation functions would be beneficial. While CAT Merging does not explicitly model activation functions, it implicitly captures their effects through the layer-wise trimming strategy. For instance, activation functions such as ReLU can cause certain dimensions to consistently remain inactive. Referring to Section 5.2 of our paper, the trimming mask for scaling parameters is computed as: $$\sum_{i\neq k} \left( \sum_{x^l_k} ( x^l_k\circ T^l_i )^2 - \lambda \sum_{x^l_i} ( x^l_i\circ T^l_i)^2 \right)$$ If the $d$-th dimension remains consistently inactive (i.e., $x_k^l[d] \equiv x_i^l[d] \equiv 0$), the corresponding element in the trimming mask naturally becomes zero. Thus, activation functions indirectly simplify the trimming process, even though they are not explicitly modeled in CAT Merging. We will clarify this point explicitly in our revised manuscript. 
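The inactive-dimension argument in A1.4 is easy to verify numerically. Below is a minimal numpy sketch (the shapes, names, and ReLU features are our own illustrative assumptions, not the authors' implementation): a dimension that is identically zero across all tasks' features receives a mask score of exactly zero.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 32, 8, 1.0

# Post-ReLU features for task k and another task i; dimension 3 is "dead".
x_k = np.maximum(rng.standard_normal((n, d)), 0.0)
x_i = np.maximum(rng.standard_normal((n, d)), 0.0)
x_k[:, 3] = 0.0
x_i[:, 3] = 0.0

T_i = rng.standard_normal(d)  # task vector for the scaling parameters

# Per-dimension score from the formula above:
# conflict on the other task's features minus lambda * own knowledge term.
score = ((x_k * T_i) ** 2).sum(axis=0) - lam * ((x_i * T_i) ** 2).sum(axis=0)
print(score[3])  # the dead dimension contributes exactly zero
```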
**Q1.5: Why should only a very low-dimensional subspace (2-4 dimensions) be trimmed as suggested in Figure 4(b)?** **A1.5:** In Figure 4(b), trimming only a low-dimensional subspace (2–4 dimensions) is sufficient because knowledge conflicts are predominantly concentrated within a few principal dimensions. Specifically, as shown in [eign.png](https://postimg.cc/jCQwQn8Z), the first few eigenvectors represent the most significant directions, accounting for an average of 78.56% (ViT-B/32) and 87.28% (ViT-L/14) of the total eigenvalues. Thus, while conflicts are indeed severe, their severity primarily manifests along these critical dimensions. Moreover, trimming a higher-dimensional subspace risks unnecessarily degrading the original task performance, as it could remove important task-specific information. Therefore, selecting this low-dimensional subspace effectively balances conflict mitigation and preservation of model performance. **Q1.6: λ in Figure 4(a) should be tuned in the range of (0, 1).** **A1.6:** As suggested, we conducted additional experiments with λ values in the (0, 1) range. The results, provided in [sensitive-lambda.png](https://postimg.cc/xcsrvQkS), indicate that model performance consistently improves as λ increases from 0 toward 1, peaking around 5.
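For readers following this exchange, the linear-layer trimming $T(I - B^l (B^l)^\top)$ under discussion is an orthogonal projection. A hypothetical numpy sketch (our own construction, not the paper's code) confirming that the trimmed task vector retains no component in the conflict subspace spanned by $B$:

```python
import numpy as np

rng = np.random.default_rng(0)
d, c = 64, 4

# Orthonormal basis B (d x c) for the top-c conflict directions.
B, _ = np.linalg.qr(rng.standard_normal((d, c)))

T = rng.standard_normal((d, d))          # task vector for a linear layer
T_trim = T @ (np.eye(d) - B @ B.T)       # project out the conflict subspace

# Nothing of the trimmed vector remains in span(B).
print(np.abs(T_trim @ B).max())
```

Note this also illustrates the reviewer's magnitude point: the projection removes the component in span(B) but leaves the remaining magnitude of T untouched.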
Pretraining Generative Flow Networks with Inexpensive Rewards for Molecular Graph Generation
Accept (poster)
Summary: The paper introduces Atomic GFlowNets (A-GFN), a novel generative model for molecular graph generation that leverages individual atoms as building blocks to explore drug-like chemical spaces more comprehensively. It adopts a pretraining mechanism using the ZINC dataset, where A-GFN learns from inexpensive yet informative molecular descriptors such as drug-likeness, and presents a goal-conditioned finetuning process to adapt A-GFN for downstream optimization tasks. Claims And Evidence: The authors provide detailed experimental results that support the claims of the proposed A-GFN framework in tasks including property optimization, property targeting, and property-constrained optimization. Methods And Evaluation Criteria: The methods and evaluation criteria are well-aligned with the molecular generation task. Theoretical Claims: The paper applies existing GFlowNet theory. Experimental Designs Or Analyses: The authors provide detailed descriptions of their experiments on the ZINC dataset pretraining and 3 downstream optimization tasks, comparing their approach to fragment-based GFlowNets, REINVENT, and GraphGA. The ablation studies and sensitivity analyses provide valuable insights into the impact of different design choices Supplementary Material: Yes, I reviewed the supplementary parts. Relation To Broader Scientific Literature: This paper extends fragment-based GFlowNets to atomic action spaces. Its pretraining aligns with unsupervised RL pretraining but adapts to molecular design via hybrid online-offline learning. Essential References Not Discussed: n/a Other Strengths And Weaknesses: Strengths: 1. The paper introduces Atomic GFlowNets (A-GFN), a novel generative model that leverages individual atoms as building blocks, allowing for a more comprehensive exploration of drug-like chemical spaces compared to traditional fragment-based methods. 2. 
The authors provide a thorough evaluation of A-GFN across various tasks, including property optimization, property targeting, and property-constrained optimization, showing superior performance compared to other methods. Weaknesses: 1. Figures (e.g., Figures 3, 5, 6, 7, 8, 9, and 10) are blurry or inconsistently scaled, hindering interpretation of molecular structures and training dynamics. 2. Post-hoc validity corrections (e.g., RDKit filtering) during finetuning muddy the contribution of atomic-level modeling. This undermines claims about the standalone effectiveness of the pretrained policy. For instance, the fragment-based GFlowNet baseline might not explicitly enforce validity via RDKit during generation, while A-GFN inherently guarantees validity by design. This creates an unfair advantage for A-GFN. 3. The paper does not provide computational cost comparisons between A-GFN and other methods (fragment-based GFlowNets, REINVENT), preventing a clear assessment of performance-to-resource tradeoffs. Given the substantial pretraining investment (4 A100 GPUs for 12-18 days), explicit efficiency comparisons would help assess whether the atomic-level modeling advantages justify these increased computational demands. 4. The paper constructs an extensive framework that somewhat obscures its core contributions. The structure needs to better emphasize how the atomic-level design specifically enhances pretraining capabilities, rather than focusing excessively on comparisons with non-pretrained methods. The introduction of more data through pretraining potentially creates unfair comparisons (for instance, pretraining data from ZINC may partially overlap with evaluation tasks like logP optimization), leading to somewhat trivial conclusions when compared with non-pretrained methods. Additionally, the pretraining comparisons are affected by the design concerns mentioned in 2. regarding validity enforcement. 
Based on these observations, I believe this work requires further revision to better highlight and clarify its actual contributions. 5. The paper does not seem to provide detailed descriptions of the implementation of the baseline methods. The baselines, such as fragment-based GFlowNets, MolDQN, Reinvent, and others, are mentioned in the context of comparison, but the paper does not delve into the specifics of how these baseline methods were implemented or fine-tuned. This makes the experimental results difficult to interpret, as differences in implementation details could significantly impact performance comparisons. Other Comments Or Suggestions: Please refer to weaknesses Questions For Authors: Please refer to weaknesses Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive feedback. Our responses and proposed revisions for the concerns raised by the reviewer are as follows. # 1 We will ensure that figure sizes remain consistent throughout the appendix, particularly improving the font readability of Figure 3. Thank you for pointing this out. We have separated the molecules into separate high-resolution images for better legibility: https://imgur.com/4pAWr4Q; https://imgur.com/a/jGxFnhY; https://imgur.com/a/uroAmCe; https://imgur.com/a/0biixYL. We’ll update the figure in the revised version of the paper. These molecules were sampled from the same pretrained model as before. # 2 Our approach does not involve post-hoc RDKit filtering or penalization. Instead, A-GFN ensures atomic valency correctness at every generation step, preventing invalid states. This is crucial for atom-based GFlowNets, which operate in a vast combinatorial space, unlike fragment-based GFN, which is limited to predefined fragments. Without valency constraints, the model would frequently generate infeasible structures, making exploration intractable. Molecular validity is just one of many evaluation criteria. The key metric #Modes measures how many diverse molecules satisfy all four pretraining conditionals (TPSA, Num Rings, QED, SAS). A-GFN significantly outperforms fragment-based GFN in this regard, demonstrating superior exploration, not merely benefiting from valency constraints. Thus, valency enforcement is an inherent design choice, not a post-hoc filtering step, and does not confer an artificial advantage. # 3 All baselines (Fragment-GFN, REINVENT, MolDQN, GraphMCTS) were fine-tuned under identical compute constraints: 24 hours on a single V100 GPU, ensuring a fair comparison. 
A-GFN pretraining lasted ~12 days on 4 A100 GPUs (250K steps), but a 100K-step checkpoint (~5 days) was sufficient for fine-tuning, as reward plateaued (see https://imgur.com/a/hLITsk5 ; to be included in the revised paper). While pretraining incurs a one-time cost, it benefits multiple downstream tasks. Once pretrained, A-GFN fine-tuning takes just 24 hours on a V100, making it comparable to molecular optimization baselines. This aligns with established pretraining and finetuning practices seen in ChemBERTa and MoLFormer. # 4.1 We would like to clarify that our primary contribution is not that atomic-level design enhances pretraining, but rather the other way around, i.e., pretraining is essential for A-GFN to function effectively. Without pretraining, A-GFN fails to generate viable molecules. Our experiments confirm that training A-GFN from scratch (Task train AGFN) consistently fails to satisfy task constraints. # 4.2 Using ZINC for pretraining could introduce overlap with evaluation tasks like logP optimization, but it is common for large-scale pretraining data to contain some solutions to the downstream task, since a broadly representative pretraining data distribution is desired. Zero-shot OOD molecule generation is non-trivial, even for experts. To ensure benchmarking remains challenging, we chose property thresholds so that <25% of ZINC molecules meet the optimization criteria. Additionally, our Novelty metric measures unique molecules not present in ZINC, confirming that solutions stem from exploration rather than memorization. # 4.3 Some baselines (Fragment-GFN, REINVENT) were pretrained using their respective methods, while others (MolDQN, GraphMCTS) were trained from scratch, per the PMO benchmark (https://github.com/wenhao-gao/mol_opt/tree/main). We used publicly available pretrained models where applicable, ensuring fair comparisons. 
A-GFN’s superior performance is due to better exploration, not an unfair pretraining advantage. # 5 We acknowledge that our paper does not include detailed descriptions of the baseline implementations, and we appreciate you bringing this to our attention. All baseline methods used in our work were directly adopted from the publicly available GitHub repository: https://github.com/wenhao-gao/mol_opt/tree/main. This repository is associated with the preprint https://arxiv.org/pdf/2206.12411, which includes some relevant implementation details, such as whether a method involves pretraining (check Table 7). However, specific aspects such as pretraining compute or further fine-tuning configurations are not comprehensively documented there either. We will add these details in the revised version of the paper for better transparency and reproducibility. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' responses to my concerns. The clarifications regarding figure quality, validity enforcement mechanisms, and computational constraints are helpful. However, I still have reservations about the presentation of metrics and baseline comparisons. While the paper contains extensive tables and metrics, many specialized terms and evaluation criteria remain insufficiently explained, making it difficult for readers not deeply familiar with the field to assess the significance of the results. For instance, the case study on page 7 featuring five targets shows impressive performance, but without adequate explanation of these targets' significance or the meaning of the specific metrics used, the impact is diminished. 
In the revised version, I would strongly encourage the authors to: (1) Provide more thorough explanations of specialized metrics and their significance; (2) Offer more context for the case studies and their relevance to the field; (3) Include a more accessible discussion of results for readers who may not be intimately familiar with all molecular optimization metrics. With these improvements, the paper would be significantly strengthened and more accessible to a broader audience, especially for the readers from the ML community. Given the thoroughness of the experimental work and the novelty of the approach, I am adjusting my recommendation slightly upward.
Summary: This paper proposes a training strategy to improve GFlowNet-based molecular generation. First, it uses atom-based policy rather than fragment-based policy to enable access to a larger chemical space. Second, this work proposes using expert trajectories constructed from ZINC to pretrain the network, which improves drug-likeness and sampling efficiency. Third, this work also proposes pre-training the policy network using inexpensive rewards (including QED, NumRings, TPSA, SAS, which can be computed instantly with RDKit). Such pretraining improves the sampling performance for more complex oracle scores. Claims And Evidence: - The first claim of this work is the atom-based model enables exploration of larger chemical space. Table 2 supports this claim, with more modes discovered by A-GFN and higher diversity compared to fragment-based models. - The second claim of this work is the unsupervised pretraining strategy "*enables broader exploration of chemical space while maintaining diversity and novelty relative to existing drug-like molecule datasets*". This claim has been supported by results presented in Table 3, Table 4, etc, which shows that pretraining the model using trajectories derived from ZINC and inexpensive property conditioning can improve sampling. - The last claim is the fine-tuning strategy and its effectiveness in multiple applications, which has also been well-supported by experimental results presented in Section 6.4 and Section 7. - Overall, the main claims of this work have been well-supported by experimental evidence. Methods And Evaluation Criteria: - The proposed method is well-motivated by insights into molecular generation and it is a notable contribution to the area of GFlowNet's application in molecular design. 
Notably, it makes sense that "*these rewards are computationally cheap to evaluate and serve as proxies for more complex properties*", as complex properties have the foundation of physico-chemical properties which are usually inexpensive to compute. - The overall framework is well-motivated, but the components are mostly based on existing techniques. - The evaluation is comprehensive and supports the main claims of this work. Justification of each component and demonstration of application are both provided. Theoretical Claims: N/A Experimental Designs Or Analyses: No remarks. Supplementary Material: No remarks. Relation To Broader Scientific Literature: This work is a contribution to both GFlowNet application and molecular generation. Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your thoughtful and constructive review of our paper. We are pleased to see that you find our work well-motivated, comprehensive in evaluation, and a notable contribution to the application of GFlowNets in molecular design. Additionally, we are grateful for your recognition that our claims are well-supported by experimental evidence, with no significant concerns raised regarding our methodology, theoretical foundation, or experimental design. Based on the feedback from other reviewers, we have further improved our paper to enhance clarity and strengthen key arguments. Given that no major weaknesses have been identified in your review and that our contributions are acknowledged as valuable to the field, we kindly ask you to reconsider and raise your current score. Your support would help ensure that this contribution reaches the broader community and advances research at the intersection of GFlowNets and molecular generation.
Summary: This paper introduces Atomic GFlowNets (or A-GFNs), an atom-based generative framework for molecular design based on GFlowNets, proposing a more general-purpose exploration of the chemical space. The authors propose pre-training A-GFNs on inexpensive molecular properties that act as rewards for training the underlying GFlowNet policy, and then fine-tune for use on a variety of downstream tasks. The authors show the effectiveness of the proposed method on a variety of drug design tasks. Claims And Evidence: Claims are well-supported by clear and convincing experiments. Methods And Evaluation Criteria: The atom-based action space makes sense for exploring novel scaffolds, albeit potentially increasing state space complexity. The hybrid online-offline approach and RTB-finetuning are well-motivated and empirically validated in experiments. The use of inexpensive rewards as proxies for general properties is pragmatic and promising for scaling. The evaluation and baselines are comprehensive and properly demonstrate the effectiveness of the proposed method. Theoretical Claims: None to be discussed. Experimental Designs Or Analyses: Yes, the experiment design is sound and extensive overall. Supplementary Material: Yes, mainly appendix D for details on reward functions and appendix C.1 for details on the policy network. Relation To Broader Scientific Literature: The main contributions are very relevant for designing general-purpose molecular generation models that can be fine-tuned for a variety of useful downstream tasks. Essential References Not Discussed: None to the best of my knowledge. Other Strengths And Weaknesses: None to be particularly mentioned here. Other Comments Or Suggestions: None to be particularly mentioned here. Questions For Authors: - Why does TB sometimes outperform RTB in single-objective tasks (Table 3)? Is it due to over-regularization in RTB? - How sensitive is the method to the choice of inexpensive reward descriptors? 
- How would a fragment-structured GFN compare to A-GFN? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: 1. Why does TB sometimes outperform RTB in single-objective tasks (Table 3)? Is it due to over-regularization in RTB? Thank you for raising this important question. The observed performance difference stems from fundamental differences in how TB and RTB balance optimization objectives: Yes, RTB's design introduces over-regularization in single-objective tasks due to its explicit anchoring to the pretrained prior. While this prior (e.g., a chemical validity model) helps maintain desirable properties in multi-objective settings, it creates conflicting constraints when optimizing for a single objective (which does not capture these desirable properties). We made initial efforts to address this trade-off in our current draft (lines 297-303), where we discuss how RTB’s prior anchoring leads to its weaker performance when compared to TB. We would like to point out that this effect would diminish when the prior already overlaps with high-reward solutions, and/or if tasks require balancing multiple constraints (e.g., both chemistry rules and potency). We appreciate this opportunity to clarify and will emphasize this trade-off more explicitly in our revision. 2. How sensitive is the method to the choice of inexpensive reward descriptors? This is an excellent question, and we appreciate the reviewer's interest in understanding the sensitivity of our method to the choice of inexpensive reward descriptors. The selection of pretraining properties plays a crucial role in shaping the learned policy, and our choices were guided by well-established principles in drug discovery. Specifically, we selected QED (quantitative estimate of drug-likeness), SAS (synthetic accessibility score), TPSA (topological polar surface area), and the number of rings, as these properties are widely used in molecular generation and optimization tasks due to their strong correlation with drug-likeness and pharmacokinetic properties [1]. 
Empirically, we found that these four descriptors provided a well-balanced pretraining signal that improved sampling efficiency while preserving chemical diversity. Importantly, we observed that adding additional constraints led to premature convergence and mode collapse, as the optimization problem became overly restrictive, significantly reducing the number of valid and diverse molecules that could be generated. This aligns with findings in other large-scale generative modeling approaches, where excessive constraints during pretraining can limit the exploration capacity of the model and degrade its generalization ability in downstream fine-tuning. 3. How would a fragment-structured GFN compare to A-GFN? We thank the reviewer for this question. The finetuned A-GFN generally outperforms fragment-based GFN across all tasks considered in terms of diversity of molecules as well as the success percentage, which quantifies the ratio of molecules simultaneously satisfying all the enforced objectives. We have detailed these results in Tables 12–20 in Appendix F. [1] Wellnitz, James, et al. "STOPLIGHT: a hit scoring calculator." Journal of Chemical Information and Modeling 64.11 (2024): 4387-4391.
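As an illustration of how such inexpensive descriptors (QED, SAS, TPSA, number of rings) can be combined into a single conditional pretraining reward, here is a minimal sketch. This construction is entirely our own, not the paper's reward function: each property value (in practice computed with RDKit) is mapped to a soft [0, 1] score against a conditioning range, and the scores are multiplied so that a molecule must satisfy all conditionals to score highly.

```python
import numpy as np

def range_score(value, low, high, sharpness=10.0):
    """Soft indicator that `value` lies in [low, high]: ~1 inside, ->0 outside."""
    below = 1.0 / (1.0 + np.exp(-sharpness * (value - low)))
    above = 1.0 / (1.0 + np.exp(-sharpness * (high - value)))
    return below * above

def reward(props, ranges):
    """Product of per-property scores; props and ranges keyed by descriptor name."""
    return float(np.prod([range_score(props[k], *ranges[k]) for k in ranges]))

# Hypothetical conditioning ranges and descriptor values (illustrative only).
ranges = {"qed": (0.6, 1.0), "sas": (0.0, 4.0), "tpsa": (40.0, 120.0), "rings": (1, 4)}
good = {"qed": 0.8, "sas": 2.5, "tpsa": 75.0, "rings": 2}
bad = {"qed": 0.2, "sas": 7.0, "tpsa": 75.0, "rings": 2}
print(reward(good, ranges), reward(bad, ranges))
```

The multiplicative composition mirrors the "#Modes" criterion discussed above, where a sample counts only if all four conditionals are satisfied simultaneously.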
SpikeVideoFormer: An Efficient Spike-Driven Video Transformer with Hamming Attention and $\mathcal{O}(T)$ Complexity
Accept (poster)
Summary: This manuscript introduces a video-based transformer model that implements spiking neural networks (SNNs) and convolutional neural networks (CNNs). The work highlights the efficiency of the proposed model in video-related tasks, particularly focusing on computational (parameter) and power efficiency. A key contribution is Hamming Attention, a mechanism that solves the dot-product problem of SNNs within the attention mechanism, while maintaining linear computational scaling with respect to tokens (spatial/temporal dimensions) (O(TND²)). The model is validated across three tasks: - Human Pose Tracking - Video Classification - Video Semantic Segmentation After rebuttal, I'll keep my recommendation. Claims And Evidence: - Solving the dot-product problem in SNNs by replacing it with Hamming Attention - Linear complexity with respect to tokens (spatial/temporal dimensions) (O(TND²)) - Validation of the model on three tasks, surpassing the state of the art in the SNN category. Methods And Evaluation Criteria: - Dot product: Because SNNs are sparse and do not have signals/elements at certain points (when the spike query contains no elements), the dot product produces erratic attention maps. - Linear complexity: It is demonstrated that the model scales linearly when longer sequences are processed. - The model is validated across three tasks: Human Pose Tracking, Video Classification, Video Semantic Segmentation. Theoretical Claims: - The derivative of the normalized Hamming similarity is proved. - The proposed Hamming-based attention for SNNs is proved. - The linear complexity with respect to the length of the tokens is experimentally verified. Experimental Designs Or Analyses: No code or video is provided to verify the authenticity of the results. In the attachments we can see some qualitative results comparing the task of human pose tracking. The results of other models on the Video Semantic Segmentation task are not shown. 
Supplementary Material: The derivative of the normalized Hamming similarity and the Hamming-based attention. The linear complexity with respect to the length in equation 24 and the reported results. Qualitative results for human pose tracking and Video Semantic Segmentation. Relation To Broader Scientific Literature: While I am not deeply familiar with this literature, an improvement related to Meta-SpikeFormer is made in: Luo, X., Yao, M., Chou, Y., Xu, B., and Li, G. Integer-valued Training and Spike-Driven Inference Spiking Neural Network for High-Performance and Energy-Efficient Object Detection. ECCV, 2024. However, the object detection task is omitted. Essential References Not Discussed: The object detection task is omitted. Other Strengths And Weaknesses: - Strengths: The manuscript is well-written and includes a wide variety of experiments supported by mathematical proofs. - Weaknesses: While the manuscript focuses on developing spiking neural networks (SNNs), it omits addressing the task of object detection. Also, recent advancements in transformers (ANN-based) for semantic segmentation are not adequately covered in the related work. Other Comments Or Suggestions: No comments. Questions For Authors: No questions. Code Of Conduct: Affirmed. Overall Recommendation: 3
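The zero-query failure mode this review refers to is easy to see numerically. A toy numpy sketch (our own construction, not the paper's SDHA implementation): with binary spike vectors, the dot product collapses to 0 for an all-zero query regardless of the key, while a normalized Hamming similarity still discriminates between keys.

```python
import numpy as np

def hamming_sim(q, k):
    """Normalized Hamming similarity in [0, 1] between binary vectors."""
    return 1.0 - np.mean(q != k)

q_empty = np.zeros(8)                           # spike query with no elements
k_sparse = np.array([0, 0, 0, 0, 0, 0, 0, 1.0]) # nearly empty key
k_dense = np.ones(8)                            # fully active key

# Dot product: both keys look identical to an empty query.
print(q_empty @ k_sparse, q_empty @ k_dense)    # 0.0 0.0

# Hamming similarity still separates them.
print(hamming_sim(q_empty, k_sparse), hamming_sim(q_empty, k_dense))  # 0.875 0.0
```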
Rebuttal 1: Rebuttal: Dear Reviewer ZrRF, We greatly appreciate your time and effort in reviewing our work. Below are our point-by-point responses to your comments. --- **Experimental Designs Or Analyses:** - Thanks for the constructive comment. For a qualitative comparison of video semantic segmentation, please refer to _Supp Figure 3_ on the [anonymous GitHub page](https://anonymous.4open.science/w/AnonymousSpikeVideoFormer-5E5A/). Additionally, video results are provided in _Supp Figures 5 and 6_. **The source code, results, and project website will be publicly released.** --- **Relation To Broader Scientific Literature** & **Essential References Not Discussed:** - **We have cited this paper [A] (Line 477) in our work** and also applied in the experiment of Video Semantic Segmentation (Line 386). Integer-valued spike representation proposed in the suggested paper represents spikes as a set of integers, e.g. {0,1,2,3} (Integer-LIF=4), rather than {0,1} only. During inference the integer spike can be separated as a sum of {0,1} spikes, e.g., 3= 1+1+1, 2=1+1+0. This separation maintains spike-driven efficiency of SNN methods, while improving the model's representation ability. - **We further compare our method on the object detection task** against this work and Meta-SpikeFormer, as shown in the table below. Following prior work, we use spiking-based YOLO as the detection head and evaluate on the COCO 2017 dataset. Our method surpasses Meta-SpikeFormer by 0.6 mAP but lags behind SpikeYOLO by 0.2 mAP. However, unlike SpikeYOLO, which is specifically designed for object detection, our method is more general and applicable to a wide range of downstream tasks. |Method|Param|Timestep|Integer-LIF|mAP@50| |:-|:-:|:-:|:-:|:-:| |SpikeYOLO [A]|13.2M|1|4|59.2| |Meta-SpikeFormer|16.8M|1|4|58.4| |SpikeVideoFormer (ours)|16.9M|1|4|59.0| | [A] Integer-valued Training and Spike-Driven Inference Spiking Neural Network for High-Performance and Energy-Efficient Object Detection. 
ECCV 2024. --- **Other Strengths And Weaknesses:** - Thanks for the valuable suggestion. Please refer to the response above under **Relation to Broader Scientific Literature** for the results related to the object detection task. - We present recent advancements in ANN-based Transformers for semantic segmentation as follows. - Recent advancements in ANN-based Transformers have greatly enhanced semantic segmentation. SETR [1] pioneers a sequence-to-sequence approach, employing a pure Transformer encoder without convolutional layers. Segmenter [2] builds on a ViT backbone pre-trained on ImageNet and incorporates a mask transformer decoder to capture global context. SegFormer [3] further optimizes Transformer-based segmentation with a hierarchical encoder and a lightweight MLP decoder while eliminating positional encoding. Mask2Former [4] refines segmentation by restricting cross-attention to foreground regions using a masked attention operator. Recent CFFM [5] introduces coarse-to-fine feature assembling and cross-frame feature mining to capture both local and global temporal contexts. In contrast, we explore spiking video transformers to develop a faster, more energy-efficient approach for this task. [1] Rethinking semantic segmentation from a sequence-to-sequence perspective with transformers. CVPR 2021. [2] Segmenter: Transformer for semantic segmentation. ICCV 2021. [3] Pyramid vision transformer: A versatile backbone for dense prediction without convolutions. ICCV 2021. [4] Masked-attention mask transformer for universal image segmentation. CVPR 2022. [5] Learning local and global temporal contexts for video semantic segmentation. IEEE TPAMI 2024. --- We sincerely appreciate your feedback and will ensure that all results and discussions are thoroughly reflected in the final version.
Summary: The authors present a novel model called the SpikeVideoFormer – a transformer network based on Spiking Neural Networks (SNN). They use Spike-Driven Hamming Attention (SDHA) instead of the usual dot product based self-attention. They claim their network to have a linear temporal complexity compared to the other model architectures that they explored. They show results on 3 tasks – video classification, video semantic segmentation and human pose tracking, achieving better performance compared to their ANN counterparts.

Claims And Evidence: The authors made claims about the computational efficiency of their model with power measured in millijoules instead of watts, which is what would be expected when using large power systems like GPUs. They should do a comparison in terms of watts, especially when comparing against the ANNs. Additionally, ANNs benefit from the matrix multiplications being sped up by GPUs; does the same hold true for SNNs? The authors should give insights/comparisons along those lines as well. The authors should also give more specific insights into their training procedures w.r.t. hardware, compute costs, batch sizes, etc.

Methods And Evaluation Criteria: I like the approach the authors took in evaluating SpikeVideoFormer on a wide variety of tasks – segmentation, video understanding, human pose tracking. Given SNNs' ability to mimic the neuronal firing activity in the brain, they could also present a few tasks from cognitive psychology. For the primate visual system's ability in tracking long-range motion, they could evaluate their model on tasks like PathTracker for video or PathFinder for images. An easier evaluation might be object recognition in noisy environments like ImageNet-C, and compare their methods against models of the visual system like VOneNet or Extreme Image Transforms.

Theoretical Claims: Proposition 3.1 is the theoretical aspect in the paper with its proof in the Appendix. On a broader look, it seems well written.
Experimental Designs Or Analyses: The three evaluation tasks described in the paper are set up well, aside from my other comments related to reporting metrics like watts and evaluating on tasks from cognitive psychology.

Supplementary Material: Yes, I did review the proofs, architecture and visualization of results in the supplementary. The authors should try to include any ground truths for human pose tracking in the supplementary.

Relation To Broader Scientific Literature: The authors present an important and novel architecture useful for a lot of downstream video-related tasks. This work is relevant to the broader community of neuro-inspired computing with potential applications in long-term tracking.

Essential References Not Discussed: Given that the authors talk about the affinity of their network to the human brain, they should include relevant references from the cognitive science literature for fundamental tasks like visual object recognition, tracking, etc.

Other Strengths And Weaknesses: The authors show impressive results in terms of their performance compared to the other state-of-the-art literature for ANNs and SNNs. One of the ways to strengthen the paper for the reader would be to show the internal workings of the network through some kind of spike-maps (akin to saliency maps) so that the reader can get insights into where the network focuses. This would give insights into some of the explainability metrics of the model.

Other Comments Or Suggestions: Some simplification of the writing to make the ideas more cogent would always help the reader.

Questions For Authors: For long-term tracking in the human pose tracking task, the authors mention their results being close to GLoT among ANNs. Did the authors try to improve the results with something like a memory component or gating to improve the long-term tracking abilities of the model?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer 4RFZ, We sincerely appreciate your time and effort in reviewing our paper. Please find our point-by-point responses to your comments below.

---

**Claims And Evidence:**

- Thanks for the valuable suggestion. Normally, Energy = Power (Watts) * Time. According to [B, C], when comparing ANNs and SNNs, we typically assume the hardware operates for the same duration. **Therefore, comparing energy consumption (in mJ) is equivalent to comparing power in watts.** As suggested by other reviewers, we have also reported latency (inference time) as an additional evaluation metric for computational efficiency in our response to Reviewer 9Qge (W1 Inference Time).
- ANNs rely on **floating-point multiplications**, where GPUs can accelerate in parallel but with high energy costs. In contrast, SNNs use **binary spikes to propagate, requiring only additions**, a special case (0/1 binary matrix) that GPUs can speed up as well. However, power consumption and processing time can be further reduced on specialized neuron-computing devices [A, B], where the multiplication operator is replaced with much faster addition operators.

[A] Neuromorphic computing at scale. Nature 2025.
[B] Spike-based dynamic computing with asynchronous sensing-computing neuromorphic chip. Nature Communications 2024.
[C] Firefly: A high-throughput hardware accelerator for spiking neural networks with efficient dsp and memory optimization. IEEE VLSI 2023.

---

**Methods And Evaluation Criteria:**

- **We evaluate our approach on the ImageNet-C dataset** to assess object recognition performance in noisy environments. Notably, Extreme Image Transforms [E] lacks ImageNet-C results, so we compare our method with VOneResNet50 [D] and ResNet50 [F].
Noise corruptions (Gaussian, Shot, Impulse) and blur corruptions (Defocus, Glass, Motion, Zoom):

|Method|Gaussian|Shot|Impulse|Defocus|Glass|Motion|Zoom|
|:-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|ResNet50 (25.6M) [F]|29.6|27.9|24.5|38.8|26.1|37.9|34.5|
|VOneResNet50 [D]|34.6|33.4|31.9|37.8|35.7|37.4|34.0|
|Ours (15M)|33.1|34.8|38.0|39.1|38.9|38.1|35.9|

Weather corruptions (Snow, Frost, Fog, Bright) and digital corruptions (Contrast, Elastic, Pixelate, JPEG):

|Method|Snow|Frost|Fog|Bright|Contrast|Elastic|Pixelate|JPEG|
|:-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|ResNet50 (25.6M) [F]|30.1|36.7|43.6|66.8|38.8|44.8|47.0|55.1|
|VOneResNet50 [D]|25.2|36.8|30.1|62.4|28.5|48.7|63.3|61.0|
|Ours (15M)|28.3|37.0|42.1|66.2|34.5|49.3|61.4|61.8|

[D] Simulating a primary visual cortex at the front of CNNs improves robustness to image perturbations. NeurIPS 2020.
[E] Extreme Image Transformations Facilitate Robust Latent Object Representations. arXiv 2023.
[F] Deep residual learning for image recognition. CVPR 2016.

---

**Supplementary Material** & **Other Strengths And Weaknesses:**

- We have included ground truths for human pose tracking and the spike attention maps on the [anonymous github page](https://anonymous.4open.science/w/AnonymousSpikeVideoFormer-5E5A/).

---

**Essential References Not Discussed:**

- Visual cognitive neuroscience studies how factors like attention, motivation, emotion, and expectation shape visual perception and cognition [1]. While the first three enhance relevant stimuli processing, expectation suppresses predictable inputs [2-3]. Predictive processing suggests perception is inference-driven, refining sensory input through internal models shaped by context and experience [4]. Beyond vision, the visual cortex processes object names and activates in blind individuals, indicating broader cognitive roles in memory, imagery, and language [5-7]. Though its full scope remains debated, these insights inspire the brain-inspired spiking neural network (SNN) approach for complex visual tasks, enabling high-speed, low-energy neural processing.

[1] The role of context in object recognition. Trends in Cognitive Sciences 2007.
[2] How brains beware: neural mechanisms of emotional attention. Trends in Cognitive Sciences 2005.
[3] Expectation (and attention) in visual cognition. Trends in Cognitive Sciences 2009.
[4] An integrative, multiscale view on neural theories of consciousness. Neuron 2024.
[5] Object domain and modality in the ventral visual pathway. Trends in Cognitive Sciences 2016.
[6] The human imagination: the cognitive neuroscience of visual mental imagery. Nature Reviews Neuroscience 2019.
[7] Reevaluating the sensory account of visual working memory storage. Trends in Cognitive Sciences 2017.

---

**Questions For Authors:**

- The performance gain stems from space-time joint attention. As shown in our ablation study (Table 6), using spatial-only attention results in a pose tracking error (PA-MPJPE) increase of more than 36%. Moreover, GLoT is limited by the quadratic complexity of ANN-based attention, requiring image features to be represented as a high-level single vector. In contrast, our SNN's linear complexity allows for more detailed spatial and temporal feature fusion through spike-driven attention.

---

We are grateful for your thoughtful comments and will carefully integrate all results and discussions into the final version.

---

Rebuttal Comment 1.1: Comment: Thank you. I am pleased to see a more comprehensive literature review as part of the paper. For Extreme Image Transforms, the paper in Biological Cybernetics looks more cite-worthy because of peer review. I am also okay with the power calculations and inference times as long as they make it easier for the reader to understand the requirements. Please also include these as part of your revisions to the manuscript.

---

Reply to Comment 1.1.1: Comment: Dear Reviewer 4RFZ, thank you for your encouraging and positive feedback. We are glad that your concerns have been addressed. We will refine our work according to your suggestions. In addition, our source code and project website will be made publicly available.
Thank you again for your valuable time and effort in helping us improve our paper.
Summary: The authors propose SpikeVideoFormer, an efficient spike-based Transformer to process videos with linear temporal complexity. Technically, a spike-based Hamming attention mechanism is proposed from a theoretical perspective. Then, the authors further analyze several spike-based attention modules for video processing tasks. Finally, three typical video tasks (i.e., classification, human pose tracking, and semantic segmentation) are conducted to evaluate the effectiveness of the proposed method. The results show that the proposed SpikeVideoFormer outperforms SOTA methods in accuracy and energy efficiency.

Claims And Evidence: Yes, the claims are very clear.

Methods And Evaluation Criteria: Yes, the authors select three typical video tasks to verify the effectiveness of SpikeVideoFormer.

Theoretical Claims: Yes, I have checked the correctness of all proofs.

Experimental Designs Or Analyses: The effectiveness of the proposed method has been extensively validated through numerous experiments; however, certain ablation studies warrant further exploration to provide deeper insights.

Supplementary Material: The supplementary material provides theoretical proofs as well as a comprehensive set of experimental results.

Relation To Broader Scientific Literature: Indeed, Transformers have become the primary architecture in deep learning. Investigating spike-based Transformers is particularly valuable from a power consumption standpoint. The authors introduce a new attention mechanism for video processing, advancing the field beyond prior approaches.

Essential References Not Discussed: No

Other Strengths And Weaknesses:

Strengths:
1. Investigating low-power versions of Transformer architectures is a highly meaningful research direction.
2. The authors propose a spike-based Hamming attention mechanism and provide extensive theoretical proofs to support it.
3. The authors validate the effectiveness of SpikeVideoFormer on three downstream tasks and supplement the paper with detailed appendices for further clarification.

Weaknesses:
1. Inference Time Analysis. Although the authors provide a complexity analysis, the inference time of SpikeVideoFormer is not reported, nor are the inference times of several open-source baseline methods. It would be valuable to compare inference times and discuss the feasibility of deploying SpikeVideoFormer-like architectures on edge devices. This analysis would significantly enhance the practical relevance of the work.
2. Real-World Event-based Vision Tasks. The selection of three typical video tasks is commendable. However, the authors do not explore real-world event camera tasks, such as long-term event stream processing, action recognition in real-world videos, or scene understanding. Including such tasks would further demonstrate the versatility and applicability of the proposed method in practical scenarios.
3. Energy Efficiency Analysis. While the authors reference previous methods to analyze the energy efficiency of SNN algorithms, the use of AC or MAC operations for power consumption calculations may not be entirely convincing for SNNs. Given the critical importance of this metric, the rationale behind this approach should be thoroughly justified. The authors are encouraged to provide their insights or at least discuss this limitation in detail, as it significantly impacts the credibility of the energy efficiency claims.
4. Ablation Studies and Parameter Analysis. The ablation experiments in the paper are relatively limited. Although the authors conduct three types of experiments, which may constrain the available space, it is essential to discuss the contributions of key parameters and modules. For instance, a more granular analysis of simulation time steps and hyperparameters would provide deeper insights into the proposed method's design and performance.
5. Some Clarity. The authors should clarify how SpikeVideoFormer differs from existing Video Transformers. Beyond replacing activation functions with binary spikes, what are the unique design elements of this novel Spike Transformer architecture? A detailed discussion on these aspects would help readers better understand the innovation and contributions of the proposed method.

Other Comments Or Suggestions: No

Questions For Authors: Please see the weaknesses and respond to each comment. Besides, two questions are listed below:
1. In Table 2, what is the difference between video and event stream as inputs for SpikeVideoFormer?
2. Why does the Spike-based Transformer structure emphasize video? Can it not process event streams? Doesn't the title "SpikeVideoFormer" make the application seem too narrow?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer 9Qge, We are grateful for your insightful feedback. Below, we provide a detailed response to each of your points.

---

**Other Strengths And Weaknesses:**

---

**W1 Inference Time (per video clip $T\times 256\times 256\times 3$ as input)**

- **We report the inference time in the table below**, tested on an A6000 GPU and AMD EPYC 7543 CPU for human pose tracking, averaged over 1,000 video clips with a batch size of 1. As $T$ increases from 8 to 32 (4x), GLoT (quadratic ANN attention) achieves the best performance but experiences a 9.8x increase in inference time, while VIBE (linear ANN-GRU) shows a 5.1x increase but performs the worst. Both SNN methods exhibit only 4.3x and 4.6x increases thanks to our proposed spiking space-time linear attention.

|Method|Timestep|Power (mJ)|Inference Time (ms)|PA-MPJPE (mm)|
|:-|:-|:-:|:-:|:-:|
|VIBE (ANNs)|$T=8$|392.1|**264**|46.3|
||$T=32$|1511.2|**1335**|53.6|
|GLoT (ANNs)|$T=8$|487.5|**303**|39.9|
||$T=32$|4046.1|**2972**|46.5|
|Meta-SpikeFormer (SNNs)|$T=8$|95.7|**230**|45.7|
||$T=32$|387.2|**1001**|54.5|
|SpikeVideoFormer (SNNs)|$T=8$|96.0|**235**|39.8|
||$T=32$|391.2|**1087**|47.5|

- **For edge device deployment**, SNNs, relying only on additions, significantly reduce power consumption and latency on neuron-computing devices [A, B]. ANNs, requiring floating-point multiplications, achieve high parallel acceleration on GPUs but at a high energy cost. On the AMD Xilinx ZCU104 [C], ANNs process at 691.2 GFLOPs/s, while SNNs reach 5529.6 GFLOPs/s, an 8x speedup. In 45nm technology [D], an ANN multiplication consumes 4.6pJ, whereas an SNN addition uses only 0.9pJ, a 5.1x energy reduction.

[A] Neuromorphic computing at scale. Nature 2025.
[B] Spike-based dynamic computing with asynchronous sensing-computing neuromorphic chip. Nature Communications 2024.
[C] Firefly: A high-throughput hardware accelerator for spiking neural networks with efficient dsp and memory optimization. IEEE VLSI 2023.
[D] 1.1 computing's energy problem (and what we can do about it). IEEE ISSCC 2014.

---

**W2 Event**

- We have explored long-term event-based human pose tracking in Table 2.
- We present results on event-based action recognition (HAR-DVS [E]) and event-based semantic segmentation for scene understanding (DDD17 [G]) in the tables below.

|Method|Param|Acc|
|:-|:-:|:-:|
|ACTION-Net [F] (ANNs)|27.9M|46.9|
|Meta-SpikeFormer (SNNs)|15.0M|47.5|
|SpikeVideoFormer (Ours)|15.0M|47.9|

|Method|Param|mIoU|
|:-|:-:|:-:|
|EV-SegNet [G] (ANNs)|13.7M|54.8|
|SpikeVideoFormer (Ours)|17.8M|55.5|

[E] Hardvs: Revisiting human activity recognition with dynamic vision sensors. AAAI 2022.
[F] Action-net: Multipath excitation for action recognition. CVPR 2021.
[G] EV-SegNet: Semantic segmentation for event-based cameras. CVPR 2019.

---

**W3 Energy**

- Please refer to our response to W1 Inference Time. We have included latency (inference time) to evaluate efficiency in long-term tasks using our linear complexity method. Additionally, we have discussed the potential of deploying our SNN on specialized neurocomputing devices to further enhance its efficiency.

---

**W4 Ablation**

- We have conducted additional analysis of time steps and hyper-parameters on Human Pose Tracking (evaluated on PA-MPJPE). The results are shown in the tables below.
|Timestep $T$|4|8|16|24|32|
|:-|:-:|:-:|:-:|:-:|:-:|
|PA-MPJPE|39.8|39.8|42.7|45.6|47.5|

|Channel size $C$|32|48|64|
|:-|:-:|:-:|:-:|
|Param|15.1M|31.3M|55.4M|
|PA-MPJPE|44.4|41.7|39.8|

|Blocks|4-Transformer|1-CNN+3-Transformer|2-CNN+2-Transformer|3-CNN+1-Transformer|4-CNN|
|:-|:-:|:-:|:-:|:-:|:-:|
|Param|12.4M|13.8M|15.1M|16.5M|18.0M|
|PA-MPJPE|40.7|40.1|39.8|45.6|54.9|

---

**W5 Clarity**

- Our SpikeVideoFormer differs from existing Video Transformers in two aspects:
  - Unlike **ANN-based Video Transformers**, which focus on **reducing quadratic complexity in space-time attention**, we highlight that spike-driven attention inherently achieves linear complexity (Table 1), making our approach more efficient and scalable.
  - Unlike **existing SNN-based Transformers**, which focus on **single-image tasks only**, we introduce the first Spiking Video Transformer with an effective spike-driven attention mechanism (Proposition 3.1).

---

**Questions For Authors:**

- **Q1:** A video comprises a sequence of RGB images with 3×8 bits/pixel. In contrast, an event stream can be represented as a sequence of event frames with 1 bit/pixel (24x more sparse), indicating event occurrences within the time interval.
- **Q2:** We emphasize videos due to their practicality in real-world applications, highlighting the need for efficient and effective processing methods. Following prior works, we also incorporate event-based tasks in our experiments to further demonstrate the applicability of SNNs.

---

We are grateful for your suggestions and will ensure that the above results and discussions are reflected in the final revision.
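The AC/MAC energy accounting debated under W1 and W3 (4.6 pJ per multiply-accumulate vs 0.9 pJ per accumulate in 45 nm, cited from [D]) suggests a simple back-of-the-envelope model. The functions and the firing-rate parameter below are our illustration of that standard accounting, not the paper's exact measurement protocol:

```python
E_MAC = 4.6e-12  # joules per multiply-accumulate (ANN), 45nm figure from [D]
E_AC = 0.9e-12   # joules per accumulate (SNN), same source

def ann_energy(flops):
    # ANN inference: every operation is a floating-point MAC
    return flops * E_MAC

def snn_energy(flops, timesteps, firing_rate):
    # SNN inference: only emitted spikes trigger accumulates, so energy
    # scales with timesteps and with the average firing rate (sparsity)
    return flops * timesteps * firing_rate * E_AC
```

With, say, 1 GFLOP of synaptic operations, 4 timesteps, and a 20% firing rate, the SNN estimate is several times below the ANN one; the gap widens as spikes get sparser.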
Summary: The paper introduces SpikeVideoFormer, an efficient spike-driven video Transformer that leverages normalized Hamming similarity and joint space-time attention to achieve linear temporal complexity. It outperforms existing SNN-based models in video classification, human pose tracking, and video semantic segmentation while matching ANN-based methods in accuracy with significant efficiency gains.

Claims And Evidence: The authors emphasize achieving linear temporal complexity with the proposed SpikeVideoFormer, but there does not appear to be a comparison in terms of latency to support this claim.

Methods And Evaluation Criteria: This work primarily proposes SDHA and space-time joint attention to enhance the performance of a spike-driven video transformer.
- Among them, SDHA appears to be a general method that is not limited to video transformers but can contribute to improving the performance of general SNN transformer architectures. In that case, could transformers for the image domain also benefit from SDHA? This is something I am curious about.
- On the other hand, space-time joint attention has already been used in existing ANNs, as mentioned in Section 3.4. Aside from replacing attention with SDHA, the novelty seems insufficient for it to be considered a main contribution. Could the authors further clarify the distinguishing aspects?

Theoretical Claims: I have checked the correctness of the proofs for the theoretical claims presented in the paper.

Experimental Designs Or Analyses:
- The choice of SNN baselines, which are spiking transformers used for image processing, may raise questions from readers. If the authors propose a video-specific model, shouldn't they compare it with other spiking video transformers? If none exist, they should clearly state that theirs is the first.
- Among the three video-based vision tasks, space-time joint attention was applied to other Transformer-based SNNs only for human pose tracking (Table 2). Why is this considered a fair comparison? This approach does not seem consistent across other tasks.

Supplementary Material: The formatting instructions required the appendix to be submitted as a single file along with the main manuscript, but it was uploaded as a separate PDF file. Nevertheless, I reviewed the supplementary material.

Relation To Broader Scientific Literature: This work contributes to the broader scientific literature by expanding the role of spiking neural networks in the video domain.

Essential References Not Discussed: The necessary references are sufficiently cited in the paper.

Other Strengths And Weaknesses:

Strengths
- The writing is clear and easy to understand.
- The use of Hamming similarity for attention scoring is interesting.
- The effectiveness of the method is demonstrated through various video-based vision tasks.

Weaknesses
- Please refer to the points mentioned in other sections.

Other Comments Or Suggestions: There are some formatting errors and typos in this paper.
- All table captions should be placed above the tables.
- Some notations for the equations in Section 3 appear to be missing.
- The figure on page 4 needs a caption.
- There are misplaced periods before and after a reference (p.2, line 99).

Questions For Authors: N/A

Code Of Conduct: Affirmed.

Overall Recommendation: 3
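For readers curious about the Hamming-based scoring the review highlights, here is a generic sketch of a normalized Hamming similarity between binary spike vectors. This is our illustration of the underlying idea (fraction of matching bits), not necessarily the paper's exact SDHA formulation:

```python
import numpy as np

def normalized_hamming_similarity(Q, K):
    """Fraction of matching bits between every pair of binary rows of
    Q (n, d) and K (m, d).

    matches = (1-1 agreements) + (0-0 agreements)
            = Q @ K^T + (1 - Q) @ (1 - K)^T,
    so for 0/1 inputs the score is computable with additions only.
    """
    d = Q.shape[1]
    return (Q @ K.T + (1 - Q) @ (1 - K).T) / d
```

Identical rows score 1.0 and bitwise-complementary rows score 0.0, giving a bounded similarity that, unlike a plain dot product, also rewards shared zeros in sparse spike trains.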
Rebuttal 1: Rebuttal: Dear Reviewer oaZD, We appreciate your time and effort in reviewing our paper. Below, we provide a point-by-point response to your questions.

---

**Claims And Evidence:**

- **We report the latency in the table below**, based on tests conducted using the same hardware setup: a single A6000 GPU and an AMD EPYC 7543 CPU. The task is human pose tracking, comparing the performance of GLoT (ANNs) and our SpikeVideoFormer (SNNs). The input is an RGB video clip of shape $T\times 256\times 256\times 3$, with a batch size of 1. We conducted tests on 1,000 samples and averaged the time consumption to obtain the latency. As the temporal length $T$ increases from 8 to 32 (4x), GLoT (quadratic attention) experiences a 9.8x latency increase, whereas SpikeVideoFormer (linear attention) shows only a 4.6x increase. GLoT's increase is lower than the expected ~16x due to its use of a ResNet followed by a Transformer, where ResNet has linear complexity with respect to temporal length.

|Method|Timestep|Power (mJ)|Latency (ms)|PA-MPJPE (mm)|
|:-|:-|:-:|:-:|:-:|
|GLoT (ANNs)|$T=8$|487.5|**303**|39.9|
||$T=32$|4046.1|**2972 (9.8x)**|46.5|
|SpikeVideoFormer (SNNs)|$T=8$|96.0|**235**|39.8|
||$T=32$|391.2|**1087 (4.6x)**|47.5|

---

**Methods And Evaluation Criteria:**

- **Q1**: **Yes, our proposed SDHA can also benefit spike-driven transformers in the image domain.** As demonstrated in the table below (also in Appendix F, Table 8), applying SDHA leads to an accuracy improvement of 0.4% (15.1M model) and 0.2% (55.4M model) on ImageNet. In this experiment, our method follows the same model architecture as Meta-SpikeFormer, except for the inclusion of SDHA.
|Method|Attention|Param|Power(mJ)|Top-1 Accuracy|
|:-|:-:|:-:|:-:|:-:|
|Meta-SpikeFormer|SDSA|15.1|16.7|73.2|
|Ours|**SDHA**|15.1|16.8|73.6 (**+0.4**)|
|Meta-SpikeFormer|SDSA|55.4|52.4|79.7|
|Ours|**SDHA**|55.4|52.6|79.9 (**+0.2**)|

- **Q2**: We respectfully believe that our proposed method offers valuable contributions and introduces new insights compared to previous ANN-based works, specifically:
  - Space-time joint attention in ANNs suffers from quadratic computational complexity, and most related methods propose different space-time attention designs to reduce this complexity (as shown in Table 1). However, to our knowledge, **ours is the first work to show that space-time spiking attention designs share the same linear complexity.** The experiments demonstrate that our spike-driven solution achieves performance comparable to recent ANN-based methods while offering significant efficiency gains of 16x, 10x, and 5x on the three video-based tasks.
  - **Our primary contribution lies in the design of the first Spiking Video Transformer** with an effective spike-driven attention design (Proposition 3.1). This approach achieves SOTA performance compared to existing SNN approaches, with over 15% improvement on human pose tracking and video semantic segmentation.

Thanks for the helpful questions. We will clarify the uniqueness and novelty in the final version.

---

**Experimental Designs Or Analyses:**

- **Q1**: We are sorry that the statement of our method was less clear than intended. **To our knowledge, we are the first to explore a spiking video Transformer.** Thanks for the constructive comment. We will clarify the novelty in the final version.
- **Q2**: **Other Transformer-based SNNs are image-based approaches** that only use spatial attention. For a fair comparison on video-based tasks, we adapted these approaches to incorporate space-time joint attention. This setting is applied consistently across all three tasks in our work.
We will clarify this point in the final version to avoid confusion. --- **Supplementary Material:** - Thanks for pointing out this issue. We will include the supplementary materials as an appendix at the end of the main manuscript in the revised submission. --- **Other Comments Or Suggestions:** - We will carefully address the formatting errors in the table captions, correct the notations in the equations, and fix any typos in the final version. --- We value your feedback and will ensure that the above results and discussions are thoroughly addressed in the final revision.
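The linear-temporal-complexity claim defended above rests on a reordering of the attention product that binary spike activations permit without softmax. The sketch below is our generic illustration of that reordering; it omits the paper's spiking neuron dynamics and any scaling factors:

```python
import numpy as np

def linear_spike_attention(Q, K, V):
    """For spike matrices of shape (N, d), computing K^T @ V first costs
    O(N d^2) instead of the O(N^2 d) of (Q @ K^T) @ V, i.e. linear in the
    (space-time) sequence length N. For 0/1 inputs the products reduce to
    additions on spike-driven hardware.
    """
    kv = K.T @ V        # (d, d) summary, independent of N once formed
    return Q @ kv       # (N, d); equals (Q @ K.T) @ V by associativity
```

Because no row-wise softmax sits between the two products, the result is exactly the same as the quadratic-order computation, only cheaper when N >> d.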
Dequantified Diffusion-Schrödinger Bridge for Density Ratio Estimation
Accept (poster)
Summary: This paper discusses the challenges of density ratio estimation in applications involving f-divergences, particularly with multi-modal distributions or large distributional differences, known as the density-chasm problem. To address this, the authors propose Dequantified Diffusion-Bridge Interpolants (DDBI), which use diffusion processes for smooth transitions between distributions. By incorporating optimal transport theory, they extend DDBI to solve the Schrödinger-Bridge problem, creating Dequantified Schrödinger-Bridge Interpolants (DSBI). Together, these form the Dequantified Diffusion-bridge Density-Ratio Estimation (D3RE) framework, which theoretically reduces estimation error in asymptotic density ratio estimation. Experiments show D3RE's effectiveness in tasks like mutual information and density estimation.

Claims And Evidence:
1. The main result, Theorem 4.1, focuses solely on the support set expansion of DDBI compared to DI. It would be beneficial to also include results regarding the estimation error and convergence rate of the proposed density ratio estimator.
2. Corollary 4.3 asserts that DDBI reduces the variance of the estimator of r*(x). However, the proof provided in Appendix A.6 is somewhat informal. For instance, it mentions that "according to the Delta method (Cox, 2005), the variance of a density ratio estimator is inversely proportional to the effective sample size..." It is unclear what is meant by "effective sample size" in this context. A more rigorous proof is needed.
3. There appears to be some inconsistency between the notation used in the main text and the appendix. For example, the density ratio is defined as r(x) = q1(x)/q0(x) in the main text. However, in the appendix, condition (A2) requires inf_x q1(x) > c for some c > 0, which does not make sense. It seems the authors intended to assume inf_x q0(x) > c, or alternatively, that the density ratio in the appendix is defined as r(x) = q0(x)/q1(x).
In any case, the condition inf_x q1(x) > c or inf_x q0(x) > c is too strong and should be weakened.

Methods And Evaluation Criteria: It would be useful to include results directly related to the quality of the proposed density ratio estimator, such as the convergence rate and error bounds for the estimated density ratio.

Theoretical Claims: The conditions for the theoretical claims should be clearly stated in the main text, rather than in the appendix.

Experimental Designs Or Analyses: The numerical experiments are limited to simple two-dimensional models. It would be more convincing to also include some higher-dimensional examples to assess how the proposed method performs in high-dimensional settings.

Supplementary Material: Yes. I reviewed the parts related to the proof of the main result, Theorem 4.1, and its corollaries.

Relation To Broader Scientific Literature: This paper proposes a density ratio estimator based on a diffusion Schrödinger bridge process. Density ratio estimation is a challenging problem that has seen renewed interest over the past decade due to its extensive applications in generative modeling and transfer learning.

Essential References Not Discussed: It appears that essential references are included in the paper.

Other Strengths And Weaknesses: A more detailed description of how the proposed estimator is computed should be included in the main text. Although the main text references Algorithms 1 and 2, these are only found in the appendix. Additionally, the introductory materials in Sections 3 and 4 seem overly lengthy. It might be better to condense these sections and move the details to the appendix. This would allow more space for a comprehensive description of the proposed estimator and its implementation.

Other Comments Or Suggestions: No other comments.

Questions For Authors: No additional questions.

Code Of Conduct: Affirmed.

Overall Recommendation: 2
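To make the density-chasm and support-expansion issues discussed above concrete, here is a minimal numerical illustration of Gaussian dequantization (convolving each marginal with Gaussian noise so the smoothed densities are strictly positive everywhere). The distributions and the noise scale `gamma` are our toy choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two nearly disjoint 1-d distributions: a "density chasm" setting where
# the ratio q1/q0 is ill-conditioned on most of the support.
x0 = rng.normal(-4.0, 0.5, size=5000)  # samples from q0
x1 = rng.normal(+4.0, 0.5, size=5000)  # samples from q1

# Gaussian dequantization: add N(0, gamma^2) noise, i.e. convolve each
# marginal with a Gaussian, so both smoothed densities cover the real line.
gamma = 2.0
x0_s = x0 + gamma * rng.normal(size=x0.shape)
x1_s = x1 + gamma * rng.normal(size=x1.shape)

# After smoothing, both sample clouds reach the formerly empty middle
# region, which is what makes a ratio estimator trainable there.
overlap = int(np.sum((x0_s > -1) & (x0_s < 1)) + np.sum((x1_s > -1) & (x1_s < 1)))
```

The smoothed variance grows from 0.25 to roughly 0.25 + gamma^2, and the chasm around the origin now receives samples from both distributions.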
Rebuttal 1: Rebuttal: **1. Notation consistency and variance reduction proof**

We sincerely appreciate the reviewer's careful reading and valuable feedback, which have helped us improve the clarity and rigor of our presentation. Below, we address each point:

- **Notation consistency**: We have corrected a typo in the appendix, ensuring that the density ratio is consistently defined as $r(x)=q_1(x)/q_0(x)$, and replaced the condition with $\inf_x q_0(x) > c$, aligning with standard density ratio estimation.
- **Variance reduction proof**: The link between variance reduction and effective sample size in DDBI follows from the **Delta method** in density ratio estimation. For the plug-in estimator $\log \hat{r}(x) = \log[\hat{q}_1(x)/\hat{q}_0(x)]$, the Delta method yields $\operatorname{Var}[\log \hat{r}(x)] \approx [\nabla \log r(x)]^{\top} \operatorname{Var}[\hat{q}(x)] [\nabla \log r(x)]$, where $\hat{q}(x) = (\hat{q}_0(x), \hat{q}_1(x))$. This decomposes into terms scaling as $\frac{1}{q_0(x) n_{0, \text{eff}}(x)} + \frac{1}{q_1(x) n_{1, \text{eff}}(x)}$, showing an inverse dependence on the effective sample sizes $n_{0, \text{eff}}(x) = n P_0(B_\delta(x))$ and $n_{1, \text{eff}}(x) = n P_1(B_\delta(x))$. DDBI improves these effective sample sizes through **Gaussian dequantization** (ensuring $P_0(B_\delta(x)) > 0$ everywhere) and **diffusion bridging** (adding intermediate sample paths), yielding $n_{i, \text{eff}}^{DDBI}(x) = n [P_i(B_\delta(x)) + C_1 \gamma^2 + C_2 \epsilon]$.

We have revised the manuscript to correct the typo and included a more detailed justification of effective sample size. Thank you for your insightful suggestions!

**2. Convergence rate and error bounds for the estimated density ratio.**

We appreciate the reviewer's question on the theoretical analysis of our density ratio estimator. In response to this and similar feedback from Reviewer 1, we have added key theoretical results, including:

- **Gradient bounds** for the log-density ratio (Prop. 4.4), showing DSBI's tighter control via OT coupling.
- **Error bounds** (Theorem 4.5) and **convergence rate dominance** (Proposition 4.6), proving DSBI's advantage over DDBI.

For full details, we refer to our responses to Reviewers 1 and 2, where we provide:

- Complete derivations and proof sketches.
- Interpretation of the $\gamma$-scaling effects.

We appreciate the opportunity to strengthen our theoretical foundation and hope this addresses the reviewer's concerns.

**3. Method description improvement**: We sincerely appreciate the reviewer's suggestions to improve clarity. In response, we have:

- **Moved key theoretical conditions** from the appendix to Section 3, now highlighted in remark boxes for better visibility.
- **Added a new subsection** (3.4 "Implementation") providing a clear step-by-step description of the estimator and including pseudocode (Algorithms 1-2).
- **Optimized section lengths**, reducing introductory material by 20% and moving technical preliminaries to Appendix B.

These changes improve readability, and we are grateful for the reviewer's guidance.

**4. High-dimensional validation missing**

We sincerely appreciate the reviewer's valuable suggestion regarding high-dimensional validation. We apologize for not making these results more prominent in our original submission. Indeed, our experiments already include comprehensive high-dimensional validation through:

- **Mutual information estimation** for d={40,80,120}, showing superior convergence to ground truth (now highlighted in Figures 2-3, Sec. 5.2). These results show our approach maintains stable performance as dimensionality increases.
- **Large-scale density ratio estimation** on MNIST (d=784), where D3RE achieves better bits/dim scores than competing methods (Tab. 2). The neural network's ability to learn the 1-d time score function enables efficient scaling to high dimensions.

To improve clarity, we have:

- Moved high-dimensional results to a dedicated subsection (5.3).
- Added explicit discussion of dimensional scaling properties. These revisions better highlight our method’s strong performance across different dimensions. We appreciate the reviewer’s valuable input in refining our presentation.
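The effective-sample-size argument in the rebuttal above lends itself to a small numerical illustration. The sketch below is not the authors' implementation: the ball radius, the noise level `gamma`, and the Gaussian test distribution are illustrative choices. It only shows how the local sample count $n_{\text{eff}}(x) = n\,\hat{P}(B_\delta(x))$ collapses in a low-density region (where the variance term $1/(q_0(x)\,n_{\text{eff}}(x))$ in the Delta-method bound explodes) and recovers after Gaussian dequantization.

```python
import numpy as np

def effective_sample_size(samples, x, delta):
    """Empirical n_eff(x) = n * P_hat(B_delta(x)): the number of samples
    falling in the ball of radius delta around x."""
    return int(np.sum(np.abs(samples - x) <= delta))

rng = np.random.default_rng(0)
n = 5000
s0 = rng.normal(0.0, 1.0, n)   # q0 = N(0, 1)

# n_eff collapses far from the bulk of q0 -- exactly where the variance
# term 1/(q0(x) * n_eff(x)) in the Delta-method bound blows up.
n_eff_bulk = effective_sample_size(s0, 0.0, 0.25)
n_eff_tail = effective_sample_size(s0, 4.0, 0.25)

# Gaussian dequantization: adding N(0, gamma^2) noise spreads mass into
# the tail, so P_0(B_delta(x)) > 0 there and n_eff recovers.
gamma = 1.0
s0_deq = s0 + rng.normal(0.0, gamma, n)
n_eff_tail_deq = effective_sample_size(s0_deq, 4.0, 0.25)

print(n_eff_bulk, n_eff_tail, n_eff_tail_deq)
```

The same mechanism is what the rebuttal's $n_{i,\text{eff}}^{DDBI}(x) = n[P_i(B_\delta(x)) + C_1\gamma^2 + C_2\epsilon]$ expresses: the dequantization terms keep the local count bounded away from zero.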
Summary: The paper introduces Dequantified Diffusion Schrödinger Bridge for Density Ratio Estimation (D3RE), a novel framework addressing the challenges of density-chasm and support-chasm in traditional density ratio estimation (DRE). By leveraging Diffusion Bridge Interpolants (DBI) and Gaussian Dequantization (GD), the proposed method smooths transitions between distributions, enhancing stability in high-dimensional settings. The Dequantified Schrödinger Bridge Interpolant (DSBI) further integrates Optimal Transport (OTR) to solve the Schrödinger Bridge problem, ensuring robust density ratio estimation. Theoretically, the framework broadens support sets (Theorem 4.1) and improves estimation accuracy by reducing variance while maintaining minimal bias (Corollary 4.3). Empirical evaluations demonstrate superior performance in density ratio estimation, mutual information estimation, and likelihood estimation, particularly in multi-modal and high-discrepancy scenarios, highlighting D3RE’s effectiveness over prior methods such as DRE-∞ and TRE. Claims And Evidence: The paper presents a strong theoretical foundation for the Dequantified Diffusion Schrödinger Bridge for Density Ratio Estimation (D3RE) framework, with well-motivated claims regarding its ability to address the density-chasm and support-chasm problems. The authors provide theoretical justifications through Theorem 4.1 (support expansion), Corollary 4.2 (trajectory expansion), and Corollary 4.3 (variance reduction with minimal bias increase). These claims are mathematically well-supported and align with the proposed framework’s conceptual improvements over prior methods such as DRE-∞ and TRE. 
However, certain claims regarding empirical performance and practical advantages require further substantiation: (1) Support Expansion & Density-Chasm Resolution: While the theory suggests that the broader support and interpolated trajectories mitigate density-ratio estimation failures in high-dimensional, multi-modal distributions, experimental validation of this claim is limited. The paper does not explicitly compare support overlap before and after applying D3RE, making it difficult to assess whether this directly leads to improved density ratio estimation accuracy. (2) Effectiveness of Gaussian Dequantization (GD): The claim that GD improves stability in density estimation is theoretically justified, but it is unclear whether the additional noise injection affects estimation bias in real-world settings. The experiments do not compare results with and without GD, leaving room for uncertainty about its practical necessity. (3) Empirical Performance vs. Baselines: While D3RE demonstrates competitive or improved results on tasks like mutual information estimation and likelihood estimation, its performance relative to Föllmer flow-based methods is not consistently superior. Additional analysis is needed to clarify why D3RE does not outperform Föllmer methods in some cases and whether certain hyperparameter settings could further optimize its effectiveness. (4) Computational Efficiency: The paper suggests that D3RE is computationally efficient due to its diffusion-based approach and OTR integration. However, it does not provide detailed comparisons of training time, function evaluations, or memory usage relative to prior methods, which is essential to validate its practicality. Methods And Evaluation Criteria: The proposed methods and evaluation criteria in this paper are generally well-aligned with the problem of density ratio estimation, particularly in addressing the density-chasm and support-chasm issues. 
The authors introduce Diffusion Bridge Interpolants, Dequantified Diffusion-Bridge Interpolants, and the Dequantified Schrödinger-Bridge Interpolant as novel approaches to improve density estimation robustness and stability in high-dimensional, multi-modal distributions. The theoretical foundation is solid, leveraging optimal transport and Gaussian dequantization to create smooth interpolants between distributions. Theoretical Claims: The theoretical claims, including support expansion (Theorem 4.1), trajectory expansion (Corollary 4.2), and variance reduction with minimal bias (Corollary 4.3), are mathematically well-structured and follow standard results in diffusion processes and density estimation. Proposition 3.2 (uniform approximation of density ratios) and Proposition 3.1 (Schrödinger Bridge solution) are justified using Gaussian smoothing and optimal transport theory. However, key claims lack direct empirical validation, such as visualizing support expansion, quantifying bias-variance tradeoff, and analyzing computational efficiency. Strengthening these areas would further solidify the paper’s contributions. Experimental Designs Or Analyses: The experimental design aligns with the paper’s goals, evaluating density ratio estimation, mutual information estimation, and likelihood estimation on synthetic datasets and MNIST. However, key claims lack direct empirical validation: (1) Support expansion is not visualized, making it unclear how DDBI mitigates the support-chasm problem. (2) Gaussian Dequantization’s impact is not isolated through ablation studies, leaving its necessity unverified. (3) Computational efficiency is not analyzed, despite the potential overhead from Schrödinger Bridge and Optimal Transport. (4) Performance vs. Föllmer methods needs further justification, as D3RE does not consistently outperform them. Addressing these gaps would strengthen the empirical support for the proposed framework. 
Supplementary Material: I have reviewed the supplementary material, focusing on the proofs of theoretical claims (though not in detail, due to time limitations), additional experimental details, and implementation specifics. Relation To Broader Scientific Literature: The paper contributes to density ratio estimation (DRE) by addressing support-chasm and density-chasm issues, building on TRE (Rhodes et al., 2020) and DRE-∞ (Choi et al., 2022) while incorporating Diffusion Schrödinger Bridges, aligning with Schrödinger Bridge generative modeling (De Bortoli et al., 2021) and score-based diffusion methods (Song et al., 2020). The theoretical exploration of the connection between diffusion processes and DRE is valuable, but the empirical results fall short of achieving state-of-the-art (SOTA) performance in key benchmarks. If accepted, the authors are strongly encouraged to release fully reproducible code before the camera-ready submission to facilitate broader adoption and inspire future research, and to ensure that the broader research community can easily validate and extend the proposed method, further solidifying its relevance and applicability. Essential References Not Discussed: The paper contributes theoretically by exploring the connection between Schrödinger Bridge (SB) and Density Ratio Estimation (DRE), but its novelty is limited, as it primarily applies Schrödinger Bridge in a relatively direct manner to DRE without introducing fundamentally new methodological advancements. While the theoretical insights are valuable, the empirical results and methodological innovation are less compelling, as the approach does not significantly outperform existing DRE methods. Additionally, more follow-up on recent advances in Schrödinger Bridge techniques is needed to better position this work within the broader literature. Incorporating state-of-the-art SB formulations and scalable solvers could enhance both theoretical and practical contributions.
Other Strengths And Weaknesses: The paper provides a valuable theoretical perspective by bridging diffusion processes, Schrödinger Bridges (SB), and density ratio estimation (DRE), offering insights into support-chasm and density-chasm issues. The mathematical formulation, including support expansion (Theorem 4.1) and variance reduction (Corollary 4.3), is rigorous, and the role of Gaussian Dequantization (GD) in stabilizing density ratio estimation is well-justified. Conceptually, the paper effectively connects Schrödinger Bridge methods with DRE, potentially inspiring further research on stochastic interpolants for density estimation and generative modeling. However, the method itself lacks strong novelty, as it primarily applies Schrödinger Bridge techniques to DRE without major algorithmic innovations. Empirical results do not consistently establish state-of-the-art (SOTA) performance, with Föllmer-based methods outperforming D3RE in some benchmarks, raising concerns about its practical advantages. Additionally, recent advancements in scalable Schrödinger Bridge solvers are not sufficiently discussed, limiting the work’s positioning within the broader literature. The computational complexity of Schrödinger Bridge and Optimal Transport solvers is also not analyzed, making it difficult to assess the feasibility of the approach in real-world applications. Strengthening the empirical results, methodological novelty, and follow-up on recent SB techniques would significantly improve the impact of this work. If accepted, the authors are strongly encouraged to release fully reproducible code to facilitate further research in this direction. Other Comments Or Suggestions: 1. Empirical Validation of Support Expansion: The paper theoretically claims that D3RE mitigates the support-chasm issue, but there is no direct empirical visualization of support expansion. Adding density plots or trajectory visualizations comparing D3RE vs. 
prior DRE methods would strengthen this claim. 2. Ablation Study on Gaussian Dequantization: The role of Gaussian Dequantization (GD) in stabilizing density ratio estimation is well-motivated, but an ablation study comparing D3RE with and without GD would clarify its actual impact on bias-variance tradeoff. 3. Computational Efficiency Analysis: The paper does not provide runtime comparisons or computational overhead analysis for the Schrödinger Bridge and Optimal Transport solvers. Given the complexity of these methods, an efficiency study against Föllmer-based or Sinkhorn-based density estimators would help assess practical feasibility. 4. Discussion on Recent Schrödinger Bridge Advances: The paper does not discuss recent scalable Schrödinger Bridge techniques, which are relevant to its approach. Adding comparisons or references to more efficient SB solvers would improve positioning within the broader literature. 5. Performance Against Föllmer-Based Methods: While D3RE improves over DRE-∞ and TRE, it does not consistently outperform Föllmer-based methods. A deeper discussion on why D3RE underperforms in some benchmarks and potential improvements would add clarity. 6. Issue in Corollary 4.2: The definition of T and T' in Corollary 4.2 appears incorrect, which may affect the validity of the trajectory expansion argument. A careful revision of this definition is needed to ensure consistency with the theoretical framework. 7. Typo Corrections: Some minor grammatical and clarity issues were noticed, and careful proofreading before the camera-ready submission would be beneficial. 8. Reproducibility: If accepted, the authors are strongly encouraged to release fully reproducible code to allow the broader community to validate and extend the proposed method. Questions For Authors: I look forward to the authors' responses and revisions addressing all the comments provided. If the responses are insufficient, I will consider adjusting my evaluation score accordingly. 
Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **1. Empirical Validation of Support Expansion Claims and necessity of GD** We sincerely thank the reviewer for raising this important point regarding support expansion and the necessity of Gaussian dequantization (GD). We appreciate the insightful feedback and have carefully considered the suggestions. For detailed empirical validation of our support expansion claims and the role of GD in our framework, please refer to the anonymous link provided (https://www.dropbox.com/scl/fi/qk6iy2kof8772rqgvl5xh/DSBI_resulkts.docx?rlkey=keba3z39xd6dghmrbbhysa0xf&st=60scvjjc&dl=0). **2. Computational Efficiency** We appreciate the reviewer’s comment on computational efficiency. D3RE achieves superior efficiency through: (1) theoretical gains: OT regularization reduces the number of function evaluations (NFE) by 10-30% while maintaining accuracy (Fig. 5); (2) empirical results: lower NFE translates to faster training (Sec. 5.3). As NFE is a hardware-agnostic metric [2], we will provide additional wall-clock time comparisons and will expand the timing analysis in the revision. Thank you for your advice! **3. Novelty vs. direct SB application** We sincerely appreciate the insightful comments regarding the positioning of our work within recent SB advances. We have significantly strengthened our manuscript to better highlight our theoretical contributions and their relationship to state-of-the-art SB methods. Below we address the key points raised: - **(1) Theoretical innovations and novelty**: Our work makes several fundamental theoretical advances that go beyond a straightforward application of SB to DRE: We first generalize the existing DDBI framework, then propose DSBI, the first principled integration of Schrödinger Bridge with density ratio estimation. This yields several key theoretical advantages over conventional approaches: - - Optimal Transport Coupling (Prop.
4.4): DSBI's OT-driven interpolation provides superior control over density ratio smoothness through the $\gamma$-adaptive bound $\|\nabla \log r_t\| \leq \frac{C}{\gamma\sqrt{t(1-t)}}$. This allows steeper gradients in low-density regions while maintaining smoothness elsewhere, effectively addressing the density-support trade-off. - - Improved Error Bounds (Thm 4.5): DSBI achieves exponentially smaller interpolation error (O(1/γ⁴)) compared to DDBI (O(1/γ²)), particularly crucial for small γ values. - - Faster Convergence (Prop 4.6): We prove DSBI converges faster than DDBI under equivalent conditions. - **(2) Computational advances and practical contributions**: While building on SB theory, our implementation makes several practical advances: - - Developed an efficient neural solver reducing complexity from O(n³) to O(kn) - - Introduced adaptive time discretization for better empirical performance - - Demonstrated 15-20% faster convergence than recent SB baselines (Sec 5.3) - **(3) Relationship to recent SB literature**: We have expanded our discussion of modern SB techniques (now in Related Works), including: - - Comparisons to iterative proportional fitting variants - - Connections to neural SB architectures We believe these additions have strengthened both the theoretical grounding and practical relevance of our work while properly positioning it within modern SB literature. Thank you for the constructive feedback that helped us improve the manuscript. **4. Inconsistent Performance vs. Föllmer Methods** We appreciate the reviewer's insightful comments regarding comparative performance. Our key theoretical and empirical findings are: - Fundamental connection to SB: Recent theoretical work [1] has established that Föllmer flows correspond to specific solutions of the Schrödinger Bridge problem. This explains why both approaches achieve comparable performance in many settings.
- Key Advantage of Our Method: While requiring similar computational complexity to linear interpolation baselines, our approach achieves: - - Performance comparable to Föllmer flows (which require more sophisticated interpolation) - - Better stability in high-dimensional settings Our results demonstrate that through proper SB-based regularization, simple linear interpolation schemes can achieve performance competitive with more complex flow-based methods, while being substantially easier to implement and tune. **5. Technical correction and typo corrections** - Corollary 4.2 revision: We appreciate the reviewer’s keen observation. Corollary 4.2 has been revised for precise mathematical formulation, with updated definitions in the response to Q1 of Reviewer 1. - Typo corrections: We have carefully proofread the manuscript, corrected all identified issues, and ensured a polished camera-ready version. We sincerely thank the reviewer for their valuable feedback. [1] Chen Y, et al. Probabilistic Forecasting with Stochastic Interpolants and Follmer Processes. [2] Finlay C, et al. How to train your neural ODE: the world of Jacobian and kinetic regularization. ICML 2020. --- Rebuttal Comment 1.1: Comment: Thank you for the authors' response. I will maintain my score. --- Reply to Comment 1.1.1: Comment: Thank you for your time and constructive feedback. We sincerely appreciate your efforts in reviewing our manuscript and value your insights. Your suggestions have helped us better highlight the core contributions of our work and improve its rigor, thereby enhancing the overall quality of the paper. If you have any further questions or comments, we would be happy to address them.
Summary: This paper aims to overcome the density-chasm and support-chasm problems in density ratio estimation by combining diffusion bridge processes and optimal transport theory via Schrödinger bridges. The authors provide theoretical justifications, demonstrating that their proposed DDBI and DSBI expand the support and trajectory sets. Empirical results validate that the D3RE framework consistently outperforms baselines. ## update after rebuttal: I'm maintaining my score. Claims And Evidence: The claims are clear and convincing. Methods And Evaluation Criteria: The benchmarks follow prior works, and they are comprehensive and sensible. Theoretical Claims: The paper offers rigorous theoretical analyses, clearly proving the advantages of the proposed method, e.g., support and trajectory set expansions and variance reduction. Experimental Designs Or Analyses: The experimental designs are sound and valid. Extensive empirical validations on diverse synthetic datasets and on MNIST demonstrate the effectiveness of D3RE. The benchmarks follow prior works. Supplementary Material: The supplementary material is comprehensive, well-organized, and provides enough details. Relation To Broader Scientific Literature: This paper contributes to the broader research on density ratio estimation, which has numerous applications in machine learning. Essential References Not Discussed: The current set of references is robust. Other Strengths And Weaknesses: Not applicable Other Comments Or Suggestions: Not applicable Questions For Authors: Not applicable Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate this thoughtful observation and thank the reviewer for the positive comments on our paper. Thank you very much! **1. More theoretical contributions** **Proposition 4.4**: Under the DSBI interpolant $X_t = \alpha_t X_0 + \beta_t X_1 + \sqrt{t(1-t)\gamma^2} Z_t$ with $(X_0,X_1) \sim \pi_{2\gamma^2}^\star$ (OT-optimal coupling), the time-dependent density ratio $r_t(x) = q_t(x)/q_0(x)$ satisfies: $ \|\nabla \log r_t(x)\| \leq \frac{C}{\gamma \sqrt{t(1-t)}}, $ where $C$ depends on $||q_0||$, $||q_1||$. For DDBI (with independent coupling), the bound relaxes to $C/\sqrt{t(1-t)}$. Sketch of Proof: - SB Drift Representation: DSBI’s interpolant $X_t$ follows an SDE with drift $\frac{X_1 - X_t}{1-t} = \gamma^2 \nabla \log p_t(X_1|X_t)$, where $(X_0,X_1)$ are coupled via OT. - Gradient Decomposition: Express $\nabla \log r_t(x)$ as $\mathbb{E}[\frac{X_1 - X_t}{\gamma^2 (1-t)} - \nabla \log q_0(x) | X_t = x]$. - OT Coupling Effect: The OT plan minimizes $\mathbb{E}[||X_1 - X_0||]$, ensuring $||\nabla \log r_t||$ concentrates in low-density regions. - Bound Derivation: Combine Cauchy-Schwarz and the OT plan’s properties to obtain the $\gamma$-scaled bound. **Prop. 4.4 shows** that DSBI’s OT coupling yields smoother density ratios ($||\nabla \log r_t|| \sim O(1/\gamma)$), while DDBI’s independent coupling lacks this smoothing. This shows that DSBI adaptively controls the smoothness of $\log r_t(x)$ via the $\gamma$-scaled bound $\|\nabla \log r_t\| \leq C/(\gamma \sqrt{t(1-t)})$. Unlike DDBI’s uniform bound, this allows steeper gradients in low-density regions (bridging density chasms) while maintaining smoothness elsewhere, thus balancing density- and support-chasm trade-offs through $\gamma$.
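The coupling effect behind Prop. 4.4 can be seen in one dimension, where the optimal transport plan between two samples is simply the monotone (sorted) pairing. The sketch below is illustrative only: it assumes the common linear bridge schedule $\alpha_t = 1-t$, $\beta_t = t$ (the proposition does not fix a schedule) and Gaussian toy distributions; it compares the mean transport distance $\mathbb{E}|X_1 - X_0|$ under independent (DDBI-style) versus OT (DSBI-style) coupling.

```python
import numpy as np

rng = np.random.default_rng(1)
n, gamma, t = 2000, 0.5, 0.5
x0 = rng.normal(0.0, 1.0, n)   # samples from q0
x1 = rng.normal(1.0, 1.0, n)   # samples from q1 (overlapping with q0)

def bridge_interpolant(x0, x1, t, gamma, rng):
    """X_t = (1-t) X_0 + t X_1 + sqrt(t(1-t)) * gamma * Z with Z ~ N(0, 1),
    i.e. the interpolant of Prop. 4.4 with alpha_t = 1-t, beta_t = t."""
    z = rng.normal(size=x0.shape)
    return (1 - t) * x0 + t * x1 + np.sqrt(t * (1 - t)) * gamma * z

# DDBI-style independent coupling: pair the samples as drawn.
indep_cost = np.mean(np.abs(x1 - x0))

# DSBI-style OT coupling: in 1-d the sorted pairing is the OT plan,
# minimizing the mean transport distance E|X_1 - X_0|.
x0_ot, x1_ot = np.sort(x0), np.sort(x1)
ot_cost = np.mean(np.abs(x1_ot - x0_ot))

xt_ot = bridge_interpolant(x0_ot, x1_ot, t, gamma, rng)
print(indep_cost, ot_cost)  # OT pairing yields a smaller mean distance
```

The shorter, monotone transport paths under the OT coupling are the mechanism the proof sketch invokes when it says the OT plan concentrates $\|\nabla \log r_t\|$ in low-density regions.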
Summary: This paper addresses the density-chasm problem in density ratio estimation. The authors propose using diffusive interpolants and Gaussian dequantization, and they theoretically and experimentally verify that these methods can mitigate the problem. Additionally, they demonstrate that incorporating Schrödinger bridges into the proposed method helps improve the performance of density ratio estimation. Claims And Evidence: The claims are supported by clear and convincing evidence. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are appropriate for the problem. Theoretical Claims: In Theorem 4.1, the fact that the support of $q_t'$ is $\mathbb{R}^d$ is trivial, which makes the theorem and its proof somewhat misleading. Additionally, in Theorem 4.2, the definition of the trajectory set appears to need refinement to be more appropriate within the context of stochastic processes. Experimental Designs Or Analyses: The experimental designs and analyses are sound and valid. Supplementary Material: I reviewed Appendix A. Relation To Broader Scientific Literature: While individual components such as diffusive interpolants have appeared in previous work, this paper makes a novel contribution by applying interpolation using Schrödinger bridges to density ratio estimation. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The paper is clearly written. The work would be strengthened by additional theoretical results on DSBI. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **1. Theoretical refinements for Theorems 4.1 and 4.2** We appreciate the reviewer’s insightful feedback, which has helped us improve the clarity and rigor of Theorems 4.1 and 4.2. The key refinements in the revised manuscript are: - **Theorem 4.1:** We have explicitly clarified that the support relation $\text{supp}(q'_t) \supseteq \text{supp}(q_t)$ becomes strict for $\gamma > 0$ and $t \in (0,1)$. The proof now includes (i) an explicit construction of the Minkowski sum $\text{supp}(q'_t) = \text{supp}(q_t) \oplus \text{supp}(\mathcal{N}(0, \Sigma_t))$, and (ii) quantitative analysis of support expansion under Gaussian perturbations. These refinements provide a more precise mathematical justification while maintaining the theorem’s core intuition about noise-induced support expansion. - **Theorem 4.2:** We have refined the definition of the trajectory sets to properly account for stochastic process properties: $\mathcal{T}=\{\omega : [0,1] \rightarrow \mathbb{R}^d \mid \omega(t) \in \text{supp}(q_t) \ \forall t\}$ (DI case) and $\mathcal{T}' = \{\omega : [0,1] \rightarrow \mathbb{R}^d \mid \omega(t) \in \text{supp}(q'_t) \ \forall t\}$ (DDBI case). The analysis now explicitly addresses: (i) path regularity, (ii) the almost-sure containment relation $\mathbb{P}(\mathcal{T}' \supseteq \mathcal{T}) = 1$, and (iii) the role of the noise process $\{Z_t\}$. These refinements improve mathematical precision while preserving the theorems’ key insights on support and trajectory expansion. We believe these updates address the reviewer’s concerns effectively. **2. More theoretical contributions on DSBI** We appreciate the reviewer’s suggestion, which helped us enhance the theoretical analysis of DSBI. Key improvements include results on DSBI’s smoothness control and convergence rates (Theorem 4.5, Propositions 4.4 and 4.6). These updates strengthen our theoretical foundation, and we thank the reviewer for their valuable feedback.
**Theorem 4.5 (Error bounds of DDBI and DSBI)**: Let q_0, q_1 be distributions with finite second moments, and $\epsilon = \Theta(\gamma^2)$ the dequantization noise. Define the variance-to-transport ratio as $ \kappa := \frac{\text{Var}(X_0) + \text{Var}(X_1)}{W_2^2(q_0, q_1)}$. Suppose $\kappa > 1$ (i.e., the sum of variances dominates the squared Wasserstein distance) and $\gamma^2 \ll \min(1, W_2^2(q_0, q_1)/d)$. Then, there exists a critical value $ \gamma_{\max} = \sqrt{ \frac{W_2^2(q_0, q_1)}{\text{Var}(X_0) + \text{Var}(X_1) - W_2^2(q_0, q_1)} }$, such that for all $\gamma \in (0, \gamma_{\max})$, the interpolation errors satisfy $E_{DSBI}<E_{DDBI}$. Sketch of Proof: - Error decomposition: For DSBI, the interpolation error $E_{DSBI}$ is bounded by the Wasserstein-2 distance (due to OT coupling) plus a dimension-dependent term from the entropic regularization: $E_{DSBI}\leq\frac{W_2^2(q_0, q_1)}{\gamma^4}+\frac{4d}{\gamma^2} + C\epsilon^2$. For DDBI, the independent coupling leads to a larger error dominated by the sum of variances:$E_{DDBI} \leq \frac{\text{Var}(X_0) + \text{Var}(X_1)}{\gamma^2} + C\epsilon^2$. - Dominance condition: Compare the leading-order terms:$\frac{W_2^2}{\gamma^4} < \frac{\text{Var}(X_0) + \text{Var}(X_1)}{\gamma^2} \implies \gamma^2 < \frac{W_2^2}{\text{Var}(X_0) + \text{Var}(X_1)}$. The critical value $\gamma_{\max}$ follows from solving for $\gamma$ when $\kappa = \frac{\text{Var}(X_0) + \text{Var}(X_1)}{W_2^2} > 1$. - Validity of $\gamma_{\max}$: For $\gamma < \gamma_{\max}$, the OT term in DSBI decays faster than the IID term in DDBI, ensuring $E_{DSBI}<E_{DDBI}$. 
**Proposition 4.6 (Convergence Rate Dominance of DSBI)** Under the same assumptions as Theorem 4.5, the asymptotic interpolation errors satisfy: $\limsup_{\gamma \to 0^+} \frac{E_{DDBI}}{E_{DSBI}} = \limsup_{\gamma \to 0^+} \frac{\frac{\text{Var}(X_0) + \text{Var}(X_1)}{\gamma^2} + C_2(\gamma,d)}{\frac{W_2^2(q_0, q_1)}{\gamma^4} + C_1(\gamma,d)} = +\infty,$ where $C_1(\gamma,d) = \frac{4d}{\gamma^2} + C\epsilon^2$ and $C_2(\gamma,d) = C\epsilon^2$. Sketch of Proof: - Error term dominance: For $\gamma \to 0^+$, the leading-order terms dominate: $E_{DSBI} \sim \frac{W_2^2}{\gamma^4}$, $E_{DDBI} \sim \frac{\text{Var}(X_0) + \text{Var}(X_1)}{\gamma^2}$. - Ratio analysis: Compute the limit: $\frac{E_{DDBI}}{E_{DSBI}} \sim \frac{(\text{Var}(X_0) + \text{Var}(X_1))/\gamma^2}{W_2^2/\gamma^4} = \kappa \gamma^2$. Since $\kappa>1$ and $\gamma^2 \to 0$, the ratio $\to 0$, implying $E_{DDBI}/E_{DSBI} \to +\infty$ (i.e., DSBI’s error decays strictly faster). - Residual terms: The subdominant terms $C_1(\gamma,d)$ and $C_2(\gamma,d)$ are $o(1/\gamma^4)$ and $o(1/\gamma^2)$, respectively, and thus negligible in the limit. These results highlight DSBI’s advantage: - Theorem 4.5 shows DSBI achieves lower interpolation error, $E_{DSBI} \sim O(1/\gamma^4)$ vs. $E_{DDBI} \sim O(1/\gamma^2)$, especially for small $\gamma$. - Prop. 4.6 proves DSBI’s faster convergence.
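The quantities appearing in Theorem 4.5 are straightforward to evaluate empirically in one dimension, where $W_2^2$ admits the closed-form quantile (sorted-sample) coupling. The sketch below only computes the definitions ($\kappa$ and $\gamma_{\max}$) for illustrative Gaussians; it is a numerical reading of the theorem's assumptions, not a verification of the bounds themselves.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
x0 = rng.normal(0.0, 1.0, n)   # samples from q0
x1 = rng.normal(0.5, 1.2, n)   # samples from q1

# Ingredients of Theorem 4.5, estimated empirically in 1-d:
var_sum = x0.var() + x1.var()                      # Var(X0) + Var(X1)
w2_sq = np.mean((np.sort(x1) - np.sort(x0)) ** 2)  # W_2^2 via quantile coupling

# Variance-to-transport ratio; the theorem assumes kappa > 1.
kappa = var_sum / w2_sq

# Critical noise level below which the theorem asserts E_DSBI < E_DDBI.
gamma_max = np.sqrt(w2_sq / (var_sum - w2_sq))
print(kappa, gamma_max)
```

For these distributions the variance sum dominates the squared Wasserstein distance by a wide margin, so the $\kappa > 1$ regime of the theorem is easy to satisfy.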
DAMA: Data- and Model-aware Alignment of Multi-modal LLMs
Accept (poster)
Summary: In this paper, the authors propose DAMO, an innovative data- and model-aware alignment strategy for Multi-modal Large Language Models (MLLMs). Specifically, a data-aware strategy is introduced to enhance the model's adaptability to data hardness, and a model-aware strategy is proposed to facilitate more effective optimization based on the model's current responses. The authors conduct extensive experiments for evaluation, and the promising results demonstrate the method's effectiveness. Claims And Evidence: The authors' claims regarding improving Multi-modal Large Language Model (MLLM) alignment through the data- and model-aware strategy are well-supported. Extensive experiments across multiple datasets with multiple metrics demonstrate consistent improvements compared to baseline methods. The results effectively demonstrate that enhancing MLLM alignment via the data- and model-aware strategy improves alignment performance. Methods And Evaluation Criteria: The proposed method logically integrates data hardness and model responses by modulating $\beta$, strengthening effectiveness without introducing additional computation. The evaluation criteria for the different benchmarks are standard and appropriate for MLLM tasks. The use of multiple benchmarks, along with comparisons to existing methods, validates the evaluation framework's robustness. Theoretical Claims: The theoretical foundation of the method is sound. The approach is based on the DPO method, which is well-supported by experimental results. Experimental Designs Or Analyses: The experimental designs are comprehensive and valid. The authors evaluate their method on diverse benchmarks, demonstrating consistent performance improvements across different scenarios. Supplementary Material: I reviewed the supplementary material, which provides additional experimental results that strengthen the paper's claims and provide deeper insight into the method's effectiveness.
Relation To Broader Scientific Literature: This work makes significant contributions to the field of multi-modal large language models and preference alignment. The proposed method advances the state-of-the-art by demonstrating how data hardness and model responses can be effectively and efficiently integrated into MLLM alignment. Essential References Not Discussed: The paper adequately cites relevant works in multi-modal large language models and preference alignment. Other Strengths And Weaknesses: Strengths: 1. Integrating data- and model-knowledge into MLLM alignment is an innovative idea, bringing new perspectives to the MLLM community. 2. The paper is well-organized, and the figures illustrating the problems and approaches are clear. 3. Extensive evaluations, encompassing both quantitative metrics and qualitative assessments, demonstrate the effectiveness of the method. Weaknesses: To deepen the understanding of the proposed approach, the authors could provide more detailed analysis to demonstrate how the data- and model-aware strategy influences performance, for instance, by visualizing and analyzing the dynamics of $\beta$ during the training procedure for further illustration. Other Comments Or Suggestions: See weaknesses. Questions For Authors: See weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Response to Reviewer $\color{green}\text{gFQs}$: We sincerely thank you for your invaluable and constructive feedback. We particularly appreciate your positive acknowledgement of our novelty, clear organization, and extensive experimental validations. Below we provide point-to-point responses to address your concerns about our approach. > **In-depth Analysis of $\beta$.** To deepen the understanding of the proposed approach, the authors could provide more detailed analysis to demonstrate how the data- and model-aware strategy influences performance... Thank you for pointing this out. Your suggestion is invaluable for this paper. To address this, we show the dynamics of $\beta$ with respect to the data hardness, the model responses, and their combination in Figures 4, 5, 6 of **[the Anonymous Link](https://anonymous.4open.science/r/Rebuttal_DAMO-05C2/README.md)**. - **1. $\beta_{D}$ for the data hardness (Figure 4 in [Anonymous Link](https://anonymous.4open.science/r/Rebuttal_DAMO-05C2/README.md)).** From Figure 4, we can observe that the range of $\beta_{D}$ falls within (0.0524, 0.1428) with the original $\beta$ initialized as 0.1. Moreover, we observe that the mean value is 0.0999 with a standard deviation of 0.0288. These observations demonstrate that $\beta_{D}$ maintains proximity to the original $\beta$, while adaptively adjusting based on the data characteristics, enabling a more effective capture of the data. - **2. $\beta_{M}$ for the model responses (Figure 5 in [Anonymous Link](https://anonymous.4open.science/r/Rebuttal_DAMO-05C2/README.md)).** From Figure 5, we find that as the training progresses, $\beta_{M}$ gradually converges to the original $\beta$. Meanwhile, we find that $\beta_{M}$ fluctuates within a moderate range of (0.0530, 0.1580), demonstrating controlled adaptivity. These observations suggest that as the model training stabilizes, its responsiveness becomes more consistent and eventually approaches a steady state.
- **3. $\beta_{C}$ by combining both (Figure 6 in [Anonymous Link](https://anonymous.4open.science/r/Rebuttal_DAMO-05C2/README.md)).** From Figure 6, we can observe that combining both the data- and model-aware strategies yields a more dynamic range of beta values, spanning from 0.0177 to 0.2261, which is wider than either strategy alone. Moreover, while the value eventually stabilizes around 0.1, we notice that the mean value during the training stage is slightly lower than that of $\beta_{M}$. This suggests that it relaxes the constraints during training based on the data hardness, enabling the model to better capture fine-grained data patterns and thereby adaptively enhancing its responsiveness to data characteristics. Thank you again for your kind suggestions and supportive feedback. If you have any additional questions, we would be pleased to discuss them with you.
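For concreteness, the adaptive-$\beta$ mechanism analysed above can be sketched as a per-pair modulation of the standard DPO objective. The sketch below is ours, not the paper's exact formulation: the sigmoid factors and the `hardness` and `reward_gap` inputs are illustrative assumptions, chosen only so that the effective $\beta$ stays centred on the original value, mirroring the ranges reported for $\beta_{D}$, $\beta_{M}$, and $\beta_{C}$.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def adaptive_beta(beta0, hardness, reward_gap):
    # Hypothetical data-aware factor: harder pairs relax the constraint
    # (smaller beta); 2*sigmoid(.) keeps each factor in (0, 2), so the
    # effective beta fluctuates around beta0 (e.g., 0.1).
    alpha_d = 2.0 * sigmoid(-hardness)
    # Hypothetical model-aware factor: a larger implicit reward gap
    # (an already well-separated pair) tolerates a larger beta.
    alpha_m = 2.0 * sigmoid(reward_gap)
    return beta0 * alpha_d * alpha_m

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta):
    # Standard DPO objective for one preference pair:
    # -log sigmoid(beta * [(logp_w - ref_logp_w) - (logp_l - ref_logp_l)]).
    margin = (logp_w - ref_logp_w) - (logp_l - ref_logp_l)
    return -math.log(sigmoid(beta * margin))
```

With `hardness = reward_gap = 0` the effective $\beta$ reduces to `beta0`, so the sketch degenerates to vanilla DPO.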
Summary: The paper examines the inherent property of DPO regarding its imbalanced responsiveness to data with varying difficulty levels and proposes Data- and Model-aware DPO (DAMO) to address this issue. Experiments across various benchmarks demonstrate that DAMO enhances both trustworthiness and general task performance. **update after rebuttal** The authors have effectively addressed my principal concerns pertaining to the implementation details and have supplemented their work with additional experiments that underscore the generalizability of DAMO. Therefore, I have opted to retain my initial rating. Claims And Evidence: All pivotal claims in the paper are supported by empirical interpretation or systematic experimental validation. Methods And Evaluation Criteria: The proposed method DAMO is rational in tackling the imbalanced responsiveness issue, and the benchmark selection also makes sense. Theoretical Claims: All theoretical claims (the function and design of data-aware/model-aware preference optimization), including the proofs of key formulas, are accurate and validated. Experimental Designs Or Analyses: The performance validation in the article (e.g., the selection and analysis of hallucination benchmarks reflecting trustworthiness and general benchmarks) and the construction of the ablation studies are methodologically sound and comprehensive. Supplementary Material: The submission does not contain any supplementary materials. Relation To Broader Scientific Literature: The paper's key contributions are primarily related to Direct Preference Optimization (Rafailov et al., 2024). Essential References Not Discussed: Works directly relevant to contextualizing the paper's contributions are appropriately cited and discussed. Other Strengths And Weaknesses: **Strengths:** The article demonstrates high writing quality with a well-organized content structure, ensuring ease of comprehension throughout.
**Weaknesses:** The paper lacks implementation details for the response split procedure in data-aware preference optimization. Specifically, the prompt templates used in LLaMA3 and examples of partitioned sub-sentences should be reported. Other Comments Or Suggestions: Line 208 (right): "the momentum γ is set to 0.9, and **$\bar{H}$** is initialized to 0." **$\bar{H}$** should be **$\bar{R}$**. Questions For Authors: 1. Is the probability difference in Equation (6) sensitive to the LLM's sub-sentence segmentation strategy, particularly regarding the number of sub-sentences generated? 2. If such sensitivity exists, how does the segmentation granularity empirically impact the performance of DAMO? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Response to Reviewer $\color{red}\text{TUJo}$: We highly appreciate your insightful comments and acknowledgment of our contributions! Your constructive criticism is invaluable in refining our work! We organize your concerns into the following 3 aspects: > **Q1. Clarification about sub-sentence construction.** Thank you for pointing this out! We provide our detailed prompt template as: ``` You are an expert in extracting facts from the given question-answer pair about an image. Your task is to: Analyze the provided question-answer pair based on the image, extract all factual statements from the answer, and rewrite them into self-contained sentences. \n\n Requirements for each sentence are: \n1. complete, each sentence must be self-contained; \n2. factual (omit opinions, subjective statements); \n3. concise (no more than 77 tokens). \n\n Format your result strictly as:\n### Facts:\n- {Fact 1 (e.g., "A red shoe sits on a wooden floor.")}\n- {Fact 2 (e.g., "The shoe has laces and a white sole.")}\n- ...\n\n### Question-answer pair: Question: "{Question}" Answer: "{Answer}" ``` For the very limited number of sub-sentences with more than 77 tokens (2/113156 for preferred, 1/114268 for rejected), we apply truncation over them. Here is a representative example demonstrating our sub-sentence construction: ``` "question": "Is this book related to Literature & Fiction?", "answer": "No, this book is not related to Literature & Fiction. It is a religious or theological book, as evident from the title \"What Love Is This? Calvinism's Misrepresentation of God\" by Dave Hunt." "facts": [ "The book is not related to Literature & Fiction.", "The book is a religious or theological book.", "The title of the book is \"What Love Is This? Calvinism's Misrepresentation of God\" by Dave Hunt."] ``` > **Q2. Evaluation of different sub-sentence granularities.** Thank you for pointing this out! 
To validate this, we modify the prompt template by replacing the `77` in `no more than 77 tokens` with `60` and `50`.

Table 1: Token length of the segmented sub-sentences from the preferred responses (22,626 responses in total)

| Tokens | > 77 | 60-77 | 50-60 | 40-50 | 30-40 | 20-30 | < 20 | Total sub-sentences |
|-|-|-|-|-|-|-|-|-|
| `less than 77 tokens` | 0.002% | 0.005% | 0.047% | 0.540% | 5.161% | 30.659% | 63.586% | 113,156 |
| `less than 60 tokens` | 0.007% | 0.004% | 0.048% | 0.536% | 5.176% | 30.778% | 63.451% | 113,040 |
| `less than 50 tokens` | 0.006% | 0.004% | 0.040% | 0.532% | 5.194% | 30.734% | 63.490% | 113,085 |

Table 2: Token length of the segmented sub-sentences from the rejected responses (22,626 responses in total)

| Tokens | > 77 | 60-77 | 50-60 | 40-50 | 30-40 | 20-30 | < 20 | Total sub-sentences |
|-|-|-|-|-|-|-|-|-|
| `less than 77 tokens` | 0.001% | 0.014% | 0.066% | 0.554% | 5.228% | 30.630% | 63.507% | 114,268 |
| `less than 60 tokens` | 0.001% | 0.010% | 0.067% | 0.562% | 5.264% | 30.552% | 63.544% | 114,247 |
| `less than 50 tokens` | 0.006% | 0.004% | 0.048% | 0.524% | 5.194% | 30.734% | 63.490% | 114,218 |

Tables 1 and 2 show the statistics of the segmented sub-sentences. The statistics strongly support the robustness of our approach, with over 99% of sub-sentences containing fewer than 40 tokens, and more than 63% having fewer than 20 tokens. Due to the inherent inability of the LLM to precisely handle the length constraints [1], we manually truncate the sub-sentences exceeding the length constraints at the different scales. The distributions of $\delta$ are in Figures 1, 2, 3 of **[the Anonymous link](https://anonymous.4open.science/r/Rebuttal_DAMO-05C2/README.md)**, and subtle differences in $\delta$ can be observed over the different segmentation strategies.

Table 3: Performance over the Object-hal bench.
method | response | mention
-|-|-
`less than 77 tokens` | 82.54 | 90.64
`less than 60 tokens` | 82.25 | 90.08
`less than 50 tokens` | 81.78 | 90.20

Moreover, we also evaluate the models trained with the different $\delta$ values in Table 3, and observe only subtle differences. These comprehensive statistics and experimental results demonstrate DAMO's robustness across different segmentation granularities.

> **Q3. Typo Correction.** Line 208 (right): "the momentum $\gamma$ is set to 0.9, and $\bar{H}$ is initialized to 0." $\bar{H}$ should be $\bar{R}$.

We acknowledge this typographical error and have corrected it in the revised paper. Thank you again for your valuable and insightful suggestions. We welcome any additional questions and would be happy to discuss them further.

[1] Yuan et al. "Following length constraints in instructions."
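The momentum update corrected in Q3 ($\gamma = 0.9$, $\bar{R}$ initialized to 0) follows the standard exponential-moving-average pattern. A minimal sketch; the tracked quantity `r` here is a placeholder for illustration, not the paper's exact statistic:

```python
def ema_update(r_bar, r, gamma=0.9):
    # Momentum-style running average: blend the new observation r
    # into the running statistic r_bar with weight (1 - gamma).
    return gamma * r_bar + (1.0 - gamma) * r

r_bar = 0.0  # initialized to 0, as in the corrected sentence
for r in [1.0, 1.0, 1.0]:
    r_bar = ema_update(r_bar, r)
# after three unit observations, r_bar = 0.271, partway toward 1
```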
Summary: Authors propose a variant of DPO where the Beta hyperparameter is adapted dynamically depending on model- and data-awareness. Authors postulate the existence of easy- and hard-to-distinguish examples in alignment training settings, and therefore propose a dynamic strategy to adjust the regularization accordingly. Evaluation is reported on 5 benchmarks, including Object HalBench. Claims And Evidence: - Authors claim that introducing more regularization through Beta for easy-to-distinguish examples, and more regularization at the batch level, helps the model to learn from preferences, leading to improved results on hallucination benchmarks. However, this claim might be problematic to verify because all the experiments have been conducted on LLaVA 1.5, which is a model lagging fairly behind on current VLM benchmarks compared to more modern alternatives. One could have probably found this work more compelling if those experiments had been conducted on a more modern LLaVA (e.g., LLaVA 1.6) or different models (e.g., InternVL2.5 or QwenVL2). - Besides, this work only focuses on hallucinations, while there is clearly a link with the helpfulness of the responses given by aligned models. A non-helpful model with short answers will always lead to fewer hallucinations, and hence improved scores on the reported benchmarks. The claims made in this work would be better supported with an extended view of the problem, i.e., not just the hallucinations. - Finally, while the formalization proposed in this work is extensive, and at times even a bit scholarly, as in Section 2, one could have preferred having a more extended experimental setup with more than just LLaVA 1.5 and more than one training dataset. Methods And Evaluation Criteria: - Authors focus on hallucinations, which is a common pattern in recent multimodal alignment papers, but seem to ignore helpfulness altogether, where the 'preferred response' is more helpful than the 'rejected response'.
However, one could argue there is a clear relationship between helpfulness and hallucinations: it is easy to trick the hallucination benchmarks reported in this work with shorter answers, where the likelihood of producing hallucinations is inherently smaller, but so is the helpfulness. One could have appreciated benchmarks beyond hallucinations, along with a report on response lengths. - In that regard, it is not surprising to see no improvement on the LLaVA-Bench benchmark, which arguably might be the only benchmark that gives a measure of the helpfulness of a model. Theoretical Claims: - See 'Claims And Evidence'. Experimental Designs Or Analyses: - The LLaVA 1.5 model has been used in numerous multimodal alignment papers in 2024. It is surprising to see it explored in this work yet another time, with a training dataset that was already introduced in the RLAIF-V paper. There is a possible experimental flaw in constraining this work to that small experimental setting. Supplementary Material: - The Appendix only provides two examples of generation. Given the doubt about helpfulness vs. hallucinations, one could appreciate having a panel of responses, such as LLaVA 1.5 non-aligned, LLaVA 1.6 non-aligned, InternVL2.5, QwenVL2, GPT-4o, at the very least. Relation To Broader Scientific Literature: Numerous papers have been published in 2023 and 2024 about using LLaVA 1.5 for alignment. Today, in 2025, one hopes to see the community explore further experimental settings beyond LLaVA 1.5. This work explores the pair of LLaVA 1.5 + RLAIF-V preferences that has already been explored in a previous work. The novelty lies only in the definition of the regularization parameter B of the DPO loss. Essential References Not Discussed: - Understanding Alignment in Multimodal LLMs: A Comprehensive Study, Amirloo et al., 2024, performed alignment evaluation on LLaVA 1.6.
It is unclear why the work evaluated here (a) reports experiments on LLaVA 1.5 instead of LLaVA 1.6, and (b) does not compare results with Amirloo et al., 2024 on that very similar topic. Other Strengths And Weaknesses: - Two important weaknesses of this work are (a) the lack of experimental novelty: this work uses LLaVA 1.5 like numerous other papers before, and trains it on the RLAIF-V preferences, exactly like Yu et al., 2024; and (b) the single-dimension view of the problem, where the focus is on hallucinations, leaving aside the helpfulness of the model. It is likely that the aligned model might be less helpful and provide vague answers, explaining the higher scores on hallucination benchmarks but the lack of improvement on LLaVA-Bench. Other Comments Or Suggestions: - Please do not mix serif (body text, Figure 4) and non-serif fonts (Figures 1, 2, 3). - L081: "Similarly to Section 3.1". It is odd to see this statement in Section 1. Is that intended? - Generally, the paragraph L079-L086 does not explain what Model-aware Preference Optimization is. We learn it is similar to another section, but it is not stated how exactly the gap between chosen vs. rejected responses is used to scale Beta. Can you try to (a) make the overall process explicit, and (b) avoid referring to future sections when the structure of the paper is not yet introduced? - Could all of Section 2 be removed from the paper, as that topic has already been presented in many other papers? For instance, (Tang et al., 2024 - Generalized Preference Optimization: A Unified Approach to Offline Alignment) or (Tang et al., 2024 - Understanding the performance gap between online and offline alignment algorithms) are clear references on that matter. In particular, it is a bit scholarly to introduce PPO when this work is about offline direct alignment. - Equation 8: is there an extra dot after D? Questions For Authors: Dear authors, - Have you considered evaluating the helpfulness of your model along with the hallucinations?
Optimizing only for hallucinations might introduce an important bias in the helpfulness of the responses of the aligned model. One can easily trick hallucination benchmarks with overly short or vague answers. The lack of improvement on LLaVA-Bench, arguably the only benchmark evaluating helpfulness, tends to point in that direction. - Have you considered using another model than LLaVA 1.5, which by today's standards is fairly old and has already been explored by 10+ other multimodal alignment papers since 2023? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Response to Reviewer $\color{blue}\text{ERND}$: We highly appreciate your insightful comments, which help us a lot to better scrutinize and polish our work! The following are point-to-point responses.

> **Q1. Implementation with more advanced models (e.g., LLaVA 1.6 and LLaVA-OneVision) makes DAMO more compelling.**

Thank you for your kind suggestion. We extend DAMO to more advanced models like LLaVA 1.6 and LLaVA-OneVision with a more comprehensive alignment dataset [1]. Due to the GPU constraint, **only the MLP layer is finetuned for LLaVA-OneVision**, and all models are finetuned for one epoch. As shown in Table 1, DAMO still demonstrates consistent performance gains across these advanced models.

Table 1: Performance on the Object-hal bench.

Model | Response | Sentence | Average Length
--|--|--|--
LLaVA-1.6 | 84.36 | 91.10 | 194.75
LLaVA-1.6 + DAMO | **85.61** | **92.39** | 182.06
LLaVA-OneVision | 81.48 | 90.29 | 223.08
LLaVA-OneVision + DAMO | **85.82** | **91.22** | 243.86

> **Q2. Measurements over more helpfulness benchmarks further support our effectiveness.**

Thank you for your kind suggestion regarding model helpfulness. Our evaluation on **LLaVA-Bench and MM-Vet** (Table 4 of the paper) shows that both our 7B and 13B models achieve competitive performance, with notable improvements of 6% and 9% over their baseline counterparts, respectively. Furthermore, we extend the helpfulness evaluation to the MME Bench with advanced models. As shown in Table 2, DAMO consistently improves both perception and cognition capabilities across different model architectures, demonstrating its effectiveness in enhancing both helpfulness and hallucination resistance.

Table 2: Performance on the MME bench.
Model | Perception | Cognition
--|--|--
LLaVA-1.6 | 1498 | 286
LLaVA-1.6 + DAMO | **1503** | **291**
LLaVA-OneVision | 1565 | 335
LLaVA-OneVision + DAMO | **1571** | **339**

**Notes:** We sincerely appreciate your valuable feedback regarding the helpfulness and advanced models. Our extensive experiments demonstrate that DAMO achieves consistent and significant improvements across:

1. Multiple model architectures (LLaVA-1.5 / 1.6, LLaVA-OneVision)
2. Extensive benchmark suites, including:
   - Helpfulness (MME, MMVet, LLaVA-Bench)
   - Hallucination (Object-Hal, AMBER, MM-Hal)

**Most importantly**, DAMO serves as a plug-and-play mechanism that can be seamlessly integrated into various architectures with different alignment data, while maintaining minimal computational overhead. This versatility and efficiency, combined with consistent performance gains across different settings, support the effectiveness and practical value of DAMO.

> **Q3. Adding essential related work.** Understanding Alignment in Multimodal LLMs: A Comprehensive Study, Amirloo et al., 2024.

Thank you for providing this related work. We have added this method to our revised paper, `As a representative method, BDHS significantly advances the MLLM community by pioneering the alignment techniques into the advanced LLaVA 1.6 architecture, effectively bridging the gap between theoretical MLLM alignment research and practical applications.`

> **Q4. Clarification about the model-aware strategy.**

Sorry about the confusion. Let us clarify our model-aware strategy that adjusts $\beta$ according to the implicit reward gap between the preferred $y_w$ and rejected $y_l$. Specifically:

1. A larger $\beta$ is assigned to a larger reward gap between $y_w$ and $y_l$, which indicates that the model has already grasped this type of response well.
2. A smaller $\beta$ is assigned to a smaller reward gap between $y_w$ and $y_l$, which suggests that the model needs to improve its responsiveness on such cases.
This adaptive scaling mechanism helps the model focus more on cases with less confidence, while maintaining its performance on well-learned cases. > **Q5. Presentation refinement.** * Please do not mix serif (body-text, Figure 4) and non-serif font (Figure 1, 2, 3). Thank you for pointing this out. We have unified the font in our revised paper. * L081: "Similarly to Section 3.1". It is odd to see this statement in Section 1. Thank you for your kind suggestions. We have removed such descriptions and polished this as discussed in **Q4**. * All Section 2 could be removed ... While this section provides essential background for researchers new to preference alignment in MLLMs, we have significantly condensed it to improve paper conciseness. * Equation 8, is there an extra dot after D? Sorry about the confusion. The first dot is the dot product, and the second dot is the period at the sentence end. We have modified it as $\beta_{D} = \beta \times \alpha_{D}$. * one could appreciate to have a panel of responses in the appendix ... Thank you for the constructive suggestions; we have added these analyses to the revised paper. [1] Yi-Fan Zhang et al., MM-RLHF: The Next Step Forward in Multimodal LLM Alignment, 2025.
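The two rules clarified in Q4 can be sketched with DPO's implicit reward, $r(y) = \beta \log \frac{\pi(y|x)}{\pi_{\text{ref}}(y|x)}$. The monotone map `model_aware_beta` and its `scale` parameter below are illustrative assumptions, not the paper's exact formula:

```python
import math

def implicit_reward_gap(logp_w, ref_logp_w, logp_l, ref_logp_l, beta=0.1):
    # Gap between the implicit rewards of the preferred and rejected
    # responses; large when the model already separates the pair well.
    return beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))

def model_aware_beta(beta, gap, scale=1.0):
    # Monotone map realising the two rules: a zero gap leaves beta
    # unchanged, a larger gap raises it, a smaller (negative) gap lowers it.
    return beta * 2.0 / (1.0 + math.exp(-scale * gap))
```

A smaller effective $\beta$ on low-gap pairs weakens the pull toward the reference policy exactly where the model has not yet separated preferred from rejected responses.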
Visual Attention Never Fades: Selective Progressive Attention ReCalibration for Detailed Image Captioning in Multimodal Large Language Models
Accept (poster)
Summary: This paper focuses on improving detailed image captioning quality in VLMs. The authors argue that existing models struggle to maintain strong visual attention when generating longer captions, causing increased noise and reduced recall. To fix this, they propose a method that selectively strengthens visual attention by tracking significant changes in attention values over time. The method also reinforces consistently important visual tokens during decoding. Claims And Evidence: - The authors claim their method improves visual attention quality, but the paper only shows results on captioning benchmarks. There’s no direct evidence proving attention actually got better. The attention maps provided still look pretty noisy, both before and after applying their method. - They mention “minimal computational overhead” but don’t back it up with runtime numbers or memory comparisons, so this efficiency claim isn’t convincing. Methods And Evaluation Criteria: The benchmarks (CLAIR, CHAIR) and datasets (IIW-400, DOCCI, MS-COCO) used to evaluate caption quality are standard and appropriate. However, the evaluation is a bit incomplete—the authors rely solely on extrinsic metrics and provide no convincing intrinsic evaluation (e.g., clearer attention visualizations or quantitative metrics) to directly validate their claim that attention quality improved. Theoretical Claims: N/A Experimental Designs Or Analyses: - Benchmarks used (CLAIR, CHAIR, human evaluation) are appropriate and sufficient - Hyperparameter settings (α, β, τ) are arbitrarily chosen without sensitivity analysis, weakening the reliability of experimental results. - Human evaluation uses only 100 examples, limiting generalizability and robustness. - Computational overhead is claimed minimal, but no quantitative runtime or memory analysis is presented to support this. 
Supplementary Material: N/A Relation To Broader Scientific Literature: This paper studies detailed caption generation in VLMs, connecting closely to previous work on improving attention mechanisms in vision-based models, particularly techniques that aim to enhance visual attention quality. It also aligns with research addressing hallucination in natural language generation, particularly within text summarization, where maintaining factual consistency and reducing irrelevant or incorrect content is a common challenge. Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the detailed and helpful review. We apologize for any confusion caused by the incomplete results in the original submission. Your comments have been very valuable, and we provide our detailed responses below. We would also appreciate any further suggestions or feedback.

**1. Quantitative Evaluation for Attention Quality**

Thank you for raising this important point. We agree that intrinsic, quantitative evidence is necessary to directly support our claims regarding attention quality. To this end, we analyzed **visual attention** by measuring how much attention is assigned to **semantically relevant image regions** during the decoding process, using 5,000 randomly sampled images from the MSCOCO 2014 validation set. Specifically, for each generated token corresponding to a ground truth object in the caption, we calculated the **total attention score allocated to image tokens** within the region of that object. To identify these regions, we used an **open-vocabulary segmentation model (Grounded SAM2 [1])** to generate binary masks for all ground truth objects. During caption generation, for each object word token, we measured the **proportion of visual attention focused on the corresponding object region** out of the total attention score across all image tokens. The table below summarizes the results across three methods:

| Method | Attention on Relevant Image Regions (%) |
| --- | --- |
| Baseline | 17.85 |
| Ours | 19.17 |
| Naive Attention Scaling | 15.50 |

These results indicate that our method achieves better alignment between generated text and relevant visual regions compared to the baseline and naive attention scaling. This suggests that our model's visual attention is **less noisy** and more focused on semantically meaningful parts of the image during captioning.

[1] Ren, T., Shen, S. Grounded SAM 2: Ground and Track Anything in Videos. IDEA-Research, GitHub, 2024.

---

**2. Computational Efficiency Comparison**

Thank you for your comment. We agree that empirical evidence is important to support our efficiency claims. A detailed comparison of runtime and memory overhead, including per-token generation latency and storage requirements, is provided in our response to **Reviewer JgJt**. To avoid redundancy, we kindly refer you to that section for a full breakdown. In brief, our method introduces only minimal overhead compared to the baseline, while remaining significantly more efficient than prior approaches.

---

**3. More Ablations for Parameters**

Thank you for raising this important concern regarding hyperparameter sensitivity. As mentioned in the main paper, we initially conducted ablation studies on hyperparameters using the **LLaVA-1.5 (7B)** model on the **IIW-400** dataset. The results are presented in Tables 6–9 of the supplementary material. To further strengthen our analysis and address the reviewer's concern, we extended these ablations to additional models and datasets. Specifically, we conducted experiments using **LLaVA-NeXT (7B)** and **Qwen2-VL (7B)** on the **DOCCI** dataset. We randomly sampled 500 images and evaluated the quality of the generated captions using the **CLAIR score**. The results are summarized below.
**LLaVA-NeXT (7B)**

Baseline CLAIR score: **62.49**
Ours (Layer 20, $\tau$=4, $\alpha$=1.1, $\beta$=0.1): **66.99**

| Layer | 10 | 15 | 20 | 25 | 30 |
| --- | --- | --- | --- | --- | --- |
| Score | 63.97 | 65.25 | 66.99 | 64.61 | 63.58 |

| $\tau$ | 2.5 | 3.0 | 3.5 | 4.0 |
| --- | --- | --- | --- | --- |
| Score | 68.10 | 68.60 | 67.81 | 66.99 |

| $\alpha$ | 1.05 | 1.075 | 1.1 | 1.125 |
| --- | --- | --- | --- | --- |
| Score | 64.20 | 65.10 | 66.99 | 67.26 |

| $\beta$ | 0.2 | 0.15 | 0.1 | 0.05 | 0.0 |
| --- | --- | --- | --- | --- | --- |
| Score | 65.93 | 66.34 | 66.99 | 66.57 | 67.80 |

**Qwen2-VL (7B)**

Baseline CLAIR score: **79.22**
Ours (Layer 18, $\tau$=3, $\alpha$=1.1, $\beta$=0.1): **80.64**

| Layer | 10 | 18 | 20 | 28 |
| --- | --- | --- | --- | --- |
| Score | 79.62 | 80.64 | 79.36 | 79.54 |

| $\tau$ | 2.0 | 2.5 | 3.0 | 3.5 |
| --- | --- | --- | --- | --- |
| Score | 77.99 | 78.98 | 80.64 | 79.77 |

| $\alpha$ | 1.05 | 1.075 | 1.1 | 1.125 |
| --- | --- | --- | --- | --- |
| Score | 80.31 | 80.14 | 80.64 | 78.85 |

| $\beta$ | 0.2 | 0.15 | 0.1 | 0.05 | 0.0 |
| --- | --- | --- | --- | --- | --- |
| Score | 80.36 | 79.52 | 80.64 | 79.76 | 79.61 |

Across all models (LLaVA-1.5, LLaVA-NeXT, Qwen2-VL), we observe similar trends regarding the optimal ranges for the layer, $\alpha$, $\beta$, and $\tau$ parameters, typically favoring mid-to-late transformer layers and slightly scaled values. We also note that our method is **training-free** and imposes **minimal additional computational cost**, making it practical and efficient to perform lightweight hyperparameter searches in real-world applications.
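The lightweight hyperparameter search mentioned above can be organised as a plain grid sweep over the four knobs. The `toy_score` function below is a stand-in for a real CLAIR evaluation, used only to make the sketch runnable; it is peaked at the LLaVA-NeXT layer/$\tau$ optimum reported in the tables:

```python
from itertools import product

def grid_search(score_fn, layers, taus, alphas, betas):
    # Exhaustive sweep over (layer, tau, alpha, beta) configurations,
    # returning the best-scoring one under score_fn.
    best_cfg, best_score = None, float("-inf")
    for cfg in product(layers, taus, alphas, betas):
        s = score_fn(*cfg)
        if s > best_score:
            best_cfg, best_score = cfg, s
    return best_cfg, best_score

# Toy stand-in for a captioning-quality evaluator (not CLAIR itself).
toy_score = lambda layer, tau, alpha, beta: -abs(layer - 20) - abs(tau - 4.0)
cfg, score = grid_search(toy_score, [10, 15, 20, 25, 30], [3.0, 4.0], [1.1], [0.1])
```

In practice `score_fn` would run caption generation on a small validation split and return the CLAIR score, so the sweep stays cheap for a training-free method.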
Summary: This work proposes a training-free method to enhance detailed image captioning with improved balance between precision and recall by re-calibrating the attention values in multimodal large language models (MLLMs). This work first analyzes the attention patterns in MLLMs and finds that 1) trivially enlarging the attention values leads to reduced diversity and lower recall in the captions, and 2) as more tokens are generated and the context becomes longer, the attention becomes more noisy and less focused on visual tokens. Then, this work proposes to select important visual tokens based on the attention dynamics and adjust the attention values accordingly. Experiments show improved overall quality of image captioning. Claims And Evidence: - Compared with baselines, the proposed method significantly improves the recall, but the precision is hurt. For example, as shown in Table 2, the precision is ~3% lower than PAI. Similarly, in Figure 6, the human evaluation suggests a lower precision compared with "Naive" (PAI). This may lead to concerns about hallucination and misleading information in the captions. Ideally, the method should be able to adjust its hyper-parameters to achieve a higher recall without hurting the precision. Methods And Evaluation Criteria: - In addition to dense image captioning, it is suggested to include evaluation on other hallucination benchmarks and general VQA tasks, to ensure a more comprehensive comparison with baselines like PAI. Currently this work only tests CLAIR and CHAIR, which are a bit limited considering the scope of the related baselines. - In the proposed method, there are a few hyper-parameters that need to be specified. Some are quite different across base MLLMs (see "Implementation Details" in Section 5.1). It might be challenging to select the hyper-parameters when applying the method to a new MLLM. Theoretical Claims: This work does not include theoretical claims. Experimental Designs Or Analyses: No concerns.
Supplementary Material: Yes, the reviewer has reviewed the supplementary material, which includes quantitative results and analyses. Relation To Broader Scientific Literature: This work proposes a new approach to dynamically augment the attention to visual tokens in MLLMs to produce better detailed image captioning. However, from the current evaluation, the improvement over existing methods does not seem significant, and the use cases might be limited. Essential References Not Discussed: No concerns. Other Strengths And Weaknesses: No more concerns. Other Comments Or Suggestions: - It might be beneficial to include the actual inference cost comparison. As the method needs to monitor and manipulate attention computation in the model, the generation speed may be slowed. - There are some typos to be fixed. For example, "Specifically, Specifically, " (page 4), "In Section Section 3" (page 4). Questions For Authors: No more questions. Code Of Conduct: Affirmed. Overall Recommendation: 3
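As a rough illustration of the selective attention re-scaling this work proposes (boosting important visual tokens while adjusting the rest), the sketch below re-weights a single attention row and renormalizes. The parameter names `alpha`/`beta` and the renormalization step are assumptions for illustration, not the paper's exact procedure:

```python
def recalibrate(attn, visual_idx, selected, alpha=1.1, beta=0.1):
    # Boost visual tokens judged consistently important by alpha,
    # damp the remaining visual tokens by (1 - beta), and renormalize
    # so the attention weights sum to 1 again.
    out = list(attn)
    for i in visual_idx:
        out[i] *= alpha if i in selected else (1.0 - beta)
    z = sum(out)
    return [w / z for w in out]
```

Text-token weights (indices outside `visual_idx`) are left untouched before renormalization, so only the visual share of the attention mass is redistributed.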
Rebuttal 1: Rebuttal: **1. Concerns Regarding Performance Trade-off** > Compared with baselines, the proposed method significantly improves the recall, but the precision is hurt. For example, as shown in Table 2, the precision is ~3% lower than PAI. Similarly, in Figure 6, the human evaluation suggests a lower precision compared with "Naive" (PAI). This may lead to concerns on hallucination and misleading information in the captions. Ideally, the method should be able to adjust its hyperparameters to achieve a higher recall without hurting the precision. > We appreciate the reviewer’s concern regarding the potential trade-off between recall and precision in our method. However, we would like to clarify that, as shown in **Table 2**, our method achieves **improvements in both precision and recall** compared to the baseline. In contrast, other prior methods, including "Naive" (PAI), tend to **improve precision at the cost of recall**. In particular, while PAI demonstrates a higher precision than our method, it substantially reduces recall, which may not be acceptable for applications where **completeness of information is critical**. This is especially relevant in **real-world scenarios such as medical image reporting** or **automated content generation**, where omitting important visual elements may lead to misleading or insufficient descriptions. In such cases, **recall can be more crucial than precision**, as overly high precision may inadvertently filter out meaningful or necessary content. --- **2. Evaluation on other hallucination benchmarks** Thank you for this valuable suggestion. While our primary focus is on **enhancing detailed image captioning**, we also performed additional experiments on the **POPE hallucination benchmark** [2]. Inspired by the evaluation setup proposed in [1], where caption-based reasoning improves MLLM performance on general multimodal tasks, we adopt a similar strategy.
Specifically, we first generate a caption for the image and then use it as part of the input to answer the question. We evaluated Qwen2-VL (7B) using three approaches: the baseline model, our proposed method, and naive attention scaling (PAI). For the naive method, we identified the optimal hyperparameter ($\alpha$ = 0.2) before evaluation. The table below summarizes the accuracy on the POPE benchmark: | Method | Accuracy (%) | | --- | --- | | Baseline | 82.01 | | Ours | 83.13 | | Naive Attn. Scaling | 81.45 | Our method shows an improvement over the baseline, while the naive attention scaling slightly reduces accuracy. Additionally, we evaluated the instruction-following behavior, measured as the proportion of responses in which the model correctly generates an output that includes both a caption and an answer. | Method | Instruction Following (%) | | --- | --- | | Ours | 92.72 | | Naive Attn. Scaling | 76.84 | These results indicate that naive attention scaling may reduce the model's sensitivity to the input prompt, whereas our method retains alignment with instruction while improving grounding accuracy. --- **3. Concerns Regarding the Choice of Hyperparameters** We appreciate the reviewer’s thoughtful concern regarding the generalizability of hyperparameter settings across different models. To address this, we conducted additional ablations on LLaVA-NeXT (7B) and Qwen2-VL (7B) using the DOCCI dataset. For detailed results, please refer to our response to **Reviewer 4gUZ**. Briefly, across all models (LLaVA-1.5, LLaVA-NeXT, Qwen2-VL), we observe similar trends regarding the optimal range for the layer, $\alpha$, $\beta$, and $\tau$ parameters—typically favoring mid-to-late transformer layers and slightly scaled values. We also note that our method is **training-free** and imposes **minimal additional computational cost**, making it practical and efficient to perform lightweight hyperparameter searches in real-world applications. --- **4. 
Computational Efficiency Comparison** Thank you for this valuable suggestion. A detailed comparison of runtime and memory overhead—including per-token generation latency and storage requirements—is provided in our response to **Reviewer JgJt**. To avoid redundancy, we kindly refer you to that section for a full breakdown. In brief, our method introduces only minimal overhead compared to the baseline, while remaining significantly more efficient than prior approaches. --- Rebuttal Comment 1.1: Comment: The authors' response is greatly appreciated. Most of the previous concerns are addressed, so I will adjust my rating accordingly. Please include the details of additional evaluation and hyper-parameter ablation study in the revision. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the kind response. We are pleased to hear that the concerns have been addressed. As suggested, we will include the additional evaluation and hyper-parameter ablation study in the revision.
Summary: The paper introduces an adaptive attention enhancement mechanism aimed at improving the precision of image captioning while maintaining an acceptable recall rate. Specifically, the selective attention enhancement strategy seems powerful according to its significant improvement in the precision of long caption generation, effectively alleviating the hallucination problem in MLLMs. Claims And Evidence: The paper is well-motivated, with all insights grounded in empirical evidence, making the claims convincing. The paper proposes three main insights: - Naive attention amplification reduces attention diversity, validated in Figure 2. - Noise increases with caption length, demonstrated in Figure 3. - Visual focus weakens in long contexts, illustrated in Figure 5. Through extensive experiments, the paper effectively proves the limitations of existing attention amplification methods and naturally derives the design of SPARC. The results also strongly support the proposed approach. Methods And Evaluation Criteria: Yes. With EMA, the formulation of the Relative Activation Score seems plausible. The token selection and attention amplification methods based on the score are easy to implement. Theoretical Claims: The paper does not involve proofs. Experimental Designs Or Analyses: Yes. The method is evaluated by CLAIR, measuring image-caption alignment, and CHAIR, measuring hallucination. The comparisons in Tables 1 and 2 cover most SOTA methods for attention strengthening. The paper also evaluates the method plugged into different MLLMs, including LLaVA-1.5, LLaVA-Next, and Qwen, demonstrating the consistent effectiveness of SPARC. Supplementary Material: Yes. Mainly the qualitative results. Relation To Broader Scientific Literature: Precision degradation of MLLMs in long caption generation remains a long-standing problem. Most previous works focus on attention amplification but do not obtain satisfactory results.
The paper reveals **the underlying causes supported by experiments** and proposes a simple but effective method to address it, which is indeed meaningful. Essential References Not Discussed: To my knowledge, the paper includes most related works. Other Strengths And Weaknesses: ## Strengths - The observations validated by the paper may motivate many MLLM tasks, since hallucination in long-context scenarios is a generic problem in LLMs. Besides, such attention patterns are also consistently investigated by the community. ## Weaknesses - Since the paper involves additional attention amplification procedures, efficiency comparisons can be included, e.g., token generation speed. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
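The review calls the EMA-based Relative Activation Score and the score-driven token selection and amplification "easy to implement." A minimal sketch of one plausible reading follows; the update rule, the decay `beta`, the threshold `tau`, and the scaling `alpha` are all illustrative assumptions here, not the paper's exact formulation:

```python
import numpy as np

def relative_activation_scores(attn_history, beta=0.9):
    """Score each image token by its current attention relative to an
    exponential moving average (EMA) of its past attention.

    attn_history: (steps, n_image_tokens) array of attention mass on the
    image tokens at each decoding step. beta is the EMA decay (assumed).
    """
    ema = attn_history[0].astype(float).copy()
    for attn in attn_history[1:]:
        ema = beta * ema + (1 - beta) * attn
    # Relative activation: how strongly each token fires now vs. its trend.
    return attn_history[-1] / (ema + 1e-8)

def amplify_selected(attn, scores, tau=1.0, alpha=0.5):
    """Boost attention of tokens whose relative activation exceeds tau,
    then renormalize so the result is still a distribution."""
    boosted = attn.astype(float).copy()
    boosted[scores > tau] *= 1 + alpha
    return boosted / boosted.sum()
```

Under this reading, tokens whose attention is rising against their own trend get boosted, which is one way to operationalize "attention dynamics" without a second decoding pass.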
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for taking the time and effort to evaluate our paper. We truly appreciate your insightful comments. Please find our detailed response to your comment below. If you have any further feedback, we would be grateful to hear it. **1. Efficiency Comparisons** > Since the paper involves additional attention amplification procedures, efficiency comparisons can be included. e.g., token generation speed. > Thank you for your valuable suggestion. While our method involves attention amplification procedures, these introduce **only minimal overhead** compared to the original decoding process. In contrast, many previously proposed approaches require **additional decoding passes**, resulting in significant computational cost. To provide a clearer comparison, we measured the **token generation time** (i.e., generation time per output token) across various methods. The following table presents the average generation time per token (in milliseconds) using an **RTX8000 GPU**: | Method | Token Generation Time (ms/token) | | --- | --- | | Baseline | 30.37 ± 0.73 | | Ours | 31.21 ± 0.61 | | Volcano | 109.98 ± 17.71 | | PAI | 57.75 ± 0.86 | | VCD | 59.44 ± 0.84 | | OPERA | 322.28 ± 118.26 | As shown, our method performs similarly to the baseline, whereas other methods introduce 2x to 10x slower generation speeds. This demonstrates that our approach achieves **efficient generation with minimal computational overhead**. Regarding memory usage: our method only requires storing the **head-wise averaged attention scores for image tokens** at each layer from the previous decoding step. For example, in the case of LLaVA-1.5 (7B), this amounts to: 32 (layers) × 576 (image tokens) × 2 bytes (float16) < 40 KB This overhead is negligible in modern hardware setups.
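The memory estimate in the rebuttal above is simple arithmetic and can be checked directly:

```python
# Per-layer, head-averaged attention over image tokens, stored in float16
# (numbers from the rebuttal's LLaVA-1.5 (7B) example).
layers, image_tokens, bytes_per_float16 = 32, 576, 2
overhead_bytes = layers * image_tokens * bytes_per_float16
print(overhead_bytes)         # 36864 bytes
print(overhead_bytes / 1024)  # 36.0 KiB, under the stated 40 KB bound
```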
Summary: The authors study the effect of attention variability spatially and temporally and its impact on detailed image captioning with Visual Language Models (VLMs). The authors provide a detailed analysis of methods that tackle attention leaking from the image into the text as the caption grows, and they find that simply increasing the attention on the image makes the captioning focus on only a few objects and noise. Therefore, existing methods that improve the precision of captions do so at the cost of the recall. To mitigate this, they propose a new attention rescaling framework that highlights tokens whose attention scores frequently vary with the captioning process. They show that this approach can improve the alignment with reference captions in relevant datasets with three state-of-the-art VLMs. ## Update after rebuttal The authors have satisfied all my requests, and I hope the new results and discussions are reflected in the final version. Therefore, I update my score to "Accept". Claims And Evidence: The authors show a detailed analysis of naive attention scaling and a proposed method that numerically improves image captioning, based on the diversity of attention scores throughout the caption generation. Attention diversity caused by their method is effectively shown in Fig 11c, but it is unclear whether this diversity is allocated to more or less “noisy” scores. The only example is Figure 2, which indeed shows a significant amount of “noisy” scores with the proposed method. The authors base a significant part of their analysis on the attention scores of a querying token from the caption to the image tokens at a mid-to-late layer of the model. I agree with the implied statement that tokens preserve significant information relating to their initial embedding, but multiple layers of attention should also lead to a significant mixing of information.
Therefore, asserting that a particular attention score is “noisy” in this setting needs more substantial evidence. If feasible, a saliency map of the image patches w.r.t. the attention scores of that layer could make this claim more solid. Besides, it is unclear to me that such “noisy” attention scores are inherently harmful as they have been shown to be used for computation by ViTs [1]. Furthermore, the authors claim that this attention diversity correlates with the model not focusing on only a few relevant objects, therefore improving recall. This seems clearly supported by their improved recall scores. [1] Darcet et al. Vision Transformers Need Registers. ICLR’24. Methods And Evaluation Criteria: After a detailed analysis of the limitations of current methods, the authors propose a reasonable method to mitigate the observed shortcomings. The metrics used to evaluate this method, namely object-matched and GPT-4o scored matching between gold captions and generated captions, make sense. Theoretical Claims: No theoretical claims. Experimental Designs Or Analyses: Overall, the authors chose two datasets and two reasonable and comprehensive metrics and showed their approach in different models. I would appreciate an analysis of the “noisy” scores for their approach and naive attention scaling. I would also appreciate ablations like those shown in Tables 6-9 for all models, ideally in all datasets. Even if fewer parameters are tried (e.g., only three values per hyperparameter), it would be good to see how sensitive these are across models since, e.g., the final result for Qwen-2-VL uses a different layer. Supplementary Material: Yes, the appendix. The additional qualitative examples are helpful, and the diversity plot with the proposed method is very relevant, and as suggested before it could even be included in the main text. 
Similarly, the ablations of the method’s parameters are useful for potential users, although performing this ablation across models and datasets would demonstrate the sensitivity of the approach to their hyperparameters. Relation To Broader Scientific Literature: The authors discuss prior work on improving captioning, specifically precision, by mitigating hallucinations. Recent work in this direction [2,3,4] tries to focus the textual generation process more on the visual input. Such past works show the benefits not only for image captioning but for other tasks such as visual question answering. However, the authors of this work show that temporally varying attention is key for higher recall and that simply scaling attention leads to higher precision at the cost of lower recall. In this sense, these findings could also be applied to other image-text tasks. It is, however, unclear how other techniques [5] that do not rely on increasing attention scores for improving visual attention in VLMs compare to the proposed method. [5] shows they can slightly improve recall while making larger improvements on CHAIR-specific metrics. [2] Huo et al. Self-Introspective Decoding: Alleviating Hallucinations for Large Vision-Language Models. ICLR’25. [3] Liu et al. Paying more attention to image: A training-free method for alleviating hallucination in lvlms. ECCV’24. [4] Li et al. Mitigating Hallucination for Large Vision Language Model by Inter-Modality Correlation Calibration Decoding. arXiv’25. [5] Xing et al. Mitigating Object Hallucination via Concentric Causal Attention. NeurIPS’24. Essential References Not Discussed: The assumption that high attention scores on tokens related to background patches in mid-to-late layers are harmful or “noisy” is not empirically supported. No relevant literature is mentioned on that aspect.
In contrast, relevant works [1,6] show that large attention scores and/or activations are a key component of ViTs, and can even be employed for better performance. [6] Sun et al. Massive Activations in Large Language Models. COLM’24. Other Strengths And Weaknesses: Main strengths: - Detailed analysis of naive scaling of attention for image captioning, and proof that this correlates with recall in image captioning - New training-free method to improve recall and precision in image captioning. - Well written paper. Summary of weaknesses: - No analysis of “noisy” attention scores. If feasible, use saliency map w.r.t. attention scores instead of attention scores directly; in such a deep layer, these might not correspond only to the original patch. - Fig 5 does not account for a share of total tokens. - No ablations like tables 6-9 for other models & other datasets already shown in the paper. Unclear how difficult it’d be to tune this framework in a new model. - The resulting variation of attention scores applying their method is currently in the appendix but seems relevant enough to be in the main paper. Other Comments Or Suggestions: Is Figure 5 accounting for the decreasing share of image tokens w.r.t. total tokens? If not, I think that is not very useful. It seems intuitive that as captioning goes on and the image tokens aren’t 100% of the previous tokens, the model has to devote some attention to the text it has already output. I am not saying they should be weighted the same (clearly from previous work, they are not), but this plot would assume that text tokens need not be attended to. A more reasonable plot would weigh the attention score by the share of total tokens from each type.
Typos: L204: “Specifically, Specifically, “ L350: “We compar our method” Questions For Authors: L372: “To ensure a robust evaluation, we randomly sample 500 instances and repeated the evaluation five times.” Does the “repeating” here come from sampling 500-instance subsets or from generating captions with non-determinism (temperature different from 0)? This could be made clearer, and if the reason is the latter, more details on the generation parameters should be provided. Code Of Conduct: Affirmed. Overall Recommendation: 4
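The reweighting the reviewer suggests for Fig. 5 amounts to comparing average attention *per token* of each type rather than total attention mass. A minimal sketch (the mask layout is illustrative, not from the paper):

```python
import numpy as np

def per_token_type_attention(attn, is_image):
    """Average attention per image token vs. per text token.

    attn:     (seq_len,) attention of the current query over prior tokens.
    is_image: (seq_len,) boolean mask marking which positions are image tokens.
    """
    return attn[is_image].mean(), attn[~is_image].mean()
```

As the caption grows, `is_image` covers a shrinking share of positions, so per-token averages avoid the confound the review points out: total mass on image tokens would fall even if each image token were attended to just as strongly.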
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the thoughtful feedback and insightful question. Your comments greatly helped us refine and clarify the paper. Please find our detailed response below—we’re happy to address any further concerns. **1. Analysis of the “noisy” scores** We understand the reviewer’s concern regarding whether the increased attention diversity in our method may be attributed to “noisy” scores. To investigate this, we conducted an analysis of attention “noisiness” by measuring the proportion of attention assigned to **semantically relevant image regions** during decoding, using 5,000 randomly sampled images from the MSCOCO 2014 validation set. Specifically, for each generated token corresponding to a ground truth object word in the caption, we computed the **proportion of total attention allocated to image tokens within the region of that object**, relative to the total attention across all image tokens. These object regions were identified using binary masks obtained from an open-vocabulary segmentation model, Grounded SAM2 [1], applied to each object in the image. The table below compares the average attention allocated to relevant image regions across three methods: Baseline, Ours, and Naive Attention Scaling. A higher proportion indicates that the model is focusing more accurately on the relevant visual regions, suggesting lower attention noise. | Method | Attention on Relevant Image Regions (%)| | --- | --- | | Baseline | 17.85 | | Ours | 19.17 | | Naive Attn. Scaling | 15.50 | These results indicate that our method assigns more focused attention to semantically meaningful image areas compared to the baseline and naive attention scaling, suggesting that our method results in *less noisy* visual attention. Furthermore, in Fig. 13(a), we report the **caption similarity score**, which measures the semantic similarity between sentences within a generated caption. 
Captions generated by our method show greater diversity in sentence content, reinforcing that the diversity is not due to randomness or noise in attention, but rather stems from meaningful differentiation in visual grounding and language generation. [1] Ren et al. Grounded SAM 2: Ground and Track Anything in Videos. IDEA-Research, GitHub, 2024. --- **2. More Ablations for Parameters** We appreciate the reviewer’s suggestion regarding additional ablation studies on different models and datasets. To address this point, we conducted additional ablation studies on LLaVA-NeXT (7B) and Qwen2-VL (7B) using the DOCCI dataset, analogous to Tables 6–9 in the paper. To avoid redundancy, we kindly refer you to our response to **Reviewer 4gUZ**, where detailed results and observations are provided. In brief, we observed consistent trends across models, with performance generally favoring mid-to-late layers and modest parameter scaling. Since our method is training-free and efficient, such tuning remains practical. **3. Clarification Regarding “Noisy” Attention Scores** We sincerely apologize for any confusion caused by our use of the term “noisy” to describe certain attention scores. As discussed in [2], background tokens can carry important global context and are often beneficial in vision transformers. Our intent was not to imply that such attention is inherently harmful. Rather, our goal is to **enhance attention to semantically relevant visual regions** during decoding—not to suppress background tokens altogether. In fact, we agree that background tokens can carry useful information, and our method does not explicitly penalize them. However, we found that naive attention scaling may unintentionally amplify attention to globally dominant tokens (including background regions), which can **overwhelm local, task-relevant signals**. 
This may explain why captions generated with naive attention scaling often exhibit high precision but reduced recall—they tend to overemphasize prominent visual cues while neglecting finer details. We sincerely thank the reviewer for pointing this out and will revise the manuscript to replace “noisy” with more accurate wording. [2] Darcet et al. Vision Transformers Need Registers. ICLR’24. **4. Attention weight trends for text and image tokens as context length increases** > Fig. 5 does not account for the share of total tokens. > Thank you for the insightful comment. As noted, Fig. 5 did not normalize for the total number of tokens. To address this, we analyzed the average attention per token and found that attention to image tokens decreases disproportionately faster than to text tokens as context length increases. **5. Clarification on the meaning of “repeating” in evaluation** Thank you for the question. The “repeating” refers to resampling 500 instances from the MSCOCO validation set five times, following prior work [3]. Generation was deterministic with temperature set to 0. [3] Liu et al. Paying more attention to image: A training-free method for alleviating hallucination in lvlms. ECCV’24. --- Rebuttal Comment 1.1: Comment: I thank the authors for additional experiments and clarifications. I would like to raise several points: **1. Analysis of the “noisy” scores** I initially cited as a weakness “No analysis of “noisy” attention scores. If feasible, use saliency map w.r.t. attention scores instead of attention scores directly; in such a deep layer, these might not correspond only to the original patch.” It seems this latter suggestion has not been considered, although I believe it is very relevant. Furthermore, the presented analysis does not address the doubt that more attention diversity does not come from “noisy” tokens. This new experiment measures attention from text tokens of objects to relevant image patches.
However, the attention diversity is measured from all text tokens, including those not referring to objects. Therefore, although this experiment is insightful and shows that attention with the proposed method aligns better with relevant patches, it doesn’t help in understanding the nature of the improved attention diversity (which the authors deem key to better captioning recall) or true faithfulness to input (since attention scores of tokens at a deep layer might not relate to the original patch). **2. More Ablations for Parameters** This ablation is helpful. Unfortunately, the optimal setting for each model is not ablated for all other models, e.g. the optimal setting for LLaVA-NeXT-7B is not shown for Qwen-2-VL. Therefore, the impact of using the existing hyperparameters on a new model is unclear. Since some of these hyperparameter settings (e.g., using layer 20, the optimal setting for LLaVA-NeXT-7B, in Qwen-2-VL) would offer little benefit over the baseline, it should be made clear in the paper that tuning these parameters is quite essential. **3. Clarification Regarding “Noisy” Attention Scores** I thank the authors for the clarification. I believe the re-framing of “noisy” tokens to tokens with global context could help clarity. However, as I mentioned in my original review and reiterated in point 1, I believe an analysis beyond attention scores, for instance, using saliency scores, would have shown more faithfully the relationship to the input regions. **4. Attention weight trends for text and image tokens as context length increases** Thank you for the corrected plot. **5. Clarification on the meaning of “repeating” in evaluation** Thank you for the clarification. It is still unclear why one would not sample 5x500 samples without replacement from the MSCOCO validation set once. This would be more statistically sound at the same cost, as I understand it.
--- Reply to Comment 1.1.1: Comment: Thank you very much for taking the time to read our rebuttal and for engaging in this discussion. We truly appreciate your thoughtful feedback and would like to address your follow-up comments below. Please feel free to let us know if you have any further questions or suggestions. --- **1. Concerns Regarding the Use of Attention Scores for Analysis** > "If feasible, use saliency map w.r.t. attention scores instead of attention scores directly; in such a deep layer, these might not correspond only to the original patch." > Thank you for the insightful suggestion. Following your recommendation that employing saliency maps with respect to attention scores — rather than relying solely on attention scores — could provide a more reliable basis for our analysis, we conducted additional experiments. Specifically, we computed and visualized a **saliency map by weighting attention scores at the given layer with their gradients with respect to the model’s output**. Our implementation follows the gradient-weighted attention approach introduced in [1]. In the figure linked [here](https://anonymous.4open.science/r/images-EBFF/saliency_map.pdf), we compare the results of the original analysis in Figure 4 of our paper with the new results using the saliency maps. We observe that the saliency-based analysis exhibits similar trends: as the context length increases during caption generation, the saliency map becomes increasingly noisy. This supports our original interpretation. Moreover, recent work such as [1] has shown that the attention patterns of MLLMs do align with relevant image patches, especially in tasks like visual question answering. [1] Zhang et al., MLLMs Know Where to Look: Training-free Perception of Small Visual Details with Multimodal LLMs, ICLR’ 2025. > "Furthermore, the presented analysis does not address the doubt that more attention diversity does not come from 'noisy' tokens. 
This new experiment measures attention from text tokens of objects to relevant image patches. However, the attention diversity is measured from all text tokens, including those not referring to objects." > We are grateful for your thoughtful observation. To address this concern, we conducted an additional experiment in which we **measured attention diversity using only the patches corresponding to actual objects** in the image, rather than across the entire image. This was done to ensure that the diversity values are not influenced by background noise. In particular, for each image in the MSCOCO 2014 validation set (5,000 images), we identified the **foreground regions** by aggregating the ground-truth segmentation masks of all annotated objects. During caption generation, we collected the attention scores of image tokens and retained only those corresponding to the foreground patches. We then computed attention diversity using the same methodology as in Figure 11 of the paper. The figure linked [here](https://anonymous.4open.science/r/images-EBFF/diversity.pdf) plots attention diversity across foreground regions for the baseline, naive scaling, and our method. As with Figure 11, our method exhibits higher attention diversity than naive scaling, even when noise from background regions is excluded. We hope this addresses your concern and strengthens the interpretation of our results. Thank you again for your valuable feedback. --- **2. Concerns Regarding Hyperparameter Settings** Thank you for highlighting the importance of hyperparameter tuning. As you pointed out, identifying optimal parameter settings is indeed necessary to achieve the best possible performance for each model. We will make it clear in the revised version of the paper that tuning these hyperparameters is essential when applying our method to different architectures. 
That said, as previously mentioned, since our method is training-free and computationally lightweight, we believe that identifying optimal hyperparameters for new models is relatively straightforward and does not pose a significant barrier in practical use. --- **3. Regarding Repeated Sampling** Thank you for your helpful comment. Following your suggestion, we re-evaluated the experiments reported in Table 2 of the paper using a single set of **2,000 samples drawn without replacement** from the MSCOCO validation set. The results are shown in the table below: | Method | Precision | Recall | F1 | | --- | --- | --- | --- | | Baseline | 84.72 | 79.55 | 82.05 | | OPERA | 84.76 | 79.30 | 81.94 | | VCD | 83.55 | 77.79 | 80.57 | | VOLCANO | 87.58 | 77.91 | 82.46 | | PAI | **90.94** | 72.31 | 80.56 | | Ours | 87.54 | **80.16** | **83.69** | The reason we originally used 5×500 samples was to align with the setup used in prior work, which typically evaluated on 500 samples.
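The F1 column in the table above is the harmonic mean of precision and recall, which makes the reported numbers easy to spot-check:

```python
def f1(p, r):
    """Harmonic mean of precision and recall (values in percent)."""
    return 2 * p * r / (p + r)

print(round(f1(87.54, 80.16), 2))  # "Ours" row: 83.69
print(round(f1(84.72, 79.55), 2))  # Baseline row: 82.05
```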
A Mathematical Framework for AI-Human Integration in Work
Accept (poster)
Summary: This paper develops a model of job success probability by viewing jobs as a composition of tasks that need to be accomplished, and workers supply skills that affect the probability tasks are successfully completed. The authors then calibrate the model using the O*NET database's skill descriptions associated with computer programming. The authors calibrate the skill of workers and the skill of GenAI tools in their model to examine whether merging the human + the GenAI tool can lead to higher job success probabilities than either working in isolation. I provide a more detailed description of the theoretical model and experimental results before presenting my comments below. Claims And Evidence: Please see my discussion of theoretical claims and experimental designs. Methods And Evaluation Criteria: Please see my discussion of theoretical claims and experimental designs. Theoretical Claims: The authors' theoretical model has the following key components: * A job consists of $m$ tasks and each task $T_i$ depends on $n$ skills. Each skill $s$ has two components: decision-level and action-level sub-skills. * A worker in the model is associated with two ability profiles $\alpha_1(s), \alpha_2(s)$ that summarize their ability across the two subskills for each skill. The ability profiles are governed by a probability distribution that is summarized by two parameters: (i) the average ability for the skill, and (ii) the noise for the skill. * For each skill $j \in [n]$, the worker's skill error function is given by the random variable $h(\zeta_{j1}, \zeta_{j2})$ where $\zeta_{jl} = 1 - X_{jl}$ for $X_{jl}$ sampled according to the skill profile $\alpha_{l}(s_{jl})$. The task error then aggregates the skill errors with $g(h(\zeta_{11}, \zeta_{12}), ..., h(\zeta_{n1}, \zeta_{n2}))$. Finally, the job error aggregates the task errors via $f \colon [0,1]^m \rightarrow [0,1]$.
Consequently, task success and ultimate job success are random variables due to the randomness in the worker's skill errors. * The authors' main object of interest is the job success probability, which is the probability the job error is less than some threshold $\tau$. (Equation 1) * The authors then provide two results: (1) the authors show that the job success probability is increasing in the average skill of a worker, (2) there can be gains in the job success probability by merging two workers together. One of the strengths of the paper is the model's generality -- in particular, the setup can accommodate a wide variety of choices of skill errors, task errors, and job errors. Akin to the authors' example of the max, you could carry this forward and imagine an "O-ring" (Kremer, 1993) style model, which would involve: skill errors being either zero or one, tasks being completed ($g() = 1$) only if all skills are successfully completed ($h() = 1$), and jobs being completed ($f() = 1$) only if all tasks are completed. One of the weaknesses is that while the model is general, the results are relatively weak and unsurprising. Consequently, it is not clear why this model helps me reason through the suite of empirical studies that analyze the productivity effects of generative AI. The authors rely heavily on returning to stylized versions of the model to build intuition (e.g., linear ability functions) but I did not find that enlightening. I would have found it immensely valuable for the authors to either (i) pick a particular result (e.g., Brynjolfsson et al.), or (ii) pick a recurring finding across empirical analyses (e.g., the finding that the introduction of a GenAI tool leads to a compression in the productivity distribution across workers) and discuss whether the model has anything to say about those results. Another weakness of the paper is that I struggled to map the model into any concrete job. Take the example of computer programmers that the authors study in the experiment.
If I think about the model as describing one worker, it describes a worker completing many jobs and describes the fraction of jobs that the worker successfully completes. So, for a computer programmer, a job actually corresponds to a specific programming project and we would be describing the success rate of the computer programmer across many such jobs? In this view, the randomness in the skill successes arises from randomness in the worker's skill across such tasks (maybe some days I am tired and other days I have a lot of coffee?). I can see how this works for a job like programmer where there is a somewhat discrete output being produced, but for other jobs it is not obvious this applies. Experimental Designs Or Analyses: To illustrate the model, the authors derive data on tasks and skills for computer programmers using O*NET. O*NET describes a computer programmer as consisting of 18 skills and 17 tasks, with associated proficiency levels for each skill. The authors prompt GPT-4o to provide the relationship between skills and tasks, the division of skill proficiencies into action and decision skills, and prompt GPT-4o using Big-Bench Lite to construct skill profiles for an average person and an LLM. (1) Prompting GPT-4o plays a key role in the empirical results. It would have been useful for the authors to describe this more explicitly in the main text -- in particular, the GPT-4o outputs are used to build the task-skill dependency network; GPT-4o outputs are used to divide proficiencies into decision and action skills; and GPT-4o outputs are used to construct the proficiencies of LLMs and humans based on Big-Bench Lite. I am not sure how seriously to take this except as a way to construct some numbers used for the calibration of the model. (2) The authors pick particular parametrizations of the skill profiles based on truncated normals -- how were these chosen? Why this specific choice and not the uniform alternative?
It would be valuable to see the sensitivity of the calibration results to that specific modelling choice here. (3) Related to my earlier comment about the generality of the choice of skill, task and job functions, it would be interesting to see alternative variations on the choice of the JER function -- the authors choose a rather simple function that is the weighted average of the skills. What if you instead took the max error associated with each skill in a task and then max error across tasks? How would the comparisons change? Supplementary Material: The supplementary material contains additional details about the model, the proofs of the main claims in the paper, and additional discussion of how the authors implemented the simulations based on O*NET and Big-Bench Lite. I did not carefully review the proofs of the authors' main theoretical results, but read the additional details for the simulations calibrated to O*NET and Big-Bench Lite. Relation To Broader Scientific Literature: As discussed by the authors in their introduction, a large and highly active literature studies the productivity effects of generative AI across a wide variety of settings. But much of this work lacks a clear theoretical framework for understanding when/how generative AI tools affect productivity. This paper aims to provide such a framework by viewing jobs as a collection of tasks that need to be completed and workers as having skills that affect their probabilities of successfully completing tasks. I find this to be a valuable exercise and a potentially important contribution to this active empirical literature. At the same time, as I discussed above, the model is simultaneously specific yet opaque. 
Moreover, the results provided are obvious -- Theorem 3.2 has a lot of leg work involved to show that increasing the average skill of a worker leads to an increase in the job success probability; and Theorem 3.3 has a lot of leg work involved to show that combining two workers that are more skilled along different dimensions leads to an increase in the job success probability. While the authors motivated the framework using these empirical findings, I struggled to link this model back to those findings. Essential References Not Discussed: The paper provides a thorough review of empirical research studying the impacts of generative AI on worker productivity. There are no glaring omissions to me from this literature. In particular, I am not aware of an existing paper that attempts to write a stylized model to understand (i) jobs as a composition of tasks to be completed, and (ii) skills leading to imperfect completion rates of underlying tasks. Other Strengths And Weaknesses: Please see my earlier comments. Other Comments Or Suggestions: As a small comment, between Section 2 and Section 3, the authors appear to switch notation for the average skill/ability. In Section 2, it is introduced as $E(s)$ and in Section 3 it is denoted by $\mu$. The paper would be easier to digest if the authors used consistent notation throughout. Questions For Authors: Please see my previous comments on the theoretical claims and the experiment. Code Of Conduct: Affirmed. Overall Recommendation: 4
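To make the reviewed model concrete, the job success probability of Equation 1 can be estimated with a short Monte Carlo sketch. Everything below is an illustrative assumption rather than the paper's actual implementation: clipped-normal ability draws with a shared mean `mu` and noise `sigma` for every subskill, and plain averages for the error functions `h`, `g`, and `f` (the paper leaves all of these choices general).

```python
import random

def job_success_probability(mu, sigma, m=5, n=4, tau=0.35,
                            trials=5000, seed=0):
    """Monte Carlo estimate of Pr[job error <= tau] (Equation 1)."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(trials):
        task_errors = []
        for _ in range(m):                       # m tasks per job
            skill_errors = []
            for _ in range(n):                   # n skills per task
                # decision-level and action-level subskill draws X_{j1}, X_{j2}
                zetas = [1.0 - min(1.0, max(0.0, rng.gauss(mu, sigma)))
                         for _ in range(2)]
                skill_errors.append(sum(zetas) / 2)    # h: mean subskill error
            task_errors.append(sum(skill_errors) / n)  # g: mean skill error
        job_error = sum(task_errors) / m               # f: mean task error
        successes += job_error <= tau
    return successes / trials
```

Sweeping `mu` from 0.3 to 0.9 moves the estimate from near 0 to near 1 over a narrow window around `mu = 1 - tau`, which is the phase-transition behavior the review summarizes.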
Rebuttal 1: Rebuttal: We thank you for your thoughtful, detailed, and insightful feedback. In response, we added new theoretical and empirical analyses that sharpen our results, test their robustness across modeling choices, and highlight connections to real-world phenomena such as productivity compression. Please see this PDF for new figures: https://acrobat.adobe.com/id/urn:aaid:sc:eu:fef7d6b2-24f6-4a59-9386-fdf09456ed99. > *"Pick a particular empirical result (e.g., Brynjolfsson et al.)... and discuss whether the model explains it."* Thank you for the suggestion. We show that our model formally supports the **productivity compression** effect observed in Brynjolfsson et al. (2023). Consider two workers from the same ability family (e.g., constant, linear, polynomial), with equal decision-level ability and noise: a low-skilled worker $W_1$ with action-level ability parameter $a_1$ and a high-skilled worker $W_2$ with action-level ability parameter $a_2>a_1$. Let $P_1,P_2$ be their success probabilities before merging with GenAI (whose decision-level ability is always weaker than $W_\ell$ and whose action-level ability is in the same family as $W_1$ and $W_2$), and $P'_1, P'_2$ after merging. We define productivity compression as: $\mathrm{PC}=(P_2-P_1)-(P'_2-P'_1)$. **Theoretical insight:** A corollary of Theorem 3.2 shows that if $a_{AI}>a_1$, then $W_1$'s merged gain $\Delta_1=P'_1-P_1$ can be large (up to $1-2\theta$ for some small $\theta$), while $W_2$'s gain is $\Delta_2\approx 0$. Thus, $\mathrm{PC} \approx 1-2\theta$. **Empirical results:** Our experiments (Figure 6 in the PDF) show that even when the GenAI ability profile differs from that of the worker, compression increases with the skill gap. For $a_1=0.1$, $a_2=0.8$, compression reaches 0.8, closely matching empirical findings from Brynjolfsson et al. To our knowledge, this is one of the first formal explanations of the compression effect under realistic assumptions.
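The compression quantity $\mathrm{PC}$ defined above can be sanity-checked with a tiny noise-free sketch. The fixed decision-level ability, the averaged error pipeline, and modeling merging as taking the better action-level ability are all illustrative assumptions; in this deterministic limit ($\theta \to 0$) PC attains its maximum value of 1, whereas the noisy experiments in the rebuttal report values around 0.8.

```python
def success_prob(action, decision=0.8, tau=0.4):
    """Deterministic (noise-free) job success under plain averaging:
    the job succeeds iff the mean subskill error is below tau.
    An illustrative special case, not the paper's full noisy model."""
    job_error = ((1 - decision) + (1 - action)) / 2
    return 1.0 if job_error <= tau else 0.0

def compression(a1, a2, a_ai):
    """PC = (P2 - P1) - (P2' - P1'); merging keeps the better action ability."""
    p1, p2 = success_prob(a1), success_prob(a2)
    p1m = success_prob(max(a1, a_ai))   # W1 merged with the GenAI tool
    p2m = success_prob(max(a2, a_ai))   # W2 merged with the GenAI tool
    return (p2 - p1) - (p2m - p1m)

# the rebuttal's example gap, with an AI stronger than W1 but weaker than W2
pc = compression(0.1, 0.8, 0.6)
```

With an AI weaker than both workers (`a_ai` below `a1`), merging changes nothing and `compression` returns 0, matching the intuition that compression requires the tool to lift the low-skilled worker.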
> *"..uniform alternative?"* We chose truncated normal distributions to approximate ability variability observed in empirical benchmarks (e.g., Figure App.9 in Bench authors, 2023). That said, we now include experiments with uniform noise distributions (Figures 3 and 4 in the PDF) and find qualitatively similar results. > *"What if you instead took the max error..."* This excellent suggestion led us to study max-based aggregation, where job success probability is given by $P = \prod_{j} \Pr[h_j \leq \tau]$. We conducted both theoretical and empirical analyses: - **Sharper Phase Transition:** With identical skills, the transition window scales as $\frac{1}{n}$, sharper than the average-based case $\frac{1}{\sqrt{n}}$. - **Negative Example:** We identify a natural setting where no phase transition occurs. With one hard skill (e.g., 0.1) and others easy (e.g., 0.8), success is dominated by the hard one: $P \approx \Pr[h_1 \leq \tau]$, yielding a smooth, non-abrupt transition. - **Empirical Results:** Experiments (Figures 1 and 2 in the PDF) confirm these effects. Notably, line-crossing in average-based settings disappears under max aggregation due to its monotonicity. We will include these results in the final version. > *"struggled to map the model into any concrete job..."* Our model is intended to be flexible. For project-based roles (e.g., programming), a job may represent a single project composed of subtasks. For ongoing roles (e.g., teaching or customer service), a job could aggregate performance over a time period. The randomness in subskill outcomes captures intra-personal variability (e.g., fatigue) and task-level heterogeneity. While stylized, we believe this abstraction enables us to reason about human-AI collaboration in both discrete and continuous work environments. 
> *"Prompting GPT-4o plays a key role..."* We will clarify the role of GPT-4o in the main text and include API-based prompting code in the final version to ensure consistency and reproducibility. > *“Results are relatively weak and unsurprising.”* While some results may seem intuitive, our theorems precisely characterize when phase transitions in job success occur and quantify non-trivial merging gains—analytically nontrivial and not evident without formal modeling. Our extensions further show when phase transitions disappear (e.g., max-based errors, strong skill dependencies), underscoring the value of our framework. > *“Rely heavily on stylized versions of the model (e.g., linear ability functions)...”* We use linear ability functions as an interpretable baseline aligned with prior empirical work (e.g., Brynjolfsson et al., 2023). This choice illustrates key effects clearly and enables tractable analysis. In Appendix C.3, we extend our results to polynomial ability functions, and our experiments confirm that the core insights persist under more complex profiles. We thank you again for your thoughtful engagement, which has helped us clarify, extend, and strengthen the contributions of our work. --- Rebuttal Comment 1.1: Comment: I thank the authors for thoughtfully engaging with my review. I was originally positive about the paper, but I am encouraged by the authors new discussion about how the model can explain the ``compression effect'' of GenAI tools documented in existing empirical evaluations. I will revise my score upwards, and I would strongly encourage the authors to further emphasize how the model can make sense of empirical findings on the productivity effects of GenAI tools.
Summary: This paper presents a mathematical framework for modeling jobs, workers, and worker-job fit, focusing on the decomposition of skills into decision-level and action-level subskills to highlight the distinct strengths of humans and AI. The study examines how variations in subskill abilities affect job success and identifies conditions under which collaborative skill division leads to superior performance compared to relying on a single worker. The framework's effectiveness is validated using O*NET and Big-Bench Lite datasets, demonstrating its real-world applicability. The results emphasize that Generative AI (GenAI) is best suited to complement human workers' skills rather than replace them. Claims And Evidence: Yes, the claims made in the submission are well-supported. For example, the authors argue that conflating reasoning skills with action skills can lead to misattributions of success or failure, resulting in biased or incomplete assessments. To address this, they provide a detailed framework for decomposing skills into two distinct subskill types: decision-level skills (problem-solving) and action-level skills (solution execution), with skill difficulty quantified on a 0-1 scale. Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria are well-aligned with the problem being addressed. The paper introduces a mathematical framework that systematically decomposes skills into decision-level and action-level subskills, which is a logical and structured approach to analyzing worker-job fit in the context of human-AI collaboration. For evaluation, the use of O*NET and Big-Bench Lite datasets provides a real-world grounding for their framework, ensuring that the findings are not purely theoretical. These datasets contain job-related skill data and AI task benchmarks, making them appropriate for assessing the practicality of skill decomposition.
Theoretical Claims: I am not very familiar with the topic of assessing the job accuracy of AI and humans, so I cannot fully judge whether all the theoretical components are meaningful. However, I have reviewed the mathematical symbol definitions, and they are clearly defined and well-presented. Experimental Designs Or Analyses: The approach appears comprehensive and sound. The authors define a job-success probability metric that integrates error rates across skills and tasks to assess overall performance. This metric provides a structured and quantitative evaluation of human-AI collaboration effectiveness, ensuring a more holistic assessment of task execution. Supplementary Material: I reviewed Sections A and D. Relation To Broader Scientific Literature: By refining how AI-human performance is measured, this work contributes to future AI deployment strategies, helping design collaborative work environments where AI complements human abilities rather than replacing them. The proposed methodologies provide insights into the effective allocation of human and AI resources, ultimately improving job success probability and fostering productive human-AI collaboration. Essential References Not Discussed: I am not familiar with workforce optimization, so I cannot fully assess the essence of the related work. Other Strengths And Weaknesses: A potential limitation is whether the chosen datasets fully capture the complexity of skill attribution in dynamic work environments. Additional experiments with task-specific benchmarks or real-world AI-assisted work scenarios could further validate the robustness of the proposed approach. Other Comments Or Suggestions: The study uses O*NET and Big-Bench Lite, but it would be useful to discuss potential biases or limitations in these datasets. Are there task distributions that may favor either AI or human workers?
Beyond theoretical modeling, empirical user studies involving real-world AI-human collaboration could strengthen the paper’s conclusions. Questions For Authors: Please refer to Section of Other Comments Or Suggestions. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank you for your detailed and encouraging review. We are especially grateful for your recognition of our framework’s real-world applicability and for your thoughtful suggestions regarding dataset limitations and empirical grounding, which have directly shaped our additional experiments. Please see this PDF for new figures (https://acrobat.adobe.com/id/urn:aaid:sc:eu:fef7d6b2-24f6-4a59-9386-fdf09456ed99). > *"A potential limitation is whether the chosen datasets fully capture the complexity of skill attribution in dynamic work environments. Additional experiments with task-specific benchmarks or real-world AI-assisted work scenarios could further validate the robustness of the proposed approach."* We agree that additional empirical results could further ground our mathematical model and its implications. As one of the first formal frameworks for understanding task-level human-AI collaboration, our empirical results—based on O\*NET and Big-Bench Lite—serve primarily as calibration tools to illustrate how the model can be instantiated and to demonstrate that key theoretical insights remain valid in practice. While we recognize the limitations of these datasets—O\*NET reflects static, survey-based data, and LLM-based estimates may introduce bias—we have conducted several new empirical analyses (see accompanying PDF) to test the robustness of our findings across alternative modeling choices. We will describe these limitations clearly and include the new results in the final version. We also agree that task-specific benchmarks (e.g., HumanEval for programming, customer support transcripts) could further strengthen empirical grounding and plan to explore such extensions in future work. - **Alternative Error Functions:** We replace the job/task error aggregation functions $g$ and $f$ with $\max$ (suggested by Reviewer 1zaz), to simulate more fragile task environments. 
The main patterns remain consistent (Figures 1 and 2 in the PDF linked above), though line-crossings disappear due to monotonicity in the max-based error aggregation. - **Alternative Ability Distributions:** We substitute truncated normals with uniform noise in ability profiles (Figures 3 and 4 in the PDF linked above), verifying that our key findings hold across distributions. - **Robustness to Task-Skill Graph Variations:** We randomly modify 5 edges in the task-skill dependency graph (Figures 7 and 8 in the PDF linked above). Despite these changes, the phase transition behavior and heatmaps remain stable. > *"It would be useful to discuss potential biases or limitations in these datasets... are there task distributions favoring humans or AI?"* We will explicitly discuss biases in O\*NET and Big-Bench Lite datasets regarding task distributions potentially favoring humans or AI. We summarize key observations below: - **Tasks Favoring Humans:** Tasks that are context-rich and require nuanced judgment, creativity, or interpersonal skills (e.g., strategic planning, ethical decisions) are better captured by human-centered data like O\*NET. - **Tasks Favoring AI:** Structured, repetitive tasks (e.g., basic arithmetic, data classification) are common in benchmarks like Big-Bench Lite, which may overrepresent tasks where AI excels. These differences suggest that while O\*NET may underrepresent emerging digital tasks, Big-Bench Lite might favor tasks with clear, rule-based responses. > *"Beyond theoretical modeling, empirical user studies involving real-world AI-human collaboration could strengthen the paper’s conclusions."* We agree this would be valuable, though it is outside the scope of this paper. Our primary goal is to provide a rigorous theoretical framework with proof-of-concept empirical validation. We believe our framework offers a foundation for future empirical and behavioral studies on AI-assisted work. 
As noted, several of our new experiments aim to move further in that direction. We thank you again for your thoughtful comments, which have significantly improved our work's clarity and empirical reach. We hope our additions meaningfully address the concerns raised. --- Rebuttal Comment 1.1: Comment: Thank the authors for providing detailed explanations in this rebuttal.
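The max-based aggregation mentioned in this rebuttal has a convenient closed form under independent skill errors: the job succeeds only if every skill error stays below $\tau$, so $P=\prod_j \Pr[h_j\le\tau]$. The sketch below uses Gaussian skill errors with mean $1-\text{ability}$ as an illustrative stand-in for the paper's noise model; it shows how a single hard skill dominates the product, which is the reason a sharp transition can disappear in this regime.

```python
import math

def max_agg_success(abilities, tau=0.5, sigma=0.1):
    """P = prod_j Pr[h_j <= tau] with h_j ~ N(1 - ability_j, sigma^2)."""
    def phi(x):  # standard normal CDF via the error function
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    p = 1.0
    for a in abilities:
        p *= phi((tau - (1.0 - a)) / sigma)
    return p

p_easy = max_agg_success([0.8] * 10)          # all skills easy: near-certain success
p_mixed = max_agg_success([0.1] + [0.8] * 9)  # one hard skill among easy ones
p_hard = max_agg_success([0.1])               # the hard skill on its own
```

Here `p_mixed` is within a couple of percent of `p_hard`: the job success probability is essentially the single hard skill's success probability, so it varies smoothly with that one ability rather than transitioning abruptly.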
Summary: This paper models human-AI collaboration in jobs. In particular, it models jobs as being composed of multiple different subtasks, each of which involves different skills. The ability of different agents is noisy and ordered (e.g. the same agent can’t perform worse on easier subtasks on average than they do on harder subtasks). This paper studies multiple effects within this model, such as how the probability of task success varies with average ability and the effect of “merging” multiple workers (e.g. combining their relative strengths). Claims And Evidence: I appreciated that the paper took an especially nuanced view on how humans may integrate algorithmic tools into their workflow. The paper includes a rigorous theoretical analysis of the problem, as well as an empirical analysis. Methods And Evaluation Criteria: This paper has a fairly involved theoretical model of tasks - my suspicion is that most of the core results would generalize to other models, but given that the results are relatively intuitive, it makes me wonder whether another (or simpler) model would have sufficed. One core concept that this paper explores is that of “merging workers”, which in this context likely means the benefits from partial automation of jobs. Given the model where humans and AI tools may have complementary skill sets (e.g. in decision-level or action-level skills), it seems natural that there would be benefits in merging workers. However, empirical work has shown that humans are sometimes unable to know when algorithmic tools are more or less accurate (e.g. https://arxiv.org/abs/2406.01382). It would have been useful to see more discussion of how these results change when workers are imperfectly “merged”.
Theoretical Claims: N/A Experimental Designs Or Analyses: N/A Supplementary Material: N/A Relation To Broader Scientific Literature: See below Essential References Not Discussed: The idea of modeling jobs as given by multiple subtasks has been studied extensively in prior work, especially since some core parts of this paper could also model human-human integration (e.g. merging multiple jobs into one). For example, this lecture https://economics.mit.edu/sites/default/files/2024-09/Autor-Schumpeter-Expertise-20240829-handout.pdf and related papers https://www.nber.org/system/files/working_papers/w32140/w32140.pdf study similar issues. Other Strengths And Weaknesses: This is more a stylistic point, but the paper is very densely written, where the main body of the paper often includes more technical details than is necessary, which may make it difficult for the reader to follow the high-level story. Writing the main body at a higher level (moving more technical discussion to the appendix) would be helpful. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful and constructive feedback. We especially appreciate your recognition of our modeling approach and the suggestion regarding imperfect merging, which we have now incorporated. Please see this PDF for new figures (https://acrobat.adobe.com/id/urn:aaid:sc:eu:fef7d6b2-24f6-4a59-9386-fdf09456ed99). > *"...when workers are imperfectly 'merged'."* Thank you for this suggestion and the reference. We extend our merging analysis by introducing a trust parameter $\lambda$ to the merging experiment in Section 4 (Line 358 right column - 439 left column), which models imperfect merging by letting the estimated ability $\hat{c}=\lambda c$ deviate from the true action-level ability $c$ of worker $W_2$. We then assign action-level subskills to $W_2$ when its scaled ability, $\lambda c$, exceeds $W_1$'s ability (i.e., $1 - 0.78s_{j2}\le \lambda c$, extending the setting in Lines 414–415, left column), even though $W_2$ completes skills at level $c$. We analyze the probability gain $\Delta=P_{\text{merge}}-\max\{P_1,P_2\}$ across different values of $c$ and $\lambda$ (see Figure 5 in the PDF linked above). We find that even modest errors in $\lambda$ can sharply reduce $\Delta$. For example, when $\lambda=1.14$ and $c=0.2$, the probability gain becomes $\Delta=-0.2$, indicating that merging reduces job success. This illustrates the critical importance of accurate ability estimation, and complements the findings in Section E.2 on belief-driven merging. We will create a separate subsection in Appendix E to present these findings clearly. > *"...wonder whether another (or a simpler model) would have sufficed."* Our model combines a task-skill graph, subskill-level abilities, and error rate functions to estimate job success (Section 2). We clarify three key points: - **Generality:** The model allows flexible task-skill dependencies, ability profiles, and error functions.
Our framework includes two variants—noise in abilities and subskill division—to capture variation in worker performance. It can be adapted to simpler models while retaining its predictive power. - **Simplicity vs. expressiveness:** A special case with a single task, noise-free abilities, and no subskill split yields a binary job success probability (0 or 1) and misses correlations across skills. Without subskill division, it is difficult to analyze how merging a human and GenAI tool—each excelling in different subskills—affects performance. - **Non-triviality of results:** While some conclusions may seem intuitive, formally establishing phase transitions in job success requires careful analysis, as illustrated by the following examples: - **Dependent skills:** By introducing a dependency parameter $p \in [0,1]$, subskill errors are drawn from a shared latent status with probability $p$ and independently otherwise. The phase transition window becomes $\gamma_1=\frac{L\sqrt{\mathrm{sg}(A_1)\cdot\ln\frac{1}{\theta}}}{\mathrm{Infg}(A_1)\cdot\sqrt{1-p}}$, which vanishes as $p \to 1$, showing that stronger dependencies smooth out abrupt transitions. - **Max aggregation:** We show that Theorem 3.1 may not yield a phase transition when using the max operator for error rate functions $g,f$. Consider a job where one critical task is very difficult (e.g., skill difficulty 0.1) while all other tasks are relatively easy (e.g., skill difficulties 0.8). Since the job error rate is defined as $\max_{j\in[n]}h_j$, the overall error is dominated by the hard task. In this case, the job success probability is essentially $P=\Pr[h_1\leq\tau]$, making it a smooth function of the ability on that single, critical task—thus, no sharp phase transition occurs. Notably, the Lipschitz constant here is significantly higher (e.g., $L=1$) compared to the average-case model (e.g., $L=1/(2n)$ in the linear case), which prevents an abrupt transition. 
These findings show that richer modeling is necessary to rigorously capture how noise, dependencies, and aggregation affect outcomes like merging. We will include these as theorems in the appendix. > "Modeling jobs as multiple subtasks has been studied..." Thank you for this pointer. We will cite related works (e.g., Autor and Thompson) and clarify that our contribution lies in offering a formal, ML-grounded framework for modeling skill decomposition and human-AI complementarity. While inspired by similar questions, our analysis tackles fine-grained questions around merging, noise, and error functions not typically addressed in existing models. > "Making the main body.. more high level..." Thank you. We will revise the presentation to focus on high-level takeaways in the main paper and move derivations to the appendix. We anticipate using the ICML final version page allowance to improve clarity without altering content. Once again, we thank you for your detailed feedback and for helping us improve the quality and impact of our work. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed responses! Unfortunately, I think my concerns largely stand. Interestingly, I feel like this paper would be a much better fit for an economics venue than a CS one, partially because of its modeling style and analysis. I do appreciate the effort that the authors put into their rebuttal and discussion of my comments!
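The trust-parameter experiment described in this rebuttal can be captured in a few lines. The routing rule follows the rebuttal (a subskill goes to $W_2$ when the estimated ability $\lambda c$ beats $W_1$'s ability $1-0.78s$, while $W_2$ executes at its true ability $c$); the specific difficulties, the fixed decision-level ability, and the noise-free averaging are illustrative assumptions, not the paper's exact setup.

```python
def merged_job_error(lam, c=0.5, decision=0.8,
                     difficulties=(0.1, 0.3, 0.5, 0.7, 0.9)):
    """Noise-free job error of a W1+W2 merge under trust parameter lam.

    W1's action-level ability on a skill of difficulty s is 1 - 0.78*s
    (the setting quoted in the rebuttal); W2's is a constant c.  Each
    action-level subskill is routed to W2 whenever the *estimated*
    ability lam*c beats W1's, but W2 executes at its true ability c.
    Decision-level subskills stay with W1 at ability `decision`.
    """
    errors = []
    for s in difficulties:
        a1 = 1.0 - 0.78 * s
        action = c if lam * c >= a1 else a1   # routing uses the estimate
        errors.append(((1.0 - decision) + (1.0 - action)) / 2.0)
    return sum(errors) / len(errors)          # job error: mean subskill error
```

Both overestimation (large `lam`, routing skills to $W_2$ that $W_1$ handles better) and underestimation (small `lam`, never using $W_2$) raise the job error relative to perfect trust at `lam = 1`, matching the $\Delta$ degradation reported above.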
Summary: The authors propose a model of workforce replacement by AI and run some simulations based on it. Claims And Evidence: The authors claim to uncover deep truths about the job market, but they rest upon a foundation of assumptions that are not justified. They also do not make any real claims beyond stating they have a working model. This makes evaluating the paper difficult, as it presents theoretical arguments but the empirical claims are vague or unusable outside their framework. Methods And Evaluation Criteria: No, they only make claims based on the model. There is no modeling of real labour force dynamics or attempt to test the framework, just an examination of how it could be applied to an existing dataset. But even when the dataset is used, the framework is implicitly enforced thanks to the use of an LLM to generate the data. Theoretical Claims: I did not check the theorems. I found the assumptions of independence and lack of connection to the real world to make the actual model irrelevant to my analysis. Experimental Designs Or Analyses: See above. Supplementary Material: No, see above. Relation To Broader Scientific Literature: If the paper delivered what is promised this might have relevance, but as written it does not. I suggest the authors consider either increasing the empirical groundedness of the work before resubmitting to a CS-society venue, or doing more work to build on the theory and submitting to an econ theory venue. Essential References Not Discussed: None come to mind. Other Strengths And Weaknesses: I am particularly concerned with the assumptions that skills are independent in LLMs and that they will generalize reliably. Both of these are still open questions in the literature, so some discussion of these limitations should at least be included. Other Comments Or Suggestions: No Questions For Authors: What am I supposed to gain from the heatmaps? They seem very sensitive to initial conditions/parameter selection.
Showing that there are transition points in an econ model is not novel. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for taking the time to evaluate our submission and for the opportunity to clarify key aspects of our work. Please see this PDF for new figures (link: https://acrobat.adobe.com/id/urn:aaid:sc:eu:fef7d6b2-24f6-4a59-9386-fdf09456ed99). > *"...assumptions that skills are independent..."* Our model does **not** assume skill independence. Each skill $s \in [0,1]$ is assigned an ability function, and nearby skills (e.g., 0.7 and 0.8) have similar expected abilities. Thus, correlation across skills is encoded by design. For instance, coding and debugging are correlated due to their proximity in skill space. The independence assumption applies only to **skill execution**: once ability functions are fixed, realizations of performance across tasks are treated as independent—assuming task inputs are drawn independently. This is standard in modeling both humans and LLMs. We will clarify this in the final version. Additionally, Sec. 4 (Lines 373L–356R) already considers dependent skill executions: a parameter $p\in[0,1]$ introduces correlation via a shared latent variable $\beta$. We extend Theorem 3.1 to this setting (details appear in response to Reviewer aCoN). > *"deep truths about the job market... but do not make any real claims."* We make no such claim. Rather, we introduce a theoretical framework for analyzing human-AI task-level collaboration, supported by empirical validation. As stated in the abstract and introduction, our contributions include formalizing how job success varies with ability profiles and quantifying the benefit of merging human and AI workers—both supported mathematically and empirically. We also added new empirical results that may be of interest to practitioners. Notably, we now show that our model captures the **productivity compression** phenomenon reported in Brynjolfsson et al. (2023).
Specifically, Theorem 3.2 implies that merging low-skilled worker $W_1$ with a GenAI tool yields a larger gain in job success probability than merging a high-skilled worker $W_2$, thus narrowing the gap $P_2-P_1$. Since the job success probability correlates with productivity, this explains the empirical narrowing in productivity observed in the cited study. Our heatmaps (Figure 6 in the PDF) empirically reproduce this effect. To our knowledge, our model is among the first to offer a formal explanation of this compression under realistic ability assumptions (details appear in response to 1zaz.) > "No modeling of real labour force dynamics or attempts to test the framework." Our model is not intended as a macroeconomic tool. It is a **task-level, microeconomic framework** that enables reasoning about **skill decomposition** and **human-AI integration**. It is grounded in real-world data: O\*NET (task-skill-job structure), Big-Bench Lite (AI capabilities), and GPT-generated task-skill mappings. Section 4.4 discusses modeling limitations and directions for broader validation. > "I did not check the theorems. I found the assumptions... make the actual model irrelevant." We respect your decision. However, the theoretical results—such as phase transitions and merging theorems—are central to our work. Understanding when merging yields gains or when success changes abruptly is non-trivial and requires careful analysis. Other reviewers have highlighted the value of this contribution: Reviewer aCoN noted the rigor of the merging results, and Reviewer 1zaz emphasized the model’s potential for interpreting labor studies. > "What am I supposed to gain from the heatmaps?" The heatmaps illustrate how job success probability $P$ and merging gain $\Delta$ vary with ability profiles: - **Phase transition behavior:** The heatmaps confirm whether transitions in $P$ or $\Delta$ occur gradually or abruptly. Distinct color boundaries validate the predicted transitions. 
- **Extension beyond theory:** While Theorem 3.2 assumes identical profiles, Figure 4 shows that phase transitions still emerge when profiles differ. - **Effort required to achieve gains:** The size of the bright region in Figures 2 and 4(b) indicates how easily a large $\Delta$ can be achieved by merging. Figure 2(b) vs. Figure 4(b) shows that merging identical profiles yields more abrupt gains than merging distinct ones. These visualizations help connect our analytical results to observable behavior in practical scenarios. > "There is no connection to the real world." While our model is stylized to allow formal analysis, it is **empirically grounded** and addresses concrete, real-world questions about human-AI integration at the task level. It uses real datasets (O\*NET and Big-Bench Lite) and models skill-action decompositions and merging—challenges at the heart of current research in both ML and labor economics. We hope our clarifications and new results help make this connection clearer. We thank you again for your time, and hope our responses convey the relevance, rigor, and potential of our work.
XAttnMark: Learning Robust Audio Watermarking with Cross-Attention
Accept (poster)
Summary: This paper presents a robust watermarking scheme XAttnMark for audio content, where the embedding and detection of the watermark are performed using neural networks. A key aim of the work is to improve robust attribution (the ability to recover a binary code hidden in the content) while retaining robust detection (the ability to determine whether the content is watermarked or not). The proposed approach builds on AudioSeal, contributing several architectural modifications and a new loss function to improve imperceptibility of the watermark to the human ear. The architectural changes include: (1) sharing of the message conditioning module between the watermark embedding and detection networks, which involves using cross-attention in the detection network; (2) using a (learned) linear function for the message conditioning module in place of mean-pooling with temporal-axis repetition. Empirical evaluations indicate that XAttnMark achieves significantly higher attribution accuracy than AudioSeal, with comparable detection accuracy and perceptual quality. XAttnMark is also shown to be the only watermark that achieves reasonable robustness (accuracy > 90%) under editing using audio diffusion models. ## Update after rebuttal Prior to the rebuttal my main concerns were around: 1. The significance of the empirical results due to small sample sizes 2. Confusion around the cause for the improved performance 3. Unclear motivation for introducing a new perceptual loss The authors addressed all of these concerns: 1. They explained that the sample size is much larger than 100 as they produce 100 watermarked samples _per_ original audio sample. 2. They corrected my misinterpretation of the ablation study results, reassuring me that parameter sharing is in fact the leading cause for the improved performance. 3. They summarized limitations of AudioSeal's perceptual loss in their response, and noted that this is discussed in an appendix. 
I encourage the authors to include a summary of this discussion in the body of the paper. The authors also provided several new experimental results during the rebuttal period that further enhance the comprehensiveness of the empirical evaluation (evidence of statistical significance, investigation of localized watermarks, more comprehensive attack results, inclusion of false attribution rate for comparison with AudioSeal). I am now convinced that the paper is sound and will make a strong contribution to the audio watermarking literature. I have therefore increased my score to recommend acceptance. Claims And Evidence: - The claims of improved robustness and attribution accuracy are based on validation sets of size 100, whereas the AudioSeal paper uses a validation set of size 10,000. For a validation set of size 100, the standard error of accuracy/FPR/TPR could be as large as 5 percentage points, which may call into question the statistical significance of the claims. - The paper claims that “the fully disjointed architecture of AudioSeal ($\Theta_\mathcal{G} \neq \Theta_\mathcal{D}$) often converges fast for watermark detection learning but struggles to learn the message decoding part efficiently and accurately” (p. 4). However, the ablation study in Fig. 4 suggests that the main limitation may not be the lack of parameter sharing, but rather the choice of message conditioning module. Swapping out the proposed linear message conditioning module with AudioSeal’s results in a message bit accuracy drop from ~98% to ~62%. On the other hand, the use of cross-attention seems to have far less impact on accuracy (between 5-10 percentage points). - The paper claims that XAttnMark is consistently more robust than AudioSeal against adversarial watermark removal attacks (p. 8). However, the attacks against XAttnMark are generally weaker in terms of their perceptibility than the attacks against AudioSeal, as measured by PESQ, SI-SNR and ViSQOL. 
Hence the comparison may not be entirely fair. More broadly, the experiments supporting this claim are not as comprehensive as those performed in the AudioSeal paper, which includes stronger gradient-based attacks in the semi-black box and white box settings. Methods And Evaluation Criteria: Yes, the empirical evaluation largely follows norms established in prior work – in terms of datasets, evaluation metrics, and the kinds of benign/adversarial transformations considered for watermark removal. It’s great to see an ablation study (Table 4 and Figure 4) to assess the impact of the proposed architectural changes in isolation. My concerns around the evaluation are: - The use of much smaller validation sets than prior work. - The definition of attribution accuracy is unclear. In prior work (San Roman et al., 2024), the attribution accuracy is the fraction of examples for which the detection is positive _and_ the attribution is correct. However, in the paper, it appears to be defined as the fraction of detected examples for which the attribution is correct. - The paper does not report false attribution rate alongside attribution accuracy (see San Roman et al., 2024). This is important as there is a trade-off between false-positives and false-negatives. - There are no results comparing computational efficiency. Theoretical Claims: N/A Experimental Designs Or Analyses: N/A Supplementary Material: I looked over parts of Appendices A and C. Relation To Broader Scientific Literature: - Post-hoc neural-network based watermarking. The proposed architecture and training procedure builds on AudioSeal (San Roman et al., 2024) as explained in Appendix A. Similar approaches have been proposed in the image domain – e.g., StegaStamp by Tancik et al. (2020) which is not cited. The claim that AudioSeal “pioneered the disjointed generator-detector paradigm for neural watermarking” (p. 3) is incorrect. StegaStamp is the earliest example I’m aware of, but there may be others. 
- New loss for imperceptible watermarks. The proposed loss is inspired by psychoacoustic masking principles (Gelfand, 2017; Holdsworth et al., 1998), recognizing that human listeners struggle to detect small changes in the temporal/frequency proximity of loud sounds. San Roman et al. (2024) also proposed a perceptual loss for audio called TF-Loudness. The paper does not clearly articulate why a new loss is needed, nor how the two losses differ. **References** - Tancik, Matthew, Ben Mildenhall, and Ren Ng. "StegaStamp: Invisible hyperlinks in physical photographs." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2020. Essential References Not Discussed: I think it’s important to mention the Content Authenticity Initiative (CAI), as an alternative solution to watermarking. It enables tracking of content provenance (including source attribution) via the addition of C2PA metadata, secured by cryptographic means. Other Strengths And Weaknesses: S1. The empirical evaluation is generally well-executed, apart from my criticism about statistical significance due to small sample sizes. It’s great to see the inclusion of multiple baselines, a range of attacks/benign transformations (including a new diffusion-based attack), and multiple validation datasets. Incidentally, the results for other datasets in Appendix C.6 should be referenced in the body of the paper. S2. The writing is generally clear. However, I feel the introduction could be made more accessible for readers who are unfamiliar with watermarking and audio. W1. The new perceptual loss introduced in Section 4.2 is not adequately compared with prior work. I would like to see a qualitative and quantitative comparison with the TF-Loudness loss introduced in AudioSeal. 
The new loss seems more complicated than TF-Loudness in its construction, so it’s important to provide evidence that the additional complexity has some benefit (e.g., increased detection accuracy for a given level of imperceptibility). W2. The proposed watermarking scheme does not seem to include _localization_ as a design criterion. In contrast, both WavMark (Chen et al., 2023) and AudioSeal (San Roman et al., 2024) seek to embed _localized_ watermarks in audio, to enable detection of small segments of watermarked audio (e.g., AI-generated speech or copyrighted music) within longer audio clips. By abandoning localization as a constraint, XAttnMark may have an unfair advantage in its ability to achieve high detection/attribution accuracy. I'd like to see some discussion of this in the paper. W3. A key focus of the paper is on improving source attribution of audio watermarking. However, there is limited discussion explaining why source attribution is important and explaining whether the proposed watermarking scheme addresses the problem. For example, if users of a service are regarded as “sources”, then is a message pool of 10,000 large enough in practice? Other Comments Or Suggestions: - Table 3: The names of the quality metrics are introduced in Sec 5.3, after the table is introduced. The columns are missing arrows, indicating whether higher/lower values are better. Questions For Authors: 1. Could the authors comment on the statistical significance of the empirical results. 2. How is attribution accuracy defined? Why are the results for WavMark and AudioSeal different to those reported in the AudioSeal paper? 3. What is the motivation for introducing a new perceptual loss? Is there a problem with the perceptual loss used in AudioSeal? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their meticulous and constructive feedback. We will revise the manuscript by improving the introduction and mentioning the CAI initiative and the C2PA standard in the body of the paper. Our responses are as follows: > Q1. On the statistical significance of the results, and concerns about the validation set size. We want to clarify that, while our test set is indeed composed of 100 audio files, we follow AudioSeal's protocol by hiding 100 messages per file, corresponding to a total of 10k watermarked audio samples. We then apply 16 audio transformations to each one of them. So the total number of samples contributing to our scores is *160k*. To further validate the statistical significance of our results, we perform McNemar's test and Wilcoxon signed-rank tests across edits with 1e4 users. The results are reported in [Tables A and B](https://tinyurl.com/utkjb4yr), which show that the results in our setup are statistically significant in both attribution performance and perceptual quality. > Q2. On the definition of attribution accuracy, report of false attribution rate, and the discrepancy of results from the two baseline papers. In our work, we define the attribution accuracy in a different way, as the fraction of correct attributions among the *detected* audio inputs, which equals $1 - \mathrm{FAR}$ (False Attribution Rate reported in AudioSeal). This definition decouples the attribution performance from the detection performance. To show a direct comparison between the different metrics, we report the results on both MusicCaps and VoxPopuli setups in [Table C](https://tinyurl.com/3kjjr95f). > Q3&W.1 On the motivation of the proposed perceptual loss, the comparison with the TF-Loudness loss in AudioSeal, and the justification of additional complexity with score gain. In Appendix C.8, we discussed the advantage of our proposed masking loss compared to the TF-loudness loss. 
The TF-loudness loss employs a coarse approach based on loudness differences within each tile, neglecting sophisticated auditory masking effects, such as the interactions between masker and maskee across tiles. Additionally, we found that using loudness difference as a discrepancy measure provides only weak supervision. In contrast, we have designed a more sophisticated TF-weighted MSE loss, which simulates a two-dimensional energy decay in the temporal-frequency domain, effectively identifying masker-maskee pairs, leveraging psychoacoustic principles. Furthermore, we utilize mean-square error in the mel-spectrogram domain as our discrepancy measure, providing more fine-grained guidance (see our qualitative comparison in Figure 10 in the appendix). To quantitatively justify the effectiveness of our proposed loss, we evaluate the attribution accuracy under different watermark strengths (controlled by PESQ ranges) in [Figure A](https://tinyurl.com/y38skm78). These results clearly indicate that our method consistently achieves significantly higher attribution accuracy at each imperceptibility level. > Q4. On the lack of results comparing computational efficiency. We additionally report the results on computational efficiency in [Table D](https://tinyurl.com/2p9p5cpd). > W2. On the localization capabilities of XAttnMark. Thanks for this insightful point. Although we have not explicitly discussed the localization capabilities of XAttnMark, our model can be easily extended to have this ability with sliding window detection. Specifically, since our model includes shifting-robust transformations and operates on 1s segments, we can distribute the per-segment detection probability to the per-frame level with multiple overlapping detection windows as the BFD in WavMark does. We have implemented this and report the results in [Figure B](https://tinyurl.com/yxtft2mb). 
Results show that XAttnMark can achieve comparable localization performance to AudioSeal and significantly outperforms WavMark. > Q5. On the comparison with stronger gradient-based attacks in the semi-black box and white box settings and fairness on the HSJA. We additionally report the robustness against the white-box, semi-black-box attacks in [Figure C](https://tinyurl.com/2xmc3km3). Results show that XAttnMark is slightly more vulnerable to these two attacks compared to AudioSeal (might be attributed to our smaller detector). On the fairness concern on HSJA, we clarify that we use the same attack budget for the two methods, and the higher perceptibility score in ours is because HSJA fails to successfully find more adversarial samples within the given budget compared to AudioSeal's case. > Q6. On the contribution of the cross-attention and the conditioning module. Regarding the interpretation of Fig. 4, we clarify that our claim is that, when considering a single module in isolation, the cross-attention module (acc. of 62%) is more effective than the temporal conditioning module (acc. of 50%) in enhancing efficiency. --- Rebuttal Comment 1.1: Comment: I appreciate your detailed rebuttal. The new empirical results you've shared will round out the paper nicely. I'm satisfied with the responses to my concerns, and will update my score accordingly. --- Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer for their careful consideration and updated assessment. We are pleased that our response has addressed your concerns. We will incorporate the additional results into the revised version of the manuscript accordingly.
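The attribution metric discussed in this exchange (the fraction of correct attributions among the *detected* inputs, i.e. $1-\mathrm{FAR}$) can be sketched in a few lines; the sample data below is purely hypothetical and only illustrates the definition, not the paper's pipeline.

```python
# Toy illustration (hypothetical data) of attribution accuracy as defined in
# the rebuttal: correct attributions among *detected* watermarked inputs,
# which equals 1 - FAR (false attribution rate).

def attribution_accuracy(detected, correct_attr):
    """detected[i]: watermark detected in sample i.
    correct_attr[i]: decoded message matched the true source for sample i."""
    det_idx = [i for i, d in enumerate(detected) if d]
    if not det_idx:
        return 0.0
    correct = sum(1 for i in det_idx if correct_attr[i])
    return correct / len(det_idx)

# 5 watermarked samples: 4 detected; 3 of the detected ones correctly attributed.
detected     = [True, True, True, True, False]
correct_attr = [True, True, True, False, False]

acc = attribution_accuracy(detected, correct_attr)
far = 1.0 - acc  # false attribution rate among detected samples
print(acc, far)  # 0.75 0.25
```

Note how the undetected fifth sample does not count against attribution, which is what decouples the two metrics.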
Summary: This paper focuses on robust audio watermark detection and source attribution, and reads more like a technical report than a top-tier conference paper. Specifically, it adopts a blended architecture combining a disjointed generator-detector design with a fully shared-parameter design. Besides, a temporal conditioning mechanism and a per-tile temporal-frequency masking loss are utilized to improve watermarking performance. In general, the writing is good from the technical aspect. It emphasizes the details of each contribution but lacks more thorough and deep analysis. The experimental results show the effectiveness of the proposed method. Claims And Evidence: The experimental results, including comparison with SOTA and ablation studies, show the effectiveness of the proposed method. Methods And Evaluation Criteria: As discussed above, the paper focuses on illustrating technical details while lacking deeper insights into robust audio watermarking. Theoretical Claims: There are no proofs for theoretical claims. Experimental Designs Or Analyses: The experimental results, including comparison with SOTA and ablation studies, show the effectiveness of the proposed method. Supplementary Material: Yes, it provides more details and experimental results. Relation To Broader Scientific Literature: It achieves better performance on both audio watermark detection and source attribution compared with SOTA methods. Essential References Not Discussed: The references seem adequate. Other Strengths And Weaknesses: As discussed above, the motivation behind the key contributions is not clear; the paper merely introduces problems, proposes detailed techniques, and verifies them with experiments. Although most papers follow a similar style, this paper reads more like a technical report than the other papers under review. Other Comments Or Suggestions: The writing is good in general. However, in my opinion, it is more like a technical report. 
Questions For Authors: How about the comparison of model's parameters, training speed and inference speed? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's insightful attention and precise feedback. We have addressed the reviewer's concerns as follows: > Q1: How about the comparison of model's parameters, training speed and inference speed? **Response:** We additionally report the model size, training speed, and inference speed comparison with AudioSeal in [Table D](https://tinyurl.com/2p9p5cpd). Our model uses fewer parameters and has a smaller size for both the generator and the detector. Although our generator has higher FLOPs and a slightly increased inference time per segment (~0.3 ms/segment), our detector significantly reduces FLOPs while maintaining similar inference efficiency overall. In training speed, our model achieves a similar per-iteration time (around 1.15 s/iter) to AudioSeal, with faster convergence in learning message decoding. Specifically, as shown in Appendix C.2, our model takes ~4k steps to reach perfect detection accuracy, and ~10k steps to reach perfect attribution accuracy, while AudioSeal takes ~32k steps to reach perfect detection accuracy, and 50k steps to reach around 70% attribution accuracy. This demonstrates that XAttnMark achieves 5 to 8 times better training efficiency than AudioSeal. --- > Q2: As discussed above, the paper focuses on illustrating technical details while lacking deeper insights for robust audio watermarking. **Response:** Thank you for this valuable point. We will refine our presentation to include more design insights. Due to the space limit, we put a significant part of the details on the design motivation in the appendix sections. In the appendix, we have provided more analysis and discussion on the proposed modules, including the cross-attention architecture and the proposed temporal-frequency perceptual loss. Specifically, - In Appendix C.2 (Analysis of the Training Dynamics of Models with Different Architectures), we analyze the training dynamics of different architectures under a controlled experimental setup to better understand their inherent learning capabilities. - In Appendix C.8, we provide a comprehensive comparison with the TF-loudness loss of AudioSeal. During the revision of the paper, we will add these deeper technical insights to the main text for better readability.
Summary: The paper introduces a novel neural audio watermarking framework called XATTNMARK. The key contributions include: A cross-attention mechanism that enables efficient message retrieval by sharing an embedding table between the generator and detector. A temporal conditioning module that distributes the message temporally, improving learning efficiency. A psychoacoustic-aligned temporal-frequency masking loss that enhances watermark imperceptibility by leveraging human auditory masking effects. The main findings show that XATTNMARK achieves state-of-the-art performance in both detection and attribution, demonstrating superior robustness against a wide range of audio transformations, including challenging generative editing. Claims And Evidence: The claims made in the paper are well-supported by clear and convincing evidence. Methods And Evaluation Criteria: The evaluation on diverse audio transformations and benchmark datasets provides a rigorous assessment of the method's robustness and practical applicability. Theoretical Claims: The theoretical claims in the paper are supported by empirical evidence and are grounded in well-established principles. The paper mentions adversarial attacks but does not provide detailed theoretical analysis on the robustness of XATTNMARK against such attacks. Experimental Designs Or Analyses: The experimental designs and analyses in the paper are generally sound and provide strong evidence to support the claims. The subjective listening test involves a relatively small number of participants. The ablation study on the adaptive bandwidth (constant weight γ = 1) is limited to a single configuration. Supplementary Material: No Relation To Broader Scientific Literature: The key contributions of the paper are well-grounded in the broader scientific literature on audio watermarking and generative audio technologies. 
XATTNMARK builds upon previous work by introducing innovative mechanisms for message retrieval, temporal conditioning, and psychoacoustic alignment. Essential References Not Discussed: No Other Strengths And Weaknesses: No Other Comments Or Suggestions: No Questions For Authors: See above Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for taking the time to review our manuscript and providing valuable feedback. We have carefully considered each point raised and provide our detailed responses below: > Q1. The subjective listening test involves a relatively small number of participants. **Response:** We initially launched our subjective listening test with 18 participants, following the ITU-R BS.1534-1 [1] standard and practice in related audio publications [2,3]. Specifically, ITU-R BS.1534-1 suggests that **when the conditions of a listening test are tightly controlled on both the technical and behavioral side, experience has shown that data from no more than 20 subjects are often sufficient for drawing appropriate conclusions from the test**. Our internal test with unified software/process and participants from a similar background profile satisfies this control requirement. During the post-screening process, we further filtered out 6 participants who missed the reference audio to ensure the validity of the results, resulting in 12 valid evaluators. Similarly, SilentCipher [3] also performs post-processing on the test group results with 12 valid evaluators in total. While these references support our setup, we acknowledge that the population size for our MUSHRA test is relatively limited. If needed, we will expand the test population and update our results in the final version of the paper. [1] Method for the subjective assessment of intermediate quality level of coding systems (Recommendation ITU-R BS.1534-1), International Telecommunication Union. (2003). [2] Davidson, G., Vinton, M., Ekstrand, P., Zhou, C., Villemoes, L., & Lu, L. (2023). High Quality Audio Coding with MDCTNet. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP). [3] Singh, M. K., Takahashi, N., Liao, W., & Mitsufuji, Y. (2024). SilentCipher: Deep Audio Watermarking. In Proc. Interspeech 2024 (pp. 2235-2239). --- > Q2. 
The ablation study on the adaptive bandwidth (constant weight $\gamma$ = 1) is limited to a single configuration. **Response:** Our ablation study on the adaptive bandwidth mainly focuses on the two most representative cases: the constant weighting and the adaptive weighting for per-mel masking radii, which we propose in our work. In our design, the per-mel-bin masking radii $r^m$ are adjusted based on the frequency instead of being set as a constant across all frequencies. Adjusting different $\gamma$ values can be viewed as a hyperparameter tuning process on the base radius $r^m_b$, which still assigns the same radii across all the mel-bins and conceptually belongs to the same class of constant weighting. Due to time constraints, we leave the exploration of this hyperparameter search for constant weighting as future work.
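The distinction drawn in this rebuttal can be illustrated numerically: a constant weighting assigns every mel bin the same radius $\gamma \cdot r^m_b$, while an adaptive weighting varies the radius with the bin index. The linear schedule below is a hypothetical example for illustration only; the paper's actual frequency-dependent weighting is not specified here.

```python
# Illustrative contrast (hypothetical functional form) between a constant
# masking radius shared by all mel bins and a per-bin radius that grows
# with frequency, as discussed in the ablation rebuttal above.

N_MELS = 8
r_base = 2.0  # base radius r^m_b (arbitrary value for illustration)
gamma = 1.0   # the constant-weight setting used in the ablation

# Constant weighting: every mel bin gets the same radius gamma * r_base.
constant_radii = [gamma * r_base for _ in range(N_MELS)]

# One possible adaptive schedule: radius widens linearly with mel index.
adaptive_radii = [r_base * (1.0 + m / (N_MELS - 1)) for m in range(N_MELS)]

print(constant_radii[0], constant_radii[-1])  # 2.0 2.0
print(adaptive_radii[0], adaptive_radii[-1])  # 2.0 4.0
```

Under this view, sweeping $\gamma$ only rescales the flat profile, whereas the adaptive scheme changes its shape across bins.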
Summary: This paper proposes XATTNMARK, a novel neural audio watermarking system designed to achieve both robust detection and accurate message attribution, two goals that are difficult to achieve simultaneously in prior work. The authors blend the architectural benefits of WavMark and AudioSeal by introducing partial parameter sharing between the generator and detector, enabled via a cross-attention decoding mechanism and a shared embedding table. Additionally, a temporal message conditioning module and a psychoacoustic-aligned time-frequency masking loss are proposed to enhance imperceptibility and robustness. Experiments demonstrate that XATTNMARK achieves SOTA performance across a wide range of audio transformations, including generative edits, and adversarial attacks, while maintaining high perceptual quality. Claims And Evidence: The central claim is that XATTNMARK simultaneously achieves robust detection and accurate attribution across diverse audio transformations, outperforming existing SOTA methods. The empirical evidence, especially Table 1 and Table 2, supports this claim convincingly. The robustness against generative editing and adversarial removal is particularly noteworthy, as prior methods degrade significantly under such settings. Ablation studies (Figure 4) and quality assessments (Table 4) further strengthen the evidence for each architectural component’s contribution. Methods And Evaluation Criteria: The methodology is technically sound. The partial parameter sharing via a shared embedding table and cross-attention decoding is well-motivated and novel. The experimental protocol is thorough, using 16 types of transformations, two generative editing models (AudioLDM2, Stable Audio), and adversarial perturbations (HSJA). Baselines are strong (AudiowMark, WavMark, TimbreWM, AudioSeal), and evaluation metrics include detection/attribution accuracy, perceptual audio quality, and robustness under various threats. 
Theoretical Claims: The paper is mostly empirical, but the formulation of the psychoacoustic-aligned temporal-frequency masking loss is theoretically grounded in auditory perception literature. The architecture and cross-attention design are sound from a deep learning perspective. Experimental Designs Or Analyses: The paper extensively benchmarks detection and attribution across a wide range of realistic scenarios, including speed edits, generative model edits, and adversarial attacks. The performance gains are statistically significant, and trade-offs are clearly analyzed. Ablations are especially useful in isolating the impact of core contributions. Supplementary Material: While the supplementary is referenced several times (e.g., App. C.3, C.4, C.7), the main paper stands strong on its own. Inclusion of subjective MUSHRA results in the appendix is a good touch. Relation To Broader Scientific Literature: The paper properly contextualizes its contribution within prior watermarking methods (AudiowMark, WavMark, AudioSeal), as well as broader work on dataset attribution and copyright auditing. References are recent and well-curated. Essential References Not Discussed: None glaring. Other Strengths And Weaknesses: Strengths: - Strong empirical gains across detection and attribution. - Well-structured methodology with insightful architectural design. - Broad experimental coverage (standard, generative, adversarial). - High practical relevance in the age of generative audio content. Weaknesses: - The model still struggles with extreme transformations like speed changes (acknowledged in the text). - Attribution performance under generative edits was not deeply analyzed; only detection is reported. - The paper does not evaluate robustness under white-box adversarial attacks. Other Comments Or Suggestions: - The paper would benefit from clarifying the decoding pipeline under attribution evaluation with large user pools (e.g., scalability of Hamming decoding). 
- Consider releasing code/models to improve reproducibility and adoption. Questions For Authors: - Can the attribution method be extended to a continuous space (e.g., using embeddings) to improve robustness under generative edits? - How sensitive is the performance to the architecture of the embedding table and temporal conditioning module? - The paper does not evaluate robustness under white-box adversarial attacks. How would the proposed method perform under white-box adversarial attacks compared with existing methods? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their valuable feedback. We have addressed the reviewer's concerns as follows: > W1. The model still struggles with extreme transformations like speed changes (acknowledged in the text). In the paper we show that the model is able to effectively perform the detection task under speed-change transformations. We also acknowledge that the model struggles against speed changes (and other challenging transformations like generative edits) for the attribution task. However, in Appendix C.3.1, we show that, for the challenging speed-change operation, we can build a simple speed-reversion layer that greatly improves the attribution performance without significant overhead (as shown in Table 7 in the appendix). > W2. Attribution performance under generative edits was not deeply analyzed; only detection was reported. As mentioned earlier, we acknowledge that our model is still limited in attribution robustness against generative edits. However, we believe that this still marks a significant step forward in watermarking against generative audio edits. To the best of our knowledge, we are the first to report non-trivial detection robustness (90%+) against generative editing in a zero-shot manner. With additional specialized training on those transformations, the attribution performance might be further improved. We leave this as future work. > W3. The paper does not evaluate robustness under white-box adversarial attacks. We additionally report the robustness against white-box adversarial attacks in [Figure C](https://tinyurl.com/2xmc3km3). > C1. The paper would benefit from clarifying the decoding pipeline under attribution evaluation with large user pools (e.g., scalability of Hamming decoding). Please refer to the 'Evaluation Setup' part of Appendix A and Table 6, where we have provided a detailed discussion on the Hamming decoding process used in the attribution evaluation as well as the scalability aspect. > C2. 
Consider releasing code/models to improve reproducibility and adoption. Thank you for the suggestion. We will consider releasing the code and models upon the paper's publication. > Q1: Can the attribution method be extended to a continuous space (e.g., using embeddings) to improve robustness under generative edits? This is an interesting idea. Currently, existing watermarking methods are mostly designed for embedding discrete bit-strings (e.g., 0s and 1s). However, our experiments show that, under challenging generative editing, previous methods fail at both detection and attribution. One potential reason is that we treat the source as discrete, information-less bit-strings, without leveraging the semantic information that could help the attribution task (e.g., style attribution [1]). For example, in the audio domain, a copyrighted timbre might have countless reference audio files, which could be leveraged as attribution anchors for more robust attribution. This is an orthogonal direction to our research, which we leave as future work. [1] Wang, Sheng-Yu, et al. "Evaluating data attribution for text-to-image models." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023. > Q2: How sensitive is the performance to the architecture of the embedding table and temporal conditioning module? We additionally provide a sensitivity study on the embedding hidden dimension and the temporal conditioning architecture in [Figure D](https://tinyurl.com/49xfer89). For the embedding table, we found that the embedding dimension $H$ affects the convergence speed of the watermark model in both detection and message decoding (with message decoding being more sensitive to $H$). With a grid search over $[\frac{b}{2}, b, 2b, 4b, 8b]$, where $b$ is the bit-length of the secret message ($b=16$), we found that all $H$ values from $H=\frac{b}{2}$ to $H=4b$ yield fast convergence, while $H=8b$ does not.
For the temporal conditioning module, we additionally provide an ablation study with different numbers of MLP layers (linear, 2-layer MLP, and 3-layer MLP). Results show that the linear projection proposed in XAttnMark is the only one that converges, indicating that the convergence is sensitive to the architecture choice of the temporal conditioning module. > Q3: The paper does not evaluate robustness under white-box adversarial attacks. How would the proposed method perform under white-box adversarial attacks compared with existing methods? We additionally report the robustness against white-box, semi-black-box, and Gaussian noise attacks in [Figure C](https://tinyurl.com/2xmc3km3). The results show that XAttnMark is more robust than AudioSeal in Gaussian noise attacks. In the white-box and semi-black-box attack scenario, we observe that XAttnMark is slightly more vulnerable to white-box attacks compared to AudioSeal, which might be due to the smaller model size of the detector module (XAttnMark is 7.59M, while AudioSeal is 8.65M).
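As a side illustration of the Hamming decoding raised in C1 above: attribution over a user pool reduces to a minimum-Hamming-distance search against each user's registered bit-string. The sketch below is our own generic illustration (toy codebook and a hypothetical `attribute` helper), not the paper's actual pipeline:

```python
import numpy as np

def attribute(decoded_bits, user_codebook):
    """Return the index of the codebook row with minimum Hamming
    distance to the decoded (possibly corrupted) bit-string."""
    dists = np.count_nonzero(user_codebook != decoded_bits, axis=1)
    return int(np.argmin(dists))

# Toy pool of 4 users with 8-bit registered messages.
codebook = np.array([
    [0, 0, 0, 0, 0, 0, 0, 0],
    [1, 1, 1, 1, 0, 0, 0, 0],
    [0, 0, 0, 0, 1, 1, 1, 1],
    [1, 1, 1, 1, 1, 1, 1, 1],
])
decoded = np.array([1, 1, 0, 1, 0, 0, 0, 0])  # user 1's message with one bit flipped
print(attribute(decoded, codebook))            # -> 1
```

As long as the number of corrupted bits stays below half the minimum pairwise distance of the codebook, the nearest codeword is the true user, which is why attribution degrades gracefully under mild transformations but can fail under heavy ones.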
Dimension-Free Adaptive Subgradient Methods with Frequent Directions
Accept (poster)
Summary: In machine learning, the seminal work [DHS'11] proposed the adaptive subgradient method with full matrices (ADA-FULL), which requires maintaining a preconditioning matrix with $O(d^2)$ space and $O(d^3)$ running time. However, ADA-FULL suffers from high-dimensional dependence in its regret bound and computational complexity, making it inefficient for large-scale optimization problems. To address these limitations, several methods have been proposed, including ADA-FD [WZ'18] and S-ADA [FCSAH'23], which leverage matrix sketching techniques to reduce computational costs. This paper further advances the acceleration of adaptive subgradient methods by integrating Frequent Directions (FD), a powerful matrix sketching approach. Specifically, it introduces the Follow-the-Sketchy-Leader (FTSL) method, which improves regret bound, space efficiency, or time complexity compared to prior works. Additionally, the paper extends these techniques to Shampoo, a second-order optimization method, resulting in the FTSL-Shampoo algorithm, which enhances memory efficiency and achieves dimension-free theoretical guarantees. Experiments on real datasets for online classification and image classification tasks demonstrate that the proposed methods outperform existing approaches in terms of test accuracy and running time. Claims And Evidence: I believe so, though I haven’t carefully reviewed the proofs in the appendix. Methods And Evaluation Criteria: Yes, this paper compares with prior works on two tasks: online classification and image classification, in terms of test accuracy, training or test loss, and running time. Theoretical Claims: I didn't read the proofs in the appendix. Experimental Designs Or Analyses: Yes, I reviewed the experiments in Section 5 and Appendix D. In online classification, the proposed methods are compared with prior works on the datasets Gisette and Epsilon of LIBSVM; in image classification, the experiments are implemented on CIFAR-10 and CIFAR-100. 
Supplementary Material: Yes, I briefly reviewed all the supplementary material. Relation To Broader Scientific Literature: This paper integrates FD with Shampoo to develop FTSL-Shampoo, which provides improved efficiency for optimization problems with matrix variables (e.g., neural network training). Essential References Not Discussed: None. Other Strengths And Weaknesses: $\textbf{Strengths:}$ (1) This paper combines FD with ADA-FULL and proposes some new methods, including FTSL, FTFSL, and FTSL-Shampoo. On the theoretical side, these new methods improve on existing works with respect to regret bound, space, or running time. (2) The experimental results indicate the efficiency of the proposed methods on two real tasks. $\textbf{Weaknesses:}$ (1) In Table 1, the proposed method FTSL has the same space and time complexity as the method ADA-FD (P), and their regret bounds are quite close due to $\sqrt{\sum_{t=1}^{T} \rho_t} \le \sum_{t=1}^{T} \sqrt{\rho_t} \le \sqrt{T \sum_{t=1}^{T} \rho_t}$. Likewise, in Table 2, FTFSL and ADA-FFD (P) are in the same situation. Consequently, from a theoretical perspective, the proposed methods FTSL and FTFSL do not offer a clear advantage over existing works. (2) From a technical perspective, both this paper and the works ADA-FD/ADA-FFD adopt the framework that integrates FD with ADA-FULL, which somewhat limits the novelty of this paper. (3) For the experiments, in Figures 3 and 4, the proposed method FTFSL doesn't have a significant advantage over other methods, like ADA-FFD (M); similarly, in Figures 5 and 6, the proposed FTSL-Shampoo method exhibits performance comparable to the prior methods Shampoo and S-Shampoo. (4) This paper evaluates the proposed methods on only four real datasets for online classification and image classification, which is relatively limited and weakens the assessment of their effectiveness. Other Comments Or Suggestions: None. Questions For Authors: See Weaknesses. Code Of Conduct: Affirmed.
Overall Recommendation: 2
Rebuttal 1: Rebuttal: Many thanks for your constructive feedback! --- Q1. FTSL and FTFSL have the same complexities as ADA-FD(P) and ADA-FFD(P), and regret bounds are close. A1. We acknowledge that the proposed methods have the same time and space complexities as ADA-FD(P) and ADA-FFD(P). However, we want to clarify that our regret bounds are a significant improvement over theirs. * The additional error terms in ADA-FD(P)/ADA-FFD(P) and FTSL/FTFSL are $\sum_{t=1}^T \sqrt{\rho_{t}}$ and $\sqrt{\sum_{t=1}^T\rho_{t}}$, respectively. * As you pointed out, we have $\sqrt{\sum_{t=1}^T\rho_{t}}\leq\sum_{t=1}^T\sqrt{\rho_{t}}\leq\sqrt{T\sum_{t=1}^T\rho_{t}}$, which means there is a gap of up to $\sqrt{T}$ in the worst case. Note that $T$ is the number of iterations, which grows without bound and cannot be ignored. * Moreover, as discussed in Observation 2 of [FCSAH'23], the regret bound of ADA-FD(P)/ADA-FFD(P) is $\Omega(T^{3/4})$ in some cases, while FTSL/FTFSL achieves a better $O(T^{1/2})$ bound, providing a substantial advantage. We provide the proof below. According to [FCSAH'23], there exists a situation in which we receive the linear loss $f_t(\textbf{x}) = \langle\textbf{x},\textbf{g}_t \rangle$, where $\textbf{g}_t \in \mathbb{R}^d$ is a random vector drawn i.i.d. from any distribution over $r\leq d$ orthonormal vectors. As pointed out by [FCSAH'23], for any $\tau\leq r$, the bound on the expected regret of ADA-FD(P)/ADA-FFD(P) is $\Omega(T^{3/4})$. The regret of FTSL/FTFSL is $\eta \text{tr}(G_T^{1/2})+\frac{1}{\eta}\sqrt{\sum_{t=1}^T\rho_{t}} \leq \eta \text{tr}(G_T^{1/2})+\frac{1}{\eta}\sqrt{T\max_{t\in [T]}\rho_t} \leq O(T^{1/2})$, where the last inequality is due to $\text{tr}(G_T^{1/2}) = O(T^{1/2})$, $\rho_t = 0$ or $1$, and setting $\eta = O(1)$. --- Q2. Both this paper and ADA-FD/ADA-FFD integrate FD with ADA-FULL, which limits the novelty. A2.
While both our paper and ADA-FD/ADA-FFD integrate FD with ADA-FULL, there are significant differences between them. * Compared to ADA-FD and ADA-FFD, our algorithm tracks the information discarded in FD and incorporates it back into the preconditioning matrix, achieving a better regret bound. * Our work additionally considers a more practical setting, optimization problems with matrix variables. We integrate FD with Shampoo and provide a novel analysis under the primal-dual framework. The dimension-free theoretical guarantee is another contribution of this work. --- Q3. In Figures 3 and 4, FTFSL doesn't have a significant advantage over ADA-FFD (M). In Figures 5 and 6, FTSL-Shampoo exhibits performance comparable to Shampoo and S-Shampoo. A3. We believe there might be some misunderstandings in this part. * Actually, in Figures 3 and 4, FTFSL (red line) outperforms ADA-FFD (M) (pink dashed line) in terms of testing accuracy and training loss. * We want to clarify that FTSL-Shampoo is an approximate version of Shampoo, aiming to enhance computational efficiency while maintaining comparable effectiveness. FTSL-Shampoo utilizes less information in each round for updates (some eigenvalues are discarded). In Figures 5 and 6, FTSL-Shampoo exhibits performance comparable to Shampoo, while significantly improving memory efficiency and reducing running time, which aligns with the theoretical guarantees. * As demonstrated by the experimental results, FTSL-Shampoo (orange line) substantially outperforms S-Shampoo (blue dashed line) across all metrics, including testing accuracy, testing loss, and training loss. --- Q4. This paper evaluates methods on only four datasets. A4. Following your suggestion, we conduct experiments on an NLP task, and the results can be found at the following anonymous link: https://anonymous.4open.science/r/ICML-14491/results.pdf. Concretely, we train a 2-layer Transformer on the WikiText-2 dataset.
We use 256-dimensional word embeddings, 256 hidden units, and 2 heads. The batch size is set to 64, and all methods are trained for 40 epochs with a dropout rate of 0.1. As can be seen, FTFSL and FTSL-Shampoo achieve lower loss and better perplexity than the other sketching-based algorithms, indicating the effectiveness of the proposed methods. Additionally, we would like to take this opportunity to clarify that the primary contribution of this paper lies in the theoretical aspects, which include the following three key points: * First, we propose FTSL, which achieves a dimension-free regret bound and maintains the same memory complexity as previous works. * Second, we develop Fast S-ADA and FTFSL to further reduce the time complexity, while preserving the same regret bounds. * Next, we investigate optimization problems with matrix variables, a scenario commonly encountered in deep learning tasks. FTSL-Shampoo enjoys a stronger theoretical guarantee than S-Shampoo [FCSAH'23].
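For readers unfamiliar with the sketching primitive at the center of this thread, a minimal Frequent Directions implementation shows where the discarded mass comes from. This is a textbook numpy sketch of the standard algorithm (with the usual guarantee $\|A^\top A - B^\top B\|_2 \le \|A\|_F^2/\ell$), not the authors' code:

```python
import numpy as np

def frequent_directions(A, ell):
    """Stream the rows of A into an ell x d sketch B. Whenever the sketch
    fills up, shrink all squared singular values by the smallest one,
    which zeroes out at least one row and frees space."""
    _, d = A.shape
    B = np.zeros((ell, d))
    for row in A:
        free = np.flatnonzero(~B.any(axis=1))      # exactly-zero rows
        if free.size == 0:
            _, s, Vt = np.linalg.svd(B, full_matrices=False)
            # Subtract the smallest squared singular value from all of them.
            B = np.sqrt(np.maximum(s**2 - s[-1]**2, 0.0))[:, None] * Vt
            free = np.flatnonzero(~B.any(axis=1))
        B[free[0]] = row
    return B

rng = np.random.default_rng(0)
A = rng.standard_normal((500, 40))
B = frequent_directions(A, ell=10)
spectral_err = np.linalg.norm(A.T @ A - B.T @ B, 2)
print(spectral_err <= np.linalg.norm(A, "fro") ** 2 / 10)  # -> True
```

The squared singular value subtracted at each shrink step is the per-round discarded mass; it plays the role of $\rho_t$ in the bounds above, and the difference between accumulating it as $\sqrt{\sum_t \rho_t}$ versus $\sum_t \sqrt{\rho_t}$ is exactly the gap discussed in A1.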
Summary: This work proposed an adaptive online subgradient method with frequent directions. The main contributions are that the regret bound is dimension-free and that the algorithm only requires $O(\tau d)$ time in each iteration. ## update after rebuttal All of my questions have been addressed. Hence, I would like to increase my overall rating. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: The theoretical claims sound correct. Experimental Designs Or Analyses: The experimental designs and results sound reasonable. Supplementary Material: I have roughly reviewed the proofs in the supplementary material (not all details). Relation To Broader Scientific Literature: See the part "Questions For Authors". Essential References Not Discussed: See the part "Questions For Authors". Other Strengths And Weaknesses: See the part "Questions For Authors". Other Comments Or Suggestions: See the part "Questions For Authors". Questions For Authors: The main comments: 1. The literature review should be more appropriate. The idea of adaptive FD, which adds the cumulative discarded information of FD back, was first proposed by Luo et al. (2019) and Chen et al. (2020), and their time complexities achieve $O(\tau d)$ using a trick similar to that in Section 4.2. The main contribution of Feinberg et al. (2023) is improving the regret bound of Luo et al. (2019). 2. The comparison with Spectral Compensation Frequent Directions (SCFD) (Chen et al., 2020) should be discussed. It seems that both this paper and Feinberg et al.'s (2023) method use SCFD to approximate $G_T$. 3. Assumption 3.3 looks very strong although it is introduced in previous work. Is it possible to replace the low-rank assumption with an approximately low-rank assumption? 4. Can we guarantee that $\tilde G_T$ is non-singular during the iterations? 5. The experiments test the algorithms on the nonsmooth hinge loss, and I find the analysis also does not rely on the smoothness of the loss function.
Therefore, the word “gradient” in many sentences should be replaced with “subgradient”. The notation $\nabla f_t$ is also somewhat inappropriate. The minor comments: 1. The font of vec in Lemma B.7 should be consistent with the description before this lemma. 2. The domain of $\bf x$ is required in line 1051. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your valuable feedback! --- Q1. The literature review should be more appropriate. A1. Thank you for bringing these related works to our attention. After checking the papers, we acknowledge that the idea of adaptive FD by incorporating the cumulative discarded information was first introduced by Luo et al. (2019) and Chen et al. (2020). We sincerely apologize for the omission of these works and will include a discussion of them in the revised version. --- Q2. The comparison with SCFD (Chen et al., 2020). A2. Although our sketching technique is similar to SCFD, the setting and algorithmic design of our work are fundamentally *distinct*: * Chen et al. (2020) focus on linear contextual bandits, while our work investigates the *general online convex optimization* problem. * Their method is based on LinUCB, whereas our FTSL/FTFSL and FTSL-Shampoo are based on ADA-FULL and Shampoo, respectively. * Due to the *essential distinctions* in settings, our algorithmic design and theoretical analysis are different from those of Chen et al. (2020). --- Q3. Is it possible to replace the low-rank assumption with an approximately low-rank assumption? A3. First, we would like to emphasize that Assumption 3.3 is _only_ used in the analysis of FTSL-Shampoo. Moreover, the analyses of Shampoo and S-Shampoo also utilize this assumption. In our paper, Assumption 3.3 is used in Lemma B.9 to give the lower bounds of the sketching preconditioning matrices. Second, we can replace Assumption 3.3 with the approximately low-rank assumption. However, it would introduce an additional approximation term in the final regret, leading to a difference from the bounds of Shampoo and S-Shampoo. We consider this modification as a direction for future research. --- Q4. Can we guarantee that $\tilde{G}_t$ is non-singular during iterations? A4. In fact, $\tilde{G}_t$ is not always non-singular. It is _singular_ in the early stages of the iteration.
According to Step 8 of Algorithm 2, it becomes _non-singular_ after a certain number of iterations. When $\tilde{G}_t$ is singular, we use the Moore-Penrose pseudoinverse in our analysis. In practice, we can ensure its non-singularity by adding a small regularization term $\epsilon I_d, \epsilon > 0$ into $\tilde{G}_t$. --- Q5. The word “gradient” in many sentences should be replaced with “subgradient”. Q6. Minor comments. A5 & A6. Thank you for pointing out these typos. We will correct this misuse and some minor typos in the revised version. **Reference:** Luo Luo, Cheng Chen, Zhihua Zhang, Wu-Jun Li, and Tong Zhang. Robust frequent directions with application in online learning. JMLR, 2019. Cheng Chen, Luo Luo, Weinan Zhang, Yong Yu, and Yijiang Lian. Efficient and robust high-dimensional linear contextual bandits. IJCAI, 2020. --- Rebuttal Comment 1.1: Comment: Thanks for your detailed response. All of my questions have been addressed. Hence, I would like to increase my overall rating. --- Reply to Comment 1.1.1: Comment: Dear Reviewer GoSK, Thank you for your kind response! We will improve our paper according to your constructive reviews. Best regards, Authors
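The two routes mentioned in A4 (the Moore-Penrose pseudoinverse in the analysis, and an $\epsilon I_d$ regularizer in practice) can be illustrated on a toy singular matrix; the matrix and $\epsilon$ below are our own choices, purely for illustration:

```python
import numpy as np

G = np.array([[4.0, 2.0],
              [2.0, 1.0]])        # rank-1 PSD matrix: a singular early-stage G_t
eps = 1e-6

G_pinv = np.linalg.pinv(G)        # Moore-Penrose pseudoinverse (analysis route)
G_reg = G + eps * np.eye(2)       # regularized matrix (practical route)
G_inv = np.linalg.inv(G_reg)      # the ordinary inverse now exists

print(np.linalg.matrix_rank(G), np.linalg.matrix_rank(G_reg))  # -> 1 2
```

Since $G$ is positive semidefinite, $G + \epsilon I_d$ has all eigenvalues at least $\epsilon > 0$, so it is always invertible regardless of how many iterations have elapsed.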
Summary: The paper proposes adaptive subgradient methods for online convex optimization that have better regret bounds and time complexities than existing methods. This is achieved by analyzing the frequent directions in the primal-dual framework. Claims And Evidence: The claims are supported by clear evidence. Methods And Evaluation Criteria: The proposed evaluations seem standard and make sense. They showed some advantages of the proposed approach. Theoretical Claims: I have gone through the proofs quickly, and they look mostly fine. Since the loss functions are only assumed to be convex and hence can be non-smooth, the paper should be careful not to mix gradients and subgradients and clearly indicate whether their arguments work for every subgradient or just one particular subgradient in the subdifferential. Experimental Designs Or Analyses: I have gone through the experiment designs and analyses and found them to be in order. Supplementary Material: Only the part concerning the numerical experiments. Relation To Broader Scientific Literature: The paper makes use of a number of ideas from the literature, including frequent directions, the primal-dual framework, and adaptive methods in online convex optimization. The results are obtained by combining these elements in a new manner. Essential References Not Discussed: Not to my knowledge. Other Strengths And Weaknesses: Strengths: - Improved regret bounds and time complexities over existing methods. - Numerical results show the efficacy of the proposed methods. Weaknesses: - As mentioned earlier, the non-smoothness of the loss functions should be handled more carefully in the analysis. - The work makes heavy use of existing techniques. It will be good to explain how the new techniques developed in this paper have applications in other settings. Other Comments Or Suggestions: See the comments above. Questions For Authors: See the "Other Strengths And Weaknesses" section. Code Of Conduct: Affirmed. 
Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your constructive comments! --- Q1: Since the loss functions are only assumed to be convex and hence can be non-smooth, the paper should be careful not to mix gradients and subgradients and clearly indicate whether their arguments work for every subgradient or just one particular subgradient in the subdifferential. A1: Thank you for pointing out this misuse. Actually, the analysis of FTSL does not require the smoothness. We utilize the convexity of the loss functions in Lemma B.1, which uses the subgradients and works for every subgradient. We will correct this misuse in the revised version. --- Q2: The work makes heavy use of existing techniques. It will be good to explain how the new techniques developed in this paper have applications in other settings. A2: Thank you for your suggestion. We present two potential applications of the new technique: * **LLM fine-tuning.** Our sketching technique can be incorporated into LLM fine-tuning. For example, parameter-efficient fine-tuning (PEFT) methods (Zhao et al., 2024) update models for each task within multiple subspaces. We can use this technique to merge them into a single subspace, making the updates more stable. * **Bandit problem.** Our sketching technique can also be applied to bandit settings, such as the logistic bandit (Filippi et al., 2010) and the multinomial logistic bandit (Amani and Thrampoulidis, 2021). For example, in the multinomial logistic bandit problem, MNL-UCB needs to maintain a high-dimensional Hessian matrix, which incurs high computational costs. By applying the sketching technique, we can reduce its space and time complexities. **Reference:** Jiawei Zhao, Zhenyu Zhang, Beidi Chen, Zhangyang Wang, Anima Anandkumar, Yuandong Tian. GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection. ICML, 2024 Sarah Filippi, Olivier Cappe, Aurélien Garivier, Csaba Szepesvári. Parametric bandits: The generalized linear case. NeurIPS, 2010. 
Sanae Amani and Christos Thrampoulidis. UCB-based algorithms for multinomial logistic regression bandits. NeurIPS, 2021.
Dynamic Mixture of Curriculum LoRA Experts for Continual Multimodal Instruction Tuning
Accept (poster)
Summary: This paper presents an algorithm D-MoLE for continual multimodal instruction tuning. The algorithm solves the challenges of task architecture conflict and modality imbalance by dynamically assigning LoRA experts and a gradient-based continual curriculum. Experimental results show the effectiveness of the proposed algorithm. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes Experimental Designs Or Analyses: Yes Supplementary Material: Yes Relation To Broader Scientific Literature: With the popularity of large-scale pre-training, continual multimodal instruction tuning has become an important topic. This paper focuses on it, which is helpful for the development of multimodal large language models. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: 1. The paper has the theoretical analysis 2. The setting the paper focuses on is significant 3. The writing is good and easy to read. Weaknesses: 1. "fixed architecture models inevitably face the dilemma..." mentioned in paragraph 2 of the Introduction is not unique to MLLMs. In fact, continual learning has been extensively studied on this dilemma. The description here is confusing. 2. How to select the threshold hyperparameter of equation 9 and its sensitivity should be discussed. 3. The experimental conditions of figure 5 in Appendix G are not explained. Also, is the use of a 2-layer mlp sufficient to distinguish between different tasks, and is this related to the task difficulty of the dataset itself? Other Comments Or Suggestions: see above weaknesses Questions For Authors: see above weaknesses Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the helpful and encouraging comments. We appreciate that they note the inclusion of theoretical analysis, find our writing clear and easy to read, and recognize the significance of the problem setting. We hope the following explanations provide sufficient clarification. --- **Q1. Tasks-Architecture Conflict in MLLMs** Thank you for the question. Our intention here is to emphasize why fixed architectures pose additional challenges for continual learning (CL) in MLLMs. We would like to clarify it further here. MLLMs have inherent architectural heterogeneity, especially in the commonly adopted vision encoder + projector + LLM framework. Unlike unimodal LLMs, MLLMs process inputs from multiple modalities. As tasks vary in their reliance on each modality, abstraction requirements differ across modules. This leads to greater task-wise architectural sensitivity during CL. As shown in Figure 1, the sensitivity of transformer layers in both the vision encoder and the LLM varies substantially across tasks, indicating that different components must adapt differently. This observation motivates us to move beyond fixed architectures and adopt dynamic expert allocation, and further supports our design of an architecture evolution mechanism tailored to the evolving demands of CL in MLLMs. We will revise the Introduction to better explain this motivation. --- **Q2. Threshold Selection and Sensitivity** Thank you for the question. The threshold in Equation 9 is selected based on the reconstruction loss distribution of each autoencoder on its corresponding task’s training data. It is set moderately above the typical loss range to accommodate minor distribution shifts at evaluation and to allow transferable samples from other tasks to activate relevant experts. We observe that the loss distributions are concentrated with few outliers, and the same thresholding strategy works across tasks without tuning. 
Moreover, in-task and out-of-task losses are typically well-separated, as supported by the t-SNE visualization in Figure 5. The threshold serves two purposes: (1) avoiding irrelevant experts from being activated solely due to top-2 ranking, and (2) enabling detection of unseen tasks. As such, it plays a filtering role rather than directly driving routing decisions. This allows for coarse-grained threshold choices without requiring precise tuning. To assess sensitivity, we evaluate the final checkpoint of D-MoLE under different scaling factors of the thresholds $\\{\tau_t\\}$ and report the corresponding average Last scores across tasks, with $1\times$ denoting the default setting. Full table can be found at https://anonymous.4open.science/r/D-MoLE/table10.jpg. **Table 1: Average task performance under different threshold scaling factors, evaluated on the final checkpoint.** |Scaling Factor|$0.1\times$|$0.5\times$|$1\times$|$2\times$|$10\times$| |-|-|-|-|-|-| |*Avg. Last*|80.37|81.36|**82.18**|81.31|80.34| Note that this experiment applies threshold scaling only during evaluation. Due to current computational resource constraints, we do not retrain the model under each new threshold setting. As a result, expert collaboration patterns during evaluation may differ from those formed during training under the default thresholds, which may slightly affect performance. Despite this, performance remains stable across a wide range of threshold scaling factors, indicating that our method is not sensitive to the exact threshold values, as long as they are within the same order of magnitude. We will include this analysis in the revised version. --- **Q3. Setup of Figure 5 and Autoencoder Capacity** Thank you for pointing this out. The experimental setup for Figure 5 (Appendix G) is as follows: we randomly sample 500 training samples from each task and extract their multimodal instruction sequence embeddings using the method described in Section 4.1. 
We then compute the reconstructed embeddings using the corresponding autoencoders and visualize all embeddings together using t-SNE. The clear separation among clusters provides supporting evidence that task-specific autoencoders learn distinct representations and can effectively function as routers. We will include these experimental details in the revised version for clarity. Regarding the use of a 2-layer MLP (one encoder and one decoder layer) for the autoencoder, we find it sufficient in our setting. In typical CL benchmarks for MLLMs, tasks are defined by dataset boundaries, and samples from different tasks often differ significantly in terms of image domains, question types, or textual styles. As such, the autoencoders can effectively distinguish tasks even with simple architectures. We adopt this lightweight design to minimize the additional computational overhead introduced by the routing process. Exploring more challenging scenarios with blurred task boundaries may be a promising direction for future work.
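The routing rule described above can be sketched generically: each task-specific autoencoder scores an input by reconstruction loss, and an expert is activated only if it is both among the top-2 lowest losses and below its task's threshold, with an empty selection signalling an unseen task. The function and numbers below are illustrative assumptions of ours, not the authors' implementation:

```python
import numpy as np

def route(recon_losses, thresholds, top_k=2):
    """Select up to top_k task experts whose reconstruction loss is both
    among the k smallest and below that task's threshold. An empty
    result signals an unseen task."""
    order = np.argsort(recon_losses)[:top_k]
    return [int(i) for i in order if recon_losses[i] < thresholds[i]]

losses = np.array([0.91, 0.08, 0.15, 0.80])   # per-task autoencoder losses
taus = np.full(4, 0.30)                        # per-task thresholds
print(route(losses, taus))                     # -> [1, 2]

# All losses above threshold: no expert fires, i.e. an unseen task.
print(route(np.array([0.9, 0.8, 0.7, 0.95]), taus))  # -> []
```

This makes the filtering role of the threshold concrete: ranking alone would always activate two experts, while the threshold vetoes irrelevant ones and enables unseen-task detection.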
Summary: The paper presents D-MoLE, a framework for continual multimodal instruction tuning (CMIT) in multimodal large language models (MLLMs). It dynamically allocates LoRA experts across layers using zero-cost metrics and addresses modality imbalance through gradient-based inter-modal curriculum learning. By resolving task architecture conflicts and mitigating modality imbalance, D-MoLE preserves performance on previously learned tasks, achieving a 15% average performance improvement over state-of-the-art baselines. ## update after rebuttal I have carefully read the authors' rebuttal and the feedback has well addressed my questions and concerns. Thus, I would like to insist on the score of accept. Thanks. Claims And Evidence: yes. Methods And Evaluation Criteria: yes. Theoretical Claims: yes, theorem 3.1. Experimental Designs Or Analyses: yes, all. Supplementary Material: yes, all. Relation To Broader Scientific Literature: It could be applied in scientific fields related to MLLM, e.g. medical imaging. Essential References Not Discussed: No. Other Strengths And Weaknesses: **Pros:** - The paper provides clear and well-motivated illustrations of the problems and challenges in continual multimodal learning. The issues of task architecture conflict and modality imbalance are effectively addressed by the method's design, particularly through the dynamic expert allocator and modality-specific curriculum. - The approach is both well-motivated and intuitive. The concept of dynamically adding LoRA parameters is an interesting and potentially practical solution for scaling models while preserving efficiency. This method could have significant real-world applications, especially in resource-constrained environments. - By resolving task architecture conflicts and mitigating modality imbalance, D-MoLE successfully preserves performance on previously learned tasks, leading to an impressive 15% average improvement over state-of-the-art baselines. 
This highlights the method’s effectiveness. - The ablation studies are thorough and provide strong evidence of the effectiveness of each module in the framework. Additionally, supplementary experiments demonstrate that the method does not add many additional computational costs. **Cons:** - It would be beneficial to include some case studies or examples of specific tasks to further illustrate the method's performance in different settings and help contextualize its real-world applications. - Some of the figures could be enhanced for better clarity. For instance, the fonts in Figure 1 are too small, which may hinder readability, especially for audiences engaging with the paper on printed formats. - Showcasing the dynamic training process in more detail would be helpful. For example, visualizing how experts are assigned throughout the continual learning process would provide a clearer understanding of the method's adaptability. - Could the proposed method be combined with other continual learning approaches, such as O-LoRA, to further enhance performance? - Is there a need to remove experts over time, in addition to adding them? Other Comments Or Suggestions: Please see cons. Questions For Authors: Please see cons. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the encouraging and constructive feedback. We are pleased that they find our approach well-motivated, intuitive, and effective, and acknowledge the impressive performance gains. We hope our responses below address the remaining concerns. --- **Q1. Including Some Case Studies or Examples** Thank you for the valuable suggestion. In the revised version, we will include case studies in the appendix. For example, we plan to show some multimodal instruction examples from early tasks and compare the model’s responses to them after training on later tasks. This will help illustrate how well D-MoLE preserves prior knowledge by examining model behavior on early-task examples at different stages of continual learning. --- **Q2. Figure Clarity and Font Size** Thank you for the helpful suggestion. We will increase the font size in Figure 1 to improve readability, and recheck all figures in the paper to ensure visual clarity in the revised version. --- **Q3. Visualizing the Dynamic Training Process** Thank you for the valuable suggestion. We have created two heatmaps to illustrate the dynamic behavior of our method and better convey the adaptability of D-MoLE. We provide preview versions at anonymous links below, and will incorporate them into the revised version: * Architecture evolution dynamics (https://anonymous.4open.science/r/D-MoLE/figure6.jpg): shows how experts are allocated across different layers during the CMIT process. * Expert activation dynamics (https://anonymous.4open.science/r/D-MoLE/figure7.jpg): visualizes how task-specific experts are activated over time during training. --- **Q4. Compatibility with Other Continual Learning Approaches** Thank you for the question. Combining our method with other continual learning approaches is certainly feasible. Our method focuses on architectural evolution, dynamically expanding model capacity throughout the CMIT process. 
This enables flexible task adaptation without being constrained by a fixed parameter budget. In contrast, approaches like O-LoRA mitigate forgetting through parameter regularization, but often face a trade-off between retaining prior knowledge and adapting to new tasks due to their fixed capacity. These two directions are compatible in principle. While our current framework does not include regularization terms in the loss function, incorporating them may be a promising extension. For example, enforcing orthogonality between different experts could help reduce redundancy and improve task separation. --- **Q5. Need for Expert Removal** Thank you for the question. While our current experimental setting does not involve expert removal, since we evaluate all tasks after each training stage to assess knowledge retention (see Appendix F), our framework can easily support it. For example, one practical extension is to track expert activation frequency using a sliding window over recent inputs. Experts associated with inactive tasks could then be unloaded to reduce memory and computation overhead, with the option to reload or reinitialize them if needed. This may be a promising direction for future work, particularly in real-world online continual learning scenarios. --- Rebuttal Comment 1.1: Comment: I have carefully read the authors' rebuttal and the feedback has well addressed my questions and concerns. Thus, I would like to insist on the score of 4 (accept) . Thanks. --- Reply to Comment 1.1.1: Comment: Thank you for your kind response. We are glad to hear that our clarifications addressed your concerns. We sincerely appreciate the time and effort you devoted to reviewing our work.
Summary: This paper addresses the challenge of continual multimodal instruction tuning (CMIT) for Multimodal Large Language Models (MLLMs) by proposing a novel Dynamic Mixture of Curriculum LoRA Experts (D-MoLE) method. Unlike fixed-architecture models that struggle with adapting to new tasks, D-MoLE dynamically evolves the model’s architecture within a parameter budget by allocating LoRA experts layer-wise and adjusting update ratios based on modality difficulty. Experimental results show that D-MoLE outperforms state-of-the-art baselines by 15% on average, making it the first study to tackle continual learning for MLLMs from an architectural perspective. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes Experimental Designs Or Analyses: Yes Supplementary Material: Yes Relation To Broader Scientific Literature: N/A Essential References Not Discussed: No Other Strengths And Weaknesses: **Strengths:** The paper is well-written and easy to follow, making it accessible to a broad audience. The topic of continual learning for multimodal large language models is highly relevant in contemporary research, with significant real-world applications. The study effectively highlights the challenges of task architecture conflict and modality imbalance, supporting these discussions with both empirical results and theoretical analyses. I find the experimental results particularly impressive, as they demonstrate a substantial improvement over state-of-the-art baselines. The proposed method is intuitively designed and does not introduce excessive complexity in its implementation, which enhances its practicality and reproducibility. The paper presents a strong contribution to the field, addressing key issues with clear and well-supported findings. **Weaknesses:** The presentation of results in Table 2 could be confusing due to the presence of two 'average' metrics—one in the dataset row and another in the method row. 
It would be helpful to clarify how these averages are computed and their intended interpretation. The Seq-FT baseline achieves the best performance on KVQA in the 'Last' and 'BWT' metrics. Could you provide an explanation for this result? It seems counterintuitive that the most naive baseline would outperform more sophisticated approaches in these measures. The proposed D-MoLE method significantly outperforms other baselines on the first task, VizWiz-Cap. Could this be an indication that the model is primarily excelling at fitting the initial dataset, rather than truly demonstrating strong continual learning capabilities? Is there any evidence to rule out the possibility that the method behaves more like fine-tuning rather than an effective continual learning approach? Could you provide further insight into the key factors driving the performance improvements of your method? Specifically, which aspects of the design contribute most to the observed gains? A more detailed breakdown of the improvements would strengthen the paper’s claims. Other Comments Or Suggestions: See weaknesses. Questions For Authors: See weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the thoughtful and positive feedback. It is encouraging that they find our paper well-written and easy to follow, our results impressive, and our method practical, intuitive, and concise. We hope the clarifications below address the reviewer’s remaining concerns. --- **Q1. Clarification of the Two 'Average' Metrics in Table 2** Thank you for the valuable suggestion. To clarify: * The *Average* column (rightmost) shows the mean performance of each method across all datasets (i.e., row-wise average). * The **Average** row near the top refers to one of the continual learning (CL) metrics introduced in Section 5.1, alongside Last and BWT. Their formal definitions are provided in Appendix F. To improve clarity, we will consider renaming the CL-specific **Average** row to **AVG** in the revised version. --- **Q2. Unexpectedly Strong Seq-FT Performance on KVQA** Thank you for pointing out this observation. Seq-FT achieves relatively high accuracy on KVQA immediately after training and experiences minimal forgetting after training on PMC-VQA. These two factors together explain its higher Last and BWT scores on KVQA. We attribute this behavior to two possible factors. First, KVQA is a relatively difficult dataset, where most methods show only modest improvements over the zero-shot baseline. CL methods that incorporate regularization mechanisms (e.g., LwF-LoRA, EWC-LoRA) may struggle to fully adapt to such tasks due to limited flexibility, whereas Seq-FT’s unconstrained fine-tuning can more easily fit the data. Second, PMC-VQA is a medical-domain task with different visual domains, question types, and prompt formats compared to KVQA. This reduces representational overlap between the two tasks and thus limits interference during sequential training. However, regularization-based methods may apply global constraints which could result in unintended interference with prior task-specific adaptation. 
A similar pattern is observed between VizWiz-Cap and SK-VG, where Seq-FT also exhibits reduced forgetting, likely due to the same reason. Overall, while Seq-FT achieves higher Last and BWT scores on KVQA, the margins are small. Across the full task sequence, it still suffers from severe forgetting, as reflected in its low overall CL scores. --- **Q3. Strong Performance on the Initial Task** Thank you for the thoughtful question. Our training protocol treats all tasks in the sequence uniformly and does not apply any special treatment to the initial task, so there is no risk that our method simply overfits to the first dataset. As evidenced by the overall experimental results, our method outperforms all state-of-the-art baselines on all three CL metrics across most tasks, not just the initial one. The seemingly large performance gains on VizWiz-Cap can be explained as follows: * Following the CoIN benchmark [1], we use CIDEr as the evaluation metric for captioning tasks. Since CIDEr scores are not upper-bounded (often exceeding 1), numerical improvements may appear larger in magnitude. * In continual learning, forgetting accumulates over time and amplifies performance degradation on earlier tasks. As D-MoLE better mitigates forgetting, its advantage is more visible on tasks like VizWiz-Cap that appear early in the sequence. --- **Q4. Key Factors Driving Performance Improvements** Thank you for the question. The performance gains of D-MoLE stem from the integration of two complementary modules, each addressing a key challenge in CMIT. The dynamic layer-wise expert allocator addresses task architecture conflict by assigning LoRA experts only to the most relevant layers and routing inputs via task-specific autoencoders. This enables precise architectural adaptation and selective expert activation during inference. 
The gradient-based inter-modal continual curriculum mitigates modality imbalance by adjusting training dynamics when tasks rely unevenly on different modalities, ensuring stable multimodal adaptation. Our ablation study (Section 5.3) supports this analysis. Removing the expert allocator (v4) leads to the largest performance drop, underscoring the importance of architectural flexibility. Removing the curriculum module (v3) results in lower performance than the full model, demonstrating its complementary benefit. Limiting updates to a single modality (v1 or v2) also yields suboptimal results, highlighting the need for full-modality adaptation. Overall, while the expert allocator contributes most directly to performance gains, both components are essential and jointly enable robust and balanced continual learning. We will add more detailed discussion of these factors in the revised version. --- **Reference** [1] CoIN: A Benchmark of Continual Instruction Tuning for Multimodel Large Language Models. NeurIPS 2024. --- Rebuttal Comment 1.1: Comment: Thank you for the authors' substantial effort in addressing my concerns. I have also read the discussions between the authors and other reviewers. I will maintain my acceptance of this submission and hope the authors carefully incorporate the suggestions. --- Reply to Comment 1.1.1: Comment: Thank you for your positive feedback and for taking the time to review our responses. We will carefully incorporate the suggestions in the final version.
Summary: This paper presents D-MoLE, a framework designed to tackle the challenges of continual multimodal instruction tuning (CMIT) in Multimodal Large Language Models (MLLMs). D-MoLE employs a dynamic layer-wise expert allocation strategy to overcome task architecture conflicts and a gradient-based inter-modal continual curriculum to address modality imbalances. This approach facilitates adaptive and efficient learning while maintaining a constrained parameter budget. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes, I checked Theorem 3.1. Experimental Designs Or Analyses: Yes, the results in Section 5. Supplementary Material: Yes, Appendix J. Relation To Broader Scientific Literature: Not much related work. Essential References Not Discussed: No. Other Strengths And Weaknesses: Pros: 1. The paper is well-structured and clearly presents the challenges of continual learning in multimodal large language models. In Section 3, two preliminary studies effectively demonstrate why traditional continual learning methods may not be suitable for MLLMs. The inclusion of theoretical analyses provides deeper insights into the observed empirical phenomena and highlights the necessity of architectural evolution in continual learning. 2. The idea of automatically evolving the model architecture by incorporating LoRA modules into MLLMs during the continual learning process is particularly inspiring. This approach expands the model's capacity to accommodate new tasks while mitigating performance degradation on previously learned tasks. The concept of gradually increasing parameters over time could become an essential strategy for enabling continual learning in large-scale models, and this paper serves as an important step toward that direction. 3. The experimental results are strong, demonstrating substantial performance gains over state-of-the-art baselines.
The improvements over the benchmarks verify the effectiveness of the proposed approach and highlight its potential for advancing continual learning in multimodal scenarios. Cons: There are no major concerns with the paper. However, there are some minor points that could benefit from clarification—please refer to the suggestions and questions for further details. Other Comments Or Suggestions: The notation could be refined to more clearly differentiate between scalars and vectors, ensuring consistency and readability throughout the paper. Using distinct formatting, such as boldface or arrows for vectors and standard italicization for scalars, would enhance clarity and prevent potential ambiguity in mathematical expressions. Questions For Authors: 1. Several continual learning baselines perform significantly worse than fine-tuning on the first task. However, Seq-FT, which appears to be functionally equivalent to fine-tuning in the initial task, shows notably lower performance. Could you clarify the reasons behind this discrepancy? Are there specific factors, such as optimization dynamics or architectural constraints, that might contribute to this difference? 2. In Table 6, the proposed method demonstrates a shorter training time compared to joint learning and O-LoRA. Given that the architecture evolution process introduces additional parameters to be trained, this result seems somewhat counterintuitive. Could you elaborate on why this occurs? Does the efficiency stem from selective parameter updates, optimized training strategies, or some other factors? 3. Regarding the routing mechanism, is there any specific design choice implemented to ensure a balanced load distribution among experts? If so, could you provide details on how the router dynamically manages workload allocation, especially in scenarios with varying modality distributions? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the thoughtful and encouraging feedback. We are glad that they find our paper well-structured and our approach to be inspiring. We hope our responses below help clarify the remaining points. --- **Q1. Notation Refinement** Thank you for the valuable suggestion. We will revise the notation to use bold symbols for vectors (e.g., $\mathbf{x}$) and standard math notation for scalars (e.g., $x$) to improve clarity. --- **Q2. Difference Between Finetune and Seq-FT in Table 2** Thank you for the thoughtful question. The Finetune results refer to models trained independently on each task from scratch using LoRA fine-tuning, and evaluated only on that task. These results serve as upper bounds without any continual learning (CL) constraints. In contrast, Seq-FT is a vanilla CL baseline that performs sequential LoRA fine-tuning across the task sequence, without any specific mechanism to mitigate forgetting. As expected, it performs poorly on all three CL metrics. All CL methods (i.e., all rows except Zero-shot, Finetune, and Joint-learning) are evaluated under the same protocol: after training each new task, the model is tested on all tasks in our benchmark. This differs from traditional CL setups, which typically evaluate only on seen tasks. Since the pretrained MLLM has zero-shot capabilities, our protocol also assesses how well this ability is retained. The details of this evaluation protocol can be found in Appendix F. We will clarify this distinction in the revised version. --- **Q3. Training Efficiency Despite Architecture Evolution** Thank you for noticing this point. We also observe that D-MoLE brings an additional benefit in training efficiency, as shown in Table 6. While we briefly discussed this below the table, we would like to elaborate on it here. The main source of this efficiency gain lies in the selective placement of LoRA modules. 
Unlike methods such as joint-learning and O-LoRA, our method inserts LoRA modules only into a subset of the most sensitive transformer layers for each task. This design ensures that, although our method slightly increases the LoRA rank, the number of trainable parameters remains comparable to these baselines due to fewer insertion points. To further illustrate the efficiency, we conduct a toy experiment comparing GPU runtime under different LoRA ranks. Specifically, we generate a random input matrix of shape 64 × 1024 and perform 10,000 iterations of multiplying it with two simulated LoRA modules of rank 4 and rank 8, respectively. The measured average runtime per iteration is summarized below: **Table 1: GPU runtime comparison for different LoRA ranks.** |Rank|Avg. Runtime| |-|-| |r = 4|4.40 ms| |r = 8|4.70 ms (+6.8%)| Despite doubling the rank, the runtime increases by only 6.8%, suggesting that in low-rank settings, the startup overhead of matrix multiplication dominates the total runtime rather than the rank itself. Since our method inserts LoRA modules into fewer transformer layers, the total number of such operations is reduced. This compensates for the slightly higher cost per operation and results in a modest overall speedup. Moreover, our architecture evolution mechanism is lightweight by design. As shown in Table 7, the preprocessing time for computing the zero-cost proxy accounts for less than 2% of the total training time. We will make this explanation clearer in the revised version. --- **Q4. Load Balancing in Expert Routing** Thank you for the question. We do not explicitly introduce load balancing mechanisms in our routing design. Our routing mechanism is primarily intended for knowledge retention and transfer in the CL setting, rather than for inference efficiency or capacity scaling as in traditional MoE models. In our framework, expert collapse is unlikely to occur. 
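The toy runtime experiment described above can be sketched as follows; this is a CPU NumPy illustration of the two-matmul LoRA forward pass (the rebuttal's numbers were measured on GPU over 10,000 iterations, so absolute timings will differ):

```python
import time
import numpy as np

def lora_forward(x, A, B):
    # Two-matmul low-rank update: (x @ A) @ B with shapes
    # (batch, d) @ (d, r) @ (r, d) -> (batch, d).
    return (x @ A) @ B

def time_rank(r, iters=1000, d=1024, batch=64, seed=0):
    # Time the rank-r LoRA multiply on a random (batch, d) input,
    # mirroring the 64 x 1024 setup described above (CPU, not GPU).
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((batch, d))
    A = rng.standard_normal((d, r))
    B = rng.standard_normal((r, d))
    start = time.perf_counter()
    for _ in range(iters):
        lora_forward(x, A, B)
    return (time.perf_counter() - start) / iters

for r in (4, 8):
    print(f"rank {r}: {time_rank(r) * 1e3:.3f} ms/iter")
```

The comparison illustrates the rebuttal's point: at small ranks, per-call overhead rather than the rank itself dominates the cost of each low-rank multiply.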
During training, task IDs are known and each corresponding expert is explicitly activated and updated. During evaluation, task-level routing is based on reconstruction losses from task-specific autoencoders, each trained on its own task data. As shown in Appendix G, the reconstructed embeddings form well-separated clusters, enabling reliable expert selection and preventing collapse. The router operates on the entire multimodal instruction sequence, rather than on individual modalities. Therefore, varying modality distributions do not directly affect expert activation. In real-world deployments, especially under online CL settings, load balancing may become important. For example, some tasks may dominate the input stream, leading to expert overuse, while others remain underutilized. One possible solution is to monitor expert activation frequency and incorporate regularization or usage-aware training objectives to maintain stable generalization. These strategies are orthogonal to our current design and may be worthwhile to explore in future work.
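The reconstruction-loss routing described in this reply can be illustrated with a minimal sketch; the linear autoencoders and their weights below are hypothetical stand-ins for the trained task-specific autoencoders:

```python
import numpy as np

def route(embedding, autoencoders):
    # Pick the expert whose task-specific autoencoder reconstructs the
    # input embedding with the lowest mean-squared error.
    losses = [float(np.mean((embedding - (embedding @ enc) @ dec) ** 2))
              for enc, dec in autoencoders]
    return int(np.argmin(losses)), losses

rng = np.random.default_rng(0)
dim, n_tasks = 32, 4
# One linear (encoder, decoder) pair per seen task -- hypothetical weights
# standing in for autoencoders trained on each task's own data.
autoencoders = [(rng.standard_normal((dim, 8)) * 0.1,
                 rng.standard_normal((8, dim)) * 0.1)
                for _ in range(n_tasks)]
task_id, losses = route(rng.standard_normal(dim), autoencoders)
print("routed to expert", task_id)
```

In this sketch the lowest-loss autoencoder determines which task's expert is activated, matching the task-level routing rule the reply describes.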
PARM: Multi-Objective Test-Time Alignment via Preference-Aware Autoregressive Reward Model
Accept (poster)
Summary: This paper studies the reward-guided multi-objective alignment problem. A prior work GenARM uses a token-level reward model to guide the decoding process, and requires training two separate reward models to guide the multi-objective decoding process. This work equips GenARM with a preference-aware LoRA-like adapter ($BW_1A+BW_2(\alpha)A$). Their performance is better and more efficient than GenARM. Claims And Evidence: They claim that PARM is more effective and inference-efficient than GenARM. And this is correct both intuitively and empirically. Methods And Evaluation Criteria: The benchmarks (Helpful Assistant & Safe RLHF) are commonly used in this area. The method PARM makes sense. Theoretical Claims: The only theoretical claim is Theorem 4.1. I've checked its proof, and it is correct. Experimental Designs Or Analyses: The GenARM is the only baseline in evaluation. As shown in Figure 6 of [1], GenARM is not a strong approach for multi-objective alignment (only slightly stronger than RS, which has been beaten by many recent works like [2,3,4]). Therefore, the value of PARM is not very convincing. Since training a DPO model is not harder than training a reward model, it would also be beneficial to include comparison with policy-guided approaches. [1] GenARM: Reward Guided Generation with Autoregressive Reward Model for Test-time Alignment. ICLR 2025. [2] Decoding-Time Language Model Alignment with Multiple Objectives. NeurIPS 2024. [3] PAD: Personalized Alignment at Decoding-time. ICLR 2025. [4] Conditional Language Policy: A General Framework for Steerable Multi-Objective Finetuning. EMNLP Findings 2024. Supplementary Material: Yes, I've read the appendix, including the proof and additional experiments. Relation To Broader Scientific Literature: The key contribution of this paper is to propose a preference-aware adapter, which would be of great value if the empirical advantages can be well supported.
The proposed guided-generation process and the weak-to-strong guidance are already explored by prior works [1,2,3]. Essential References Not Discussed: I don't think any essentially related work is uncited. Other Strengths And Weaknesses: **Strengths** - This paper is clear and well-written. - The improvement over GenARM shown in Figure 2 is impressive. **Weakness** - The only originality comes from the preference-aware adapter; however, the necessity of this design is not well demonstrated. For example, in Figure 3, SVD-LoRA is comparable with PBLoRA. - Comparison with policy-guided approaches is missing. And thus it is unknown whether people should use PARM instead of [2,3,4]. Other Comments Or Suggestions: > However, it focuses on aligning with personalized preferences rather than managing trade-offs across different preference dimensions. I don't think there is much difference between personalization and managing trade-offs, since you can always compare them on the same benchmark. Questions For Authors: - In Figure 2(b), why would PARM be much better than GenARM in solely optimizing humor, while only comparable in optimizing helpfulness? - Why does PARM show better numerical performance than GenARM in Table 1,4, while only slightly better than GenARM in Figure 1,4? Code Of Conduct: Affirmed. Overall Recommendation: 3
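For concreteness, the preference-aware adapter $BW_1A + BW_2(\alpha)A$ quoted in the review's summary can be sketched as a preference-conditioned low-rank update; the parameterization of $W_2(\alpha)$ as a linear combination of basis matrices is an assumption of this sketch, not necessarily the paper's exact design:

```python
import numpy as np

def pblora_delta(A, B, W1, W2_fn, alpha):
    # Preference-conditioned low-rank weight update B (W1 + W2(alpha)) A,
    # i.e. B W1 A + B W2(alpha) A as in the adapter form quoted above.
    return B @ (W1 + W2_fn(alpha)) @ A

rng = np.random.default_rng(0)
d, r, k = 16, 4, 2                      # hidden dim, rank, #preference dims
A = rng.standard_normal((r, d))
B = rng.standard_normal((d, r))
W1 = rng.standard_normal((r, r))        # shared, preference-independent part
# Hypothetical choice: W2(alpha) as a linear combination of per-dimension
# basis matrices -- the paper's actual parameterization may differ.
basis = [rng.standard_normal((r, r)) for _ in range(k)]
W2 = lambda alpha: sum(a * M for a, M in zip(alpha, basis))

delta = pblora_delta(A, B, W1, W2, np.array([0.6, 0.4]))
print(delta.shape)  # a single adapter serves any preference vector
```

The point of the construction is that one set of adapter weights covers the whole continuum of preference vectors, instead of training one reward model per preference dimension as in GenARM.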
Rebuttal 1: Rebuttal: Thanks for your thoughtful review and valuable feedback. We address your concerns as follows. > **[Experimental Designs Or Analyses]**. GenARM is the only baseline in evaluation ... include comparision with policy-guided approaches [2,3,4,5]. > [5] Rewarded soups (NeurIPS 2023) > **[Weaknesses 2]**. Comparison with policy-guided approaches. > **[Other Comments Or Suggestions]**. difference between personalization and managing trade-offs. > **[Relation To Broader Scientific Literature]**. (2) guided-generation process and weak-to-strong guidance are already explored in [1,2,3]. The goal of our PARM is fundamentally different from [2,3,4,5]. Here are the key distinctions (summarized in Table R1 at https://anonymous.4open.science/r/4525-TCL8/rebuttal_to_TCL8.pdf): - PAD [3] only aligns personalized preferences but **fails to handle trade-offs among multiple dimensions**. Specifically, PAD can handle discrete preference combinations ("helpful", ..., "helpful and harmless", ...) but fails to handle continuous preference combinations (like "60% helpful and 40% harmless"). This limitation is also acknowledged in PAD's rebuttal (see their reply to Weaknesses 1 raised by Reviewer NqpU, https://openreview.net/forum?id=e7AUJpP8bV). - MOD [2], CLP [4], and RS [5] **require training $k$ policy LLMs** ($k$ is the number of preference dimensions). This is computationally infeasible, especially when the policy LLM is large (e.g., 65B). Additionally, **MOD** [2] directly combines the logits from multiple trained policy LLMs at inference, causing significant computational overhead, and it **does not contain a guided-generation process and weak-to-strong guidance**. In contrast, **we keep the policy LLM frozen** and use a smaller reward model to guide the larger frozen policy LLM (e.g., 7B guides 65B in our experiment). 
This avoids training multiple large 65B models as in [2,4,5] and only requires training a 7B reward model, making our method more efficient and promising for users with limited computational resources. As suggested, we conduct additional experiments to compare PARM with MOD [2] and RS [5]. The experimental setup is the same as in Section 5.1. Results in Figure R1 and Table R2 (in the anonymous link above) show that PARM achieves better Pareto front than RS and MOD, and significantly outperforms them in terms of HV and MIP, demonstrating its effectiveness and high alignment quality. > **[Relation To Broader Scientific Literature]**. (1) ... preference-aware adapter ... would be of great value if the empirical advantages can be well supported. > **[Weaknesses 1]**. the necessity of preference-aware adapter design is not well demonstrated. ... SVD-LoRA is comparable with PBLoRA. We appreciate your recognition of the originality of our proposed preference-aware adapter (PBLoRA), but we argue that we've provided sufficient evidence for its design necessity in our paper. We clarify the evidence for the necessity of designing PBLoRA as follows. - **PBLoRA outperforms SVD-LoRA significantly**: (1) The Pareto front of PBLoRA entirely covers that of SVD-LoRA (yellow vs. purple curves in Figure 3). (2) PBLoRA achieves 11.4% and 59.9% improvements in HV and MIP compared to SVD-LoRA (Table 3). - **General framework**: As detailed in Lines 220-226 of our paper, SVD-LoRA is a special case of PBLoRA, which means PBLoRA has a greater exploration space and can achieve better results. - **Ablation evidence**: In Section 5.3, we systematically analyze different configurations of PBLoRA, showing the effectiveness of each component. - PARM with a single PBLoRA **addresses the inefficiency and misalignment issues in GenARM** using $k$ LoRAs. Please refer to our reply to Weaknesses 1 raised by Reviewer 3zGf for details. > **[Questions 1]**. 
In Figure 2(b), why would PARM be much better than GenARM in solely optimizing humor, while only comparable in optimizing helpfulness? Humor is a narrower objective than helpfulness. PARM learns from multiple preference dimensions jointly, benefiting from shared knowledge, while GenARM trains separate reward models for each preference, preventing it from leveraging other dimensions' data to enhance humor. This difference leads to PARM’s superior performance on the Humor dimension. Conversely, helpfulness, being a broader dimension, can be effectively learned even with GenARM’s isolated approach, resulting in comparable performance on this dimension for both models. > **[Questions 2]**. Why does PARM show better numerical performance than GenARM in Table 1,4, while only slightly better than GenARM in Figure 1,4? HV measures the area under the Pareto front. In the figures, it is clear that PARM has a larger area than GenARM, leading to a higher HV. MIP measures the uniformity of the solutions on the Pareto front. Obviously, the distribution (the markers in the figures) of PARM is more uniform, so its MIP is higher. --- Rebuttal Comment 1.1: Comment: > The goal of our PARM is fundamentally different from [2,3,4,5]. Thank you for correcting me! Yes, the goal of PAD is indeed different from PARM. But it seems that the goals of MOD [2], CLP [4], and RS [5] are the same as PARM's, since they are all focusing on balancing different objectives given a human preference vector. I appreciate the supplementary experiments, but I have additional questions: - The MOD, RS approaches can also use low-rank adapters. Considering the fact that PARM uses PBLoRA, it would be unfair to say that MOD and RS are computationally infeasible. - Besides, let's observe the equation (3), which is the same as training a DPO model (just removing the $\pi_\textup{ref}$ model).
Thus guiding the generation using token-wise reward model $\pi_\theta$ is equivalent to guiding the generation using DPO model $\pi_\phi/\pi_\textup{ref}$, which has already been well-explored. Therefore, there is not much difference between PARM and policy-guided approaches. If the authors would like to highlight the experimental advantages of token-wise reward models, it would be necessary to polish the story-telling and show more empirical evidence. - As for weak-to-strong guidance, please see Figure 6 in GenARM and Appendix C.3 "Multi-objective proxy-tuning" in MOD. The multi-objective weak-to-strong guidance is not a novel extension. Anyway, not being novel is still acceptable in ICML. This is not a very big issue. And in Figure 4, why do the points of GenARM concentrate on helpfulness? --- Update: Q1. MOD can also use a 7B model to guide a 65B model. I still think the experimental results (only comparing GenARM) are limited. --- Thank you for your updates. Now I can raise my rating to 3, and I hope the authors can put all the contents covered in rebuttal in their submission later. --- Reply to Comment 1.1.1: Comment: Thanks for your further comments. We deeply appreciate that our previous reply has addressed most of the concerns raised in your initial review. We address the remaining concerns as follows. > **Q1**. the goals of MOD [2], CLP [4], and RS [5] are the same as PARM's, ... all focusing on balancing different objectives ... > MOD, RS can also use low-rank adapters. Considering the fact that PARM uses PBLoRA, it would be unfair to say that MOD and RS are computationally infeasible. Although MOD, CLP, RS, and our method all target multi-objective preference alignment, our method based on test-time alignment aims to achieve this with **limited compute resources**.
To guide a 65B LLM with two preferences (the experiment introduced in Lines 631-652 of our paper), **MOD and RS need to finetune two 65B LLMs**; in contrast, **PARM only needs to finetune a 7B LLM** while keeping the 65B LLM frozen. Obviously, LoRA finetuning a 65B LLM is much more expensive than LoRA finetuning a 7B LLM in terms of computation cost and hardware requirements. For this experiment, our method using PBLoRA can run on one A100 (80G) GPU within 0.85 hours. However, for MOD and RS, finetuning each 65B LLM using LoRA needs 8 A100 (80G) GPUs for 1.65 hours. In total, MOD and RS require $1.65\times 8 \times 2 = 26.4$ GPU hours. Hence, our method is **more memory-efficient and $31\times$ more computationally efficient** than MOD and RS. > **Q2**. ... there is not much difference between PARM and policy-guided approaches ... Our PARM significantly differs from policy-guided approaches like MOD from the methodological perspective. MOD **independently** trains policy models for each preference dimension **without awareness of other dimensions**, while our PARM trains a **unified** model conditioning on all preference dimensions to **explicitly manage trade-offs between different preferences** (for a preference vector $\alpha$, our model is $\pi_{\theta(\alpha)}$, and the training loss is $\sum_{i=1}^k \alpha_i\ell(\pi_{\theta(\alpha)}, D_i)$). Due to independent training, MOD suffers from preference conflicts when combining the logits from multiple policy models during inference; However, our PARM model, thanks to unified training on all preferences, can mitigate preference conflicts and achieve better alignment with preference vectors, as shown by the significantly higher HV and MIP scores (26% and 20% improvements over MOD) in Table R2 in the anonymous link provided in our previous reply (https://anonymous.4open.science/r/4525-TCL8/rebuttal_to_TCL8.pdf). 
A key insight of our paper for the community is that, for multi-objective test-time alignment, our PARM, which trains a **unified** reward model conditioned on **all** preference dimensions, aligns better than GenARM/MOD which combines **separate** reward models trained **individually** for each dimension. > **Q3**. ... see Figure 6 in GenARM and Appendix C.3 "Multi-objective proxy-tuning" in MOD. The multi-objective weak-to-strong guidance is not a novel extension. Weak-to-strong guidance appears in GenARM, "Multi-objective proxy-tuning" in MOD, and our method. We just want to highlight that our novel method achieves better multi-objective weak-to-strong guidance than GenARM. Specifically, in "1.1B guides 7B" experiment (Table 2), our method achieves 85% and 53% improvements in HV and MIP scores over GenARM. Additionally, in "7B guides 65B" experiment (Table 4), our method shows a 91% improvement in MIP compared to GenARM. > **Q4**. in figure 4, why do the points of GenARM concentrate on helpfulness? GenARM independently trains reward models for each preference dimension without awareness of each other, resulting in imprecise control over the two competing preferences (helpfulness vs. harmlessness). To mitigate this issue, our PARM trains a single reward model conditioned on all preference dimensions, leading to better alignment with two preferences. ---- > **[Update Q1]**. ... experimental results (only comparing GenARM) are limited. As suggested, we conducted an additional experiment to compare PARM with MOD-w2s (the weak-to-strong extension of MOD in Appendix C.3 of the MOD paper) on the helpful assistant task. The experimental setup is the same as in Section 5.2. Results are attached at https://anonymous.4open.science/r/4525-TCL8/rebuttal_to_TCL8_2.pdf. Figure R2 shows that PARM has a better and more uniformly distributed front than MOD-w2s and GenARM. 
Moreover, Table R3 shows that PARM outperforms MOD-w2s (91% improvement in HV, 54% improvement in MIP, and 53% speed-up), demonstrating that PARM is more effective and efficient in multi-objective weak-to-strong guidance.

---

## **Update**

Results of "7B guides 65B" are attached at https://anonymous.4open.science/r/4525-TCL8/3.pdf, again demonstrating that PARM outperforms MOD-w2s in multi-objective weak-to-strong guidance. Thank you for raising the score! We will add all experiments and discussions to our revision.
Summary: This paper introduces Preference-aware ARM (PARM), a method for guiding large language models (LLMs) at test time based on user preferences. PARM builds upon GenARM, which trains a separate preference model for each human preference. In contrast, PARM employs a unified model that conditions all preferences on a single vector, enabling more flexible adaptation. Compared to GenARM, PARM generates responses that better align with human preferences.

Claims And Evidence: The experimental results support the paper’s claim that PARM enhances response alignment with human preferences. However, my main concern is the significance of the research problem. Controlling text generation through prompt engineering is a straightforward alternative that can improve alignment with human preferences without requiring an additional model for preference guidance. This approach is not considered a baseline in the paper, which raises questions about the necessity of training an extra model. Additionally, relying on an additional 7B or smaller LLM for preference guidance may not be a practical solution due to the computational overhead.

Methods And Evaluation Criteria: The methods and evaluation metrics make sense to me.

Theoretical Claims: I quickly went through the Theorems.

Experimental Designs Or Analyses: I examined both the qualitative and quantitative analyses, and they appear well-structured and sound.

Supplementary Material: I reviewed the metrics and additional results in the supplementary material.

Relation To Broader Scientific Literature: Controllable text generation for LLMs at inference time.

Essential References Not Discussed: The paper overlooks several key works on controllable text generation at inference time. For instance, "Controllable Text Generation for Large Language Models: A Survey" provides a comprehensive overview of various methods for controlling text generation.
Additionally, "Plug and Play Language Models: A Simple Approach to Controlled Text Generation" is one of the pioneering works in this area, introducing a flexible approach to guiding language models without retraining. Including discussions of these references would strengthen the paper’s positioning within the broader literature.

Other Strengths And Weaknesses:

Strengths:
1. The paper is well-structured and easy to follow.
2. The proposed method is clearly explained and logically sound.
3. The experimental results demonstrate the method’s effectiveness in improving alignment with human preferences.

Weaknesses:
1. The significance of the research problem is unclear, as human preference alignment in LLM generation can often be achieved through prompt engineering without additional models.
2. The approach introduces additional inference costs and requires extra training data, which may make it impractical compared to simpler alternatives.

Other Comments Or Suggestions: Please refer to Strengths And Weaknesses.

Questions For Authors: Can a much smaller language model (e.g., less than 1B parameters) achieve comparable performance while significantly reducing inference latency?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: Thanks for your thoughtful review and valuable feedback. We address your concerns as follows.

> **[Essential References Not Discussed]**. Discussions on controllable text generation are missing.

Controllable Text Generation (CTG) generates text from LLMs with specific attributes or constraints. Our PARM is a specific CTG implementation for multi-objective test-time alignment. Compared to the mentioned method PPLM (ICLR 2020), which uses multiple attribute models and requires forward and backward passes during generation, PARM employs a single reward model to dynamically adjust text during inference, achieving lower computational costs. We will expand this discussion and include the CTG references in the revision.

> **[Weaknesses 1]**. The significance of the research problem is unclear, as human preference alignment in LLM generation can often be achieved through prompt engineering without additional models.

> **[Claims And Evidence]**. prompt engineering is not considered a baseline.

**Prompt engineering alone is insufficient for aligning LLMs with complex human preferences**. Most successful alignment methods (like RLHF) require post-training, and our method can be viewed as a test-time alternative to post-training to reduce training computations. As suggested, we conducted an additional experiment to compare our method with a prompt-based baseline whose instruction is "Please ensure your response is X" (adapted from Personalized Soups (NeurIPS 2024 workshop) and PAD (ICLR 2025)), where "X" is "helpful", "harmless", or "a% helpful and b% harmless". The experimental setup is the same as in Section 5.1. As shown in https://anonymous.4open.science/r/4525-isvK/rebuttal_to_isvK.pdf, prompting has little effect on both preferences.
Moreover, **prompting cannot achieve precise control over preference trade-offs** and its results fail to form a Pareto front, demonstrating that **prompting is not a good choice for aligning LLMs with complex human preferences**. In contrast, our method has a much better Pareto front, enabling precise control over multiple competing preferences simultaneously.

> **[Weaknesses 2]**. introduces additional inference costs and requires extra training data ... impractical compared to simpler alternatives.

> **[Claims And Evidence]**. relying on an additional 7B or smaller LLMs for preference guidance ... computational overhead.

As discussed in our reply to Weaknesses 1, the prompt-based method, a simpler alternative, is not effective enough to align LLMs with complex human preferences. Recently, many advanced alignment methods (e.g., RLHF, GenARM) have emerged, and our work extends them to multi-objective test-time alignment.

**Training data problem**: we would like to clarify that a multi-objective preference dataset is **not very difficult** to obtain and can be derived from a traditional single-objective dataset (see our reply to Weaknesses 2 raised by Reviewer 3zGf).

**Inference cost problem**: we would like to clarify that the increase in inference cost is **small** and can be effectively mitigated through distributed deployment as follows:

- **Addressed by distributed deployment**. Our method uses a single reward model to guide the frozen policy LLM. Since both models can generate the next token in parallel, the inference time remains the same as using the frozen LLM directly.
- **Without distributed deployment, our method is still practical** for two reasons: (i) our method enables weak-to-strong guidance, such as using a 7B reward model to guide a frozen 65B LLM in our experiments. It increases inference time by about 30% compared to directly using the 65B LLM.
We believe that the increase in inference time is worthwhile since our method **significantly improves preference alignment without extensive training of the large policy LLM** (Figure 4). (ii) Compared with GenARM, our method significantly reduces inference costs. Please refer to our reply to Weaknesses 1 raised by Reviewer 3zGf for details.

Hence, by leveraging a smaller reward model to guide a larger LLM, PARM offers a practical and efficient method for achieving multi-objective alignment without requiring extensive training. This is especially beneficial for users lacking resources to fine-tune the policy LLM.

> **[Questions]**. Can a much smaller language model achieve comparable performance while significantly reducing inference latency?

There is a trade-off between the capacity of the reward model and its guiding effect. An extremely small reward model can reduce inference costs but may also compromise the guiding effect. Note that in our experiments, the reward models are already very small compared to the frozen LLMs (e.g., 7B vs. 65B and 1.1B vs. 7B). Moreover, as discussed in our reply to Weaknesses 2, the inference cost problem can be effectively mitigated through distributed deployment. Thus, we leave the exploration of much smaller reward models for future work.
Summary: The authors proposed a preference-aware ARM for multi-objective test-time alignment. PARM is an ARM conditioned on user preferences through the proposed PBLoRA, which manages trade-offs across multiple preference dimensions during inference.

Claims And Evidence: Yes.

Methods And Evaluation Criteria: Yes.

Theoretical Claims: Yes.

Experimental Designs Or Analyses: Yes.

Supplementary Material: Yes. Almost.

Relation To Broader Scientific Literature: The problem studied in the paper is interesting to many researchers.

Essential References Not Discussed: NA.

Other Strengths And Weaknesses:

Strengths of the paper:
1. The paper is well-written and easy to follow.
2. The problem is of great value to investigate.
3. Source code of the proposed model is provided in the paper.

Weaknesses of the paper:
1. The proposed model is an extension of the existing ARM model, in particular GenARM. The main difference is that the proposed model conditions the model parameters on the k-dimensional preference vector alpha. Given this, the contribution of the paper sounds limited.
2. As the proposed model works for multiple objectives, sufficient training material is needed to train the model, which may be a bottleneck in settings where training data are difficult to obtain.
3. To achieve the goals of multi-objective learning, there are a number of strategies. And one of them is the studied strategy in this paper, i.e., introducing an autoregressive reward model. Why don’t the authors consider fine-tuning the original model?

Other Comments Or Suggestions: NA.

Questions For Authors: Please see the above weaknesses.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: Thanks for your thoughtful review and valuable feedback. We address your concerns as follows.

> **[Weaknesses 1]**. The proposed model is an extension of the existing ARM model, in particular GenARM. The main difference is that the proposed model conditions the model parameters on the k-dimensional preference vector alpha. Given this, the contribution of the paper sounds limited.

While PARM builds upon the foundation of ARM and GenARM, it introduces significant innovations and improvements that address key limitations of GenARM. We clarify this in detail as follows. Unlike GenARM, which requires multiple ARMs for different preference dimensions, PARM uses a single ARM conditioned on the preference vector, resulting in several advantages:

- **More parameter-efficiency**: GenARM requires storing $k$ separate ARMs, while PARM needs only a single one, making it approximately $k\times$ more parameter-efficient (Table 2 in our paper).
- **Faster inference**: PARM is significantly faster during inference (Table 2), as it computes rewards from a single ARM rather than $k$ separate ARMs in GenARM.
- **Better control of trade-offs with the preference vector**: In GenARM, the $k$ ARMs are trained independently on different preference dimensions without awareness of each other, leading to potential conflicts when their rewards are combined during inference. PARM, on the other hand, is explicitly trained to manage trade-offs between different preferences (as detailed in Section 4.3). As a result, our model's behavior aligns more directly with the specified preference vector, making control more intuitive and predictable. This leads to better alignment with user-specified preferences and simplifies usage, as demonstrated by the significantly higher HV and MIP scores in Tables 1 and 2.
Examples 1 and 2 in our paper further demonstrate how PARM effectively balances competing objectives like helpfulness and harmlessness according to the specified preference weights. Therefore, our PARM provides a more efficient, scalable, and controllable method for multi-objective test-time alignment of LLMs. We believe these contributions are substantial and address important challenges in the field of preference-aligned language models.

> **[Weaknesses 2]**. As the proposed model works for multiple objectives, sufficient training material is needed to train the model, which may be a bottleneck in settings where training data are difficult to obtain.

We appreciate your concerns regarding the potential bottleneck of training data for multi-objective models. We want to point out that **a multi-objective dataset is not very difficult to obtain, as it can be derived from a traditional single-objective dataset as follows**. Specifically, we can take standard preference datasets $\\{x, y^1, y^2, z\\}$ (which are widely available; $y^1$ and $y^2$ are two different responses to the prompt $x$, and $z=1$ if $y^1$ is better than $y^2$, otherwise $0$) and extend them to multi-objective datasets $\\{x, y^1, y^2, z_1, \dots, z_k\\}$ by adding preference labels $z_i$ for different dimensions. These additional labels can be obtained using GPT judges or publicly available reward models/classifiers. For example, in our Helpful Assistant experiment (Section 5.2), the humor preference labels were obtained using a public classifier (as detailed in footnote 10), following the approach of previous works [1,2].

[1] MetaAligner: Towards generalizable multiobjective alignment of language models. NeurIPS 2024.
[2] Rewards-in-context: Multi-objective alignment of foundation models with dynamic preference adjustment. ICML 2024.

> **[Weaknesses 3]**. To achieve the goals of multi-objective learning, there are a number of strategies.
> And one of them is the studied strategy in this paper, i.e., introducing an autoregressive reward model. Why don’t the authors consider fine-tuning the original model?

While **fine-tuning** the original model is a possible approach, this method **requires enormous computational resources** that many researchers and practitioners cannot access when the model is **large** (such as Alpaca-65B). Our PARM is a **test-time alignment** approach that addresses this limitation by training a **small** reward model rather than the original LLM, greatly reducing computation cost and **making multi-objective alignment accessible with limited computing resources**.

Our method enables **weak-to-strong guidance, allowing a smaller reward model to guide a larger frozen LLM without expensive training**. For example, in our experiments, a 1.1B reward model guides a 7B LLM, and a 7B reward model guides a 65B LLM. This capability is particularly valuable for users who cannot afford to train large LLMs but still need to leverage their capabilities. Therefore, based on test-time alignment, our PARM provides **a practical solution** that makes multi-objective alignment more accessible to the broader research community.
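The dataset-extension recipe in our reply to Weaknesses 2 above can be sketched in a few lines (our own toy illustration; the `judges` here are trivial stand-ins for the GPT judges or public reward models/classifiers mentioned there):

```python
# Extend {x, y1, y2, z} to {x, y1, y2, z_1, ..., z_k} with per-dimension judges.
def extend_record(record, judges):
    x, y1, y2 = record["x"], record["y1"], record["y2"]
    return {**record,
            **{f"z_{i + 1}": int(judge(x, y1) >= judge(x, y2))
               for i, judge in enumerate(judges)}}

# Trivial stand-in judges, each scoring one preference dimension.
judges = [lambda x, y: len(y),        # toy "helpfulness" judge: longer wins
          lambda x, y: y.count("!")]  # toy "enthusiasm" judge

record = {"x": "prompt", "y1": "a long helpful answer", "y2": "short!", "z": 1}
multi = extend_record(record, judges)  # adds z_1 and z_2 to the record
```

Each record keeps its original fields and gains one preference label per dimension, which is the format the unified reward model is then trained on.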
Aligning Multimodal Representations through an Information Bottleneck
Accept (poster)
Summary: In this paper, the authors study the alignment of representations in multimodal learning through information theory. For a positive pair $X_\alpha, X_\beta$ from modalities $\alpha, \beta$, they formulate the essence $Y$ and nuisance $N_\alpha, N_\beta$ of the inputs as the common and modality-specific parts (in the mutual information sense) of the inputs, respectively. Then, they define a representation $Z_\alpha$ of $X_\alpha$ to be sufficient if it preserves all information in $Y$ and say $Z_\alpha$ is minimal if it contains no information about $N_\alpha$. After that, they relate these notions to the maximization of $I(Z_\alpha; Y)$ and minimization of $I(Z_\alpha; N_\alpha)$, and show that minimizing a regularized version of InfoNCE can be used as a surrogate of this optimization task. Finally, they provide experimental evidence supporting the validity of their formulation and the utility of their regularizer.

## update after rebuttal

I thank the authors for correcting my misunderstandings and the new discussion on the regularizer. I will keep my score.

Claims And Evidence:
* This paper proposes an information-theoretic interpretation (essence, nuisance, and sufficient/minimal representations) of the misalignment phenomenon in multimodal learning. They provide both theoretical results and toy experiments to support their interpretation.
* Based on their interpretation, they propose a regularizer to reduce the amount of nuisance contained in the learned representation and verify on real-world datasets that it improves the performance of the model.

Methods And Evaluation Criteria:
* They verify their interpretation on various toy datasets (DSprites, MPI3D, Shapes3D), which allow them to control the ratio between essence and nuisance in the data.
* They approximate the amount of preserved nuisance (which is not computable in general) by training a linear classifier to predict the nuisance from the learned representation.
This makes sense and is also used in previous works.
* For those more realistic experiments, they use CIDEr and BLEU@4 to test the performance of the regularized/unregularized models and provide some examples for the image retrieval task. I'm not familiar with the more empirical side of this field, but these datasets seem to be rather old (from 2015 and 2002).

Theoretical Claims: Yes. The theoretical claims are valid. However, the role of Theorem 1 is rather unclear. It says that if one can solve all downstream tasks using a representation $Z_\alpha$, then it is sufficient in the sense of Definition 4. To justify the definition, the reverse direction looks more natural.

Experimental Designs Or Analyses: Yes. The experiment designs and analyses are sound. For the real-world applications, it would be good to have experiments on more recent datasets.

Supplementary Material: Yes. Appendix A.

Relation To Broader Scientific Literature: The key contribution of this paper is to extend the idea of (Tian et al., 2020b) to the multimodal setting, where the definition of common and modality-specific parts of inputs are less clear, and based on this extension, provide an explanation of the misalignment phenomenon in multimodal learning.

Essential References Not Discussed: No

Other Strengths And Weaknesses: Overall, this is a neat paper, well-written and easy to follow.

Other Comments Or Suggestions: Consider using the mathrm or texttt when writing, say, HSIC and infoNCE in an equation.

Questions For Authors: If we normalize the output representation $\mu$ to have unit norm, then the $l^2$ distance between $\mu_\alpha$ and $\mu_\beta$ is equivalent to the (negative) cosine similarity of them. Meanwhile, (at least when the temperature is high,) we can linearize/Taylor expand the softmax in InfoNCE to get the cosine similarity out. Is it possible to combine these to explain the model's behavior under the proposed regularization?
I'm asking this purely out of curiosity, and the response is unlikely to affect the score.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
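For concreteness, the linear-probe approximation of preserved nuisance mentioned under Methods And Evaluation Criteria can be sketched as follows (synthetic data of our own; in the paper the representations come from the trained encoders, and this sketch uses a simple least-squares probe rather than whatever classifier the authors trained):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 16

# Synthetic representations: a binary nuisance factor g leaks into the first
# coordinate of Z; the remaining coordinates are noise.
g = rng.integers(0, 2, size=n)
Z = rng.normal(size=(n, d))
Z[:, 0] += 2.0 * g

# Linear probe: least-squares fit of g from Z, thresholded at 0.5. Probe
# accuracy well above the 50% chance level means Z preserved the nuisance.
Zb = np.c_[Z, np.ones(n)]
w, *_ = np.linalg.lstsq(Zb, g.astype(float), rcond=None)
acc = float(((Zb @ w > 0.5) == g).mean())
```

A representation that is minimal in the paper's sense would drive such probe accuracy back toward chance, since no (linearly usable) nuisance information would remain.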
Rebuttal 1:

Rebuttal: We would like to begin by thanking you for the time dedicated to giving us feedback to improve our work. We address your main concerns next:

> these datasets seem to be rather old (from 2015 and 2002).

We believe that you may be referring to the metrics instead of to the datasets. These metrics are still widely used despite being old. See, for example, [1] and [2], two recent papers with a great impact in the field.

[1] Li, J., Li, D., Savarese, S., & Hoi, S. (2023, July). Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models.
[2] Team, G., L. (2023). Gemini: a family of highly capable multimodal models.

> the role of Theorem 1 is rather unclear.

Theorem 1 states that sufficiency is a necessary condition to solve all the downstream tasks (that can be derived from the essence). In other words, if a representation is not sufficient, then it cannot solve all the downstream tasks. The opposite is also true, but we do not find it so valuable. Having a representation that can potentially solve all the downstream tasks is not valuable because it gives no information on how “easy-to-use” the information needed to solve the task is. In the extreme case, the input can be seen as a representation of itself (since it satisfies Definition 3) and it can potentially (through an oracle model) solve any downstream task. Thus, obtaining a representation that can potentially solve all the downstream tasks is neither meritorious nor theoretically valuable. However, the importance of this theorem lies in the fact that any representation that is not sufficient cannot be used to perfectly solve any downstream task, no matter how “easy-to-use” the information it contains is, which enhances the value of sufficient representations.
This idea connects to that of usable information [3], which allows one to formulate this differentiation between having information present in a representation and how “easy-to-use” this information is. This clarification can be made in the paper.

[3] Xu, Y., Zhao, S., Song, J., Stewart, R., & Ermon, S. (2020). A theory of usable information under computational constraints. arXiv preprint arXiv:2002.10689.

> Consider using the mathrm or texttt...

mathrm will be used.

> If we normalize the output representation $\mu$ ...

In the case in which the embeddings are unit-norm, our loss becomes $\mathcal{L}_i = -\log\frac{\exp(s\_{ii}/\tau)}{\sum_k \exp(s\_{ik}/\tau)} + 2\beta(1-s\_{ii})$, where $s\_{ik}$ is the cosine similarity between $z^{(i)}$ and $z^{(k)}$. We have the following:

- $\frac{\partial{\mathcal{L}_i}}{\partial s\_{ii}} = -\frac{1}{\tau} \left(1 - \frac{\exp(s\_{ii}/\tau)}{\sum_k \exp(s\_{ik}/\tau)} \right) - 2\beta$
- $\frac{\partial{\mathcal{L}_i}}{\partial s\_{ij}} = \frac{1}{\tau} \frac{\exp(s\_{ij}/\tau)}{\sum_k \exp(s\_{ik}/\tau)}$

We also analyze the gradients of a modification of $\mathrm{InfoNCE}$ in which a different temperature is used for the numerator and the denominator, i.e., $\mathcal{L}'_i=-\log\frac{\exp(s\_{ii}/\tau')}{\sum_k \exp(s\_{ik}/\tau)}$. Then, it can be easily checked that:

- $\frac{\partial{\mathcal{L}'_i}}{\partial s\_{ii}} = -\frac{1}{\tau'} \left(1 - \frac{\exp(s\_{ii}/\tau')}{\sum_k \exp(s\_{ik}/\tau)} \right)$
- $\frac{\partial{\mathcal{L}'_i}}{\partial s\_{ij}} = \frac{\partial{\mathcal{L}_i}}{\partial s\_{ij}}$

Thus, we have that optimizing our loss is equivalent to optimizing a modification of $\mathrm{InfoNCE}$ with different temperatures in the numerator and denominator. Rearranging, $\beta=\frac{1}{2} \left[ \frac{\tau-\tau'}{\tau\tau'} + \frac{\exp(s\_{ii}/\tau) - \exp(s\_{ii}/\tau')}{\sum_k \exp(s\_{ik}/\tau)} \right]$.
Thus:

- The difference in the temperature between numerator and denominator depends on the similarity with respect to all the elements in the batch.
- If $\exp(s\_{ii}/\tau) \ll \sum_k \exp(s\_{ik}/\tau)$, i.e., the predictions are far from the target distribution, then $\beta \approx \frac{1}{2} \Delta\tau$, where $\Delta\tau = \frac{\tau-\tau'}{\tau\tau'}$. Thus, the larger the $\beta$, the larger the difference between the numerator and denominator temperatures.
- If $\exp(s\_{ii}/\tau) \approx \sum_k \exp(s\_{ik}/\tau)$, i.e., the predictions are close to the target distribution, then $\beta \approx \frac{1}{2} \left[ \Delta\tau + 1 - \exp\Delta\tau \right]$. Thus, the temperature difference between numerator and denominator for a given value of $\beta$ is lower than in the previous case.

Thus, our term can be seen as a regularizer that adapts the value of the temperature in the denominator based on how close the prediction is to the true distribution. These comments will be added in an appendix. Thank you for your interest. Please let us know if you have any other insight with respect to this last analysis.

We hope that your questions have been addressed. If this is the case and you consider that our paper deserves an increase in the score, we would be grateful if you made it effective.
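The gradient expressions for the regularized loss can be checked numerically. Below is a small NumPy sketch of our own, taking $\mathcal{L}_i$ as the negative log-softmax term plus the $2\beta(1-s_{ii})$ regularizer (the sign convention under which the stated gradients hold), and comparing the analytic gradients against central finite differences; all names and the random similarities are illustrative:

```python
import numpy as np

def reg_infonce(s, i, tau, beta):
    """-log softmax(s/tau)[i] + 2*beta*(1 - s[i]) for one anchor i."""
    return -(s[i] / tau - np.log(np.sum(np.exp(s / tau)))) + 2 * beta * (1 - s[i])

def analytic_grad(s, i, tau, beta):
    p = np.exp(s / tau) / np.sum(np.exp(s / tau))
    grad = p / tau                          # dL/ds_ij = p_ij / tau for j != i
    grad[i] = -(1 - p[i]) / tau - 2 * beta  # dL/ds_ii
    return grad

rng = np.random.default_rng(0)
s = rng.uniform(-1, 1, size=8)              # cosine similarities s_{i1..iB}
i, tau, beta, eps = 0, 0.5, 0.25, 1e-6

numeric = np.array([
    (reg_infonce(s + eps * e, i, tau, beta)
     - reg_infonce(s - eps * e, i, tau, beta)) / (2 * eps)
    for e in np.eye(len(s))
])
ok = bool(np.allclose(numeric, analytic_grad(s, i, tau, beta), atol=1e-6))
```

The same check applies to the two-temperature variant by swapping `tau` for `tau'` in the numerator term only.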
Summary: The manuscript shows that contrastive learning methods for multimodal representations do not remove modality-specific information, which leads to misaligned representations. It uses an Information Bottleneck approach to add a regularization term to the loss function to filter out this extra information while preserving the alignment. The approach demonstrates improved performance in tasks such as image captioning and multimodal retrieval.

## update after rebuttal

The score was increased from 3 (Weak Accept) to 4 (Accept). Most of the concerns have been addressed. The authors reran the experiments multiple times and included additional experiments using different architectures to evaluate how architectural choices affect the URR. They also clarified the questions regarding nuisances, which resolved my earlier confusion.

Claims And Evidence:
- While the paper supports its claims about contrastive losses failing to remove nuisance information using the Uncertainty Reduction Ratio (URR), I have a few concerns. First, could the choice of image encoder introduce bias in the URR measurements? Second, how are invariance and equivariance with respect to different factors accounted for in this analysis? Finally, I recall an ICLR 2021 paper suggesting that neural networks tend to focus on the easiest factor to minimize the training objective rather than learning all factors. How does this observation affect the interpretation of the paper’s results on nuisance information removal?
- The minimal sufficient representation view argues that achieving representational alignment requires that the learned representations be both sufficient (containing all shared “essence”) and minimal (excluding nuisances). Hence, the manuscript introduces a regularization term for alignment. This seems to be supported by the results in Table 2.
- For the Information Homeostasis phenomenon, the evidence seems preliminary.
Considering that InfoNCE employs a softmax-like objective, could this phenomenon be linked to issues inherent to softmax formulations, as discussed by Veličković et al. (2024)? Moreover, Zhai et al. (2023) suggest using a sigmoid loss instead of the softmax.

Veličković, Petar, et al. "softmax is not enough (for sharp out-of-distribution)." arXiv preprint arXiv:2410.01104 (2024).
Zhai, Xiaohua, et al. "Sigmoid loss for language image pre-training." Proceedings of the IEEE/CVF international conference on computer vision. 2023.

Methods And Evaluation Criteria: The methods and evaluation criteria seem reasonable.

Theoretical Claims: I think Equation 18 should be derived in the appendix.

Experimental Designs Or Analyses: Table 2 needs to show std by running repeated experiments with different seeds.

Supplementary Material: Yes, all the appendix sections.

Relation To Broader Scientific Literature: Broadly speaking, this work seems to be InfoNCE with regularization on the latent space to ensure that multimodal representations align, where the latent space is a VAE with Gaussian priors but without reconstruction.

Essential References Not Discussed: I do not have concerns on this.

Other Strengths And Weaknesses:
Strengths:
- Multiple datasets
- Real-world experiments
- Ablations for the hyperparameters

Other Comments Or Suggestions:
- I think preliminaries can be cut down to related work without the equations.
- Also, it would be great if you annotated the essence and nuisances in the examples.

Questions For Authors: See above. I will be open to increasing the score based on more clarification.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: We would like to begin by thanking you for the time dedicated to giving us feedback to improve our work. We address your main concerns next:

> could the choice of image encoder introduce bias in the URR measurements?

To analyze this point, we have performed experiments that are identical to those in Table 1, but using a small ViT as the image encoder. We show the results next:

||DSprites|MPI3D|Shapes3D|
|-|-|-|-|
|Location|$2.8\pm 0.6$|$2.8\pm 0.3$|$1.1\pm 0.1$|
|Shape|$64.9\pm 1.4$|$7.0\pm 0.4$|$5.7\pm 0.2$|
|Size|$30.7\pm 3.5$|$20.8\pm 3.5$|$6.9\pm 1.5$|
|Objects Color|-|$63.5\pm 9.8$|$53.5\pm 1.6$|

By comparing this table with Table 1, we can observe that:

1. Local attributes, such as *Location* and *Size*, are better preserved in convolutional encoders, which is consistent with the inductive biases towards local structures of convolutional layers.
2. Global attributes, such as *Shape* and *Objects Color*, are almost equally conserved in both image encoders.

Subsection 5.1 will be extended with these two experiments and conclusions. Thanks for noticing.

> how are invariance and equivariance with respect to different factors accounted for in this analysis?

Sorry, we do not understand this question; could you please develop it further?

> I recall an ICLR 2021 paper...

We think that the paper that you could be referring to is [1]. If this is the case, your question is very interesting, and the points of [1] and ours are perfectly compatible. On the one hand, what the mentioned paper states is that, if the task to be solved is correlated with other simpler features, then a model trained with SGD will tend to learn these simpler features rather than the task. If we relate this idea to the concepts used in our work, the point of [1] would be something like: “given a task Y, models trained with SGD tend to learn minimal sufficient statistics of Y instead of Y”.
For example, if our task were to classify between images of bananas and strawberries, our model would simply learn to differentiate between yellow and red. On the other hand, our paper argues that, even given the previous statement, models tend not to remove all the information that is unnecessary to solve the task (i.e., nuisances). In the previous example, a model could be learning, for instance, the background color of the images. Thus, our model could learn the fruit color and the background color, which makes the points of [1] and ours perfectly compatible. If this is the paper you referred to, thanks for bringing it up; its results are very interesting. If this is not the case, we are open to discussing other works.

[1] Ahmed, F., Bengio, Y., Van Seijen, H., & Courville, A. Systematic generalisation with group invariant predictions.

> For the Information Homeostasis..., could this phenomenon be linked to issues inherent to softmax formulations?

First, for higher values of $\beta$, representations tend not to retain the information that is unique to their input. Then, all the representations are more similar to each other and closer in the space. Thus, maybe predictions are less sharp, so the temperature is decreased when $\beta$ increases in order to maintain the same level of sharpness in the predictions no matter the value of $\beta$. Thus, a hypothesis that could connect to Veličković et al. (2024) is that the optimization process "prefers" a given level of sharpness. Also, since the level of sharpness changes with the batch size, using sigmoids instead, as Zhai et al. (2023) suggest, would make the temperature invariant to changes in $\beta$. Intuitively, since the rest of the similarities are ignored by the sigmoid, the fact that representations of negative pairs are closer in the space should not affect the predictions. This point is pure speculation and excessive importance should not be attributed to it.
> I think Equation 18 should be derived in the appendix.

The derivation of this equation will be included in an appendix.

> Table 2 needs to show std by running repeated experiments with different seeds.

Please see the second answer to Reviewer DSZo.

> I think preliminaries can be cut down to related work without the equations.

We agree that the expressions of HSIC and CKA are not necessary for the understanding of the paper. They will be removed from the paper. Thanks for noticing.

> Also, it would be great if you annotated the essence and nuisances in the examples.

Essence and nuisances will be annotated in the examples. We suppose that you refer to the examples in Section 3. Please let us know if you refer to other examples.

We hope that your questions have been addressed. If this is the case and you consider that our paper deserves an increase in the score, we would appreciate it if you made it effective. If this is not the case, we are open to continuing the discussion in the next stage of the rebuttal process. Similarly, if you could elaborate on the question that we were unable to answer, we will try to address it.

---

Rebuttal Comment 1.1: Comment: Thank you for addressing my concerns and questions. I have decided to increase the score to Accept.
Summary: The paper analyzes the problem that contrastive losses in multimodal representation learning fail to align representations effectively due to their retention of modality-specific information. To address this, the authors propose a variationally-derived regularization term that reduces modality-specific information, enhancing alignment based on the Information Bottleneck Principle.

Claims And Evidence: The spherical Gaussian assumption for the representations and the identical covariances for the different modalities' representations are not supported by convincing evidence. Such assumptions should be critically assessed and justified clearly in the context of general multimodal representation learning.

Methods And Evaluation Criteria: Yes.

Theoretical Claims: Yes.

Experimental Designs Or Analyses: The soundness of the experimental results on real-world tasks is weak. 1. Quantitative results for image retrieval are missing. It is not clear how well the model performs on this task. 2. The analysis of the toy example is interesting. However, providing a quantification of the essence and nuisances on a real-world dataset (e.g., image-text) would strengthen the soundness of the paper.

Supplementary Material: I have reviewed the Proof of Theorem 1, the Proof of Equation (17), the Experimental Details, and More Results of Section.

Relation To Broader Scientific Literature: This paper is related to the theoretical analysis of multimodal representation learning.

Essential References Not Discussed: The ideas of essence and nuisances are similar to the unique and shared information concepts in [1]. A discussion of these relevant references is missing. [1]. Liang, Paul Pu, et al. "Factorized contrastive learning: Going beyond multi-view redundancy." Advances in Neural Information Processing Systems 36 (2023): 32971-32998.

Other Strengths And Weaknesses: It is interesting to see the analysis on the toy examples.
However, I have the following concerns: Is the spherical Gaussian assumption reasonable for general multimodal representation learning? If the data are non-Gaussian, this assumption might be questionable. Particularly, considering your use of layer-wise normalization (e.g., with per-sample operations like those in the provided setting), it is difficult to justify that the learned representations strictly follow a simple Gaussian distribution. Moreover, Eq. (20) assumes identical covariances for the two modalities, which is even less realistic given typical differences in modality-specific representation distributions. A more general and rigorous form that relaxes this assumption would strengthen the analysis. Other Comments Or Suggestions: See my weaknesses and questions. If they are all addressed, I am willing to raise my score. Questions For Authors: In Theorem 1, could you clarify what exactly $p(t|z_{\alpha})$ represents? The notation $t$ is not explained. In Table 2, 0.1$L_M$ has the best CIDEr and BLEU scores. What will the model perform when having a stronger $L_M$ constraint? Do you have an ablation study on this? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We would like to begin by thanking you for the time you dedicated to giving us feedback to improve our work. We address your main concerns next:

> 1. Quantitative results of image retrievals are missing.
> What will the model perform when having a stronger $L_M$ constraint?

Please see the second answer to Reviewer DSZo.

> 2. providing the quantification of essence and nuisances and real-world dataset will strengthen the soundness of the paper.

Quantifying the essence and nuisances is impossible in general because they are abstract variables that we simply define for our formulation. For this reason (and also because computing mutual information is expensive), it is in general impossible to quantify the essence and nuisances in the representation. In fact, this is the reason why we use variational approximations for the derivation of our loss function. If it were possible to exactly calculate the amount of the essence or nuisances in the representations, then we could simply maximize and minimize these quantities, respectively. The toy datasets are used precisely so that we ourselves can set the essence and nuisances, giving a scenario in which it is straightforward to calculate a reliable estimation of the mutual information between the essence (or the nuisances) and the representations.

> The ideas of essence and nuisances are similar to the unique and shared information concepts in [1].

Thank you for your recommendation. This paper will be included in the related work section.

> Is the spherical Gaussian assumption reasonable for general multimodal representation learning?

On the one hand, the use of Gaussian distributions in the representation space is not an assumption but a design choice that we make mainly for tractability reasons of the KL divergence in equation 17. We argue for its reasonableness next.

> If the data are non-Gaussian, this assumption might be questionable.
On the other hand, the fact that the data distribution $p(x)$ is not Gaussian should not be problematic. For example, most VAEs (and other SoTA generative models, such as Diffusion Models and Flow Matching) use an encoder $p_\theta(z|x)$ that is Gaussian (also mainly for tractability reasons of the KL divergence), and they provide impressive performance in real-world applications, in which the data $p(x)$ is far from being Gaussian.

> ...it is difficult to justify that the learned representations strictly follow a simple Gaussian distribution.

Finally, and most importantly, we must note that choosing the encoder $p_\theta(z|x)$ to be Gaussian does not imply at all that the representation space is Gaussian. What is Gaussian is the distribution of an embedding given a single input, $p_\theta(z|x)$, but not the whole representation distribution $p_\theta(z)$. More specifically, we have that $p_\theta(z)=\int p_\theta(z|x)p(x)dx$. If we approximate the data distribution $p(x)$ by its empirical distribution given by $N$ datapoints, i.e., $p(x) \approx \frac{1}{N}\sum_{i=1}^N \delta\left(x-x^{(i)}\right)$, then the distribution of the representation space $p_\theta(z)$ becomes a Gaussian mixture, i.e., $p_\theta(z) \approx \frac{1}{N} \sum_i \mathcal{N} \left(z; \mu_\theta\left(x^{(i)}\right), \sigma^2 I\right)$, which "is a universal approximator of densities, in the sense that any smooth density can be approximated with any specific nonzero amount of error by a Gaussian mixture model with enough components" [1]. Furthermore, we note that using a stochastic encoder (even if it is Gaussian) results in a distribution that is richer than in the case of deterministic encoders (i.e., vanilla encoders). In the latter, $p_\theta(z|x)$ is a delta distribution, i.e., simpler (one parameter) than a Gaussian (two parameters), and using deterministic encoders is rarely seen as problematic.

[1] Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep learning., p. 65

> Eq.
(20) assumes identical covariances for the two modalities, which is even less realistic given typical differences in modality-specific representation distributions.

Following on from the previous answer, this should not be problematic. In fact, having the same covariance matrices in the representation spaces of both modalities could be considered as desirable since we would like these spaces to be as similar (or aligned) as possible.

> The notation $t$ is not explained.

$t$ denotes a realization of the task $T$. In probability theory, random variables are usually denoted by uppercase letters while their realizations are denoted by the corresponding lowercase letters (see https://en.wikipedia.org/wiki/Notation_in_probability_and_statistics). We will add a paragraph in Section 2 to clarify this.

We hope that your questions have been addressed. If this is the case and you consider that our paper deserves an increase in the score, we would appreciate it if you made it effective. If this is not the case, we are open to continuing the discussion in the next stage of the rebuttal process.

---

Rebuttal Comment 1.1: Comment: I thank the authors for the response to my questions. However, some of my concerns still remain. I'm not assuming your $p(x)$ is Gaussian, but rather the latent representation $p(z)$ or $p(z|x)$. VAEs use a variational encoder to effectively parameterize a Gaussian latent space, which is reasonable. However, I'm confused now. Do you mean you are using a deterministic encoder to parameterize the latent distribution? How can this be a probabilistic approach, like VAEs, or capture uncertainty? As you mentioned, VAEs use a Gaussian prior for tractability reasons of the KL divergence. However, in your case, the KL divergence term becomes problematic when comparing a delta distribution with a standard Gaussian prior. In this sense, you are using $p(z)$ instead of $p(z|x)$, as the mapping is deterministic.
If you use $p(z|x)$, why not calculate the KL divergence instead of the L2 distance? Eq. 18 merely relies on the mean of the latent representation. It seems you calculate the mean of a batch of latent representations from the two modalities and compare the L2 distance between them. In this sense, you are assuming $p(z)$ is Gaussian, which is questionable. Further, if Eq. 18 only measures the L2 distance between the means of the two modalities, InfoNCE has the same functionality. According to [1], InfoNCE essentially contains the alignment term (see Sec. 4.1.1), which performs pairwise alignment (L2 distance). In this sense, the L2 distance between the means is naturally minimized. The proposed loss is an enhancement of the alignment term with the same formulation. [1]. Wang T, Isola P. Understanding contrastive representation learning through alignment and uniformity on the hypersphere[C]//International conference on machine learning. PMLR, 2020: 9929-9939.

"having the same covariance matrices in the representation spaces of both modalities could be considered as desirable since we would like these spaces to be as similar (or aligned) as possible." This statement does not seem correct to me. Yes, we want the two modalities to have the same covariance after alignment. However, if you cannot effectively parameterize their original distributions, like the covariance, how can you optimize the two into the same covariance? Hence, many KDE-based methods, like MMD, require a kernel to effectively parameterize the shape of the distributions, which means simply assuming the two distributions have identical variance, e.g. 1, is not reasonable.

---

Reply to Comment 1.1.1: Comment: Thank you for your answer. We believe some of our comments have been misunderstood and some concepts of variational inference have been mixed up here. We try to clarify these points next.

> I'm not assuming your $p(x)$ is Gaussian

Of course, $p(x)$ **is not Gaussian in general** and, thus, this is never assumed.
> but the latent representation $p(z)$ or $p(z|x)$

The distribution $p(z|x)$ is chosen to be Gaussian on page 5. Then, as explained in our previous answer, we can marginalize as $p(z) = \int p(z|x)p(x)dx$. Thus, $p(z|x)$ **is Gaussian** but $p(z)$ **is not Gaussian**.

> VAEs use a variational encoder to effectively parameterize a Gaussian latent space, which is reasonable.

In VAEs, $p(z|x)$ is chosen to be Gaussian and $p(z)$ could be calculated by marginalizing, **exactly as in our case**.

> Do you mean you are using a deterministic encoder to parameterize the latent distribution?

If by *deterministic encoder* you mean that its parameters $\theta$ are deterministic, then our encoder is deterministic in the sense that it is not a Bayesian Neural Network. If by *deterministic encoder* you mean that it models a delta distribution (i.e., it outputs a single embedding instead of a distribution of embeddings), then our encoder is not deterministic, since it outputs $p(z|x)$, i.e., a Gaussian distribution. To make it clear, **our encoder works exactly the same way as in VAEs**: it serves to obtain the parameters of a Gaussian distribution.

> In this sense, you are using $p(z)$ instead of $p(z|x)$ as the mapping is deterministic.

No, not at all; $p(z|x)$ is the distribution that we are considering to optimize equation 18. The process to calculate equation 18 is very simple:

1. Given a batch of pairs of inputs of modalities $\alpha$ and $\beta$, i.e., $\left\lbrace x_\alpha^{(i)}, x_\beta^{(i)} \right\rbrace\_{i=1}^N$, we calculate the output distributions $p\left(z|x_\alpha^{(i)}\right)$ and $p\left(z|x_\beta^{(i)}\right)$ for $i=1,\dots,N$. We choose $p\left(z|x_\alpha^{(i)}\right)$ and $p\left(z|x_\beta^{(i)}\right)$ to be Gaussians.
2. We calculate the KL divergence between each pair of distributions, i.e., $D\_{KL} \left( p\left(z|x_\alpha^{(i)}\right) \mid \mid p\left(z|x_\beta^{(i)}\right) \right)$ for $i=1,2,\dots,N$.
Since these distributions are Gaussians, this term is tractable.

3. We estimate the expectation of the KL divergences as $\frac{1}{N}\sum\_{i=1}^N D\_{KL} \left( p\left(z|x_\alpha^{(i)}\right) \mid \mid p\left(z|x_\beta^{(i)}\right) \right)$.

> If you use $p(z|x)$, why not calculate KL divergence instead of the L2 distance?

We are minimizing the KL divergence, which is equivalent to minimizing the L2 distance when the covariance matrices are constant (see https://mr-easy.github.io/2020-04-16-kl-divergence-between-2-gaussian-distributions/).

> Eq. 18 merely relies on the mean of the latent representation.

It relies on the mean of $p(z|x)$, but not on the mean of $p(z)$. As clearly stated in equation 18, the KL divergence of $p(z|x)$ is calculated first and the expectation of this is calculated afterwards.

> It seems you calculate the mean of a batch of latent representations from two modalities and compare the L2 distance between them.

No, not at all. We do not do this, and we do not know what gives that impression. As stated in the previous answer, we calculate the expectation of the KL divergences and not the KL divergence of the expectations.

> In this sense, you are assuming $p(z)$ is Gaussian, which is questionable.

As previously stated, **$p(z)$ is never assumed to be Gaussian**. What is Gaussian is $p(z|x)$, but not $p(z)$.

> According to [1], InfoNCE essentially contains the alignment term (see Sec. 4.1.1), which performs pairwise alignment.

As demonstrated in Theorem 4.1 of [1], InfoNCE's global minimum happens when a perfect alignment exists, **but only when the number of negative samples tends to infinity**. One of the main contributions of our work is Theorem 2, which demonstrates that misalignment is caused by information imbalance. Then, the solution for the information imbalance when the encoders are Gaussian results in the alignment definition given in [1], thus making our proposal consistent with the literature.
But we note that we give a theoretical derivation which is more general than the one in [1], since the latter is a particular case of ours.

> many KDE-based methods, like MMD, require a kernel to effectively parameterize the shape of the distributions...

Density estimation and what we are doing are two different worlds. In our case, **$z$ is defined through $p(z|x)$, so this is the original distribution**. In KDE methods, it is assumed that an unknown true distribution generated a set of data, and some methods are used to find distributions that could have likely generated the given set of data. In our case, it is we ourselves who are defining $z$, so we know its true distribution, which is $p(z|x)$.
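A minimal numerical sketch of the two points made in this reply could look like the following: (i) a Gaussian encoder $p(z|x)$ yields a non-Gaussian mixture marginal $p(z)$, and (ii) the batch-averaged KL of steps 1-3 reduces to a scaled squared L2 distance between means when covariances are fixed and isotropic. All dimensions and encoder outputs below are hypothetical, not the paper's actual models:

```python
import numpy as np

rng = np.random.default_rng(0)

def kl_gaussians(mu_a, var_a, mu_b, var_b):
    """KL( N(mu_a, var_a*I) || N(mu_b, var_b*I) ) for isotropic Gaussians."""
    d = mu_a.shape[-1]
    return 0.5 * (d * var_a / var_b
                  + np.sum((mu_b - mu_a) ** 2, axis=-1) / var_b
                  - d + d * np.log(var_b / var_a))

# (i) A Gaussian encoder p(z|x) does not make the marginal p(z) Gaussian:
# with encoder means in two separated clusters, p(z) is a bimodal mixture.
mu_data = np.concatenate([rng.normal(-4.0, 0.1, 500), rng.normal(4.0, 0.1, 500)])
sigma = 0.5                                         # fixed encoder std
idx = rng.integers(0, len(mu_data), size=20000)     # x drawn from empirical p(x)
z = mu_data[idx] + sigma * rng.normal(size=20000)   # then z ~ p(z|x)
print(np.mean(np.abs(z) < 1.0))  # ~0: a single Gaussian fit would put mass here

# (ii) Steps 1-3: per-pair KL between the modalities' Gaussians, then the
# batch average. With equal fixed covariances this is exactly the (scaled)
# squared L2 distance between the means.
N, d = 8, 16                                        # hypothetical batch and dim
mu_alpha = rng.normal(size=(N, d))                  # means of p(z|x_alpha^(i))
mu_beta = rng.normal(size=(N, d))                   # means of p(z|x_beta^(i))
kl = kl_gaussians(mu_alpha, sigma**2, mu_beta, sigma**2)
l2 = np.sum((mu_alpha - mu_beta) ** 2, axis=-1)
assert np.allclose(kl, l2 / (2 * sigma**2))
print(kl.mean())  # the batch-averaged objective (expectation of per-pair KLs)
```

Note that the average of the per-pair KLs is computed, never the KL between batch-averaged means, matching steps 1-3 above.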
Summary: This paper addresses the challenge of misalignment in multimodal representation learning when using contrastive loss functions. The authors argue that this misalignment stems from modality-specific information present in the representation space that contrastive objectives fail to remove. Leveraging the Information Bottleneck Principle, they provide a theoretical framework to explain this phenomenon and propose a novel regularization term that enhances representational alignment by reducing modality-specific information to find the minimal sufficient information. Through empirical validation, the authors demonstrate that their approach not only improves alignment but also enhances performance in real-world tasks such as image captioning, highlighting that balancing information preservation and compression in multimodal learning is important.

Claims And Evidence: The claims in the paper are generally well-supported by both theoretical and empirical studies, with the authors providing meaningful contributions in both areas. The theoretical framework leverages the Information Bottleneck Principle to explain misalignment caused by modality-specific nuisances, while empirical validation includes controlled experiments and real-world applications (e.g., image captioning), demonstrating the effectiveness of their proposed regularization term. However, certain claims require further substantiation. The paper assumes that modality-specific information always contributes to misalignment, but this may depend on the dataset and task. More evidence is needed to establish the universality of this claim. Additionally, the Information Homeostasis phenomenon—where encoders purportedly adjust internal parameters to preserve nuisance information—is an intriguing hypothesis but lacks deeper causal analysis. Conducting additional experiments to isolate confounding factors would strengthen this argument.
Moreover, the paper does not compare its method against several state-of-the-art techniques designed for alignment beyond contrastive learning. Including such comparisons would provide a clearer assessment of its advantages and limitations. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are generally speaking well-suited to the problem of multimodal representation alignment. The authors leverage the IB principle to design a regularization term that explicitly reduces modality-specific nuisances, effectively addressing misalignment. Their evaluation includes both controlled experiments and real-world applications, providing a solid empirical foundation. However, there is room for improvement: (i) Expanding the evaluation to include additional multimodal benchmarks would enhance generalizability. (ii) Incorporating additional task-specific metrics beyond alignment (e.g., retrieval accuracy, downstream task performance) would strengthen the assessment. (iii) Rather than solely validating the proposed approach, benchmarking against existing alignment techniques (e.g., cross-modal transformers, adversarial methods) would provide a clearer comparison and better contextualize the contribution of this work. Theoretical Claims: The paper presents several theoretical claims, primarily leveraging the IB principle to explain modality-specific misalignment. The proofs for key results, including Theorems 1, 2, and Lemma 1, appear correct and logically sound. However, some aspects could be further clarified or refined. For instance, a formal guarantee on how different encoders may lead to equivalent partitions in practical settings could be added in Lemma 1. Theorem 1 assumes that tasks derived from the essence fully capture all relevant downstream tasks, which may not always hold universally across applications. 
Theorem 2 relies on the assumption that perfect alignment can only be achieved if modality-specific information is entirely removed, which may need fine-tuning, as reasonable alignment can still be attained in practice even if some nuisances remain. While the theoretical foundations are solid, addressing these points would enhance the rigor and practical relevance of the claims. Experimental Designs Or Analyses: The experimental design in the paper is generally sound and well-structured, combining controlled experiments on disentanglement datasets with real-world applications. The controlled experiments effectively analyze the impact of different factors on alignment and nuisance removal, while the image captioning task serves as a strong practical validation. The use of CKA as a metric is appropriate, but incorporating additional task-specific evaluations, such as retrieval accuracy, could further strengthen the analysis. A key limitation is the lack of comparisons with state-of-the-art alignment methods beyond contrastive learning, which would provide a clearer perspective on the proposed method’s relative advantages and potential shortcomings. Additionally, the study of the information homeostasis phenomenon, while interesting and relevant, lacks causal analysis, making it difficult to fully substantiate its claims. Conducting more ablation studies to isolate potential confounding factors would improve the robustness of this finding. Supplementary Material: The supplementary material contains the code - it has not been tested (i.e., executed) to assess its reproducibility. Relation To Broader Scientific Literature: The key contributions of the paper build upon and extend foundational ideas in multimodal learning, contrastive learning, and information theory. 
It is closely related to prior work on contrastive representation learning (Oord et al., 2018; Radford et al., 2021), which aims to align representations by maximizing mutual information between different modalities. However, contrastive objectives alone have been shown to be insufficient for achieving true alignment due to the presence of modality-specific nuisances (Liang et al., 2022). This paper addresses this limitation by leveraging the Information Bottleneck Principle (Tishby et al., 2000; Alemi et al., 2016; Achille & Soatto, 2018) to derive a principled regularization term that explicitly reduces nuisances, providing a more structured approach than heuristic modifications proposed in prior works (Li et al., 2021; 2022; 2023). The study also aligns with recent efforts in multi-view learning (Tian et al., 2020) and information-based disentanglement (Wang et al., 2022), which emphasize the role of minimal sufficient representations in improving alignment. Additionally, the paper introduces the Information Homeostasis hypothesis, which suggests that encoders adjust internal parameters to maintain a balance in information entropy—a concept related to implicit regularization in deep networks (Shwartz-Ziv & Tishby, 2017) but not yet extensively studied in the context of multimodal alignment. Nevertheless, incorporating additional literature and direct comparisons with alternative alignment techniques would further strengthen its positioning within the broader research landscape. 
Essential References Not Discussed: Some key works in the area that could be added: adversarial alignment (Lample et al., NeurIPS 2018), cross-modal transformers (ImageBind of Girdhar et al., CVPR 2023), minimal sufficient representation learning (Wang et al., CVPR 2022), the Platonic Representation Hypothesis (Huh et al., 2024 - https://arxiv.org/abs/2405.07987) for self-supervised learning, multimodal IB (https://arxiv.org/abs/2210.17444), and PID approaches (https://arxiv.org/pdf/2401.13503, https://arxiv.org/pdf/2409.07402v1 (for alignment), and https://arxiv.org/pdf/2402.06223v1 (Multimodal Contrastive Representation Learning through Latent Partial Causal Models)).

Other Strengths And Weaknesses: The strengths and the weaknesses have been discussed in the previous sections. Nothing major to add here.

Other Comments Or Suggestions: Most questions have been raised in the previous sections, but a key one is how this approach is related to partial information decomposition (PID). PID seems to be relevant to this problem and can be leveraged to formalize some of the claims.

Questions For Authors: N/A

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We would like to begin by thanking you for the time you dedicated to giving us feedback to improve our work. We address your main concerns next:

> (i) additional multimodal benchmarks would enhance generalizability.

More experiments were not included due to space limitations.

> (ii) task-specific metrics beyond alignment would strengthen the assessment.

In Section 5, we have Fig. 6 for this purpose. For Section 6, we have calculated retrieval metrics, shown next:

||CIDEr|BLEU@4|I2T R@1|T2I R@1|
|-|-|-|-|-|
|ITC+LM|$91.7\pm 0.2$|$28.6\pm 0.1$|$64.2\pm 0.2$|$52.3\pm 0.4$|
|ITC+LM+ITM|$91.8\pm 0.5$|$28.8\pm 0.2$|$61.4\pm 0.6$|$49.7\pm 0.8$|
|ITC+LM+$0.01\mathcal{L}_M$|$92.3\pm 0.8$|$29.1\pm 0.4$|$64.0\pm 0.3$|$52.3\pm 0.5$|
|ITC+LM+$0.03\mathcal{L}_M$|$92.6\pm 0.3$|$29.2\pm 0.2$|$63.9\pm 0.4$|$52.1\pm 0.5$|
|ITC+LM+$0.1\mathcal{L}_M$|$93.0\pm 0.3$|$29.4\pm 0.3$|$63.0\pm 0.5$|$50.4\pm 0.5$|
|ITC+LM+$0.3\mathcal{L}_M$|$90.5\pm 0.4$|$28.5\pm 0.2$|$59.6\pm 0.4$|$47.1\pm 0.4$|

We have that:

1. Text generation (TG) and retrieval performances are inversely correlated. This makes sense, since TG benefits from minimal representations and retrieval from sufficient representations (retrieval is a specific type of downstream task). This inverse correlation can also be observed in Fig. 6.
2. Our loss increases TG performance for low or medium values of $\beta$, since its goal is to increase the minimality of the representations.
3. For $\beta=0.3$, the loss starts to excessively remove the part of the essence that the representations retain, similarly to Fig. 6.

This would be included and better explained in the final version.

> additional experiments to isolate confounding factors would strengthen this argument.

We agree that the experimental setup that analyzes the Information Homeostasis phenomenon does not serve to demonstrate any causal relationship.
However, this analysis would require more space and, additionally, it could confuse the main line of the paper.

> the paper does not compare its method against several state-of-the-art techniques designed for alignment...

We compare our loss function in Section 6 with the ITM, which is used in most state-of-the-art methods to obtain alignment between text and image modalities. As explained in lines 379-382, ITM is defined for a very specific architectural choice, so it cannot be used in the experiments of Section 5.

> a formal guarantee on how different encoders may lead to equivalent partitions in practical settings could be added in Lemma 1.

We are not sure we understand this point very clearly. Lemma 1 is stated in line 134 and representations are defined in line 161, so there is no notion of encoder or representation at the point at which Lemma 1 is stated. Thus, this lemma is completely independent of the encoders.

> Theorem 1 assumes that tasks derived from the essence fully capture all relevant downstream tasks, which may not always hold universally across applications.

This is never assumed. Theorem 1 simply states that sufficient representations are a necessary condition to solve all the tasks that can be derived from the essence. Sometimes, contrastively trained models are used to solve downstream tasks that are not in the essence. However, there are no theoretical guarantees that this should work, regardless of whether the representation is sufficient or not.

> The paper assumes that modality-specific information always contributes to misalignment, but this may depend on the dataset and task.
> Theorem 2 relies on the assumption that perfect alignment can only be achieved if modality-specific information is entirely removed.

This is not an assumption but a theorem that is demonstrated in Appendix A.3. If the demonstration is correct, then it is universally true.
Apart from that, alignment is totally independent of the task, since it is an intrinsic property of the representations. We are open to elaborating on this, but we are not sure we understand your point.

> Essential References Not Discussed

Thank you for the given references. Some of them were already mentioned in the paper and the rest will be discussed in the related work.

> how this approach is related to PID.

We believe that the connection is very weak. PID analyzes how two source inputs contribute to the information in a target variable. This is relevant in cases in which we are using two inputs at the same time, but this is rarely the case in multimodal learning. The main connection that we could make between our work and PID is that, in case we wanted to use both representations to solve a downstream task, the redundant information would be equal to zero for minimal representations.

We hope that your questions have been addressed. If this is the case and you consider that our paper deserves an increase in the score, we would appreciate it if you made it effective. If this is not the case, we are open to continuing the discussion in the next stage of the rebuttal.

---

Rebuttal Comment 1.1: Comment: Thank you for addressing my concerns and questions and for providing clarifications. Thank you as well for the proposed additions in the final version (e.g., the second comment). Nevertheless, some clarifications are necessary to avoid a potential misunderstanding. Regarding the questions on the theoretical results (mainly Lemma 1 and Theorems 1-2): indeed, Lemma 1 is formulated entirely in information-theoretic terms, before introducing encoders or representations, and is thus independent of any practical implementation.
However, given that the theoretical results are intended to support the paper's overall objective and empirical evaluation, and that the venue focuses on AI/ML rather than purely theoretical information theory, a discussion of their practical relevance and applicability would strengthen the contribution (and is expected in a comprehensive study). So, while the lemma is valid under idealized assumptions, one can expect in real-world systems, due to various impairments, two encoders even trained on the same datasets may not induce representations that are bijectively related, and thus may fail to yield equivalent partitions of the input space, even if they aim to capture the same essence. We tend to believe that there is a gap between theory and practice, which could deserve acknowledgment or further discussion/study. A similar perspective applies to Theorem 2. The concern is not related to the correctness of the theoretical result, but mainly to the proof’s assumptions and the result’s applicability in real-world systems. The theorem implies that perfect alignment is achieved only when all modality-specific information (nuisances) is removed. However, learned representations typically retain some modality-specific content (isn’t this the case in section 5, e.g., Fig. 5?). Therefore, if perfect alignment requires or implies complete removal or absence of nuisances, a discussion on the practical feasibility of this condition would be beneficial. It could be that partial minimization of nuisances is a sufficient and meaningful proxy and serves as an effective approximation, but this has to be shown. Finally, regarding the statement that alignment is "totally independent of the task", it would be helpful to clarify whether this refers to the geometric property of the learned representation space rather than downstream performance. This would improve clarity and prevent misinterpretation. We hope the original review comments are clarified. 
Finally, regarding the potential application of PID to multimodal scenarios, see: https://arxiv.org/html/2302.12247v5 although the links might be deeper and rooted in the information-theoretic formulation (https://arxiv.org/html/2405.07665v1).
PROXSPARSE: REGULARIZED LEARNING OF SEMI-STRUCTURED SPARSITY MASKS FOR PRETRAINED LLMS
Accept (poster)
Summary: The paper introduces ProxSparse, a learning-based framework designed to improve the efficiency of large language models (LLMs) through semi-structured pruning. Claims And Evidence: Yes. It presents detailed experiments comparing ProxSparse with state‐of‐the‐art baselines across multiple LLM families and tasks. Additionally, the authors offer convergence proofs and theoretical guarantees for the proximal gradient descent and the EnumALM solver. Methods And Evaluation Criteria: The methods and evaluation criteria in the paper are well-suited. Theoretical Claims: Yes. Overall, the proofs are mathematically rigorous within their theoretical framework. However, the assumptions relied on for the proofs may need more consideration. Experimental Designs Or Analyses: The experimental design appears to be robust and well thought out. Supplementary Material: NA Relation To Broader Scientific Literature: The paper's contributions are closely tied to existing work in model compression and network pruning. Essential References Not Discussed: NA Other Strengths And Weaknesses: Advantages: 1. The paper introduces a unique regularization framework that transforms the rigid mask selection problem into a differentiable one, enabling end-to-end learning. 2. The development of the EnumALM solver and the efficient proximal gradient descent approach significantly improves the speed and scalability of finding the optimal mask. Disadvantages: 1. The work is specifically focused on 2:4 sparsity. Why are these hyperparameters beneficial? 2. The convergence proofs rely on assumptions which may not hold, and what would that lead to? Other Comments Or Suggestions: NA Questions For Authors: 1. M is 2:4 sparse. Why do we choose 2:4? Have the authors tested other hyperparameters like 3:6 or 4:8? Is there any explanation? 2. Why is semi-structured pruning better than other methods? In my opinion, the semi-structured method is less flexible. 3.
What is the relationship between ALM and EnumALM? 4. More tests should be conducted on larger model sizes. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewer for acknowledging the strengths of our paper! Below we address the questions regarding **the 2:4 ratio, semi-structured benefits, assumptions of the theoretical proof, ALM and EnumALM, as well as model size justification.** # W1:`The focus on 2:4 pruning sparsity`: 2:4 pruning is the most practical semi-structured sparsity pattern and the only one currently supported by commercial hardware We thank the reviewer for the consideration! In [Appendix G](https://openreview.net/pdf?id=zkxe5vASi8#page=13), we have more discussion on the practical relevance of the 2:4 sparsity pattern and the extensibility of ProxSparse. We focus on the 2:4 sparsity pattern in this paper because it is the most practical semi-structured format and the only one currently supported by commercial hardware. To the best of our knowledge, existing hardware such as NVIDIA Ampere GPUs only supports 2:4 sparsity [1]. ProxSparse aligns directly with this hardware feature, making it readily applicable to real-world use cases. # W2:`Why semi-structured pruning considering its lack of flexibility`: Semi-structured pruning strikes a balance between efficiency and accuracy, while also benefiting from real hardware support. The reviewer is correct that, compared to unstructured pruning, the semi-structured method imposes more constraints and is thus less flexible. However, unstructured pruning often does not directly translate to faster inference because it induces irregular memory access, and modern hardware exploits regularities in computation for speed. On the other hand, structured pruning typically offers the highest efficiency but suffers from large accuracy loss due to its rigid constraints. Semi-structured pruning is an important problem to study [2][3], as it strikes a balance between efficiency and accuracy.
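To make the 2:4 constraint concrete, here is a minimal magnitude-based sketch (illustrative only — it shows what a valid 2:4 mask looks like, not how ProxSparse learns one): in every contiguous block of 4 weights, exactly 2 are kept.

```python
# Illustrative sketch of a 2:4 semi-structured mask (not the ProxSparse
# learned mask): keep the 2 largest-magnitude weights in every contiguous
# block of 4, so hardware can rely on the fixed pattern.
def mask_2_to_4(weights):
    mask = [0] * len(weights)
    for start in range(0, len(weights), 4):
        block = list(range(start, min(start + 4, len(weights))))
        # indices of the 2 largest |w| within this block of 4
        keep = sorted(block, key=lambda i: abs(weights[i]), reverse=True)[:2]
        for i in keep:
            mask[i] = 1
    return mask

w = [0.9, -0.1, 0.05, -1.2, 0.3, 0.2, -0.8, 0.0]
print(mask_2_to_4(w))  # → [1, 0, 0, 1, 1, 0, 1, 0]
```

This rigid per-block pattern is exactly what dedicated sparse tensor cores exploit, which is why a varying or unstructured ratio forfeits the hardware speedup.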
A key benefit is its direct support on commercial hardware like NVIDIA Ampere GPUs and high-performance libraries for sparse operators, enabling real-world speedups. In this paper, we focus on semi-structured pruning and propose a relaxed end-to-end mask selection approach to identify optimal pruning masks for LLMs. # W3: `relationship between ALM and EnumALM`: ALM is a subroutine of EnumALM. Thanks for the question! ALM is a subroutine used within EnumALM. Specifically, EnumALM solves the proximal operator for the 2:4 regularizer by enumerating and evaluating three candidate sparsity patterns: a 2-sparse solution (selected directly by top-k), a 3-sparse solution, and a dense (4-sparse) solution. For the latter two cases (3-sparse and 4-sparse), EnumALM invokes ALM to efficiently solve the corresponding convex subproblems with convergence guarantees. # W4: `assumptions of the proof`: in general cases, our assumptions hold because ReLU is weakly differentiable and the weights are bounded, as explained below. We thank the reviewer for the discussion of the assumptions! We acknowledge that the convergence analysis assumes the loss function is continuously differentiable and the weights remain bounded during optimization. While the use of ReLU in the loss may technically violate differentiability at a single point (zero), this is a well-known and standard issue in deep learning. In practice, ReLU is differentiable almost everywhere, which is typically sufficient for convergence analyses in nonconvex optimization. Moreover, the population loss, as an expectation over a smooth data distribution, can remain continuously differentiable even when ReLU is used. The other assumption—that the weights remain bounded—is rather mild and commonly used in convergence analyses. It is satisfied as long as the optimization does not diverge, which can often be ensured by using a sufficiently small learning rate.
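For context, the proximal gradient iteration analyzed here has the standard form w ← prox_{η·λ·R}(w − η∇L(w)). The sketch below uses a generic L1 soft-thresholding prox as a stand-in; ProxSparse's actual 2:4 regularizer prox is solved by EnumALM and is not reproduced here.

```python
# Generic proximal gradient descent sketch. The L1 soft-thresholding prox
# is a stand-in for illustration; it is NOT the ProxSparse 2:4 prox.
def soft_threshold(x, t):
    # prox of t*|x|: shrink toward zero by t, clipping small values to 0
    if x > t:
        return x - t
    if x < -t:
        return x + t
    return 0.0

def prox_gd(grad, w, lr=0.1, lam=0.05, steps=200):
    # gradient step on the smooth loss, then prox step on the regularizer
    for _ in range(steps):
        w = [soft_threshold(wi - lr * gi, lr * lam)
             for wi, gi in zip(w, grad(w))]
    return w

# Quadratic loss 0.5*(w - target)^2 per coordinate; with a small enough
# learning rate the iteration is stable, and small target entries are
# driven exactly to zero by the prox.
target = [1.0, 0.03, -0.8, 0.01]
grad = lambda w: [wi - ti for wi, ti in zip(w, target)]
w = prox_gd(grad, [0.0] * 4)
print([round(wi, 2) for wi in w])  # → [0.95, 0.0, -0.75, 0.0]
```

The sufficiently-small-learning-rate condition mentioned above is visible here: the gradient step is a contraction for lr below the curvature bound, which keeps the iterates bounded.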
In our case, we observe stable behavior throughout, supporting the validity of this assumption. Meanwhile, ProxSparse's consistent performance across a variety of LLMs and tasks further supports the practical utility of our method. # W5: `Larger model size experiment`: we apologize that resource constraints currently prevent experiments on larger (>30B) models. Our experiments show consistently good performance across different model sizes. We apologize for the lack of models larger than 30B due to limitations in our current resources. Notably, prior work on learning-based LLM pruning [3] has also only conducted experiments on models up to ~15B in size. Nevertheless, our paper includes results on a 14B model to demonstrate the effectiveness of our method. Across various model sizes, our approach consistently outperforms other baselines, highlighting its robustness and applicability. [1] [NVIDIA AMPERE GA102 GPU ARCHITECTURE](https://www.nvidia.com/content/PDF/nvidia-ampere-ga-102-gpu-architecture-whitepaper-v2.pdf#page=27) [2]Learning N:M Fine-grained Structured Sparse Neural Networks From Scratch [3]MaskLLM: Learnable Semi-Structured Sparsity for Large Language Models --- Rebuttal Comment 1.1: Comment: The rebuttal makes sense to me. However, it cannot reach score 4, so I will keep the score. --- Reply to Comment 1.1.1: Comment: ## Thanks for acknowledging our work! We are happy to hear back that our rebuttal addresses the questions and makes sense to the reviewer! The discussion points raised are very thoughtful and we shall integrate them into the next version of the paper. Again, we truly appreciate the reviewer's positive feedback on our paper.
Summary: This paper introduces a learning-based approach for semi-structured pruning of LLMs using a structured sparsity regularizer and proximal gradient descent. It enables global mask optimization without retraining and improves efficiency. Experiments on seven models show superior perplexity and zero-shot accuracy over existing pruning methods. Claims And Evidence: Cons: 1. The claim that the method achieves SoTA performance is insufficiently supported. It should be compared with learning-based methods like MaskLLM [1], and layer-wise methods like OWL [2] and AlphaPruning [3]. 2. The comparison between MaskLLM and ProxSparse has been made in Table 6, but it is only done with a limited sample size. It would be helpful to provide an evaluation on a larger sample size. Reference: [1] Fang et al. Maskllm: Learnable semi-structured sparsity for large language models [2] Yin et al. Outlier weighed layerwise sparsity (owl): A missing secret sauce for pruning llms to high sparsity [3] Lu et al. AlphaPruning: Using Heavy-Tailed Self Regularization Theory for Improved Layer-wise Pruning of Large Language Models Methods And Evaluation Criteria: It is necessary to compare how the method is different from MaskLLM [1] since both use a learnable soft mask with thresholding to obtain the binary mask. The evaluation criteria are standard. Reference: [1] Fang et al. Maskllm: Learnable semi-structured sparsity for large language models Theoretical Claims: I have checked the convergence proof and didn't catch any issues. Experimental Designs Or Analyses: It would be better to provide more inference cost measures like FLOPs or latency. Supplementary Material: I have reviewed the supplementary documents. Relation To Broader Scientific Literature: It is related to LLM efficiency. Essential References Not Discussed: Has been mentioned in the previous sections Other Strengths And Weaknesses: None Other Comments Or Suggestions: None Questions For Authors: None Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable feedback! Below we share responses on **comparison with layer-wise methods and MaskLLM, and inference efficiency.** # W1:`Comparing newer layer-wise methods`: We achieve better results than OWL and AlphaPrune OWL [1] and AlphaPrune [2] are important works in pruning, aiming to determine layer-specific ratios to protect important layers. We are happy to discuss them in our paper! In the meantime, we would respectfully argue that they are not very well-suited to semi-structured pruning, as the sparse operators supported by hardware typically require all blocks to strictly adhere to the pattern, making varying ratios hard to apply. Nevertheless, we conducted more experiments on AlphaPrune and OWL for comparison. We follow the mixed sparsity proposed in OWL and AlphaPrune with Wanda, in which layers can have varying ratios while the overall ratio remains 2:4. We see that ProxSparse outperforms OWL and AlphaPrune on Anon.Model-1 and Mistral in both PPL and accuracy, showing the strength of our end-to-end optimization. Further, as pruning patterns become more fine-grained (e.g., 2:4), varying layer-wise pruning ratios become less effective, as critical weights might still be removed within each block. This was reported in both papers, where 4:8 pruning performed similarly to uniform pruning in Wanda. This highlights the benefits of ProxSparse in identifying fine-grained semi-structured masks. We have more baseline (ADMMPrune) discussion as proposed by reviewer hyi6. ProxSparse achieves better results compared to ADMMPrune. Please kindly refer [here](https://openreview.net/forum?id=zkxe5vASi8&noteId=rlLpTKNcTH) for more details!
| Anon.Model-1 | Weight Update | Wikitext PPL | ARC-C | ARC-E | SIQA | HellaSwag | OBQA | PIQA | TruthfulQA | Average |
|---|---|---|---|---|---|---|---|---|---|---|
| OWL | No | 13.17 | 0.287 | 0.591 | 0.407 | 0.420 | 0.228 | 0.695 | 0.339 | 0.425 |
| AlphaPrune | No | 13.01 | 0.293 | 0.607 | 0.406 | 0.411 | 0.238 | 0.69e | 0.317 | 0.424 |
| **ProxSparse** | No | **8.51** | **0.331** | **0.656** | **0.407** | **0.478** | **0.242** | **0.716** | **0.328** | **0.452** |

| Mistral-7b-v0.3 | Weight Update | Wikitext PPL | ARC-C | ARC-E | SIQA | HellaSwag | OBQA | PIQA | TruthfulQA | Average |
|---|---|---|---|---|---|---|---|---|---|---|
| OWL | No | 13.03 | 0.275 | 0.594 | 0.406 | 0.417 | 0.188 | 0.688 | 0.320 | 0.413 |
| AlphaPrune | No | 13.58 | 0.265 | 0.529 | 0.398 | 0.407 | 0.190 | 0.668 | 0.335 | 0.399 |
| **ProxSparse** | No | **8.68** | **0.362** | **0.697** | **0.429** | **0.525** | **0.242** | **0.751** | **0.321** | **0.476** |

Table 2: Comparison of OWL, AlphaPrune and ProxSparse on Mistral-v0.3-7b and Anon.Model-1. ProxSparse achieves the best results.

# W2:`Comparison w/ MaskLLM`: A complementary method with a fundamentally different design; we excel in the low-data regime. Thanks for the question! We note that ProxSparse and MaskLLM differ fundamentally in how they approach semi-structured masks, and we view MaskLLM as complementary since it focuses on the larger data regime.
- Mechanism difference: MaskLLM and ProxSparse take fundamentally different approaches to pruning. Both tackle the non-differentiable task of selecting N out of M weights per block. MaskLLM sidesteps this with a probabilistic sampling approach, learning to sample the correct weights. In contrast, ProxSparse relaxes the hard constraint into a smooth optimization and performs optimization via proximal gradient descent, with theoretically provable properties for the proposed 2:4 regularizer.
- Larger sample-size experiments:

| Method | 1024 | 2048 |
|---|---|---|
| MaskLLM | 10 | 9.5 |
| SparseGPT | 10.18 | 10.16 |
| Wanda | 11.38 | 11.4 |
| **ProxSparse** | **8.38** | **8.23** |

Table 3: PPL on Anon.Model-1 with extended sample sizes. ProxSparse achieves the best results.

Table 3 evaluates ProxSparse with larger sample sizes. In this small-scale data regime, ProxSparse outperforms all baselines, demonstrating its superiority. We note that the low-scale calibration regime we target is practical in LLM contexts, making our method more accessible in the real world. # W3:`Real-world efficiency`: ProxSparse leads to a 1.35x inference speedup and 37.3% reduced peak memory usage Thanks for the comments! As discussed in [Appendix H](https://openreview.net/pdf?id=zkxe5vASi8#page=14), our analysis demonstrates that, beyond reduced FLOPs, ProxSparse achieves a 1.35× inference speedup and a 37.3% reduction in peak memory usage. These results highlight the practical efficiency gains enabled by the semi-structured sparsity induced by ProxSparse. # Thanks again! [1]Yin et al. OWL [2]Lu et al. AlphaPruning --- Rebuttal Comment 1.1: Comment: I want to thank the authors for conducting the new experiments, and I believe my concerns have been fully addressed. I would recommend that the authors include the new results in the updated draft. I will increase my score to 3. --- Reply to Comment 1.1.1: Comment: ## Thanks for the acknowledgement and score raising! We are excited that we have addressed all the reviewer's concerns! The comments are very thoughtful and helpful for our improvement, and we will incorporate these discussions and results into the revised manuscript. We sincerely thank the reviewer again for the acknowledgement and for raising the score!
Summary: The authors propose ProxSparse, a method for learning a semi-structured pruning mask using two regularisers: one analogous to l1 regularisation, and another that promotes a locality constraint for semi-structured pruning. Claims And Evidence: From my understanding, the main claim is that previous methods, which rely on computing the Hessian, do not take into account the information between layers, whereas ProxSparse, which uses a global heuristic, does. Methods And Evaluation Criteria: The main experiments are on zero-shot evaluations of the pruned models, which are trained using a small calibration dataset. Theoretical Claims: One of the main claims is that using a soft regularisation for structured pruning is effective for exploring a wider search space. Additionally, local heuristics are ill-suited for pruning LLMs. Experimental Designs Or Analyses: The experimental design seems valid and follows previous works. Supplementary Material: The supplementary covers proofs of convergence which appear to be technically correct. Relation To Broader Scientific Literature: This has broad applications in improving the efficiency of deployed LLMs across many areas. Essential References Not Discussed: None that I am aware of. Other Strengths And Weaknesses: Why do the authors not consider other pruning ratios, i.e. not just 2:4 sparse? One of the main claims is that a Hessian-based heuristic is local/layer-wise. This is not clear to me. When using Hessian/sensitivity-based pruning, the sensitivity of each weight takes into account the downstream loss [1]. Small concerns: Theorem 4. Why do the authors anonymise this citation? This is not done for any of the other citations, and after following the reference, it is clear that this may be the authors of this submission. Similarly, why are the authors using an anonymous model family for some experiments? Is this just for reviewing purposes? It is very odd to me.
[1] Pruning Convolutional Neural Networks for Resource Efficient Inference. ICLR 2017 Other Comments Or Suggestions: None Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful comments! Below, we address the questions raised, including **sparsity pattern selection, Hessian-based pruning, anonymized citations, and the anonymized model family**. # W1: `The focus on 2:4 pruning sparsity`: 2:4 pruning is the most practical semi-structured sparsity pattern, and the only pattern currently supported by commercial hardware We thank the reviewer for the consideration! In [Appendix G](https://openreview.net/pdf?id=zkxe5vASi8#page=13) in our paper, we have more discussion on the practical scenarios for the 2:4 sparsity pattern and the extensibility of ProxSparse. We focus on the 2:4 sparsity pattern in this paper because it is the most practical semi-structured format and the only one currently supported by commercial hardware. To the best of our knowledge, existing hardware such as NVIDIA Ampere GPUs only supports 2:4 sparsity [1]. ProxSparse aligns directly with this hardware feature, making it readily applicable to real-world use cases. # W2: `A misunderstanding on why Hessian-based methods are layer-local`: Our claim is that localized pruning (i.e., with the Hessian) hinders pruning quality. Meanwhile, the heavy Hessian computation makes global optimization hard. We thank the reviewer for raising this great point! We would first like to clarify our claim: we argue that previous layer-wise pruning methods utilizing Hessian metrics [2] or per-output importance scores [3] fail to select masks well because of localized information constraints. Our method enables an end-to-end pruning mechanism that yields well-informed pruning decisions. In the meantime, while it is true, as mentioned in [4], that the Hessian can be evaluated through the global loss, we note that this is impractical when pruning LLMs. Even in [4], which focuses on smaller CNN models, the authors report that using the Hessian incurs a 30x inefficiency, leading to huge overhead.
For an LLM with an enormous parameter count (~billions), it is even harder to calculate the Hessian, let alone compute it round by round in end-to-end optimization. As further evidence, SparseGPT specifically highlights the computational burden of Hessian estimation. To overcome this, it proposes a Fast Approximate Reconstruction method to approximate the Hessian more efficiently, yet it is still limited to layer-wise pruning. In contrast, our proposed end-to-end optimization scheme delivers better performance compared with those layer-wise pruning methods. # W3: `Anonymous citation`: a private communication; we will update it upon acceptance The anonymous citation is a private communication currently under a double-blind submission policy. We have provided detailed discussions of it in Section 3 and confirm that we will update it upon acceptance. # W4: `Anonymous model family`: We anonymized some models (Anon.model-1,2,3) due to IP constraints, and we believe the 7 models presented have broad coverage, with ProxSparse exhibiting consistent trends. We appreciate the reviewer's understanding! We didn't reveal the anonymous model family's name due to internal policy constraints. However, we note that these anonymized models are among the most competitive LLMs, as evidenced by good PPL and accuracy in our benchmarks. We hope these models serve as additional data points to further support the effectiveness of our method. Meanwhile, we believe our experiments offer broad coverage of top-performing models, including the Mistral, OpenLlama and Qwen families, plus the anonymous model family. The consistent results highlight the superiority of our method, which is robust and widely applicable to top-tier LLMs. # Thanks again!
[1] [NVIDIA AMPERE GA102 GPU ARCHITECTURE](https://www.nvidia.com/content/PDF/nvidia-ampere-ga-102-gpu-architecture-whitepaper-v2.pdf#page=27) [2]SparseGPT: Massive Language Models Can be Accurately Pruned in One-Shot [3]A SIMPLE AND EFFECTIVE PRUNING APPROACH FOR LARGE LANGUAGE MODELS [4]Pruning Convolutional Neural Networks for Resource Efficient Inference. ICLR 2017 --- Rebuttal Comment 1.1: Comment: The authors have addressed all my concerns. I would encourage adding these comments (motivation for 2:4 pruning) and the limitations of Hessian pruning for LLMs to the introduction of the paper. After reading through the other reviewer comments I maintain my original score - which is leaning towards acceptance. --- Reply to Comment 1.1.1: Comment: ## Thank You for the Recognition! We are glad to hear the reviewer's acknowledgement that we have addressed all the concerns! These suggestions are more than valuable and helpful for enhancing our work, and we will integrate this discussion into the updated draft. We would like to express our gratitude again for the insightful comments and for recognizing our paper.
Summary: This work introduces ProxSparse, a learning-based framework for mask selection via regularized optimization. The key design is a sparsity regularization $Reg_{2:4}$ that enforces 2:4 sparsity and a weight regularization $Reg_{W_0}$ that avoids significant differences between the tuned parameters and the original parameters. The authors validate their approach through experiments on seven LLMs, demonstrating significant performance improvements over state-of-the-art pruning baselines such as SparseGPT and Wanda. ## Update After Rebuttal Thanks for all the clarifications. I will keep my initial & positive score for this submission. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes Theoretical Claims: N/A Experimental Designs Or Analyses: N/A Supplementary Material: Yes - The full supplementary material. Relation To Broader Scientific Literature: The paper primarily focuses on a learnable approach for achieving N:M sparsity through regularization. The proposed technique is conceptually related to existing methods such as Lasso, SparseGPT, and Wanda, sharing similarities in its sparsification strategy. Essential References Not Discussed: N/A Other Strengths And Weaknesses: ## Strengths 1. Unlike prior methods that rely on local heuristic-based mask selection, ProxSparse employs an end-to-end differentiable optimization that considers global feedback, leading to more effective and stable pruning results. 2. The method achieves effective mask selection with only ~100 calibration samples, which makes the proposed method very practical. 3. Across seven LLMs, ProxSparse consistently outperforms existing pruning baselines like SparseGPT and Wanda in both PPL and zero-shot accuracy. For example, on Mistral-v0.1-7b, ProxSparse improves PPL from 9.43 to 8.92. ## Weaknesses 1. The experimental section is not entirely convincing. First, it is unusual that the paper does not report results on LLaMA-1/2/3, which are crucial base models in benchmarks such as Wanda.
Additionally, the selected SOTA baselines appear somewhat outdated, as several recent methods, such as [1], have demonstrated comparable or superior performance. For instance, [1] also achieves a ~1.00 improvement in PPL. It would be beneficial if the authors could include a fair comparison with different methods. 2. If my understanding is correct, the proposed method updates the parameters of the LLMs, implicitly influenced by regularization and end-to-end optimization. To ensure a fair comparison, it would be beneficial to include additional experiments where the remaining weights in the SparseGPT, Magnitude, and Wanda models are fine-tuned on, for example, 400 samples. 3. Although the paper claims that MaskLLM is resource-intensive, it remains unclear how significant the gap is between the proposed method, ProxSparse, and MaskLLM on LLaMA-2. Providing a direct comparison of PPL and consumed tokens would help clarify the relative efficiency and effectiveness of ProxSparse. [1] Fast and optimal weight update for pruned large language models. Other Comments Or Suggestions: N/A Questions For Authors: Please see the weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for recognizing the effectiveness and practicality of ProxSparse! Below, we address the questions raised regarding **Llama, ADMMPrune comparisons, clarification on weight updates, and the MaskLLM comparison**. # W1: `Lack of Llama results`: We anonymized Anon.model-1,2,3 due to IP constraints, and we believe the 7 models presented have broad coverage, with ProxSparse exhibiting consistent trends. We didn't reveal the anonymous model family's name due to internal policy constraints. However, we note that these anonymized models are among the most competitive LLMs, as evidenced by good PPL and accuracy in our benchmarks. We hope these models serve as additional data points to further support the effectiveness of our method. We appreciate the reviewer's understanding! Meanwhile, we believe our experiments offer broad coverage of top-performing models, including the Mistral, OpenLlama and Qwen families, plus the anonymous model family. The consistent results highlight the superiority of our method, which is robust and widely applicable to top-tier LLMs. # W2: `Comparison with ADMMPrune`: we achieve better results! We appreciate the reviewer's note on the newer baseline ADMMPrune [1]! We benchmarked ADMMPrune against ProxSparse on Anon.Model-1 and the Mistral model, using the same evaluation settings discussed in the paper. As shown, ProxSparse outperforms ADMMPrune on both models, achieving lower PPL (8.51 vs. 9.67) and higher accuracy (47.6% vs. 45.5%), highlighting its effectiveness. We attribute the superiority of ProxSparse to its end-to-end optimization process, which goes beyond relying solely on local layer-wise information. We are happy to discuss ADMMPrune in our revised paper! In the meantime, we include more baseline (OWL and AlphaPrune) discussion as proposed by reviewer 2EPP. ProxSparse consistently achieves better performance against them.
Please kindly refer [here](https://openreview.net/forum?id=zkxe5vASi8&noteId=dogVmgWiVP) for more details!

| Anon.Model-1 | Weight Update | Wikitext PPL | ARC-C | ARC-E | SIQA | HellaSwag | OBQA | PIQA | TruthfulQA | Average |
|---|---|---|---|---|---|---|---|---|---|---|
| ADMMPrune | Yes | 9.67 | 0.328 | 0.653 | **0.413** | 0.440 | **0.248** | 0.714 | 0.302 | 0.442 |
| ProxSparse | **No** | **8.51** | **0.331** | **0.656** | 0.407 | **0.478** | 0.242 | **0.716** | **0.328** | **0.452** |

| Mistral-v0.3-7b | Weight Update | Wikitext PPL | ARC-C | ARC-E | SIQA | HellaSwag | OBQA | PIQA | TruthfulQA | Average |
|---|---|---|---|---|---|---|---|---|---|---|
| ADMMPrune | Yes | 9.06 | 0.340 | 0.680 | 0.416 | 0.471 | 0.240 | 0.739 | 0.299 | 0.455 |
| ProxSparse | **No** | **8.68** | **0.362** | **0.697** | **0.429** | **0.525** | **0.242** | **0.751** | **0.321** | **0.476** |

Table 1: Comparison between ADMMPrune and ProxSparse on Mistral-v0.3-7b and Anon.Model-1. ProxSparse consistently achieves better performance.

# W3: `Clarification on weight update`: our method does NOT update the unpruned LLM parameters. ProxSparse is a learned method that identifies the optimal mask without further weight updates on the retained weights. In other words, the retained weights after applying the ProxSparse-selected mask remain identical to their initialization, similar to Wanda and magnitude pruning. In ProxSparse, the end-to-end optimization with calibration data is solely used to determine the mask. Even when compared to SparseGPT and ADMMPrune—which update weights after pruning—ProxSparse consistently achieves higher performance, underscoring its effectiveness in identifying high-quality pruning masks. # W4: `Comparison with MaskLLM on effectiveness`: ProxSparse consumes 25x fewer tokens than MaskLLM. We are happy to provide a more intuitive and direct comparison between MaskLLM and ProxSparse.
As discussed in our paper, MaskLLM employs a fundamentally different design, using Gumbel Softmax sampling to learn the mask. We view MaskLLM as complementary to ours, as it focuses on large sample regimes, whereas ProxSparse operates effectively with a much smaller sample size. This makes ProxSparse more practical and accessible in the LLM era. Here we present a direct comparison between MaskLLM and ProxSparse on Anon.Model-2. In terms of consumed tokens, ProxSparse achieves a PPL of 8.51 with 1638400 tokens, outperforming Wanda (11.42), SparseGPT (10.298) and MaskLLM (11). For MaskLLM to achieve a comparable PPL, it consumes 40960000 tokens, 25x more than ProxSparse. This, in general, demonstrates the superiority of our pruning method with small-scale calibration data, like ADMMPrune, Wanda and SparseGPT. # Thanks again! [1] ADMMPrune: Fast and optimal weight update for pruned large language models --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response. Most of my concerns (W2–W4) have been addressed. However, regarding W1, one question remains: Is the proposed method still superior to SparseGPT when applied to LLaMA-2? This is an important point, as LLaMA-2 has become a widely adopted benchmark with numerous well-established results. For instance, as reported in the Wanda paper, the PPL on LLaMA-2 changes from 5.12 to 10.17. These numbers are reliable since they can be reproduced easily by follow-up papers. Therefore, it would be helpful if the authors could provide results on LLaMA-2 as well. This would make the results on other models more convincing. --- Reply to Comment 1.1.1: Comment: ## Thanks for the recognition! We are encouraged to hear the reviewer's acknowledgement that we have addressed most concerns (W2-W4)!
## Response to the remaining question: The superiority of ProxSparse is consistent across different models; here we copy and restate the Anon.Model-1 results from our paper in Table 1 below to demonstrate this and address the reviewer's concern. In this experiment, ProxSparse again outperforms both Wanda and SparseGPT. Specifically, in our evaluation, SparseGPT achieved a PPL of 10.3 and Wanda achieved 11.42. These results align with those reported previously [1,2]. In comparison, our proposed ProxSparse achieved a PPL of 8.51, delivering better performance than both baselines, with similar improvements also observed in the QA tasks. We hope these results further demonstrate the strength of ProxSparse and help make our method more convincing!

| Method | Weight Update | Wikitext PPL | ARC-C | ARC-E | SIQA | HellaSwag | OBQA | PIQA | TruthfulQA | AVG |
|---|---|---|---|---|---|---|---|---|---|---|
| Anon.Model-1 | - | *5.12* | 0.433 | 0.763 | 0.461 | 0.571 | 0.314 | 0.781 | 0.321 | 0.521 |
| magnitude | No | 54.74 | 0.301 | 0.618 | 0.411 | 0.454 | 0.216 | 0.701 | 0.322 | 0.432 |
| SparseGPT | Yes | 10.30 | 0.326 | 0.655 | **0.412** | 0.435 | 0.246 | 0.713 | 0.304 | 0.441 |
| Wanda | No | 11.42 | 0.311 | 0.623 | 0.403 | 0.413 | **0.248** | 0.706 | 0.305 | 0.430 |
| ProxSparse | No | **8.51** | **0.331** | **0.656** | 0.407 | **0.478** | 0.242 | **0.716** | **0.328** | **0.452** |

Table 1: Comparison between baselines and ProxSparse on Anon.Model-1. ProxSparse consistently achieves better performance.

In the meantime, we fully understand the reviewer's point and we apologize again for the IP restrictions we are facing, but we believe these results spanning multiple model families are robust and clearly demonstrate the effectiveness of our method.
### Code open source for reproducibility and evaluation by future works

At the same time, we do hope our work will be followed up, evaluated, and compared against by future work in the community. To support this, we will release our code upon acceptance, helping the community reproduce our results and advance this line of research.

## Thanks again!

We hope the above justifications are helpful in assessing our method! If there are any further suggestions or concerns, please don't hesitate to comment and let us know. We sincerely thank the reviewer once again for the thoughtful consideration!

[1] MaskLLM: Learnable Semi-Structured Sparsity for Large Language Models
[2] A Simple and Effective Pruning Approach for Large Language Models
Point-Level Topological Representation Learning on Point Clouds
Accept (poster)
Summary: The paper proposes to extract point-level features given the global structure of the point cloud, using concepts from algebraic topology and differential geometry. Claims And Evidence: The proposed method can compute point-level topological features conditioned on the global topological structures of the point cloud. The proposed method outperforms other methods in downstream tasks. It also achieves provably meaningful representations, and is robust to noise. The design of the module is grounded in a strong theoretical foundation and explained clearly. The qualitative results and experiments verify the effectiveness of the proposed method. Methods And Evaluation Criteria: The proposed method is novel and grounded in theoretical foundations. What I found interesting and important is that the proposed module does not require training. Theoretical Claims: The theoretical claims make sense to me, but the detailed math is not carefully checked. Experimental Designs Or Analyses: The experiments and the visualization are good. I wonder whether it is possible to evaluate on more diverse tasks, like ModelNet40 classification, ShapeNet segmentation, and S3DIS segmentation, as many point cloud papers do. Supplementary Material: Supplementary material is briefly skimmed. The code in the supplementary material looks very well organized and documented. Relation To Broader Scientific Literature: This work seems to be very useful in point cloud segmentation tasks, because it can extract pointwise features conditioned on global information. I would be very interested in seeing the performance of the proposed method on a point cloud segmentation task. Essential References Not Discussed: No Other Strengths And Weaknesses: No Other Comments Or Suggestions: As mentioned in the previous sections, I found the proposed method interesting and strongly grounded; showing experiments on point cloud segmentation would make the paper much stronger. 
Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your review and your valuable feedback! > The experiments and the visualization are good. I wonder whether it is possible to evaluate on more diverse tasks, like ModelNet40 classification, ShapeNet segmentation, and S3DIS segmentation, like many point cloud papers evaluate. Thank you for your comment! The point clouds in datasets like ModelNet40 or ShapeNet have none to very few topological features, while the neural architectures learn to extract some different form of features. However, TOPF was specifically designed to extract the topological features from homology. It might be interesting to extend the ideas from TOPF to geometric information as well in future work, but this will require many new ideas and is out of scope for the current paper. This is the reason why we have introduced the novel Topological Clustering Benchmark Suite to benchmark TOPF.
Summary: The paper introduces TOPF (Topological Point Features), a method for extracting point-level topological features from point clouds using tools from algebraic topology and differential geometry. The authors propose leveraging persistent homology and harmonic representatives from the Hodge Laplacian to relate global topological structures to local point features. Experiments on synthetic and real-world datasets demonstrate that TOPF outperforms existing methods in clustering tasks, exhibits robustness to noise and heterogeneous sampling, and scales to high-dimensional data. A new topological clustering benchmark suite is introduced to evaluate performance. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: I'm not an expert in this field, so it took me a lot of time to understand the meaning of each theoretical claim. I did not notice any obvious errors in the proofs. Experimental Designs Or Analyses: The experimental design is sound. Supplementary Material: I have checked the appendix and the provided example code. Relation To Broader Scientific Literature: The work builds on persistent homology [1] and harmonic representatives [1], extending TPCC (Grande & Schaub, 2023a) by reducing computational cost and improving robustness. [1] Edelsbrunner H, Harer J. Persistent homology-a survey[J]. Contemporary mathematics, 2008, 453(26): 257-282. [2] De Silva V, Vejdemo-Johansson M. Persistent cohomology and circular coordinates[C]//Proceedings of the twenty-fifth annual symposium on Computational geometry. 2009: 227-236. [3] Grande V P, Schaub M T. Topological point cloud clustering[C]//Proceedings of the 40th International Conference on Machine Learning. 2023: 11683-11697. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strength: This paper introduces a novel benchmark suite for topological clustering. The paper is well-written, with comprehensive theoretical analysis and visualizations to support the hypothesis. 
Weakness: It would be better to analyze the runtime and computational complexity of persistent homology for different point cloud sizes. Other Comments Or Suggestions: See the comment above Questions For Authors: See the comment above Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you very much for your careful review and feedback! > Weakness: It could be better to analyze the runtime of the computational complexity of persistent homology regarding different point cloud sizes. Thank you for this suggestion! We analysed the computational complexity in appendix E.2 in detail, broken up into the different steps of the TOPF pipeline. Furthermore, we provided numerical experiments for runtime while increasing the size of the point cloud up to 40,000 points in Figure 11. Finally, we provide the number of points in the point clouds of the Topological Clustering Benchmark Set in Figure 8 in the appendix and provide the mean runtime of TOPF in Table 1. We hope to have addressed your concerns and want to thank you once again for your review!
Summary: The paper presents a method (TOPF) to extract point-level topological features of point clouds, i.e., to assign to each point in the cloud a feature vector that encodes to which generators of homology it contributes. Topological features are thereby computed across all scales using persistent homology on a (Vietoris-Rips or alpha) filtered simplicial complex built on the point cloud, and then the 'correct' scale of interest is selected by a heuristic. To identify which simplices of the complex generate which topological features, Hodge theory is used, which ensures the existence of unique harmonic representatives of (co)homology classes. In the last step, the topological features of the simplices in the complex are converted to topological features of the points in the underlying point cloud by an averaging procedure. Empirically, the generated features are evaluated on a clustering task and their robustness against noise, downsampling and the addition of outliers is studied.

## Update after rebuttal

I thank the authors for their rebuttal and the additional explanations. Unfortunately, the main issue I identified, i.e., a limited empirical evaluation, see Claims and Evidence (i), was not addressed. I will therefore keep my score.

Claims And Evidence: The paper makes the following claims (in the 'contributions' paragraph):

1. "*TOPF (i) outperforms other methods and embeddings for clustering downstream tasks on topologically structured data.*"
2. "*it returns provably meaningful representations*"
3. "*it is robust to noise and heterogeneous sampling*"

Regarding 1. This is true for the clustering tasks studied in the empirical evaluation. However, the underlying data is introduced with this work and specifically designed such that the method can shine. That is, the only signal in the data is the topological information, see Figure 8. Therefore, the **claim (1) is not wrong, but the bar is set rather low.**

Regarding 2. 
This likely refers to Theorem C.1 in the appendix, which states that the method returns correct results when points are sampled uniformly from a sphere in $\mathbb R^d$. While it seems plausible that the method is correct under more general settings, ***provably meaningful representations* are shown only in this very simple setting**. Regarding 3. This is true for the addition of outliers and noise in Figure 6. The downsampling results in Figure 5 are mixed, however. The method outperforms the baselines to a large extent under moderate downsampling (factor 1--10) but fails (much more than the baselines) for larger factors. However, while the authors exaggerate when listing the contributions, in the remainder of the paper they are actually very nuanced and careful about the scope, limitations and underlying assumptions of their method. This gives a very positive and honest impression. Methods And Evaluation Criteria: The chosen evaluation criteria do make sense. That being said, the empirical evaluation is of too limited scope to verify the claims from above, as (aside from the qualitative evaluation in Figure 3) the generated features are only evaluated with respect to clustering on one dataset which was specifically designed so that the method can shine (see above). It is surprising that the generated features are not evaluated on different tasks, particularly as a central motivation of this work is that *"common machine learning applications like classification require point-level information and features"*. When reading this sentence in the abstract I was already expecting experiments which verify that the topological features computed by TOPF provide complementary information that is beneficial for these common machine learning tasks the authors had in mind. Regarding the clustering results in Table 1. 
I was wondering what performance classic clustering algorithms based on manifold distances would achieve on the datasets, e.g., PAM using distance estimates from Isomap (without the MDS step) or something similar. Theoretical Claims: Theoretical results (even the statements) are only presented in the supplementary material, which, according to the reviewing instructions, reviewers are "encouraged (but not required) to read". Due to the high reviewing load I decided not to thoroughly check these results. Experimental Designs Or Analyses: There are no issues apart from the ones already listed in §Methods And Evaluation Criteria. Supplementary Material: I read sections **A -- D** and **I** (but did not thoroughly check the proof in section **C**), but only skimmed Sections **E -- H**. Relation To Broader Scientific Literature: The paper is situated in the domain of topological data analysis and introduces a method to generate point-level features that encode topological information. The connection to machine learning consists in that these features can be used by machine learning models, which is done in the experiments in this paper. From my perspective, ICML might not be the best fit for the paper, but it still fits. Closely related is work by Grande & Schaub (Topological point cloud clustering, ICML 2023), who also cluster points based on topological features computed from the Hodge Laplacian and proceed in similar steps, i.e., building a complex, computing homology representatives using the Laplacian, aggregating information from the complex to the point cloud. In fact, the authors list several limitations of the former (TPCC) and state that they revamp their pipeline in a way that resolves them, but they do so only in the supplementary material. Given the close relation between these two works, a short description of TPCC should be added to the main part together with a list of changes done for TOPF and their effects. 
Essential References Not Discussed: I am unaware of missing related work. Other Strengths And Weaknesses: A major strength of this work (which regrettably did not fit into the previous points) is the proposed method itself. The way the point features are computed leads to interpretable features; the method appears to be easy to implement, computationally efficient and, overall, a good idea. In addition to the usefulness of the features for downstream learning tasks (which, as discussed above, should have been analyzed more extensively), they can also serve for visualization purposes, cf. Figures 2 & 3. Moreover, source code is provided. Other Comments Or Suggestions: Typos: In the caption of Figure 5, there is a missing ':' after 'Left'. There appears to be a typo in Step 3 regarding the multiple usage of $\epsilon$ around line 261 and one has to guess what is actually meant there. Questions For Authors: - When you describe the main ideas in section 2, you relate the kernel of the Laplacian to the homology groups. However, the Laplacian typically operates on **co**chains and thus, Hodge isomorphy typically holds between **co**homology and the space of harmonic forms. Would you elaborate on how you identify homology and the kernel of the Laplacian and how this relates to Step 3 in section 3? Without this information, I found the latter (which is a key component of the method) a bit confusing. (I am aware of your comparison with de Rham homology in Appendix B.2 but did not find the necessary information there). - Would you please compare your method to TPCC by Grande & Schaub. I am specifically curious why TOPF is able to outperform TPCC on some datasets (HalfCircles, Ellipses, ...) by a large extent but achieves similar performance on SphereInCircle. - Minor: Do you have an explanation or conjecture why TOPF performs rather weakly on the 4Circles+Grid data? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you very much for your detailed and thorough review. We will reply to the raised issues in as much detail as the character limit permits. **Claims And Evidence:** Thank you for your detailed feedback! While we still believe that the listed contributions are technically true, we now realise that parts might be misleading. We omit the “provably” from (ii). Regarding point (iii), we interpret this differently: When downscaling and using heterogeneous downsampling, from a certain degree onwards, there is no topological signal left in the data. The other algorithms detect simpler structural features and thus are not affected. However, without topological features, the performance of TOPF degrades, which is some form of sanity check. In Figure 16, Bottom left, we see that even a downsampling factor of 0.2 significantly deteriorates the topological signal. However, we will write "[...] robust to moderate noise [...]". > I was wondering what performance classic clustering algorithms based on manifold distances would achieve on the datasets, This is a good suggestion: PAM using Isomap distances has a mean ARI of 0.58 with 0.39/0.56/0.78/0.13/0.63/0.58/0.94 on the individual datasets. This puts PAM+ISOMAP between the performance of clustering algorithms directly on the points (kMeans mean ARI: 0.44) and TOPF (mean ARI: 0.86). We will add this to table 1. > a short description of TPCC should be added to the main part together with a list of changes done for TOPF and their effects. Thank you for this good suggestion, we will do this! **Comments and Suggestions:** Thank you very much, we have addressed your comments. **Questions for authors:** > [...] Would you elaborate on how you identify homology and the kernel of the Laplacian and how this relates to Step 3 in section 3? [...] This is a good question! Formally, the cochain space is the dual of the chain space. 
For finite-dimensional vector spaces with the standard inner product, there is a canonical isomorphism between the two spaces induced by sending a basis vector $v$ to the linear extension of the map sending $v$ to $1$. Thus, we can identify the chains and cochains. We have the boundary maps $B_i$ and the coboundary maps $B_i^T$. By the universal coefficient theorem, *real-valued* homology and cohomology are isomorphic, with $H_i=\ker B_i/\mathrm{im}\, B_{i+1}\cong H^i=\ker B^T_{i+1}/\mathrm{im}\, B_i^T$. Thus, using the canonical isomorphism to the dual vector space, elements of $\ker B_i$ are homology representatives and elements of $\ker B^T_{i+1}$ are cohomology representatives. $\ker L_i$ is the intersection of $\ker B_i$ and $\ker B^T_{i+1}$. Hence every element in $\ker L_i$ automatically is a homology representative. This gives us an explicit isomorphism between $\ker L_i$ and the (co-)homology. In step 3, we start with some arbitrary homology representative $r \in \ker B_i$. However, we want a harmonic representative $h \in \ker L_i = \ker B_i \cap \ker B^T_{i+1}$ for the same homology class, i.e. with $h = r - c$ for $c \in \mathrm{im}\, B_{i+1}$, the curl space quotiented out in homology. By the orthogonality of the Hodge decomposition, this $c$ is given by the projection of $r$ onto the curl space $\mathrm{im}\, B_{i+1}$. We will discuss this in step 3 of section 3 and in more detail in appendix B.2! > I am specifically curious why TOPF is able to outperform TPCC on some datasets (HalfCircles, Ellipses, ...) by a large extent but achieves similar performance on SphereInCircle. As you mentioned, we already give a brief comparison. The performance differences of TPCC and TOPF on different datasets come mainly down to how "difficult" and "robust" the topological structure encoded in the datasets is. In particular, 2Spheres2Circles and SphereInCircle are directly sampled from unions of manifolds without any noise and with sufficient sampling density. In comparison, in Ellipses and Spaceships, the topological features have shorter lifetimes and live at different scales. 
There is no single scale containing all features, and for most holes/voids, there is not even a scale without noisy holes of small persistence present in the filtration. TOPF can deal with these situations, whereas TPCC cannot. For the difference on HalvedCircle, we posit that this is due to the better feature aggregation of TOPF. TPCC requires performing subspace clustering on a harmonic edge embedding, which is both unstable and sensitive to parameters, and we think TPCC has no information with which to choose these parameters correctly in this setting. > Do you have an explanation or conjecture why TOPF performs rather weak on the 4Circles+Grid data? Figures 8 & 9 show the ground truth and TOPF clustering on 4Circles+Grid. In short, TOPF chooses suboptimal scales which are not well-enough connected. Figure 10 supports this interpretation: increasing the interpolation hyperparameter lambda improves TOPF performance on this dataset. We have used a fixed set of hyperparameters for all experiments on the TCBS for transparency. We will add an extended discussion of this to the paper.
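To make the harmonic-projection step described above concrete, here is a minimal NumPy sketch on a toy complex (two triangles sharing an edge, only one of them filled). The boundary matrices are hand-built for illustration; this is a generic sketch of the projection idea, not the authors' TOPF implementation:

```python
import numpy as np

# Toy complex: vertices {0,1,2,3}, oriented edges
# e0=(0,1), e1=(0,2), e2=(1,2), e3=(1,3), e4=(2,3);
# only the triangle (0,1,2) is filled.
B1 = np.array([  # vertex-by-edge boundary map
    [-1, -1,  0,  0,  0],
    [ 1,  0, -1, -1,  0],
    [ 0,  1,  1,  0, -1],
    [ 0,  0,  0,  1,  1],
], dtype=float)
B2 = np.array([[1.0], [-1.0], [1.0], [0.0], [0.0]])  # boundary of triangle (0,1,2)

# Arbitrary homology representative: the cycle 1 -> 2 -> 3 -> 1.
r = np.array([0.0, 0.0, 1.0, -1.0, 1.0])
assert np.allclose(B1 @ r, 0)  # r is a cycle

# Harmonic representative: subtract the least-squares projection
# of r onto the curl space im(B2).
c, *_ = np.linalg.lstsq(B2, r, rcond=None)
h = r - B2 @ c

# h lies in ker(B1) ∩ ker(B2^T) = ker(L1), i.e. it is harmonic.
assert np.allclose(B1 @ h, 0)
assert np.allclose(B2.T @ h, 0)
print(h)  # ≈ [-1/3, 1/3, 2/3, -1, 1]
```

The returned `h` equals `r` minus its orthogonal projection onto `im(B2)`, so it represents the same homology class as `r` while being the unique harmonic cycle in that class.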
Summary: Inspired by TDA, the authors proposed a point-level topological representation learning method for point cloud data analysis. Specifically, they introduced topological point features (TOPF) to extract point-level features from point clouds through discrete algebraic topology and differential geometry. The TOPF allows local feature extraction while retaining high-order geometric information. Experimental results show that TOPF outperforms existing methods in clustering and robustness tests, and is verified on synthetic data and real datasets. ### Update after rebuttal I maintain my rating. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. The author introduces some theoretical concepts of topology and persistent homology in detail. Experimental Designs Or Analyses: Yes. In Table 1, the authors compared TOPF clustering with other feature/clustering algorithms, showing some effectiveness. Supplementary Material: Yes. The author provides the concept of simplicial complexes and the continuous homology process based on different simplicial complexes in the supplementary materials. Finally, the topological features on the point cloud are shown. Relation To Broader Scientific Literature: The authors use TDA and geometric learning for point cloud analysis. They include current literature on persistent homology, Hodge Laplace operator, and topological clustering methods. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: 1. The authors propose a new approach based on persistent homology to link global topological features with point-level information. 2. This method is robust to noise and non-uniform sampling. 3. TOPF does not require a large amount of training data and is more interpretable and widely applicable. 
Weakness: 1. Although the authors analyzed TOPF from a theoretical perspective, I still think that the authors' TDA cannot actually be applied in the current field of point cloud analysis, and it is currently only at the theoretical-analysis stage. Its high complexity and time consumption make it difficult to apply in practice. 2. At the same time, the authors use simple point cloud datasets. Can you provide the results and time complexity on ModelNet40 and ScanObjectNN, or even experimental results in large-scale scenarios? 3. This method may not be applicable to very high dimensional point clouds, and the authors should explicitly discuss its limitations. 4. Is it possible to provide a detailed computational complexity analysis of TOPF? 5. Compared with existing neural network-based methods, can TOPF outperform them? Please provide relevant experiments and analysis. Other Comments Or Suggestions: 1. The legend of Figure 1 is too long, and it is recommended to split it into multiple sentences. 2. The resolution of Figure 3 needs to be improved to enhance readability. Questions For Authors: See weakness. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you very much for your feedback! We are happy about the many strengths of TOPF identified by you. We will now address your comments: > Weakness 1: Although the author analyzed TOPF from a theoretical perspective, I still think that the author's TDA cannot be actually applied in the current field of point cloud analysis, and it is currently only in the theoretical analysis stage. Its high complexity and time consumption make it difficult to apply in practice. Thank you for your comment! We applied TOPF on synthetic and a variety of real-world datasets and conducted experiments on large-scale point clouds in appendix E. Furthermore, persistent homology, which has similar runtimes, has been used successfully in the literature and many applications. We believe that this is convincing evidence that TOPF is a valuable scientific contribution. Because TOPF has the single goal of extracting topological information from homology, we do not claim it to be relevant in all point cloud tasks. (Indexing continues at weakness 4 in your review.) 4. Please refer to our rebuttal to reviewer yYZ5. Furthermore, we evaluate large-scale scenarios of point clouds with up to **40,000** points in E.2, showing that TOPF, particularly with the proposed landmark downsampling heuristic, is **feasible even on large point clouds**. 5. >This method may not be applicable to very high dimensional point clouds, Thank you for your comment! **TOPF works well on high-dimensional data sets**. In the paragraph “Embedding Space of Variational Autoencoders and High-dimensional spaces” of sec. 4, we apply TOPF successfully to high-dimensional latent spaces and directly to an **8478(!)-dimensional** image space. In Figure 4, we show a projection of the results of TOPF on a 24-dimensional point cloud. > and the authors should explicitly discuss its limitations. We agree with your comment: We already give a summary of the limitations of TOPF in sec. 5. 
We discuss the limitations in great detail in section I: Limitations in the appendix. 6. > Is it possible to provide a detailed computational complexity analysis of TOPF? Thank you for your comment! We discuss the computational complexity in detail in section E.2 in the appendix, split into the different steps of the algorithm, and conduct experiments regarding the scaling behaviour. 7. > Compared with existing neural network-based methods, can TOPF outperform them? Please provide relevant experiments and analysis. Thank you for your comment! **TOPF outperforms neural network architectures** on the task TOPF was designed for, namely extracting topological features from point clouds. We benchmark TOPF’s performance on the TCBS against the neural network-based methods PointNet and WSDesc, see Table 1. Furthermore, we have now added a benchmark against DGCNN (Wang et al.). TOPF outperforms these approaches (pretrained on large-scale part segmentation datasets) with a mean ARI of **0.86** for TOPF versus 0.44 (PointNet), 0.39 (WSDesc), and 0.64 (DGCNN). We discuss this in more detail in Section 4. We have pretrained the neural architectures on large-scale shape segmentation datasets. For a detailed experimental setup, see Appendix F: More Details on the experiments. > The legend of Figure 1 is too long, and it is recommended to split it into multiple sentences. Thank you for this suggestion! We intend the caption of Figure 1 to be self-contained and an explanation of the figure, thus we do not know how to shorten it further. Do you have any suggestions? We already broke down the caption into 10 short to medium-length sentences. Does this suffice for you? > The resolution of Figure 3 needs to be improved to enhance readability. We apologize, but we do not understand this comment. On our end, the figure has sufficient resolution and an associated file size of 1.5 MB. Could you further elaborate on your suggestion? Thank you again for your review! 
Based on your feedback, we believe the paper is in an even stronger state than before! We believe we have addressed the majority of your concerns and would be interested in hearing back from you!
Score-of-Mixture Training: One-Step Generative Model Training Made Simple via Score Estimation of Mixture Distributions
Accept (spotlight poster)
Summary: This paper proposes a framework for training one-step generative models, called ScoreMix. The proposed method is derived by minimizing the $\alpha$-skew Jensen-Shannon Divergence ($\alpha$-JSD) between the generated distribution $q_{\theta}$ from an implicit generative model and the data distribution $p$ (or the generated distribution from a pretrained diffusion model in the distillation setting). The gradient of the $\alpha$-JSD can be computed from the score of the mixture distribution of $q_{\theta}$ and $p$. Hence, training ScoreMix includes training this mixture score network $s_{\psi}$. ScoreMix demonstrates competitive performance on ImageNet 64x64 and CIFAR-10. Claims And Evidence: The claims regarding the methodology are generally well-supported, such as Prop 3.1, 3.2, and Cor 4.1. Methods And Evaluation Criteria: One claimed advantage of ScoreMix in the manuscript is the stable training. However, this claim is not explicitly evaluated through empirical experimental results. Currently, it is supported only by assertions, such as the adoption of Multiple Noise Levels (Line 172) and denoising score matching techniques (Line 422), without direct evidence. To substantiate this claim, the authors could consider various options, such as reporting the variance of scores or loss in Table 2 or Fig 2, or evaluating ScoreMix across diverse hyperparameters. Theoretical Claims: I reviewed the proofs of Prop 3.1, 3.2, A.1, and A.2 presented in Appendix A. Experimental Designs Or Analyses: I checked the validity of the experimental designs. Supplementary Material: I reviewed the proofs in Appendix A and the algorithms in Appendix C. 
Relation To Broader Scientific Literature: **Contributions** - Novel training framework based on $\alpha$-JSD - (Probably) stable training scheme motivated by DSM - Supports both scratch training and distillation from a pretrained diffusion model - Demonstrates competitive performance - In Prop 3.2, estimating the mixture score using the mixture for the score matching loss is interesting. - In Cor 4.1, the proposed method leads to a new way of training the discriminator in GAN. Essential References Not Discussed: No Other Strengths And Weaknesses: **Strength** - This paper suggests a novel method for both training a one-step generator from scratch and distilling a pretrained diffusion model. - The suggested approach is well-motivated by theoretical results. - In Prop 3.2, estimating the mixture score using the mixture for the score matching loss is interesting. - In Cor 4.1, the proposed method leads to a new way of training the discriminator in GAN. **Weakness** - Without the GAN regularizer, the performance of ScoreMIX is not competitive in Fig 2(b), which limits the novelty of the proposed method in Sec 3.1-3.4. - The proposed method relies on GAN regularization in scratch training (Eq. 11), which might incur training instability as in GANs. Also, the distillation method requires three networks, i.e., generator, score network, and discriminator, which increases complexity. - The proposed method requires initialization, such as warm-up training. Other Comments Or Suggestions: - Can we derive the relationship between the score of the mixture distribution $s_{\psi}$ between different values of $\alpha$ and utilize it as an additional regularizer? - Could you clarify the 'expensive regularizers' in Line 189? **Typo** - Fig 1a in Line 381 - Fig 1b in Line 392. Questions For Authors: Questions are included in other sections. Code Of Conduct: Affirmed. Overall Recommendation: 4
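The mixture-score relation underlying the method discussed above — the score of a mixture $\alpha p + (1-\alpha) q_{\theta}$ is a posterior-weighted combination of the component scores — is a generic identity that can be sanity-checked numerically. The snippet below verifies it for 1-D Gaussians; it is an illustrative sketch, not the paper's parameterization or training code:

```python
import numpy as np

def gauss_pdf(x, mu, sig):
    return np.exp(-0.5 * ((x - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))

def gauss_score(x, mu, sig):  # d/dx log N(x; mu, sig^2)
    return -(x - mu) / sig ** 2

alpha, x = 0.25, 0.7
p, q = gauss_pdf(x, 0.0, 1.0), gauss_pdf(x, 2.0, 1.5)
m = alpha * p + (1 - alpha) * q  # mixture density at x

# Mixture score as a density-ratio-weighted sum of component scores.
w = alpha * p / m  # posterior weight of the p-component
s_mix = w * gauss_score(x, 0.0, 1.0) + (1 - w) * gauss_score(x, 2.0, 1.5)

# Cross-check against a central finite difference of log m(x).
eps = 1e-6
m_hi = alpha * gauss_pdf(x + eps, 0.0, 1.0) + (1 - alpha) * gauss_pdf(x + eps, 2.0, 1.5)
m_lo = alpha * gauss_pdf(x - eps, 0.0, 1.0) + (1 - alpha) * gauss_pdf(x - eps, 2.0, 1.5)
assert abs(s_mix - (np.log(m_hi) - np.log(m_lo)) / (2 * eps)) < 1e-5
```

The weight $w = \alpha p(x) / (\alpha p(x) + (1-\alpha) q(x))$ depends on the density ratio between the two components, which is why estimating the mixture score implicitly captures density-ratio information.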
Rebuttal 1: Rebuttal: We appreciate the effort in reviewing our work and the helpful suggestions for improving the readability of our paper. Below, we provide clarifications on the identified weaknesses and responses to the questions. ### Clarifications on Weaknesses * `On stability of ScoreMix training`: We appreciate the reviewer’s suggestion to include training curves to further demonstrate the claimed stability of ScoreMix. While we included the FID curve over training iterations in Figure 2, we agree that plotting additional statistics would better illustrate training stability. In Figure D of this [anonymized link](https://docs.google.com/document/d/e/2PACX-1vTw0koYOvxXorKuyTBw3C4FPnB_tX6IE6sYZRUnuFhQfy2ixqVt5fDk5GjnDFlcInYUBy83hNKe10gT/pub), we have plotted an example training trajectory from our best ImageNet model, showing training losses and gradient norms. This further illustrates the stability during training. We will also include results for different parameter settings in the revision. * `On the role of the GAN regularizer`: We acknowledge that the GAN regularizer significantly helps accelerate convergence and improve FID. However, we note that we did not train the version without the GAN regularizer long enough to reach full convergence. To clarify whether the GAN regularizer is essential for achieving SOTA FID or primarily helps speed up convergence, we will conduct a training run on CIFAR10 without the GAN regularizer until convergence and report the result. * `On the stability and complexity of the GAN regularizer`: Similar to DMD2, our GAN discriminator is built on top of the score network, with only a few additional MLP layers. This score-model-dependent design allows the full model to benefit from the training stability provided by denoising score matching, while the GAN discriminator loss only trains the small auxiliary MLP. (For ImageNet, the generator has 296M parameters and the discriminator has 18M.) 
Thus, the discriminator represents a small fraction of the overall model size and has a negligible impact on training speed. As a result, our use of the GAN regularizer is both efficient and stable. ### Answers to Questions * `Additional regularization using consistency between scores of mixtures?`: We note that the score of a mixture distribution can be expressed as a weighted sum of the scores of the true and fake distributions, as shown in Eq. (12), where the weight is determined by the density ratio and $\alpha$. We leverage this relation in our distillation scenario, which is in the same spirit as the reviewer’s suggestion regarding consistency. However, for training from scratch, the relation between the scores for different $\alpha$ values is more implicit. It would be very interesting if a similar consistency regularization could be achieved in this context, and we agree that this is a promising direction for future work. * `On expensive regularizers`: We apologize for any confusion caused by our insufficient explanation of the term "expensive regularizers." We provide a clarification below and will address this point clearly in the revision. * One example of an expensive regularizer is *the regression loss* used in the DMD paper. To address mode collapse in the DMD framework without any regularization, the authors simulate the reverse process of a diffusion model and sample several thousand noise-image pairs to anchor the generator’s outputs. Each noise-image pair requires evaluating the diffusion denoiser 256 times for ImageNet 64×64, which is extremely costly in practice. Moreover, the cost of collecting this regression dataset scales poorly with dimensionality. * Another example of costly training is the approach in the CTM paper. To ensure consistency between random points along the PF ODE trajectory, the reverse diffusion sampler must be run for an arbitrary number of steps per minibatch, which also results in high computational cost. 
Here we note that we mistakenly referred to this expensive procedure as a "regularization" technique, and we will correct this terminology in the final revision. * In contrast, our method does not require sampling from a diffusion model. It relies only on score estimation of mixture distributions, making it significantly more efficient. --- We thank the reviewer again for their insightful questions and helpful feedback. We will incorporate all of the above points in our revision, along with the proposed additional ablation studies. --- Rebuttal Comment 1.1: Comment: Initially, I submitted an official comment that was not visible to the authors, so I am reposting it here: I appreciate the authors for the response and the additional Figure D, which supports the stable training dynamics of the proposed method. I am happy to raise my score from 3 to 4.
Summary: This paper proposes a generalization of the KL-minimization procedure for learning one-step generators from score-based models. The authors introduce an "$\alpha$-skew Jensen–Shannon divergence", which interpolates between the KL divergence and the reversed KL divergence. They propose two settings: one for training from scratch, the other for training with a pre-trained diffusion model. They add a GAN-based regularization strategy. Finally, they demonstrate the interest of their method with several experiments on image generation. Claims And Evidence: The paper mainly claims that minimizing the proposed $\alpha$-skew Jensen–Shannon divergence leads to better one-step generative models than minimizing the KL divergence, which is current practice in generative modelling. This is mainly evaluated in an ablation study on CIFAR-10 (Figure 1b), where the authors compare training with $\alpha=\{0,1\}$ and with random $\alpha$. I would also appreciate an ablation with standard KL or reversed KL divergence minimization in this ablation study. Methods And Evaluation Criteria: Method: The method is sound, well-formulated, and elegant. Evaluation criteria: The evaluation criterion is the FID on image generation benchmarks. The authors could consider relying on other types of metrics. Mainly, my biggest concern is the interaction between GAN losses and the FID. It has been shown that the FID is biased by adversarial losses (see "Exposing flaws of generative model evaluation metrics and their unfair treatment of diffusion models", Stein et al., 2023). However, the ablation study shows that the improvement in FID does not only come from adversarial losses. Theoretical Claims: The theoretical claims mostly extend known results to $\alpha$-skew JSD. I did not check the proofs in detail. Experimental Designs Or Analyses: I checked the experimental design and analyses. As mentioned above, my biggest concern is with the use of the GAN regularization.
Indeed, GAN regularization is not used in most other methods. It is thus a bit unfair to compare ScoreMix with other methods, since the other methods would also benefit from an adversarial loss. It could be interesting to include models trained without GAN regularization in Table 2. This would make it possible to assess the superiority of ScoreMix over other methods for one-step generator training. Moreover, it would be interesting to compare the proposed GAN regularization with a standard GAN regularization to assess its interest. Provided improvements on these crucial points, I would be willing to raise my score to Accept. Supplementary Material: I read the supplementary material. Relation To Broader Scientific Literature: The proposed paper extends a widely used method for training one-step generators (KL-divergence minimization with score-based models). It proposes the minimization of the $\alpha$-skew JSD, which interpolates between the KL divergence and the reversed KL divergence. This is new and original to me. Essential References Not Discussed: Not that I know of. Other Strengths And Weaknesses: No Other Comments Or Suggestions: No Questions For Authors: See weaknesses raised above (in Claims and Evidence, and in Evaluation Criteria). Code Of Conduct: Affirmed. Overall Recommendation: 4
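For context, one common parameterization of the skewed Jensen–Shannon divergence (e.g., Huszár's generalized JSD; the paper's exact convention may differ) makes the claimed interpolation explicit:

```latex
\mathrm{JS}_{\alpha}\!\left(p \,\|\, q_{\theta}\right)
  = \alpha \,\mathrm{KL}\!\left(p \,\|\, m_{\alpha}\right)
  + (1-\alpha)\,\mathrm{KL}\!\left(q_{\theta} \,\|\, m_{\alpha}\right),
  \qquad m_{\alpha} = \alpha\, p + (1-\alpha)\, q_{\theta}
```

Up to the normalization $1/(\alpha(1-\alpha))$, the limits $\alpha \to 0$ and $\alpha \to 1$ recover the forward KL $\mathrm{KL}(p \,\|\, q_\theta)$ and the reverse KL $\mathrm{KL}(q_\theta \,\|\, p)$, respectively, which is why an ablation against plain KL or reverse-KL minimization is a natural request.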
Rebuttal 1: Rebuttal: We appreciate the reviewer’s effort in reviewing our paper and their constructive comments. We will address the raised concerns in our revision as follows. * `On FID evaluation and adversarial loss`: We appreciate the reviewer for the thoughtful comment on the limitation of FID evaluation and its interaction with adversarial losses (Stein et al., 2023). We will clearly explain these points in our revision. Here, we wish to clarify our standpoint. * While we acknowledge that FID is not a perfect metric as argued in the reference, it remains the most popular benchmark for measuring perceptual quality. Our ablation study shows that ScoreMix achieves stable and competitive performance even without the GAN regularization, indicating that the performance improvement is not solely due to adversarial losses. * Regarding fairness in comparison, we highlight that some recent distillation methods in Table 2, such as DMD2, use an adversarial regularizer. Similar to DMD2, our GAN discriminator is built on top of the score network, with only a few MLP layers. This score-model-dependent design allows the entire model to enjoy training stability driven by denoising score matching, while the GAN discriminator loss only trains the small additional MLP. (For ImageNet, the generator has 296M params and the discriminator has 18M params.) However, for other distillation or training-from-scratch methods such as consistency training or distillation, it is nontrivial to implement a GAN discriminator while assuring training stability. For example, the consistency model framework differs conceptually from traditional distribution matching approaches. While a GAN regularizer fits within this framework, its implementation requires more than a small discriminator like ours. Designing a full discriminator network (similar to those in StyleGAN or its XL variants) would be necessary, and this could involve substantial hyperparameter tuning and careful initialization.
Moreover, due to the inherent instability of the consistency training framework, introducing an adversarial loss could exacerbate these instabilities, given the well-known challenges of GAN training. Hence, adding GAN regularization to an existing method such as consistency training is an interesting research direction, but it is beyond our current scope. * `Comparison of the proposed GAN regularization with standard GAN regularization`: This will be an informative ablation study that can clarify the role of the skew divergence in the GAN regularizer. We will add an additional result for CIFAR10 with standard GAN regularization, i.e., only using $\alpha=\frac{1}{2}$ in the regularization. * `Additional ablation with standard KL or reversed KL divergence minimization`: We appreciate the reviewer’s suggestion. We will add a result only using the reverse KL divergence for CIFAR10 in the revision. In the meantime, we performed an additional experiment with a toy dataset, as shared in the global response. * `Toy experiment on Swiss roll dataset`: We ran a toy experiment on a 2D Swiss roll dataset, to demonstrate the effectiveness of our ScoreMix framework compared to the existing schemes for a simpler setting; see Figure A in this [anonymized link](https://docs.google.com/document/d/e/2PACX-1vTw0koYOvxXorKuyTBw3C4FPnB_tX6IE6sYZRUnuFhQfy2ixqVt5fDk5GjnDFlcInYUBy83hNKe10gT/pub). In particular, our results for training from scratch and distillation are presented in Figure A (d, f, g). All three methods successfully capture the modes of the underlying distribution. While the impact of the GAN regularizer is less pronounced than in our high-dimensional experiments, we observe that enabling it reduces the number of samples in low-density regions in (g). The distillation results in (d) appear slightly noisy, likely due to the quality of the pre-trained score model.
This highlights the advantage of training from scratch, as it avoids amplifying existing estimation errors in the pre-trained model. We will add this experiment to Appendix in our revision. We thank the reviewer again for their suggestions and comments, and will certainly incorporate all the points in our revision including additional ablation study results with CIFAR10. If these clarifications satisfactorily address the reviewer's concerns, we kindly ask if the reviewer would consider updating the score to reflect what we believe is a paper with noteworthy contributions to the community.
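As background for the requested KL vs. reverse-KL ablation: the two objectives generally behave differently because KL is asymmetric (forward KL penalizes missing modes, reverse KL penalizes spurious mass). A toy discrete example, with numbers chosen purely for illustration:

```python
import numpy as np

def kl(p, q):
    # KL(p || q) for discrete distributions with full support
    return float(np.sum(p * np.log(p / q)))

p = np.array([0.45, 0.10, 0.45])  # two well-separated modes
q = np.array([0.80, 0.15, 0.05])  # mass concentrated on one mode

forward_kl = kl(p, q)  # large: q puts little mass on p's second mode
reverse_kl = kl(q, p)  # smaller: q mostly stays where p has mass
```

The gap between the two directions on the same pair of distributions is one reason an ablation comparing forward KL, reverse KL, and the skewed JSD is informative.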
Summary: The paper presents ScoreMix, a new type of one-step generative model, trained using the $\alpha$-JSD, an $f$-divergence. ScoreMix can be trained from scratch and used for distillation. It achieves SOTA performance in the 1-NFE regime. The paper grounds the theoretical approach and performs extensive experiments/analyses to showcase its significance. Claims And Evidence: All the claims made in this paper are supported by clear and convincing evidence. Methods And Evaluation Criteria: Image generation is a testbed for evaluating generative models (especially for diffusion/flow models). The authors follow standard evaluation criteria that evaluate the method on the community-defined datasets (ImageNet64 & CIFAR10). Theoretical Claims: The reviewer has not carefully verified derivations line-by-line from the appendix. However, the reviewer believes the idea is straightforward and intuitively makes sense. Experimental Designs Or Analyses: Yes, I verified the soundness of the experimental designs and analyses, and they look good. Supplementary Material: Yes, I reviewed Sections B, C, and E. Relation To Broader Scientific Literature: This paper presents a new type of generative model that is 1-NFE, seems easy to train, and could be of interest to various communities such as audio, applied domains, and AI4Science. Essential References Not Discussed: Rectified flow models and their variations also fall under the 1NFE requirement. Hence, for a more comprehensive understanding, they should be included in Table 2. [1] "Flow Straight and Fast: Learning to Generate and Transfer Data with Rectified Flow" [2] "Improving the Training of Rectified Flows" [3] "One Step Diffusion via Shortcut Models" Other Strengths And Weaknesses: Strengths: - The paper has been written concisely and clearly.
- ScoreMix achieves SOTA performance on ImageNet64 and CIFAR10 while being trainable from scratch, and the paper includes an ablation on the $\alpha$ parameter that clearly showcases the advantage of the design choices made for ScoreMix. Weaknesses: Overall, the paper is in good shape, except for some minor improvements that could be made. - **Missing baselines:** Rectified Flow models and their variants, such as RF++ and ShortCut models, could be included as part of the main results in Table 2. - **Efficiency comparisons:** Although it is claimed that the training budget is smaller for ScoreMix, it is important to share the total GPU hours for each experiment and how this compares to at least some baselines. Otherwise, training the two models seems a slow and expensive process. - **Code release**: Authors claim to release the code to the public after the reviewing process; however, they have not even shared it as a part of supplementary material for reviewers. Hence, it is hard to verify the actual reproducibility. That said, I trust the authors will release it; hence, my current rating is also conditional. Other Comments Or Suggestions: - Many sentences (Lines 256, 387, 392, 409) contain a typo: they mention Fig. 1a instead of Fig. 2b. Questions For Authors: - Could the authors elaborate more on why $\alpha=0$ for 25% of the time during the score training (Lines 261-263)? - ScoreMix currently supports only the 1NFE approach. Can authors share some insights (if any) on how this could be extended to multi-step inference, as many downstream tasks such as editing and inverse problems might depend on it? - Is it possible to perform inversion at all? - What does the linear interpolation between two random input noises result in? Take two noises, then get interpolated noises in-between, and create a gif or image collage of corresponding generations. This could shed some light on the latent space of ScoreMix. Code Of Conduct: Affirmed. Overall Recommendation: 4
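On the interpolation question: for Gaussian noise inputs, spherical interpolation (slerp) is usually preferred over linear interpolation, since linear mixtures of Gaussian samples fall off the typical-norm shell in high dimensions. A minimal sketch of the standard slerp formula (generic, not code from the paper):

```python
import numpy as np

def slerp(t, z0, z1):
    """Spherical interpolation between two noise vectors at parameter t in [0, 1]."""
    cos_omega = np.dot(z0, z1) / (np.linalg.norm(z0) * np.linalg.norm(z1))
    omega = np.arccos(np.clip(cos_omega, -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return (1 - t) * z0 + t * z1  # fall back to lerp for (near-)parallel inputs
    return (np.sin((1 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

rng = np.random.default_rng(0)
z0 = rng.standard_normal(64)
z1 = rng.standard_normal(64)
midpoint = slerp(0.5, z0, z1)  # intermediate noise to feed the one-step generator
```

For unit-norm endpoints, slerp keeps every intermediate point on the unit sphere, which is the property that tends to make the corresponding generations look natural.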
Rebuttal 1: Rebuttal: We appreciate the effort in reviewing our manuscript and providing constructive comments. We will incorporate all the feedback in our revision to improve the manuscript. ### On Weaknesses * `Missing reference and baselines`: We thank the reviewer for pointing out the missing references and baselines (rectified flow models and their variants). We will certainly add and discuss these in our revision to better clarify and contextualize our contribution. * `Efficiency comparisons`: We thank the reviewer for the suggestion. We will include GPU hours for our experiments in the revision to highlight the efficiency of our framework. To obtain the best FID result for ImageNet reported in Table 2, for example, it took 80 hours using 7 × A100 GPUs (200k iterations with an overall batch size of 280). As a comparison, both consistency training (CT) and improved consistency training (iCT) used 800k iterations with a much larger batch size of 2048. They do not provide the number of GPU hours, making a direct comparison difficult. We do note that ECT is able to *finetune* a pre-trained diffusion model on 4 × H100 GPUs within 8.5 hours. However, this consistency model is initialized with a pre-trained EDM2 diffusion model that was trained on 32 × A100 GPUs with a batch size of 2048 for a total of 2500 million images. Again, the EDM2 paper does not report the number of GPU hours, but its predecessor, EDM, reports a total training time of 2 weeks with a similar computational budget on ImageNet 64×64, which alone exceeds our entire training-from-scratch effort. This shows that our proposed method can train a one-step model from scratch with a far smaller computational budget in terms of number of GPUs, training iterations, and overall batch size. Moreover, the two-model update is not more expensive than existing SOTA methods. * `Code release`: As promised, we will release our codebase and model weights upon acceptance for reproducibility.
* `Typo`: We appreciate the reviewer’s careful reading of our paper. We will correct the typos and thoroughly check the manuscript. ### Answers to Questions * `Why α=0 for 25% of the time during the score training?`: This is due to the nature of the gradient of the skew JSD in Eq. (4). In that expression, the score of the mixture distribution with $\alpha=0$ is *always* used, which implies its particular importance and the need for accurate estimation. We will clarify this point in our revision to avoid any confusion. * `Extension to multi-step inference?`: This is a great question, and we agree that having a multi-step refinement feature would be of great interest. One potential direction is to develop a hybrid method that combines consistency models with our approach. We leave this for future work. * `Invertibility?`: Our current manuscript focuses on developing a high-quality, efficient sampling scheme. Whether our model can be inverted is indeed an interesting question from a representation learning perspective, and we leave this for future investigation. * `Linear interpolation?`: Thanks for the helpful suggestion. We conducted the interpolation experiment for CIFAR10 and ImageNet and uploaded the results at this [anonymized link](https://docs.google.com/document/d/e/2PACX-1vTw0koYOvxXorKuyTBw3C4FPnB_tX6IE6sYZRUnuFhQfy2ixqVt5fDk5GjnDFlcInYUBy83hNKe10gT/pub); see Figures B and C. Similar to GANs and consistency models, we found that “spherical interpolation” leads to natural interpolations. --- Rebuttal Comment 1.1: Comment: I thank the authors for providing a detailed response and clarifications. Diffusion/flow/consistency models have been explored to solve inverse problems (inpainting, deblurring, super-resolution, etc.) [1-3]. Do authors believe that ScoreMix can help in such problems, especially because 1NFE inference could improve the convergence rate? **Overall, I like the current draft and rebuttal. 
Happy to maintain my current score.** [1] Ben-Hamu et al., "D-Flow: Differentiating through Flows for Controlled Generation" [2] Chung et al., "Diffusion Posterior Sampling for General Noisy Inverse Problems" [3] Patel et. al., "Steering Rectified Flow Models in the Vector Field for Controlled Image Generation" --- Reply to Comment 1.1.1: Comment: Thank you once again to the reviewer for their insightful feedback and comments. We apologize for the delayed response to your most recent question. We agree that the ScoreMix framework holds promise for solving inverse problems, and this is a key focus of our ongoing and future research. In works like DPS [2], the primary quantity of interest is the posterior expectation $p(\mathbf{x} | \mathbf{y})$, or more formally, its score. While DPS approximates this quantity by leveraging a pre-trained denoiser, recent studies [4, 5, 6] have explored using distribution matching frameworks based on distillation to train a posterior score model, along with a generator capable of sampling from this posterior distribution in just one NFE. We believe that our ScoreMix-distillation framework can be similarly adapted to address this problem. To the best of our knowledge, there are very few, if any, frameworks that allow for training a 1NFE posterior sampler from scratch. A naive extension of our training-from-scratch approach to minimize the skewed Jensen-Shannon divergence between the true posterior $p(\mathbf{x} | \mathbf{y})$ and the fake posterior $q_\theta(\mathbf{x} | \mathbf{y})$ results in a tractable gradient for updating the generator. However, the amortized score estimation loss would need to be augmented to allow for computing expectations over $p(\mathbf{x} | \mathbf{y})$. This represents an intriguing extension, and we believe that developing a solution to it could provide new insights into training the amortized score model. [4] Mammadov, Abbas, Hyungjin Chung, and Jong Chul Ye. 
"Amortized Posterior Sampling with Diffusion Prior Distillation." arXiv preprint arXiv:2407.17907 (2024). [5] Wu, Zihui, et al. "Principled probabilistic imaging using diffusion models as plug-and-play priors." Advances in Neural Information Processing Systems 37 (2024): 118389-118427. [6] Lee, Sojin, et al. "Diffusion prior-based amortized variational inference for noisy inverse problems." European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2024.
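As background for the posterior-sampling discussion in the reply above: in DPS-style methods the posterior score splits by Bayes' rule, with the intractable likelihood term approximated through the denoised estimate (generic notation, not the paper's):

```latex
\nabla_{x_t} \log p(x_t \mid y)
  = \nabla_{x_t} \log p(x_t) + \nabla_{x_t} \log p(y \mid x_t)
  \approx s_{\phi}(x_t, t) + \nabla_{x_t} \log p\big(y \mid \hat{x}_0(x_t)\big)
```

Here $s_{\phi}$ is a pre-trained score network and $\hat{x}_0(x_t)$ is the posterior-mean denoised estimate. A one-NFE posterior sampler, as sketched in the reply, would instead amortize sampling from $p(x \mid y)$ into a conditional generator.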
Surrogate Prompt Learning: Towards Efficient and Diverse Prompt Learning for Vision-Language Models
Accept (poster)
Summary: This paper presents a novel Surrogate Prompt Learning (SurPL) framework for vision-language models (VLMs). SurPL aims to achieve efficient and diverse prompt learning by replacing explicit diverse prompt learning with a Surrogate Feature Generator (SFG) that generates diverse text features without requiring complex gradient computations. This approach significantly reduces the computational burden while maintaining competitive performance. The authors provide extensive experimental validation across multiple benchmarks, showing that SurPL improves both efficiency and performance compared to existing prompt learning methods. The paper also introduces SurPL-G, an extension designed to improve generalization through Sharpness-aware Minimization (SAM). Claims And Evidence: Efficiency Improvement Claim: SurPL reduces computational costs by avoiding gradient-based optimization for diverse prompts. Evidence: Tables 1 and 2 compare SurPL with diverse and single-prompt learning baselines, showing reduced GPU memory usage and faster training/testing times. Performance Enhancement Claim: SurPL achieves state-of-the-art accuracy while being computationally efficient. Evidence: Tables 4 and 5 demonstrate that SurPL outperforms existing diverse and single-prompt methods in few-shot learning and generalization. Flexibility in Prompt Learning Claim: SurPL generalizes to multiple diverse prompt learning strategies. Evidence: The authors show how SurPL can generate instance-dependent and fine-grained prompt representations. Generalization with SAM (SurPL-G) Claim: SurPL-G enhances generalization using SAM. Evidence: Table 5 shows that SurPL-G achieves superior performance on base-to-novel generalization. Methods And Evaluation Criteria: Methods and Evaluation Criteria The paper employs rigorous evaluation methodologies: Benchmarks: 15 widely used datasets (e.g., ImageNet, EuroSAT, UCF101). 
Comparison with state-of-the-art methods: Covers both single and diverse prompt learning approaches. Ablation studies: Justifies the effectiveness of individual components (e.g., instance-dependent and fine-grained surrogate prompts). Efficiency analysis: Demonstrates reduced GPU memory and computational overhead. Potential Issues: The paper does not explore the impact of different types of conditional signals on surrogate text feature generation. While the SAM optimization improves generalization, the effect of different perturbation radii is not analyzed in detail. Theoretical Claims: The paper does not introduce significant new theoretical contributions but relies on well-established concepts like contrastive learning, attention mechanisms, and SAM. The derivations of loss functions (Equations 1-11) appear correct but lack deeper theoretical justification on why surrogate features retain meaningful semantic information. Experimental Designs Or Analyses: Strengths: The experimental setup is comprehensive and includes efficiency, accuracy, and generalization evaluations. The comparisons are fair, using the same baseline (Dense Visual-Language Prompt, DVLP). Ablation studies effectively demonstrate the impact of different components. Weaknesses: There is no qualitative analysis of how surrogate text features differ from learned diverse prompts. The paper does not discuss potential failure cases (e.g., when SurPL might underperform compared to explicit prompt learning). Hyperparameter sensitivity analysis is missing. Supplementary Material:
Relation To Broader Scientific Literature: The paper builds on prior work in: Prompt Learning: CoOp (Zhou et al., 2022), CoCoOp (Zhou et al., 2022b), PSRC (Khattak et al., 2023b). Vision-Language Models: CLIP (Radford et al., 2021), ALIGN (Jia et al., 2021). Generalization Techniques: Sharpness-aware Minimization (SAM) (Foret et al., 2021). The work is well-motivated and aligns with the ongoing trend of efficient fine-tuning methods for large pre-trained models. Essential References Not Discussed: The paper could benefit from a discussion on meta-learning approaches for efficient adaptation, such as: - Ha, Hyeonmin, et al. "Meta-Learning of Prompt Generation for Lightweight Prompt Engineering on Language-Model-as-a-Service." Findings of the Association for Computational Linguistics: EMNLP 2023. 2023. Other Strengths And Weaknesses: Strengths Addresses an important limitation (high computation cost of diverse prompt learning). Provides a strong empirical validation across various benchmarks. The proposed method is flexible and compatible with existing VLMs. Weaknesses The theoretical foundations of surrogate feature generation are not well explored. No qualitative analysis (e.g., feature visualization) to show how surrogate features compare to explicitly learned prompts. The limitations section is brief and does not discuss potential failure modes. Other Comments Or Suggestions: Feature Interpretability: Provide visualization of the generated surrogate features. Robustness Analysis: Evaluate the sensitivity of SurPL to different choices of conditional signals. Ablation on Surrogate Feature Generator: Analyze how reducing SFG complexity affects performance. Questions For Authors: How does SurPL handle domain shifts compared to explicit diverse prompt learning? Can the surrogate feature generator be trained separately from the main model? Does SurPL introduce additional latency during inference? How sensitive is SurPL to the choice of conditional signals? 
Have you considered using knowledge distillation techniques to refine surrogate feature generation? Code Of Conduct: Affirmed. Overall Recommendation: 3
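To make the surrogate-generation idea discussed in this review concrete, here is a toy numpy sketch of an SFG-like module in which the basic prompted text feature $w$ cross-attends over a two-token memory built from $w$ and a conditional signal $\alpha$. The weight shapes, the two-token memory, and all names are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def sfg(w, alpha, W_q, W_k, W_v, W_o):
    """Toy surrogate feature generator: cross-attention from the basic
    prompted text feature w over the memory [w, alpha]."""
    memory = np.stack([w, alpha])               # (2, d)
    q = W_q @ w                                 # (d,) query from basic feature
    K = memory @ W_k.T                          # (2, d)
    V = memory @ W_v.T                          # (2, d)
    weights = softmax(K @ q / np.sqrt(len(w)))  # (2,) attention over memory tokens
    return W_o @ (weights @ V)                  # (d,) surrogate text feature h

d = 8
rng = np.random.default_rng(0)
w = rng.standard_normal(d)      # basic prompted text feature
alpha = rng.standard_normal(d)  # conditional signal (e.g. an instance feature)
W_q, W_k, W_v, W_o = (0.1 * rng.standard_normal((d, d)) for _ in range(4))
h = sfg(w, alpha, W_q, W_k, W_v, W_o)
```

Per the summary above, $h$ would stand in for an explicitly learned diverse prompted feature, so only this small generator, rather than prompts propagated through the full text encoder, needs gradients.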
Rebuttal 1: Rebuttal: >Q1: Lack deeper theoretical justification on why surrogate features retain meaningful semantic information. A1: Thanks for the comments. We provide a theoretical analysis based on the universal approximation theorem. Due to character limits, please refer to the Q&A 1 of Reviewer jTad for detailed analysis. >Q2: No qualitative comparison between surrogate and explicitly learned prompts. A2: Thanks for the insightful comments. We provide direct visualization map comparisons between surrogate and explicit FG prompts (due to the computation resource limit, we can’t afford to reproduce explicit ID prompt learning). As shown in [explicit-versus-surrogate.png](https://postimg.cc/sQGnBFP5), heatmaps generated from explicit and surrogate prompts focus on the same regions of the image, particularly at large FG scales ($Z=3,4$). For small FG scales ($Z=1,2$), while the heatmaps show minor differences, they still consistently focus on the target object. These comparison results indicate the effectiveness of SFG. >Q3: No discussion about the failure cases of SurPL. A3: We first provide the direct comparison between explicit FG prompt learning and surrogate FG prompt learning on 9 datasets (except ImageNet and SUN397 due to the computational resource limit). The averaged results indicate surrogate features don’t exhibit significant failures compared to explicit features. | Explicit | Surrogate | |:--------:|:---------:| | 86.79 | 86.70 | We further compare the generalization ability between explicit and surrogate prompts on 10 datasets (excluding ImageNet). The results indicate that surrogate prompts initially exhibit weaker generalization ability. We attribute it to the lightweight architecture of SFG. Generating features from SFG may overfit more easily. However, through our proposed SurPL-G, we effectively address this limitation and achieve remarkable performance. 
| | Base | Novel | HM | |:-------------:|:-----:|:-----:|:-----:| | Explicit | 86.76 | 73.15 | 79.38 | | Surrogate | 87.11 | 72.13 | 78.91 | | Surrogate+SAM | 87.13 | 76.90 | 81.70 | Finally, we compare SurPL with the state-of-the-art work GalLoP. While GalLoP achieves marginally superior results on certain datasets (e.g., ImageNet), we attribute this advantage to its multiple global prompt learning and dropout strategy. This strategy induces diversity through randomization, enhancing the performance. We will further explore how to incorporate such a strategy into our SurPL. >Q4: Hyperparameter sensitivity analysis is missing. A4: We provide a hyperparameter analysis in Q&A 2 of Reviewer jTad. The results indicate that the hyperparameters applied are reasonable and relatively stable. >Q5: Analyze how reducing SFG complexity affects performance. A5: We appreciate this insightful suggestion. Due to the time limit, we only explore this trade-off by reducing the dimension $d_{mid}$ of $\theta _ {fc1}$ and $\theta _ {fc2}$ in SFG. The results indicate that accuracy remains relatively stable despite parameter reduction. This inspires us to further reduce the parameters of the projection layers $\theta _ {in1}$, $\theta _ {in2}$, $\theta _ {in3}$ and $\theta _ {out}$. | $d_{mid}$ | Param(M) | Acc | |:-------:|:--------:|:-----:| | 256 | 1.31 | 85.12 | | 128 | 1.18 | 85.00 | | 64 | 1.11 | 84.99 | | 32 | 1.08 | 85.00 | >Q6: How does SurPL handle domain shifts compared to explicit diverse prompt learning? A6: Thanks for the question. Domain shift is tightly associated with a model’s generalization ability. Due to the computation resource limit, we can’t reproduce explicit prompt learning on the cross-domain experiment (based on ImageNet). We alternatively conduct the comparison of explicit and surrogate prompt learning on the base-to-novel setting. Please refer to Q&A 3 for detailed analysis. >Q7: How sensitive is SurPL to the choice of conditional signals? A7: Thanks for the question.
The conditional signals are pre-defined according to the existing diverse prompt learning methods. Please refer to Q&A 4 of Reviewer Zno2 for details. >Q8: Can SFG be trained separately from the main model? Consider knowledge distillation techniques to refine surrogate feature generation. A8: Thanks for the valuable suggestion. At this stage, SFG cannot be trained separately since it is intrinsically a fine-tuning approach and builds upon $w$. We will continue this research to explore the idea of applying pre-trained models and knowledge distillation to replace the cross-attention module for SFG. >Q9: Does SurPL introduce additional latency during inference? A9: No significant latency is added compared with the baseline DVLP. Please refer to Table 2 for details. >Q10: Effect of $\rho$. A10: We explore the effect of $\rho$ on the cross-dataset setting. The results indicate a seen-unseen trade-off. | | Seen | Unseen | |:---:|:-----:|:------:| | 0.1 | 74.2 | 65.13 | | 0.2 | 73.33 | 66.61 | | 0.3 | 72.74 | 66.74 |
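For reference, the $\rho$ studied in A10 is SAM's perturbation radius. A minimal numpy sketch of one sharpness-aware gradient evaluation on a toy quadratic loss (illustrative of SAM itself, not the paper's training loop):

```python
import numpy as np

def sam_gradient(w, grad_fn, rho):
    """Sharpness-aware gradient: evaluate the gradient at the adversarially
    perturbed point w + rho * g / ||g|| (Foret et al., 2021)."""
    g = grad_fn(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # ascent step of radius rho
    return grad_fn(w + eps)

# toy loss L(w) = 0.5 * ||w||^2, whose gradient is simply w
w = np.array([3.0, 4.0])
g_sam = sam_gradient(w, lambda v: v, rho=0.5)  # perturbation eps = [0.3, 0.4]
```

Larger $\rho$ seeks flatter minima more aggressively, which is consistent with the seen-unseen trade-off reported in the table above.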
Summary: This paper introduces Surrogate Prompt Learning to address efficiency issues in diverse prompt learning for vision-language models (VLMs). SurPL leverages a lightweight Surrogate Feature Generator (SFG) to directly generate diverse prompted text features from a single basic prompt, avoiding the computational overhead of conventional approaches. Experiments across 15 datasets show SurPL's effectiveness. ## update after rebuttal The authors have adequately addressed my concerns, thanks! Claims And Evidence: The paper provides empirical validation through experiments on 15 different datasets across multiple settings. The efficiency claims are backed by direct comparisons of GPU memory usage, training time, and testing time. The effectiveness of the Surrogate Feature Generator is demonstrated through ablation studies, and the visualizations in Figures 3 and 4 provide qualitative evidence of the surrogate features' ability to capture relevant visual information. Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria are appropriate for the problem. The authors address the efficiency bottleneck in diverse prompt learning through a surrogate generation approach, which directly targets the computational overhead issue. The comparison metrics are relevant for assessing both performance and efficiency. The authors also appropriately evaluate their method against both single prompt learning approaches and diverse prompt learning approaches to demonstrate comprehensive improvements. Theoretical Claims: The paper does not contain formal mathematical proofs for theoretical claims. It presents algorithmic formulations and describes the approach using mathematical notation, but doesn't make rigorous theoretical claims requiring proof. Experimental Designs Or Analyses: Some potential improvements to the experimental analysis could include: 1.
The paper doesn't provide much theoretical justification for why surrogate features effectively replace the original prompted features, relying instead on empirical results. 2. Analysis of performance sensitivity to hyperparameter choices, particularly for different loss components, the number of fine-grained text features and multi-scale constant factor. 3. While the paper shows improved efficiency and competitive performance, there's limited analysis of whether the surrogate features might be less effective for certain types of tasks or datasets. Supplementary Material: The supplementary material contains the algorithm's source code, additional ablation studies, and implementation details. Relation To Broader Scientific Literature: Related to prompt engineering in VLM. Essential References Not Discussed: Nil. Other Strengths And Weaknesses: Nil. Other Comments Or Suggestions: Nil. Questions For Authors: Nil. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal:

>Q1: The paper doesn't provide much theoretical justification for why surrogate features effectively replace the original prompted features, relying instead on empirical results.

A1: Thanks for the valuable comments. We provide a theoretical analysis to demonstrate the effectiveness of the surrogate features.

Notation. Visual instance $x$, basic text prompt $s$, explicit diverse text prompt $s_E$, conditional signal $\alpha \in \mathbb{R}^d$, basic prompted text feature $w=T([s,c])\in \mathbb{R}^d$, surrogate prompted text feature $h=\theta _ {SFG}(w,\alpha) \in \mathbb{R}^d$, original explicit prompted text feature $w_E=T([s_E,c])\in \mathbb{R}^d$.

Proof. To validate the effectiveness of replacing $w_E$ with $h$ in diverse prompt learning, we analyze their feature similarity through $||h-w_E||=||\theta _ {SFG}(w,\alpha)-w_E||$. Here, we focus on the proof for surrogate instance-dependent prompt learning, which imposes stricter constraints (in surrogate fine-grained prompt learning, $\alpha$ is also a learnable parameter, which directly simplifies the condition to $h=\theta _ {SFG,\alpha} (w)$). $w$, $\alpha$, and $w_E$ are bounded-dimensional representations within the vector space $\mathbb{R}^d$ (outputs of continuous neural network mappings), and their corresponding inputs $s$, $x$ and $s_E$ are related via $s_E=s+\alpha=s+V(x)$ in instance-dependent prompt learning (CoCoOp). Hence, there exists a continuous function $g$ such that $w_E = g(w, \alpha)$. According to the universal approximation theorem, for any continuous function $g(\cdot)$ and any $\epsilon>0$, there exists a neural network with sufficient capacity (here, we utilize the cross-attention module $\theta_{SFG}$, which has been proven to be a universal approximator$^\text{a}$) such that for all $(w,\alpha)$,
$$ || g(w,\alpha)-\theta_{SFG}(w,\alpha)||<\epsilon.
$$
By taking the approximation error $\epsilon \to 0$, we obtain $||h-w_E||\to 0$ at the optimal parameters $\theta_{SFG}^{\star}$, which confirms that $h$ effectively approximates and replaces $w_E$. This analysis validates the theoretical existence of $\theta_{SFG}^{\star}$, while extensive empirical studies in the manuscript further elaborate on its implementation and optimization, and substantiate its practical effectiveness.

a. Are Transformers universal approximators of sequence-to-sequence functions? ICLR 2020.

>Q2: Analysis of performance sensitivity to hyperparameter choices, particularly for different loss components, the number of fine-grained text features and the multi-scale constant factor.

A2: Thanks for the valuable suggestion. Here we carefully analyze the effect of the different hyperparameters. Due to the character limit, we only report the averaged results on 11 datasets.

Different loss components ($\lambda_1$ and $\lambda_2$): The results are quite stable under different loss coefficient settings. Since $\lambda_1=25$ and $\lambda_2=10$ achieve the best averaged performance, we choose them as the default setting.

| $\lambda_1=15$ | $\lambda_1=20$ | $\lambda_1=25$ | $\lambda_1=30$ | $\lambda_1=35$ |
|:------------:|:------------:|:------------:|:------------:|:------------:|
| 85.02 | 85.04 | 85.12 | 85.09 | 85.06 |

| $\lambda_2=5$ | $\lambda_2=10$ | $\lambda_2=15$ | $\lambda_2=20$ |
|:-----------:|:------------:|:------------:|:------------:|
| 84.93 | 85.12 | 85.12 | 85.09 |

The number of fine-grained text features $Z$: Comparing the results between $Z=0$ (no FG prompts applied) and the other settings validates that involving FG information can significantly boost the performance. We observe consistent performance gains when increasing $Z$ from 1 to 4, which indicates that applying sufficient FG prompts is necessary to capture comprehensive FG information.
However, further increasing $Z$ leads to performance degradation, which we attribute to the inclusion of ineffective information (e.g., background noise). Therefore, we select $Z=4$ as the default setting.

| $Z=0$ | $Z=1$ | $Z=2$ | $Z=3$ | $Z=4$ | $Z=5$ | $Z=6$ |
|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|
| 83.96 | 84.91 | 85.02 | 85.08 | 85.12 | 85.04 | 84.95 |

Multi-scale constant factor $\eta$: In this paper, we utilize a multi-scale strategy to obtain FG visual information at different scales. The results show that applying a relatively small $\eta$ (5 or 10) achieves better performance, since an over-large scale may introduce ineffective information (e.g., background noise), leading to a negative effect.

| $\eta=5$ | $\eta=10$ | $\eta=15$ | $\eta=20$ |
|:--------:|:---------:|:---------:|:---------:|
| 85.08 | 85.12 | 84.99 | 84.86 |

>Q3: Limited analysis of whether the surrogate features might be less effective for certain types of tasks or datasets.

A3: Thanks for the comments. Due to the character limits, please refer to Q&A 3 of Reviewer 4GML for a detailed analysis.
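To make the cross-attention mapping $\theta_{SFG}(w,\alpha)$ from A1 concrete, here is a minimal numpy sketch of a single-head cross-attention step. All shapes and projection matrices are illustrative assumptions, not the paper's implementation; in particular, treating the basic prompt as a short sequence of `L` token features (rather than a single pooled vector) is an assumption made so the attention weights are non-trivial.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sfg_cross_attention(w_tokens, alpha, Wq, Wk, Wv):
    """Toy SFG: conditional signals `alpha` (queries) attend over the
    basic prompted text representation `w_tokens` (keys/values) and
    return one surrogate prompted text feature per signal."""
    q = alpha @ Wq                                    # (Z, d)
    k = w_tokens @ Wk                                 # (L, d)
    v = w_tokens @ Wv                                 # (L, d)
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1]))    # (Z, L)
    return attn @ v                                   # (Z, d)

rng = np.random.default_rng(0)
d, Z, L = 16, 3, 4            # feature dim, conditional signals, prompt tokens
w_tokens = rng.standard_normal((L, d))
alpha = rng.standard_normal((Z, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
h = sfg_cross_attention(w_tokens, alpha, Wq, Wk, Wv)
print(h.shape)  # (3, 16)
```

Each conditional signal thus produces one surrogate feature from the same basic prompt, which is the mechanism the universal-approximation argument in A1 relies on.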
Summary: Prompt learning is an efficient fine-tuning technique that learns text prompts. Learning multiple text prompts instead of just one can improve performance while increasing computational cost. This paper proposes learning diverse text prompts without initializing additional parameters, by generating specific text prompts with a lightweight model to avoid overwhelming the gradient process. The performance on several classification benchmarks is better than that of the compared methods. Claims And Evidence: The claims in the submission are well-supported by clear evidence. Methods And Evaluation Criteria: Yes. Theoretical Claims: There are no theoretical claims for the proposed methods, such as how the fine-grained loss promotes global-minimum results through spatial feature consistency. The provided theoretical explanation for the optimization stages appears to be a simple regularization loss that mitigates overfitting, offering no new insights. Experimental Designs Or Analyses: The experimental designs and analyses are suitable and sufficient for this task. Supplementary Material: I confirm that I have read the entire supplementary material. Relation To Broader Scientific Literature: This paper proposes a more efficient approach to diverse text prompt learning for image classification transfer tasks. Essential References Not Discussed: Nothing to supplement. Other Strengths And Weaknesses: Strengths: 1. The method is simple and easy to follow. 2. The experiments are sufficient and show better speed compared to other diverse text prompt methods. Weaknesses: 1. How does the FG loss work with images that contain multiple classes? Would other instances in the image disturb the classification, since we only need to predict the most significant object? For instance, the author could provide heatmap visualization on such images. 2. The description in Figure 2 is not clear enough, such as how the ID conditional signals interact inside the Surrogate Feature Generator. 
Moreover, the structure on the right does not match Eq(4), where $w \in \mathbb{R}^{M\times d}$ and $\alpha \in \mathbb{R}^{Z\times d}$; then how does the output $h^{FG}$ become $Z\times M\times d$? If the conditional signals interact individually, you should improve the figure to show it, or it could be confusing to the reader. 3. Since the fine-grained module improves performance according to the ablation study, has the author tried to adapt this idea to existing prompt learning techniques? I wonder why there is no such usage in the current community. Other Comments Or Suggestions: None Questions For Authors: I am interested in the initialization of conditional signals: why is the ID signal derived from the visual feature while the FG signal is randomly initialized? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal:

>Q1: How does the FG loss work with images that contain multiple classes? Would other instances in the figure disturb the classification since we only need to predict the most significant object? For instance, the author could provide heatmap visualization on such images.

A1: Thanks for the insightful comments. Other instances will not disturb the classification process. In both diverse prompt learning and surrogate prompt learning, each input text description is composed of the text prompts and the classname. Classnames are pre-defined and remain fixed during optimization, thereby providing consistent class-level semantic guidance. To this end, every FG prompted text feature embeds information that is strongly associated with its corresponding class, which ensures that the classification procedure is not disturbed by other instances in the image. To further validate this point, we provide additional visualization results. We select images that contain at least two distinct object categories, and visualize the attention heat maps of different surrogate fine-grained text features on such images. The corresponding heat maps are shown in “[multiclass-heatmap.png](https://postimg.cc/sBK9jYJB)”, which demonstrates the effectiveness of FG prompted text features in our proposed method.

>Q2: The description in Figure 2 is not clear enough, such as how the ID conditional signals interact inside the Surrogate Feature Generator. Moreover, the structure on the right does not match Eq(4), where $w \in M\times d$, $\alpha \in Z\times d$; then how does the output $h^{FG}$ become $Z\times M\times d$? If the conditional signals interact individually, you should improve the figure to show it, or it could be confusing to the reader.

A2: Thanks for the valuable suggestion. We are sorry for the unclear description of Eq(4). The conditional signals interact individually with each basic prompted text feature $w_m$.
We rewrite Eq(4) as:
$$
h_m = \theta _ {SFG}(w_m,\alpha).
$$
This equation clearly states that for the basic prompted text feature corresponding to the $m$-th class, $w_m$, we generate $Z$ fine-grained surrogate prompted text features $h^{FG} _ {m} \in \mathbb{R} ^ {Z \times d}$. To this end, we have $h^{FG} \in \mathbb{R} ^ {Z\times M\times d}$ for all $M$ classes. We will correct the corresponding description in the next version of the manuscript. We also add the input and output notations of SFG in Fig.2 for a clearer description, as shown in “[Revised-Fig2.png](https://postimg.cc/JD1zLdt8)”.

>Q3: Since the fine-grained module improves performance according to the ablation study, has the author tried to adopt this idea to existing prompt learning techniques? I wonder why there is no such usage in the current community.

A3: Thanks for your comments. First, the idea of fine-grained prompt learning has been utilized in several VLM-based prompt learning methods, which we have already cited in the manuscript (e.g., Chen et al., 2023, Lafon et al., 2024). However, these methods suffer from huge computational complexity, hindering their practical application in real-world scenarios. Second, we claim that SurPL is adaptable and can be integrated to implement different diverse prompt learning ideas on top of any single prompt learning approach. To this end, we have tried to implement SurPL on different existing single prompt learning methods, including CoOp, MaPLe and PSRC. Specifically, we consider these methods as baselines and additionally introduce both surrogate instance-dependent and fine-grained text features for each method. The comparison results are reported in Appendix Sec.E Table.12, and demonstrate that utilizing both instance-dependent and fine-grained features can significantly improve the performance of existing single prompt learning methods.
To explicitly explore the effectiveness of the fine-grained idea, we further conduct the following experiment. We consider CoOp, MaPLe and PSRC as baseline approaches, and apply our method to generate only surrogate fine-grained features on top of each approach. We provide the comparison of average performances on 11 datasets below.

| | Method | Method+FG | $\Delta$ |
|-------|:------:|:---------:|:--------:|
| CoOp | 79.89 | 83.15 | 3.26 |
| MaPLe | 81.79 | 83.80 | 2.01 |
| PSRC | 82.87 | 83.33 | 0.46 |

>Q4: Why is the ID signal derived from the visual feature while the FG signal is randomly initialized?

A4: Thanks for the question. For existing ID prompt learning methods (such as CoCoOp), text prompts are not randomly initialized for optimization, but require visual information as prior knowledge. So, the ID signals are derived from the visual feature, thus providing visual guidance. For existing FG prompt learning methods (such as PLOT and GalLoP), text prompts are randomly initialized and are supposed to capture fine-grained information during optimization. Therefore, we also keep the FG signals randomly initialized.
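The per-class generation $h_m = \theta_{SFG}(w_m,\alpha)$ clarified in A2 can be sketched at the shape level as follows; the `tanh` combination is a hypothetical stand-in for the real cross-attention SFG, used only to fix the tensor shapes:

```python
import numpy as np

def theta_sfg(w_m, alpha):
    """Stand-in for the SFG: maps one basic class feature w_m (d,) plus
    Z conditional signals alpha (Z, d) to Z surrogate features (Z, d)."""
    return np.tanh(w_m[None, :] + alpha)   # (Z, d)

M, Z, d = 5, 4, 16                         # classes, FG signals, feature dim
rng = np.random.default_rng(0)
w = rng.standard_normal((M, d))            # basic prompted text features
alpha = rng.standard_normal((Z, d))        # shared FG conditional signals

# Each conditional signal interacts individually with each class feature
# w_m, yielding h^FG in R^{Z x M x d} as stated in the rebuttal.
h_fg = np.stack([theta_sfg(w[m], alpha) for m in range(M)], axis=1)
print(h_fg.shape)  # (4, 5, 16)
```

This makes explicit how a $(Z\times d)$ signal bank and $M$ class features combine into the $Z\times M\times d$ output that the reviewer asked about.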
Summary: This paper proposes Surrogate Prompt Learning (SurPL), a new approach to enhance the efficiency and diversity of prompt learning for VLMs. Instead of learning multiple diverse prompts, SurPL directly generates surrogate prompted text features through a lightweight Surrogate Feature Generator (SFG), reducing computational overhead while maintaining diversity. The key contributions of this paper include: 1) A novel method that efficiently generates diverse text features instead of learning multiple prompts. 2) A cross-attention-based module that produces instance-dependent and fine-grained surrogate text features. 3) Extensive tests on 15 vision classification datasets show that SurPL achieves comparable or superior performance to existing diverse prompt learning methods, while significantly improving computational efficiency. 4) SurPL-G, an extension of SurPL with Sharpness-aware Minimization (SAM), further enhances generalization across base-to-novel, cross-dataset, and cross-domain scenarios. Overall, this paper introduces an efficient and effective approach to prompt learning, addressing the computational cost issue while maintaining strong adaptation ability in VLMs. ## update after rebuttal I appreciate the author's efforts during the rebuttal period and am willing to maintain my original score. Claims And Evidence: The paper presents strong empirical results, but some claims need further support: 1. The claim that SurPL reduces complexity to O(M) is based on experiments only, without a formal proof. A theoretical complexity analysis would make this more convincing. 2. The surrogate feature generator (SFG) is effective, but there is no comparison with alternative feature generation methods. 3. SurPL is only tested on classification tasks, making its generalization to other vision-language tasks unclear. Methods And Evaluation Criteria: The proposed SurPL framework is well-motivated, and the SFG is a reasonable design. 
However, some aspects could be improved: 1) The method is only tested on classification tasks, limiting its applicability. Evaluating SurPL on VQA, object detection, or video understanding would better assess its generalization. 2) The paper compares SurPL with diverse prompt learning methods but lacks comparisons with non-prompt-based fine-tuning approaches (e.g., Adapters, LoRA). Adding such baselines would clarify whether prompt learning is the best approach for efficiency. 3) The claim of O(M) complexity is reasonable, but no detailed breakdown is provided. A formal complexity analysis would strengthen this argument. Theoretical Claims: The paper does not provide formal theoretical proofs for its main claims, particularly the O(M) computational complexity of SurPL. The argument is mainly supported by empirical results, which are convincing but not mathematically rigorous. Experimental Designs Or Analyses: The experimental setup is well-structured, but some aspects could be improved: 1) The experiments focus only on classification, limiting the assessment of SurPL’s generalization. Testing on VQA, object detection, or video understanding would provide a broader evaluation. 2) While the paper compares against diverse prompt learning methods, it lacks comparisons with non-prompt-based methods (e.g., Adapters, LoRA). Including these baselines would clarify whether prompt learning is the best approach. 3) Key parameters (e.g., Z, η) are fixed without sensitivity analysis. Investigating their impact would strengthen the robustness of conclusions. Overall, the experiments are well-executed, but broader evaluations and deeper analyses would improve their reliability. Supplementary Material: I reviewed the supplementary material and appendix, focusing on the codes, implementation details, and additional experiments. The material provides useful information. 
Relation To Broader Scientific Literature: The paper situates SurPL well within the literature on prompt learning for vision-language models, referencing key works on single and diverse prompt learning. However, there are some missing discussions: 1. The paper does not compare SurPL with Adapters, LoRA, or other efficient fine-tuning methods, which are widely used alternatives to prompt learning. 2. While the paper focuses on reducing computation, it does not connect with broader studies on efficient deep learning, such as low-rank adaptations, pruning, or quantization techniques. 3. Most cited works focus on classification tasks, but SurPL's relevance to VQA, object detection, or multimodal reasoning is not discussed. Essential References Not Discussed: The paper covers prompt learning literature well, but some essential references are missing (see previous section for details): 1) Comparison with efficient fine-tuning methods 2) Computational efficiency research 3) Prompt learning in broader tasks Other Strengths And Weaknesses: Strengths 1. The paper proposes an innovative method (SurPL) that improves prompt learning efficiency while maintaining diversity. The Surrogate Feature Generator (SFG) effectively reduces computational overhead, avoiding the high cost of traditional diverse prompt learning. 2. The experimental results are strong, tested on 15 datasets, demonstrating that SurPL significantly improves computational efficiency while achieving comparable or superior performance to existing state-of-the-art methods. 3. The paper is well-structured, with a clear motivation, well-designed experiments, and strong baseline comparisons. The methodology and experimental sections are logically presented and easy to follow. Weaknesses 1. The O(M) computational complexity claim lacks a formal derivation and is currently supported only by empirical results. A complexity analysis would enhance the theoretical rigor of the argument. 2. 
The method is tested only on classification tasks, making it unclear whether SurPL generalizes to VQA, object detection, and multimodal reasoning. Expanding the evaluation to broader tasks would verify its generalizability. 3. The paper does not compare SurPL with non-prompt-based fine-tuning methods such as Adapters, LoRA, and BitFit, which are widely used for efficient VLM adaptation. Including such comparisons would clarify whether SurPL is the most effective approach for efficient model adaptation. Other Comments Or Suggestions: The writing is generally clear, but a few areas could be improved for better readability and precision. Below are some minor suggestions: - The introduction could better emphasize the novelty of SurPL compared to previous diverse prompt learning methods. While the paper discusses existing works, a more explicit contrast with traditional prompt learning and fine-tuning methods would help highlight its contributions. - Some notations and explanations in the methodology section could be clarified. For example, the role of Z (fine-grained feature count) and η (multi-scale factor) is not well explained, and a brief intuitive description would improve clarity. - The figures are informative, but adding a computational flow diagram for SurPL compared to traditional prompt learning would make the efficiency argument clearer. - There are some minor typos and grammar inconsistencies, particularly in the experimental section. Careful proofreading would improve the overall presentation. - The appendix could include more details on hyperparameter settings, optimizer choices, and training dynamics to enhance reproducibility. Questions For Authors: 1. The paper claims that SurPL reduces complexity to O(M) compared to O(BM) or O(ZM), but there is no formal derivation. Could you provide a step-by-step complexity breakdown? 2. 
SurPL is compared against diverse prompt learning methods, but not with non-prompt-based adaptation techniques like Adapters, LoRA, or BitFit. Why were these omitted? 3. The paper focuses only on classification tasks. Have you tested SurPL on VQA, object detection, or multimodal reasoning? 4. Some hyperparameters, like Z (fine-grained feature count) and η (multi-scale factor), are fixed in the experiments. How sensitive is SurPL’s performance to these choices? 5. SFG generates surrogate features efficiently, but why was cross-attention chosen over other feature generation techniques (e.g., Transformer-based mechanisms)? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal:

>Q1: The O(M) computational complexity claim of SurPL lacks a formal derivation.

A1: Thanks for the valuable comments. We provide a theoretical analysis of the computational complexity here.

Notations. Loss $L$, classnames ${c}=(c_m) _ {m=1}^M$, text encoder parameters ${T}=({T}^{k}) _ {k=1}^K$, SFG parameters $\theta_{SFG}$, text prompt parameters $s=(s^k) _{k=1}^K$, encoder layer depth $K$; the parameter size of the SFG is much smaller than that of the text encoder, $|\theta _ {SFG}| \ll |T|$.

Derivation. Optimizing the text prompt $s^1$ at the text encoder input position requires computing the gradient through the entire model. For each single surrogate output text feature, the back-propagation computation of SurPL can be written as:
$$
\frac{\partial L}{\partial s^1}=\underbrace{\frac{\partial L}{\partial \theta _ {SFG}(T^K([s^{K},c _ m^K]);\alpha_j)}\cdot \frac{\partial \theta _ {SFG}(T^K([s^{K},c _ m^K]);\alpha _ j)}{\partial T^K([s^K,c _ m^K])}} _ {\text{SFG}}\cdot \underbrace{\frac{\partial T^K([s^K,c _ m^K])}{\partial T^{K-1}([{s}^{K-1},c _ m^{K-1}])}\cdots \frac{\partial {T}^{1}([s^1,c _ m^1])}{\partial s^1}} _ {\text{Text Encoder}}.
$$
This gradient computation can be conceptually divided into two parts, corresponding to the text encoder and the SFG. We denote the computational complexity for each output text feature as $O_T$ and $O_{SFG}$, respectively. As shown in Fig.1 (c), SurPL simultaneously involves $M$ text features back-propagating through the text encoder, which equals the complexity of single prompt learning: $O(M) = M \cdot O_T$. For the SFG stage, $(B+Z)M$ text features are involved in the computation. The overall complexity of SurPL can thus be expressed as:
$$
O_{SurPL} = M \cdot O _ T + (B+Z)M \cdot O _ {SFG}
$$
Since $|\theta_{SFG}| \ll |T|$, it follows that $O_{SFG} \ll O_T$. Therefore, the second term is negligible, and we can approximate $O _ {SurPL} \approx M \cdot O_T = O(M)$.

>Q2: Comparison with non-prompt-based PEFT methods. 
A2: Thanks for the suggestion. We further compare SurPL with Adapter-based (CLIP-Adapter (Gao et al., 2024), Tip-Adapter (Zhang et al., 2022), TaskRes (Yu et al., 2023)), BitFit-based (CLIPFit$^\text{a}$) and LoRA-based (CLIP-LoRA$^\text{b}$) methods. The averaged performances on 11 datasets are shown below, demonstrating the superiority of SurPL over other PEFT techniques. We will add more detailed results and analysis in the next version of the manuscript.

| CLIP-Adapter | Tip-Adapter | TaskRes | CLIPFit | CLIP-LoRA | DVLP | SurPL |
|:------------:|:-----------:|:-------:|:-------:|:---------:|:-----:|:-----:|
| 79.86 | 81.15 | 80.75 | 81.27 | 82.95 | 82.92 | 85.12 |

a. Vision-Language Model Fine-Tuning via Simple Parameter-Efficient Modification. EMNLP 2024.
b. Low-Rank Few-Shot Adaptation of Vision-Language Models. CVPRW 2024.

>Q3: Experiments focus only on classification. Testing on VQA, object detection, or video understanding would provide a broader evaluation.

A3: Thanks for your valuable suggestion. First, to the best of our knowledge, almost all studies of VLM-based PEFT techniques (e.g., prompts, adapters, LoRA, BitFit) mainly evaluate on classification tasks, and we simply follow these established protocols for fair comparison. Second, although some studies have applied PEFT techniques to other visual tasks, they tend to leverage existing methods rather than explore new approaches. Diverse prompt learning has not been well explored on other tasks, making it challenging to identify a suitable baseline for SurPL on such tasks. Additionally, exploring the effectiveness of diverse prompt learning on VQA and detection from scratch and then implementing SurPL requires significant time, making it difficult to include such results within the rebuttal period. However, we sincerely appreciate this insightful suggestion and will actively explore this direction in the near future.

>Q4: Hyperparameter sensitivity analysis is missing.

A4: Thanks for the comments. 
Due to the character limit, we provide a comprehensive hyperparameter analysis in Q&A 2 of Reviewer jTad. The results indicate that the hyperparameters applied in this work are reasonable and relatively stable. Please refer to that response for details.

>Q5: Why was cross-attention chosen as SFG over other feature generation techniques (e.g., Transformer-based mechanisms)?

A5: Thanks for the comments. SFG aims to take a basic prompted text feature and a conditional signal as input, and generate the surrogate prompted feature according to the given signal. A cross-attention module is the most intuitive choice for this requirement. While Transformer-based mechanisms can also achieve this, stacking multiple attention blocks would significantly decrease efficiency.

>Q6: Minor suggestions on writing.

A6: Thanks for the valuable suggestions. Due to the character limit, we are unable to provide detailed modifications in the rebuttal. We will revise them in the next version of the manuscript.
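The O(M) argument from A1 above can be illustrated with a back-of-envelope cost comparison. The per-feature costs `O_T` and `O_SFG` below are made-up numbers chosen only to satisfy the rebuttal's assumption that the SFG is far cheaper than the text encoder:

```python
# Illustrative FLOP budget comparing single-prompt, diverse-prompt, and
# surrogate-prompt back-propagation costs under assumed sizes.
M, B, Z = 100, 4, 4          # classes, instance-dependent and FG signals
O_T, O_SFG = 1e9, 1e6        # per-feature cost: text encoder vs. SFG

single_prompt = M * O_T                      # O(M)
diverse_prompt = (B + Z) * M * O_T           # O((B+Z)M): every feature
                                             # back-props through the encoder
surpl = M * O_T + (B + Z) * M * O_SFG        # encoder once per class + SFG

print(diverse_prompt / single_prompt)  # 8.0
print(surpl / single_prompt)           # ~1.008, effectively O(M)
```

The second term only matters if the SFG's per-feature cost approaches the encoder's, which the parameter-size argument rules out.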
Delta Decompression for MoE-based LLMs Compression
Accept (poster)
Summary: The paper presents D²-MoE, a new compression framework designed to tackle issues of parameter redundancy, memory usage, and storage inefficiency in MoE LLMs. D²-MoE enhances efficiency by breaking down expert weights into a shared base weight, which captures common features, and a delta weight that represents expert-specific differences. It employs truncation-aware Singular Value Decomposition to compress delta weights effectively and incorporates a semi-dynamic pruning strategy to remove redundancies while adapting to the input distribution. Experimental results show that D²-MoE outperforms existing methods like NAEE and MoE-I², maintaining high accuracy and low perplexity even at high compression rates. Claims And Evidence: The claims made in the paper are generally well-supported by experimental evidence, particularly those concerning the performance of the D²-MoE framework in achieving higher compression ratios and maintaining or improving accuracy compared to existing methods. The clear experiments demonstrating improvements in perplexity and other metrics provide credible evidence. Methods And Evaluation Criteria: The proposed methods for compressing MoE LLMs, including delta compression and semi-dynamic structured pruning, make sense for the problem at hand. They address well-known issues of redundancy and high memory usage in large-scale models. The evaluation criteria utilizing standard benchmark datasets (like WikiText-2, PTB, and C4) are robust, as they allow for meaningful performance comparisons against state-of-the-art methods. Theoretical Claims: The theoretical foundation requires further elaboration. The authors could delve deeper into the mathematical principles behind truncation-aware SVD. Experimental Designs Or Analyses: The experiments in the paper are relatively well organized, including different MoEs and different sparsities, as well as comparisons with different compression methods. Ablations are comprehensive and offer useful insights. 
Supplementary Material: Yes. I have carefully checked additional details about experiments and code in the supplementary material. I think this is detailed and supports reproducibility. Relation To Broader Scientific Literature: The paper builds on existing literature about MoE architectures and LLM compression techniques. Essential References Not Discussed: There might be relevant works on other sparsity of Transformers [1,2], which are recommended for citation and discussion. [1] Wang et al. Q-Sparse: All Large Language Models can be Fully Sparsely-Activated. CoRR abs/2407.10969 (2024). [2] Li et al. The Lazy Neuron Phenomenon: On Emergence of Activation Sparsity in Transformers. ICLR 2023. Other Strengths And Weaknesses: Pros: 1. D²-MoE's use of delta parameters in MoE merging is novel and insightful. 2. The performance results are compelling, demonstrating high efficiency and preservation of accuracy across different model scales. 3. The writing and organization are relatively clear, making complex concepts understandable. Cons: 1. The theoretical framework lacks sufficient depth, particularly in justifying certain methodologies implemented within D²-MoE. 2. The dynamic nature of reallocating delta weights might introduce additional complexity in implementation. Other Comments Or Suggestions: This paper would benefit from a deeper analysis of limitations. Questions For Authors: 1. Could the authors delve deeper into the mathematical principles behind truncation-aware SVD? 2. The authors could discuss the potential for combining D²-MoE with other advanced compression techniques, such as quantization or knowledge distillation. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: ### **Dear Reviewer 4qJS** Thank you for your insightful comments and for acknowledging the strengths of D²-MoE in terms of **efficiency, accuracy, and practical applicability**. Below, we address your concerns in detail. ------ **Q1: Motivation and Theoretical Justification of the D²-MoE Framework** **A1:** As illustrated by the CKA similarity analysis in Figure 1, MoE models ***exhibit substantial similarities among different experts.*** Motivated by this observation, we propose merging experts into a shared representation. During this merging process, we utilize Fisher-weighted merging to emphasize critical expert weights effectively. Furthermore, ***to retain task-specific details and enhance model performance***, we introduce a delta branch by applying Singular Value Decomposition (SVD) to the delta weights. ------ **Q2: The dynamic reallocation of delta weights might introduce additional implementation complexity—can this be analyzed further?** **A2:** (1) While delta weight reallocation **adds computational overhead**, Table 4 shows that **D²-MoE achieves up to 1.5× inference speedup** even with dynamic structured pruning. (2) Figure 2 demonstrates that **delta weights exhibit strong low-rank properties**, allowing for **efficient compression without excessive reallocation overhead**. (3) We plan to further refine caching or partial updates to reduce overhead, but for now the two-phase pruning strategy offers a favorable trade-off between complexity and improved compression. ------ **Q3: The theoretical foundation of truncation-aware SVD should be elaborated on.** **A3:** (1) As shown in Table 6, ***naively truncating singular values leads to performance degradation and model collapse***. In contrast, truncation-aware SVD significantly outperforms both vanilla and activation-aware SVD, achieving a lower WikiText-2 perplexity (**5.28**) compared to standard SVD (**6.22**). 
(2) Therefore, we leverage the activation Gram matrix to compute a scale matrix represented as $\text{Cholesky}(X \cdot X^T)$. By introducing this scale matrix, truncating fewer singular values allows us to preserve more essential information. (3) Why we represent the scale matrix as $\text{Cholesky}(X \cdot X^T)$ can be proven as follows: When the scaling matrix $S$ is the Cholesky decomposition of $X \cdot X^T$, we have $S \cdot S^T = X \cdot X^T$. Under this condition, the compression loss $L_i$ caused by truncating singular values equals precisely the singular value $\sigma_i$ itself. Consequently, truncating the smallest singular values results in minimal compression loss, theoretically justifying our use of the scale matrix defined as $\text{Cholesky}(X \cdot X^T)$. ------ **Q4: The discussion on combining D²-MoE with other compression techniques (e.g., quantization, knowledge distillation) should be expanded.** **A4:** (1) We integrate quantization to further reduce memory footprints in the following table, following approaches like GPTQ that already show how delta weights can be quantized effectively. Additionally, we apply the mixed-precision quantization method from [**MC-MoE**](https://arxiv.org/abs/2410.06270) to our D²-MoE.

***Table D²-MoE+quantization***

| Method | WikiText-2↓ | PTB↓ | C4↓ | Openb. | ARC_e | WinoG. | HellaS. | ARC_c | PIQA | MathQA | Average↑ |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| Mixtral-8x7B D²-MoE(25%) + GPTQ-4bit | 5.34 | 22.03 | 9.56 | 0.288 | 0.761 | 0.741 | 0.556 | 0.460 | 0.771 | 0.345 | 0.56 |
| Mixtral-8x7B GPTQ-3bit | 5.93 | 31.15 | 10.71 | 0.282 | 0.735 | 0.674 | 0.534 | 0.422 | 0.772 | 0.302 | 0.53 |
| Mixtral-8x7B D²-MoE(40%) + MC-MoE-4bit | 5.42 | 22.71 | 9.85 | 0.286 | 0.742 | 0.730 | 0.541 | 0.423 | 0.766 | 0.331 | 0.55 |

(2) Knowledge distillation can further enhance D²-MoE by **transferring knowledge from the full model to its compressed counterpart**, effectively preserving generalization capabilities. As demonstrated in the following table, applying advanced distillation methods indeed improves our approach's performance.

***Table D²-MoE+KD***

| Method | WikiText-2↓ | PTB↓ | C4↓ | Openb. | ARC_e | WinoG. | HellaS. | ARC_c | PIQA | MathQA | Average↑ |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| Mixtral-8x7B D²-MoE(20%) + KD | 4.31 | 15.40 | 9.78 | 0.328 | 0.805 | 0.738 | 0.631 | 0.526 | 0.807 | 0.391 | 0.60 |
| Mixtral-8x7B D²-MoE(40%) + KD | 4.69 | 21.61 | 10.74 | 0.318 | 0.792 | 0.726 | 0.603 | 0.506 | 0.795 | 0.354 | 0.58 |
| Mixtral-8x7B D²-MoE(60%) + KD | 5.35 | 33.06 | 12.21 | 0.292 | 0.753 | 0.701 | 0.565 | 0.434 | 0.768 | 0.318 | 0.55 |
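The truncation-aware SVD argument in A3 above (a scale matrix $S=\text{Cholesky}(X X^T)$ makes the output-space loss of dropping a singular value exactly that singular value) can be checked numerically. The following is a minimal numpy sketch under our own assumptions (toy shapes, a small `eps` ridge to keep the Cholesky well-defined), not the authors' implementation:

```python
import numpy as np

def truncation_aware_svd(W, X, rank, eps=1e-8):
    """Rank-`rank` approximation of W minimizing the output error ||(W - W_k) X||_F.

    S is a Cholesky factor of the activation Gram matrix, so S @ S.T equals
    X @ X.T (up to the small ridge `eps`); truncating the smallest singular
    values of W @ S then discards the least output-relevant directions.
    """
    d_in = W.shape[1]
    S = np.linalg.cholesky(X @ X.T + eps * np.eye(d_in))
    U, sigma, Vt = np.linalg.svd(W @ S, full_matrices=False)
    W_k = (U[:, :rank] * sigma[:rank]) @ Vt[:rank] @ np.linalg.inv(S)
    return W_k, sigma

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 8))   # toy "delta" weight (d_out x d_in)
X = rng.normal(size=(8, 64))   # toy calibration activations (d_in x n)
W_k, sigma = truncation_aware_svd(W, X, rank=6)
# Output-space loss equals the energy of the truncated singular values,
# which is the property the rebuttal appeals to.
loss = np.linalg.norm((W - W_k) @ X) ** 2
```

Plain SVD on `W` alone would minimize the weight-space error instead, which is why the activation-aware scaling matters for perplexity.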
Summary: This paper decomposes expert weights into a shared base weight and expert-specific delta weights, allowing for effective compression while preserving expert diversity. The delta weights are then compressed using SVD, and the base weights undergo semi-dynamical structured pruning. The paper provides extensive empirical validation on Mixtral and other models, demonstrating superior accuracy compared to prior methods. ## update after rebuttal Thanks to the authors for their efforts to provide feedback. After the rebuttal, all of my concerns have been adequately addressed. Thus, I tend to accept this submission. Claims And Evidence: Empirical results support the claim on performance, showing that D²-MoE consistently outperforms baselines. Methods And Evaluation Criteria: Evaluation covers multiple MoE models and diverse tasks, using WikiText-2, PTB, and C4 for language modeling perplexity, as well as ARC-e, PIQA, HellaSwag, and other reasoning tasks. Theoretical Claims: The paper does not present significant new theoretical results but relies on well-established techniques like Fisher information weighting, SVD, and structured pruning. The Fisher-weighted merging is reasonable but would benefit from a more detailed theoretical justification of why it outperforms simpler merging strategies. The truncation-aware SVD is well-motivated, but a more formal discussion on how the truncation threshold is determined based on activation patterns would be useful. Experimental Designs Or Analyses: The experiments are comprehensive, covering various model sizes and compression ratios. The comparison across multiple baselines is a strong aspect of the paper, and the ablation on different merging methods (Table 5) is insightful. Supplementary Material: Yes, the supplementary material was reviewed. The additional results on compression ratios, inference speed, and memory reduction help support the claims.
Relation To Broader Scientific Literature: The paper is well-aligned with recent advances in Mixture-of-Experts compression, building on methods like MoE-I², NAEE, and MoE-Compress. Essential References Not Discussed: None Other Strengths And Weaknesses: **Strengths:** + The core idea of delta weight decomposition is novel and well-motivated. + Comprehensive experiments across multiple MoE models and datasets demonstrate clear performance gains. + High reproducibility with detailed pseudo-code, supplementary experiments, and claimed code availability. **Weaknesses:** + Limited details on SVD truncation: how is the threshold determined dynamically? + Limited theoretical justification for Fisher merging: why is it better than other merge methods? Other Comments Or Suggestions: None Questions For Authors: Does the method generalize to non-MoE architectures? Could the same delta compression be used for compressing dense transformer models, or is it fundamentally MoE-specific? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: ### **Reviewer V5e9** Thank you for your detailed review and for recognizing the novelty of D²-MoE and its strong empirical performance. Below, we address your concerns in depth. ------ **Q1: The criteria for setting the SVD truncation threshold should be explained.** **A1:** (1) We set an overall compression ratio (e.g., 40%), and to enhance performance, we prune 10% of the base weights. Under this setting, ***we can calculate the truncation threshold in D²-MoE accordingly.*** Details can be seen in Table 7. (2) We integrate activation statistics by computing a scaling matrix from the activation Gram matrix (see Section 3.3). This ensures that truncating fewer singular values preserves more essential information. The scale matrix can be represented as $\text{Cholesky}(X \cdot X^T)$. Table 6 demonstrates that truncation-aware SVD significantly outperforms vanilla and activation-aware SVD, reducing WikiText-2 perplexity to 5.28 compared to 6.22 for standard SVD. (3) Why we represent the scale matrix as $\text{Cholesky}(X \cdot X^T)$ can be proven as follows: When the scaling matrix $S$ is the Cholesky decomposition of $X \cdot X^T$, we have $S \cdot S^T = X \cdot X^T$. Under this condition, the compression loss $L_i$ caused by truncating singular values equals precisely the singular value $\sigma_i$ itself. Consequently, truncating the smallest singular values results in minimal compression loss, theoretically justifying our use of the scale matrix defined as $\text{Cholesky}(X \cdot X^T)$. ------ **Q2: The theoretical justification for Fisher-weighted merging should be expanded—why is it better than simpler merging strategies?** **A2:** (1) We draw inspiration from model merging methods, where typically a base model is merged with various task-specific vectors.
However, standard model merging techniques are ineffective for MoE expert merging since there is ***no explicit base expert weight.*** In contrast, ***Fisher merging directly operates on expert weights***, emphasizing parameters with higher gradient norms relative to the likelihood. This effectively identifies and retains the most critical expert weights. (2) Our experiments in Table 5 confirm that Fisher-weighted merging consistently outperforms mean averaging and frequency-based merging. Fisher merging achieves a 5.28 perplexity on WikiText-2, compared to 7.66 for mean averaging and 6.42 for frequency-based merging. ------ **Q3: Does D²-MoE generalize to non-MoE architectures? Could this delta compression approach be applied to dense transformers?** **A3:** (1) We design D²-MoE specifically for multi-expert redundancy, but the principle of extracting a shared base plus compressed deltas is not limited to specialized experts. (2) We are now exploring applying a similar approach to large dense transformers ***by factoring out common subspaces from multiple layers and storing residual differences.*** (3) We leave systematically exploring dense variants for future work, as our D²-MoE currently focuses on MoE layers.
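The Fisher-weighted merge defended in A2 reduces to a per-parameter weighted average, where each parameter's weight is its (diagonal) Fisher information. A minimal numpy sketch, with the squared-gradient Fisher approximation and all variable names being our assumptions rather than the paper's code:

```python
import numpy as np

def fisher_weighted_merge(expert_weights, fishers, eps=1e-12):
    """Merge per-expert weights into one shared base weight.

    Each parameter is averaged with weights given by its diagonal Fisher
    score (expected squared gradient of the log-likelihood), so parameters
    that matter more to an expert dominate the merged base.
    """
    num = sum(F * W for W, F in zip(expert_weights, fishers))
    den = sum(fishers) + eps   # eps guards parameters with zero Fisher mass
    return num / den

# Toy example with two "experts"; Fisher scores approximated by squared grads.
rng = np.random.default_rng(1)
experts = [rng.normal(size=(4, 4)) for _ in range(2)]
grads = [rng.normal(size=(4, 4)) for _ in range(2)]
fishers = [g ** 2 for g in grads]
base = fisher_weighted_merge(experts, fishers)
deltas = [W - base for W in experts]   # expert-specific residuals to compress
```

With uniform Fisher scores this degenerates to plain mean averaging, which is the simpler baseline the rebuttal compares against in Table 5.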
Summary: This paper introduces D²-MoE, which decomposes expert weights into a shared base weight and unique delta weights. The delta weights are then compressed using SVD, and the base weight is further compressed using a semi-dynamical structured pruning strategy. The authors claim D²-MoE achieves better compression ratios and performance compared to other methods on models like Mixtral, Phi-3.5, DeepSeek, and Qwen2. Claims And Evidence: The claims are partially supported by the empirical results. The improvement against baseline methods is not very significant, as Table 2 shows. Methods And Evaluation Criteria: The proposed methods and evaluation criteria generally make sense. Theoretical Claims: N/A. There are no formal theoretical claims or proofs in the paper. Experimental Designs Or Analyses: I highly recommend the authors select some state-of-the-art baseline methods from top-tier machine learning conferences (e.g., ICML, NeurIPS, ICLR) for comparison, which would make their conclusion more convincing. Additionally, I suggest the authors use the conference version of each reference rather than the arXiv version. Supplementary Material: I reviewed Section A of the supplementary material. Relation To Broader Scientific Literature: I am wondering how the proposed method relates to the idea of LoRA, which also utilizes a delta weight. Essential References Not Discussed: I suggest the authors carefully discuss the novelty of the proposed method against LoRA [1]. Additionally, the paper could discuss more recent work on quantization methods [2] for MoEs, as quantization is a common technique for further compressing LLMs. [1] Hu, Edward J., et al. "LoRA: Low-rank adaptation of large language models." ICLR, 2022. [2] MC-MoE: Mixture Compressor for Mixture-of-Experts LLMs Gains More. ICLR 2025. Other Strengths And Weaknesses: Overall, the proposed method is interesting and well motivated by the MoE compression problem.
However, I have several concerns: - The scalability of the proposed methods to even larger MoE models needs to be addressed. It would be great if the authors could do experiments on larger MoE models such as Deepseek V3. - I would like to hear more about the novelty of this paper against LoRA which seems to leverage the delta weight idea as well. - The selected baseline methods are relatively weak (from ACL or findings of ACL). I suggest the authors compare their method with some SOTA baselines from top-tier ML conferences. - The paper should discuss more recent work on quantization methods for MoEs, as quantization is a common technique for further compressing large language models. Other Comments Or Suggestions: N/A Questions For Authors: See the weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: ### **Dear Reviewer LEQk** **Q1: Careful discussion on relation with LoRA** **A1:** (1) Our framework structurally builds a multi-LoRA setup for MoE compression, consisting of a single base branch and multiple delta low-rank branches, enabling us to leverage existing LoRA research for further fine-tuning and ***efficient multi-LoRA inference***. (2) Unlike standard LoRA methods, our approach is a post-training compression framework that ***does not require additional training.*** (3) Compared with similar methods such as [**LoSparse: Structured Compression of Large Language Models based on Low-Rank and Sparse Approximation (ICML 2023)**] and [**Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy (MC-sMoE, ICLR 2024 Spotlight)**], ***which also apply pruning and SVD but involve further training and both come from top-tier machine learning conferences***, our method significantly outperforms these approaches and ***requires no training***, as illustrated in the table.

***Table D²-MoE vs LoSparse(ICML23), MC-sMoE(ICLR24 Spotlight)***

| Method | WikiText-2↓ | PTB↓ | C4↓ | Openb. | ARC_e | WinoG. | HellaS. | ARC_c | PIQA | MathQA | Average↑ |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| Mixtral-8x7B D²-MoE(20%) | 4.65 | 16.32 | 8.59 | 0.33 | 0.80 | 0.75 | 0.61 | 0.51 | 0.81 | 0.39 | 0.60 |
| Mixtral-8x7B MC-sMoE(20%) | 5.00 | 15.36 | 9.68 | 0.336 | 0.794 | 0.766 | 0.603 | 0.503 | 0.794 | 0.380 | 0.59 |
| Mixtral-8x7B MC-sMoE(40%) | 4881.31 | 3276.94 | 4467.16 | 0.12 | 0.276 | 0.524 | 0.267 | 0.195 | 0.539 | 0.206 | 0.30 |
| Mixtral-8x7B LoSparse(20%) | 953.51 | 805.16 | 1273.12 | 0.20 | 0.27 | 0.49 | 0.28 | 0.26 | 0.53 | 0.20 | 0.31 |

(4) Additionally, further fine-tuning and knowledge distillation can effectively recover performance, as shown in the table below.

***Table D²-MoE+further training***

| Method | WikiText-2↓ | PTB↓ | C4↓ | Openb. | ARC_e | WinoG. | HellaS. | ARC_c | PIQA | MathQA | Average↑ |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| Mixtral-8x7B D²-MoE(40%) + KD | 4.69 | 21.61 | 10.74 | 0.318 | 0.792 | 0.726 | 0.603 | 0.506 | 0.795 | 0.354 | 0.58 |
| Mixtral-8x7B D²-MoE(40%) + LoRA | 4.54 | 16.04 | 9.17 | 0.31 | 0.771 | 0.739 | 0.604 | 0.463 | 0.789 | 0.348 | 0.57 |

------ **Q2: Discussion on MoE quantization methods.** **A2:** (1) Quantization and D²-MoE are ***orthogonal approaches:*** quantization reduces the model's precision, primarily speeding up inference and reducing memory usage ***rather than parameter count***, while our D²-MoE targets parameter reduction explicitly through delta decomposition. Our paper acknowledges quantization as a complementary approach in the related work section, mentioning techniques like BitDelta that successfully quantize delta weights. (2) ***Existing quantization methods can be easily integrated into the D²-MoE framework.***

***Table D²-MoE+quantization***

| Method | WikiText-2↓ | PTB↓ | C4↓ | Openb. | ARC_e | WinoG. | HellaS. | ARC_c | PIQA | MathQA | Average↑ |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| Mixtral-8x7B D²-MoE(25%) + GPTQ-4bit | 5.34 | 22.03 | 9.56 | 0.288 | 0.761 | 0.741 | 0.556 | 0.460 | 0.771 | 0.345 | 0.56 |
| Mixtral-8x7B D²-MoE(40%) + MC-MoE-4bit | 5.42 | 22.71 | 9.85 | 0.286 | 0.742 | 0.730 | 0.541 | 0.423 | 0.766 | 0.331 | 0.55 |

------ **Q3: The scalability of D²-MoE to larger models** **A3:** (1) Table 2 shows results on DeepSeekMoE-16B-Base at up to 60% compression. While not the exact DeepSeek V3 variant, these 16B-parameter experiments reveal consistent advantages, indicating D²-MoE scales to large MoE models. (2) Due to limited experimental resources, we plan to integrate advanced DeepSeek V3 configurations in future work. To demonstrate the scalability of D²-MoE to larger models, ***we conduct experiments on Mixtral-8x22B.*** Experimental results are summarized in the table below:

***Table D²-MoE on larger scalability***

| Method | WikiText-2↓ | PTB↓ | C4↓ | Openb. | ARC_e | WinoG. | HellaS. | ARC_c | PIQA | MathQA | Average↑ |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| Mixtral-8x22B | 2.95 | 10.1 | 6.14 | 0.37 | 0.86 | 0.80 | 0.67 | 0.59 | 0.83 | 0.50 | 0.66 |
| Mixtral-8x22B D²-MoE(20%) | 3.99 | 14.61 | 8.65 | 0.36 | 0.83 | 0.78 | 0.63 | 0.55 | 0.80 | 0.44 | 0.63 |
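For readers unfamiliar with how quantization composes with the delta branch, the simplest possible weight quantizer is per-channel symmetric round-to-nearest. The sketch below is only an illustration of quantizing low-rank delta factors, not GPTQ or MC-MoE (both of which are considerably more sophisticated); shapes and names are our assumptions:

```python
import numpy as np

def quantize_rtn(W, bits=4):
    """Per-output-channel symmetric round-to-nearest weight quantization.

    Each row is scaled into the signed integer grid, rounded, and rescaled
    back, so the returned array is the dequantized approximation of W.
    """
    qmax = 2 ** (bits - 1) - 1                # e.g. 7 for 4-bit
    scale = np.abs(W).max(axis=1, keepdims=True) / qmax
    scale = np.where(scale == 0, 1.0, scale)  # guard all-zero rows
    q = np.clip(np.round(W / scale), -qmax - 1, qmax)
    return q * scale

rng = np.random.default_rng(2)
U = rng.normal(size=(8, 4))   # toy low-rank delta factors from SVD
V = rng.normal(size=(4, 8))
delta_q = quantize_rtn(U) @ quantize_rtn(V)   # quantized delta branch
err = np.linalg.norm(U @ V - delta_q) / np.linalg.norm(U @ V)
```

Since the delta factors are small relative to the base weight, even this naive scheme keeps the relative reconstruction error modest, which is consistent with the rebuttal's point that quantization composes well with the decomposition.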
Summary: This paper introduces D²-MoE for MoE Language Models. The author decomposes expert weights into a shared base weight and expert-specific delta weights, then compresses each component separately. Claims And Evidence: The primary claim that D²-MoE outperforms existing compression methods is backed by comparative evaluations across multiple MoE architectures. The ablation studies further validate design choices. Methods And Evaluation Criteria: The authors evaluate on both language modeling (perplexity on WikiText-2, PTB, and C4) and reasoning tasks (accuracy on seven reasoning benchmarks), providing a holistic assessment of model capabilities after compression. Theoretical Claims: The paper does not make formal theoretical claims requiring proofs, but rather presents empirically-grounded algorithmic innovations. The mathematical formulations are clearly presented. Experimental Designs Or Analyses: The experimental design is comprehensive. I verified the methodology for evaluating MoE compression across different models, compression ratios, and benchmark tasks. Supplementary Material: I reviewed the supplementary material (discussion, computational cost, implementations). Relation To Broader Scientific Literature: This work builds upon expertise from both MoE compression methods (like NAEE, MoE-I², and MoE-Pruner) and general LLM compression techniques. Essential References Not Discussed: A few additional references would strengthen the context: "ZipLM: Inference-Aware Structured Pruning of Language Models" (Kurtic et al., 2023). "Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning" (Xia et al., 2023). Other Strengths And Weaknesses: Strengths: 1.The delta decomposition approach is a new contribution to MoE compression. The method achieves substantial compression ratios. 2. The experiments on multiple models and benchmarks provide strong evidence for the method's effectiveness across diverse settings. 3. 
D²-MoE doesn't require expensive fine-tuning after compression, making it more practical for large models. Weaknesses: 1. Limited systematic analysis of the relationship between compression ratio and performance degradation. 2. The paper doesn't examine how D²-MoE might interact with other compression techniques (like quantization) in a comprehensive compression pipeline. Other Comments Or Suggestions: The paper is well-written and organized, but could benefit from a few improvements: 1. The introduction could more clearly separate the motivation (problems with existing approaches) from the proposed solution (D²-MoE components). 2. The figures showing CKA similarity and singular value energy retention could benefit from more detailed captions explaining the implications of these results. 3. Minor typos: "experts delta decomposition" → "expert delta decomposition" (page 3), "Our D²-MoE successful compact" → "Our D²-MoE successfully compacts" (page 1). Questions For Authors: How does D²-MoE interact with quantization techniques? Since quantization is a common complementary approach to model compression, understanding whether these methods can be effectively combined would provide valuable insights for practitioners. Code Of Conduct: Affirmed. Overall Recommendation: 4
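The decomposition this review summarizes (shared base weight plus SVD-compressed per-expert deltas) can be sketched structurally. The sketch below uses a plain mean merge and skips the base-weight pruning, so it illustrates only the shape of the method, not the paper's actual Fisher-weighted pipeline:

```python
import numpy as np

def compress_experts(experts, rank):
    """Shared base + low-rank per-expert delta factors.

    Uses a mean merge for brevity (the paper uses Fisher weighting) and
    keeps the top-`rank` singular directions of each expert's delta.
    """
    base = np.mean(experts, axis=0)
    factors = []
    for W in experts:
        U, s, Vt = np.linalg.svd(W - base, full_matrices=False)
        factors.append((U[:, :rank] * s[:rank], Vt[:rank]))
    return base, factors

def expert_forward(x, base, A, B):
    # y = x W^T with W ~ base + A @ B; the base branch is shared across experts
    return x @ base.T + x @ (A @ B).T

rng = np.random.default_rng(3)
experts = [rng.normal(size=(6, 6)) for _ in range(4)]
base, factors = compress_experts(experts, rank=2)
x = rng.normal(size=(1, 6))
y = expert_forward(x, base, *factors[0])
```

Storage drops from one full matrix per expert to one shared matrix plus two thin factors per expert, which is where the compression ratio comes from.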
Rebuttal 1: Rebuttal: ### **Dear Reviewer 73q5** Thank you for your thoughtful review and constructive feedback. We appreciate your recognition of the novelty of D²-MoE and its strong empirical performance. Below, we address your concerns in detail. **Q1: The relationship between compression ratio and performance degradation is not analyzed in detail.** **A1:** (1) Table 2 demonstrates D²-MoE's robustness at various compression ratios. For example, on Mixtral-8×7B, performance declines gradually with increased compression (from 20% to 60%) without model collapse. Specifically, at **40% compression**, D²-MoE achieves an average accuracy of **0.57**, significantly outperforming NAEE (**0.48**) and MoE-I² (**0.49**). Even at **60% compression**, D²-MoE maintains **0.52** accuracy, whereas NAEE drops sharply to **0.36**. (2) Figure 4 shows how delta weight trimming affects D²-MoE's performance. As trimming increases, perplexity gradually rises, yet our method maintains stable performance even at extreme compression ratios of **81%**. Specifically, at **43%** compression (trimming 1 delta weight), D²-MoE achieves a WikiText-2 perplexity of **6.43**. Remarkably, even under extreme 81% compression, the model does not collapse, demonstrating the robustness and effective trade-off between compression ratio and performance. ------ **Q2: The interaction of D²-MoE with other compression techniques, particularly quantization, is not explored.** **A2:** (1) ***Our current approach combines SVD decomposition and pruning, orthogonal to quantization.*** We focus on decomposing experts into Fisher-weighted bases and SVD-compressed delta weights without formally integrating quantization yet. 
However, our design allows straightforward quantization of low-rank delta factors ***(Section “Delta compression in MoE LLMs”).*** (2) ***Existing quantization methods can be easily integrated into the D²-MoE framework:*** We plan to integrate quantization techniques to further reduce memory footprint, as demonstrated in the following table. Approaches such as GPTQ have shown effective quantization of delta and base-merged weights. Additionally, we apply the mixed-precision quantization method from [**MC-MoE**](https://arxiv.org/abs/2410.06270) to our D²-MoE.

***Table D²-MoE+quantization***

| Method | WikiText-2↓ | PTB↓ | C4↓ | Openb. | ARC_e | WinoG. | HellaS. | ARC_c | PIQA | MathQA | Average↑ |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| Mixtral-8x7B D²-MoE(25%) + GPTQ-4bit | 5.34 | 22.03 | 9.56 | 0.288 | 0.761 | 0.741 | 0.556 | 0.460 | 0.771 | 0.345 | 0.56 |
| Mixtral-8x7B GPTQ-3bit | 5.93 | 31.15 | 10.71 | 0.282 | 0.735 | 0.674 | 0.534 | 0.422 | 0.772 | 0.302 | 0.53 |
| Mixtral-8x7B D²-MoE(40%) + MC-MoE-4bit | 5.42 | 22.71 | 9.85 | 0.286 | 0.742 | 0.730 | 0.541 | 0.423 | 0.766 | 0.331 | 0.54 |
| DeepSeekMoE-16B-Base D²-MoE(25%) + GPTQ-4bit | 7.62 | 12.61 | 12.94 | 0.264 | 0.707 | 0.655 | 0.511 | 0.373 | 0.769 | 0.275 | 0.51 |
| DeepSeekMoE-16B-Base GPTQ-3bit | 8.33 | 13.79 | 16.01 | 0.252 | 0.677 | 0.653 | 0.445 | 0.358 | 0.711 | 0.269 | 0.48 |

------ **Q3: The introduction should better separate the motivation from the proposed solution.** **A3:** (1) ***Problems with existing approaches:*** We will revise the introduction to juxtapose the storage/memory challenges of MoE (motivation) with our delta decomposition approach (solution), referencing new Table 1 (Section “Related Work”) to highlight how D²-MoE differs from pure pruning or merging.
(2) ***D²-MoE's motivation:*** We emphasize that the moderate overlap (CKA 0.3–0.5) between experts motivates extracting a shared base weight while preserving expert-specific diversity in compressed delta form. ------ **Q4: Figures (e.g., CKA similarity, singular value energy retention) need more detailed captions explaining their implications.** **A4:** (1) We will **expand the captions** of Figures 3 and 4 to explicitly describe what each metric represents. (2) For **Figure 3 (CKA similarity)**, we will clarify that **lower similarity values indicate that merging all experts directly would lead to performance loss**, justifying the need for delta decomposition. (3) For **Figure 4 (singular value energy retention)**, we will highlight that **delta weights exhibit strong low-rank properties**, making them well-suited for **SVD-based compression**. ------ **Q5: Minor typos need correction.** **A5:** (1) We will correct all typos, including those on **page 1 ("successful compact" → "successfully compacts")** and **page 3 ("experts delta decomposition" → "expert delta decomposition")**. (2) We will conduct a **thorough proofreading pass** to ensure consistency and clarity throughout the paper.
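For context on the CKA scores cited above, linear CKA compares two feature (or weight) matrices and returns a value in [0, 1]. A minimal sketch of the standard feature-space form; the matrix shapes and the use of random data are our assumptions for illustration:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two feature matrices of shape (n_samples, dim).

    Values near 1 mean near-identical representations; the moderate 0.3-0.5
    scores reported for experts are what motivate keeping per-expert deltas
    on top of a shared base.
    """
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    num = np.linalg.norm(X.T @ Y, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

rng = np.random.default_rng(4)
A = rng.normal(size=(100, 16))
noise = rng.normal(size=(100, 16))
same = linear_cka(A, A)        # identical features -> 1.0
diff = linear_cka(A, noise)    # unrelated features -> near 0
```

A caption stating these two endpoints would make the 0.3-0.5 expert scores in Figure 3 immediately interpretable.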
DynaMind: Reasoning over Abstract Video Dynamics for Embodied Decision-Making
Accept (poster)
Summary: This paper proposes to encode the manipulation video into a "dynamic representation" by assigning weights to frames. Leveraging this representation, future states are predicted, which are then used to output the action for robot control. The weight of each frame is determined by the variance and the similarity between images and language. This approach is evaluated on two manipulation datasets and one navigation dataset. Claims And Evidence: The main claim of this paper is the effectiveness of "video dynamic abstraction" in bridging the language instruction and video. This is supported by the comparison with baselines on three benchmarks and ablations. Methods And Evaluation Criteria: The evaluation is thorough, with different simulation benchmarks and real-world experiments. The method addresses the language-conditioned robot control problem. Theoretical Claims: Do not apply. Experimental Designs Or Analyses: This method is evaluated on two manipulation benchmarks and one navigation benchmark as well as 5 real-world tasks. My major concern is about the setting in manipulation. It seems that all testing tasks are seen in the training set (line 570 and 579). Considering the language-conditioned manipulation setting, the authors should evaluate the method on unseen tasks to validate the generalization ability. Besides, the navigation benchmark is a simple 2D setting, which might fail to effectively reflect the performance of the proposed method. Supplementary Material: I checked the detailed setting of experiments and some visualizations of trajectories. Relation To Broader Scientific Literature: It could be related to text-conditioned manipulation. Specifically, it is related to model-based manipulation learning, robot control with video generation models, and visual representation learning for robotics. Essential References Not Discussed: No. Other Strengths And Weaknesses: ### Strengths 1.
The idea of abstracting dynamic information from video with key frame selection instead of generating video from text is novel. ### Weaknesses 1. The design of the proposed approach is not well-motivated. Firstly, the authors claim that "a single language instruction can correspond to multiple videos", which leads to the design of abstracting a dynamic representation from video. However, current video generation models like video diffusion could model the randomness of video given the text condition. The benefit of using abstracted features needs further justification. Besides, the design of the "Video Dynamic Abstraction" module aims to assign different weights to frames. But a transformer with an attention mechanism could already perform weighted aggregation, which is more flexible than the hand-crafted losses for weight learning. 2. The setting and design need further justification. I wonder about the inference setting of this model since it requires a video input. What's the video input during test time? The history frames? Following the question, the designed consistency loss for the abstraction process directly uses the cosine similarity of the image features and text features. How to ensure they are aligned in the same feature space? Is there a pre-trained model used? Furthermore, the assumption that higher similarity indicates higher importance needs clarification. 3. The evaluation seems to be problematic as written above. Other Comments Or Suggestions: ### Minor 1. More evaluation on unseen tasks and on real-world tasks could improve the quality of this paper. The navigation tasks should be in 3D robot navigation space. 2. [Minor] There should be a period after the bold text (e.g., line 184). Questions For Authors: Please see the weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We appreciate Reviewer mTuA’s recognition of the novelty. Please find our responses to each comment below. >Seen manipulation tasks during training; lacks evaluation on unseen tasks for generalization. - Randomized initializations within seen tasks are a standard evaluation protocol. In Table 1, tasks are fixed across train/test, but random initial states introduce diverse, unseen configurations, allowing evaluation of generalization to state and visual variations. Baselines follow the same setup. - Our paper included evaluations on unseen tasks. - Instruction generalization (Table 2): Testing on paraphrased, unseen commands. - Cross-task transfer (Table 4): Testing on more complex, unseen tasks. - Compositional generalization (Table 6): Testing on novel task combinations. >2D navigation limitation. To complement this, we extended experiments to iTHOR, which features realistic indoor navigation scenes. Results for other methods (AVDC, GVMA) are taken from [1]; both belong to the line of learning to act from video.

| | Kitchen | Living Room | Bedroom | Bathroom | Overall |
| - | - | - | - | - | - |
| AVDC | 12.2 | 13.9 | 26.7 | 6.1 | 14.7 |
| GVMA [1] | 48.3 | 42.7 | 51.0 | 52.7 | 48.7 |
| Ours | 55.0 | 38.3 | 78.3 | 41.7 | 53.3 |

[1] Grounding Video Models to Actions through Goal Conditioned Exploration. ICLR 2025. >Related work clarity. We will clarify these connections in the revision. Unlike methods that model the environment or generate future frames, our approach predicts video dynamics, making it more suitable for long-horizon tasks. It also avoids large-scale pretraining, learning goal-conditioned representations for control. >The claim that "a single language instruction can correspond to multiple videos" motivates abstracted features from video, but video generation models can model the randomness of video given the text condition. The benefit of using abstracted features needs further justification.
- Our goal is to bridge the gap between abstract language and detailed video, which becomes more pronounced when one language instruction maps to multiple executions. We focus on mitigating the modality gap by learning compact dynamic representations that capture what matters most in a video, making ours fundamentally different from modeling trajectory diversity like video generation models. - As for the benefit of video abstraction, a concurrent vision-language study [2] shows that modality gaps—like the asymmetry between image and text in CLIP—negatively impact downstream performance. While their work is analytical, we build on similar insights and offer a practical solution in the embodied setting. [2] Two Effects, One Trigger. ICLR 2025. >Abstraction vs. Transformer attention. - Explicit Inductive Bias vs. Data-Driven Attention. Unlike data-driven attention, our method introduces an explicit bias toward semantically relevant and visually salient frames. - Lightweight Structure. FrameScorer is a simple 2-layer MLP, lighter than Transformer-based alternatives and better suited for long-horizon tasks. - Empirical Validation.

| | SR |
| - | - |
| Replace Abstraction with Attention | 33.93% |
| Ours | 39.81% |

>Inference-time video input and use of history frames. Our method uses an online inference setup, where only current and past frames are available. During testing, frames are collected incrementally for real-time decision-making, consistent with standard practice in prior work. >The designed consistency loss for the abstraction directly uses cosine similarity (CS) of the image features and text features. How to ensure they are aligned in the same feature space? - We clarify that the consistency loss is computed between abstracted video dynamics and language embeddings, rather than between raw image and text features. Instead of forcing alignment between inherently mismatched modalities, we introduce an abstraction module as a bridge.
- While CS is a standard metric for cross-modal alignment, we understand the reviewer’s concern. Therefore, we estimate MI to capture global statistical dependency in the complex, long-horizon Franka Kitchen. The higher MI between abstracted dynamics and language, compared to raw pairs, further supports our approach.

-|MI
-|-
Dynamic↔Language|0.058
Video↔Language|0.011

>Use of pretrained model.

We use a frozen DistilBERT for language and no large-scale pre-trained visual features.

>Clarification on whether higher similarity indicates higher importance.

We understand the reviewer’s concern, and we clarify that our method does not assume that higher similarity directly indicates frame importance. The semantic consistency loss is applied over the entire dynamic representation sequence and the language embedding, encouraging global task relevance rather than relying on per-frame similarity. Importantly, we avoid strong frame-level assumptions. Instead, frame importance is inferred contextually, guided by consistency and saliency losses, enabling the model to focus on what truly matters—not just what appears similar.

>Missing period.

We’ve added the missing period.

---

Rebuttal Comment 1.1:

Comment: Thank the authors for their detailed answers. I still feel there are several concerns that are not addressed by the rebuttal.

1. Evaluation on unseen tasks. While I understand that randomness also lies in the different initial object layouts, the advantage of video-based manipulation methods is the cross-task generalization ability (e.g., UniPi: Learning Universal Policies via Text-Guided Video Generation). Therefore, more evaluations on unseen manipulation skills are important. However, Table 2 shows the performance with different instructions but the same manipulation skill, and Tables 4 and 6 show the method can generalize to new combinations or more complex tasks while the subtasks or settings are seen.
The paper's quality could be largely improved with the setting of training on tasks A, B, C and evaluating on D.

2. Explicit inductive bias on frames can help the model when data is not abundant. However, it could also harm performance when data is plentiful, which deepens my concern about not evaluating in the A,B,C→D setting, where a large-scale dataset is provided.

---

Reply to Comment 1.1.1:

Comment: We thank the reviewer for the detailed feedback and appreciate the concerns raised. Below, we address each point in turn.

>Evaluation on unseen tasks. More evaluations on unseen manipulation skills are important.

We would like to clarify a potential misunderstanding: **the experiment in Table 4 does evaluate the model on tasks that include manipulation skills unseen during training**. Specifically, the model is trained only on GoToSeq, which consists solely of navigation instructions (e.g., “go to a box”) and does not include any object manipulation actions such as `pick up`, `open`, or `put`. In contrast, the test tasks—SynthSeq and BossLevel—require executing new types of skills, such as picking up a key, opening a door, and putting down an object. These manipulation skills are not present in the training set, thus demonstrating DynaMind’s ability to generalize to entirely new skills, not just novel combinations of seen skills.

>Explicit inductive bias on frames can help the model when the data is not abundant. However, it could also harm the performance when the data is enough.

- We thank the reviewer for the thoughtful comment regarding the use of explicit inductive bias in different data regimes.
In our approach, **the inductive biases in our system are not hard constraints, but auxiliary losses embedded within a broader end-to-end supervised framework that also includes direct supervision from executed actions.** These auxiliary objectives provide soft and general-purpose guidance that helps the model focus on task-relevant information without restricting its flexibility or overfitting to the auxiliary signals.

- Moreover, as shown in Appendix Figure 12, both our method and the baselines improve as the number of trajectories per task increases, with **our method achieving a larger performance gain and showing no sign of saturation**. We believe these results highlight the scalability of our overall approach and its ability to make efficient use of additional data.

Please don’t hesitate to let us know if any concerns remain—we sincerely welcome further suggestions.
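For concreteness, the sequence-level consistency objective discussed in this thread (computed between the abstracted dynamics of a whole trajectory and the task-level language embedding, not per frame) can be sketched as follows. This is a minimal illustration assuming mean-pooling as the trajectory aggregation; the function and variable names are ours, not the paper's.

```python
import numpy as np

def cosine_consistency_loss(dynamics, lang_emb):
    """Illustrative sequence-level consistency loss (names are ours).

    dynamics: (T, D) abstracted dynamic representations for one trajectory
    lang_emb: (D,) embedding of the task-level language instruction
    """
    pooled = dynamics.mean(axis=0)  # trajectory-level summary (our assumption)
    cos = pooled @ lang_emb / (np.linalg.norm(pooled) * np.linalg.norm(lang_emb) + 1e-8)
    return 1.0 - cos  # small when dynamics and instruction align
```

Because the loss is taken over the whole sequence rather than per frame, individual frames are free to differ from the instruction as long as the trajectory-level summary stays aligned.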
Summary: This paper proposes a novel method to leverage video data for decision making. To address the gap between abstract language and complex video, the paper proposes to learn abstract dynamic representations for video, rather than making language more detailed. The dynamic representation is learned by assigning higher scores to key frames that capture significant spatiotemporal patterns. Based on the learned dynamic representations, the method predicts the future dynamics and then uses the predicted dynamics to infer the corresponding action sequence.

Claims And Evidence: Yes.

Methods And Evaluation Criteria: Yes.

Theoretical Claims: No.

Experimental Designs Or Analyses: For methods that predict latent dynamic representations, the method can only be evaluated by the final performance, because it is hard to say whether the predicted latents are good or not. The most impressive result to me is the learned key frames in Figure 5. Given that the authors didn't use pre-trained visual representations for the image encoder, it is quite impressive that FrameScorer can find key frames corresponding to the language instructions, especially given that the training dataset is not very large (can you describe the size of the training data on Franka Kitchen?). Is this because there are many repeated language instructions and repeated key frames, so that the model learns to find the correspondence automatically? Do you think the success can be extended to more diverse language instructions with a larger training dataset and even with pre-trained visual representations?

Supplementary Material: No.

Relation To Broader Scientific Literature: The key contribution of this paper is a novel idea to make video representations more abstract, in order to address the mismatch between abstract language instructions and complex/detailed video data in methods that use video for learning decision-making ability.
Essential References Not Discussed: There is a line of related works compressing visual changes into latent actions and then predicting latent actions, which also use video to pre-train decision-making ability, and which should be discussed here:

- Learning to Act without Actions
- LAPA: Latent Action Pretraining from Videos
- IGOR: Image-GOal Representations Atomic Control Units for Foundation Models in Embodied AI

Regarding predicting future frames, here is another related work:

- Predictive Inverse Dynamics Models are Scalable Learners for Robotic Manipulation

Other Strengths And Weaknesses: It would be better if the paper compared itself with MotoGPT, which leverages video to pre-train decision-making ability. PIDM also seems to be very related.

Other Comments Or Suggestions: It would be better to present more training details in the appendix, for example, the image encoder used in the paper, the learning rate, training epochs, etc.

Questions For Authors: I am curious how FrameScorer learns to find key frames corresponding to the language instructions, especially given that the training dataset is not very large (can you describe the size of the training data on Franka Kitchen?). Is this because there are many repeated language instructions and repeated key frames, so that the model learns to find the correspondence automatically? Do you think the success can be extended to more diverse language instructions with a larger training dataset and even with pre-trained visual representations?

The setting seems a little bit strange to me, because the language instructions in the example (Fig. 1) and demo (Fig. 5) both contain several sub-tasks/instructions. And it seems the method learns information about which sub-task the agent is currently in and which sub-task is next to perform. However, in the most popular framework, we usually use an LLM to decompose tasks into several sub-tasks, and feed the sub-tasks one by one into a VLA model.
I am curious to learn the difference between the proposed method and feeding a VLA the current sub-task instruction, or, taking one step further, asking the VLA to predict the next sub-task instruction. Which one do the authors think would be better? Moreover, do you think it is possible to split the language instruction into several sub-instructions and learn the FrameScorer by matching the correspondence between dynamic features and sub-task instructions?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: We sincerely thank Reviewer fGaZ for the valuable feedback. Below, we respond to each comment and will revise the paper accordingly.

>Works like LAPO, LAPA, and IGOR use video to learn latent actions for decision-making. PIDM relates to future prediction.

These works, like ours, target video-based decision-making. We briefly summarize the differences here and will provide a detailed comparison in the revised version. LAPO [ICLR 2024] enables learning latent-action policies from raw video via consistency objectives, which can be quickly fine-tuned into expert-level policies. LAPA [ICLR 2025] extends this by incorporating language. In contrast, our method avoids pretraining and latent action modeling, and instead learns video dynamics to address the modality gap through end-to-end training. IGOR [arXiv:2411.00785] learns a shared latent space for cross-embodiment transfer, while our method models dynamics within a single embodiment. PIDM [ICLR 2025] predicts future states and feeds them into an inverse dynamics model to couple perception and control. Our method instead focuses on “what matters” rather than pixel-level predictions.

>Comparison with MotoGPT, PIDM

MotoGPT [arXiv:2412.04445] and PIDM are relevant as both leverage video and use Transformer-based architectures for policy learning. MotoGPT follows a three-stage pipeline: latent motion token modeling, generative model pretraining, and collaborative fine-tuning. PIDM adopts an end-to-end approach that predicts actions from forecasted states, coupling perception and control. Given the structural similarity and its end-to-end design like ours, we compare with PIDM on Franka Kitchen. For fairness, we use the same image and language encoders trained from scratch. A detailed comparison with MotoGPT will be included in the final version.
-|Success Rate
-|-
PIDM|36.77%
Ours|39.81%

>Training details in appendix

We will include training details such as the image encoder, learning rate, epochs, batch size, and hyperparameters.

>How does FrameScorer learn to find keyframes from language with limited data? Can it extend to larger training datasets and pretrained visual representations?

**Reasons for finding key frames with limited data**

- Sub-task repetition: many instructions share similar sub-tasks, providing multiple observations of the same semantic goal across different trajectories.
- Multiple forms of weak supervision: FrameScorer is guided by two information-rich signals, focusing on semantically relevant frames and visually salient ones.

**Regarding the scale of the Franka Kitchen dataset**

In our main experiments, we trained the model with 25 trajectories per instruction (~300 video-action pairs each), constrained by computational resources. We anticipated the impact of dataset size and included an ablation study (Appendix, Fig. 12) showing that our method scales well. Increased trajectory diversity helps the model better capture video-language patterns and improves performance.

**Regarding pre-trained visual representations**

We had similar thoughts about replacing our lightweight visual encoder with the large-scale pretrained R3M (frozen during training; Appendix Table 7), but observed a performance drop. We attribute this to (1) modality misalignment—R3M’s CLIP-style modeling struggles to bridge language–vision gaps—and (2) domain gap—R3M is trained on human egocentric videos, which differ from our robotic tasks. Nonetheless, we see strong potential in pretrained models and are exploring adapter-based fine-tuning for better integration.

>Common frameworks typically use LLMs to decompose tasks or let the VLA predict the next sub-task instruction. Which is better compared to yours?
The methods you described often follow a language-centric paradigm, whereas our approach is video-centric, offering a complementary perspective. Language-centric methods provide modularity and interpretability but depend on accurate sub-task decomposition, which can cause cascading errors in open-ended or ambiguous tasks. Our method models task progression implicitly by capturing temporal patterns in video conditioned on the language instruction. We see combining both approaches as a promising direction for future work.

>Is it possible to split the language instruction into sub-instructions, and learn the FrameScorer by matching them to dynamic features?

Interestingly, this suggestion overlaps with a direction we explored by integrating our method with LISA, which decomposes language instructions into sub-instructions corresponding to skills and matched to video dynamics (Fig. 8). However, the integration did not improve performance. Mutual information analysis showed consistently low correlation between the language and video features decomposed by LISA, suggesting that the decomposition introduced noise or mismatches. Nonetheless, we believe this remains a promising direction and plan to explore stronger decomposition models or structured task planners in the future.
Summary: This paper proposes the DynaMind framework for video dynamic abstraction and reasoning, aiming to extract key dynamic information from long-horizon videos for future prediction and decision-making. First, a FrameScorer mechanism is designed to evaluate the importance of video frames based on visual saliency and semantic consistency, generating high-level dynamic representations through weighted fusion. Then, an autoregressive Transformer is employed for dynamic reasoning, leveraging temporal modeling to predict future evolution. Finally, an action Transformer integrates historical frames, past actions, and predicted future dynamics for long-term action decision-making. During training, multi-task loss optimization enhances video abstraction quality, while a hybrid assignment strategy stabilizes action prediction. Experimental results on the LOReL Sawyer robotic manipulation dataset demonstrate that DynaMind effectively reduces redundant information, improves adaptability to task complexity and scene variations, and outperforms existing language-decomposed task planning approaches.

Claims And Evidence: The thesis is supported by compelling evidence.

Methods And Evaluation Criteria: The method proposed in this paper is suitable for this problem.

Theoretical Claims: This paper has fewer theoretical arguments and more descriptive formulas.

Experimental Designs Or Analyses:

1. The imitation learning baselines used for comparison are somewhat outdated. Could the authors compare their method with more recent approaches from the past two years?
2. Are there any directly comparable methods? The Multimodal Alignment Methods and Language-Decomposed Methods serve as indirect baselines, which may not fully demonstrate the superiority of the proposed method.
3. In Table 1, for some tasks (e.g., "move black mug right"), the performance gap is quite large compared to the best results. I suggest the authors analyze the reason for this discrepancy.
Supplementary Material: Yes

Relation To Broader Scientific Literature: Previous methods primarily addressed the gap between the simplicity of language abstraction and the complexity of video from a linguistic perspective. This paper, however, approaches the problem from the video perspective, offering a new viewpoint.

Essential References Not Discussed: The citations are fairly comprehensive, but there is a lack of comparison with the latest imitation learning methods.

Other Strengths And Weaknesses: The focus of this paper on the video component is quite innovative, and the writing is clear. The experiments were conducted in both virtual and real-world scenarios, and they are thorough. However, some methodological details and experimental results lack explanation. See the Questions section for details.

Other Comments Or Suggestions: See Questions for details.

Questions For Authors:

1. In Section 3.3, how does the Hybrid Assignment distinguish the early stages of training, and how is the transition implemented specifically?
2. In Section 3.3, how are the historical frame sequences, historical action sequences, and predicted future dynamic representations input into the action transformer, and how are they fused?
3. This paper focuses on abstracting information from videos. Have the authors attempted to integrate their approach with methods that primarily focus on language? If so, what were the results?
4. I am curious about the training cost of this method. Could the authors provide more details on this?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: We thank Reviewer kRnT for recognizing the novelty and presentation of our work. We address each concern below and will revise the paper accordingly.

>Comparison with more recent imitation learning methods from the past two years

We agree that including comparisons with more recent imitation learning methods strengthens the overall evaluation. In response, we add three recent methods:

- Diffusion Policy (DP) [IJRR 2023]: a diffusion-based imitation learning method.
- LCSD [ICAPS 2024]: an extension of DP that introduces a language-conditioned skill learning module.
- PIDM [ICLR 2025]: an end-to-end imitation learning approach that unifies vision and action by predicting actions from forecasted visual states.

We compare these methods with ours on two benchmarks.

Lorel Sawyer

-|DP|LCSD|Ours
-|-|-|-
Task-wise SR|36.6%|45.5%|53.67%
Rephrasal-wise SR|24.8%|35.8%|53.73%

Franka Kitchen

-|DP|PIDM|Ours
-|-|-|-
SR|33.42%|36.77%|39.81%

- Lorel results are reported from LCSD.
- Kitchen results are from our re-implementation using their public codebases.

>Are there any directly comparable methods?

To the best of our knowledge, we are the first to address video-language modality imbalance from a video-centric perspective for language-conditioned decision-making. We compared with the most relevant works—LISA and SkillDiffuser—which approach the problem from the language side. Notably, a concurrent study [1] identifies similar modality imbalance in CLIP-style models, caused by information asymmetry between image and text. It further shows that smaller modality gaps lead to better performance, and that embedding dimensions contribute unequally. Though focused on a different task, it highlights the general importance of the problem and supports our motivation.

[1] Two Effects, One Trigger. ICLR 2025.

>In Table 1, some tasks (e.g., "move black mug right") show a large performance gap.
The short-horizon, low-complexity tasks are well-suited to vanilla imitation learning, which excels at fitting simple, deterministic behaviors. Interestingly, both our method and Text2Video approaches like SkillDiffuser underperform in these cases (line 282), likely due to indirect objectives introducing unnecessary complexity. This reflects a common but often overlooked limitation, which we plan to address for better adaptability across task complexities. In contrast, vanilla imitation methods struggle on more complex tasks with long-term dependencies or greater generalization demands (Tables 1, 2, 6).

>How does Hybrid Assignment handle early training and implement the transition?

To mitigate the impact of early-stage instability in the dynamic reasoning module on action prediction, we initially use ground-truth goal features (future frame representations) for stable supervision, then gradually shift to predicted dynamics to enable end-to-end learning while maintaining training stability. This transition is controlled by a linear annealing schedule with sampling probability $p_n=\frac{n}{N}$, where $n$ is the current epoch and $N$ is the total number of epochs.

>How are historical frames, actions, and predicted dynamics fed into the Action Transformer and fused?

The Action Transformer takes three inputs: 1) historical frame features, 2) historical actions (embedded into the same latent space), and 3) a goal token, as described in the previous answer. The fusion process includes: adding timestep embeddings to encode temporal order, interleaving state and action tokens by timestep, prepending the goal token for global conditioning, and applying LayerNorm and an attention mask before feeding the sequence into the Transformer.

>Results of integrating with language-centric methods

This is a thoughtful insight—it happens to align with a direction we’ve explored.
Specifically, we integrated DynaMind with the language decomposition module from LISA (Figure 8), but it did not improve performance in Franka Kitchen. To further investigate, we analyzed the mutual information during training (Fig. 8, bottom). The results show that DynaMind progressively increases the mutual information between modalities, while LISA does not, suggesting that its language decomposition may lead to semantic information loss. Nevertheless, we consider our method orthogonal and potentially complementary to language-based approaches. In future work, we plan to explore integration with more advanced language decomposition methods.

>Training cost of this method

Our method is designed to be lightweight, using a small visual encoder and shallow Transformer blocks for dynamic reasoning and action prediction. To assess training cost, we compare with LISA (Transformer) and SkillDiffuser (Diffusion) under identical A800 GPU settings. We report parameter count and GPU memory usage (batch size 64, Lorel Sawyer). As shown in the table, our method balances computational cost and success rate.

-|Trainable Params (M)|GPU Memory (MiB)|SR
-|-|-|-
LISA|7.52|690|40%
SkillDiffuser|60.29|1136|43%
Ours|7.84|854|53.6%
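The hybrid assignment schedule described in this rebuttal (sampling probability $p_n = n/N$, shifting supervision from ground-truth goal features to predicted dynamics) can be sketched as follows. This is a hedged illustration; the function and argument names are ours, not the paper's.

```python
import random

def pick_goal_token(epoch, total_epochs, gt_goal, pred_goal, rng=random.random):
    """Hybrid assignment sketch: sample the predicted goal with probability
    p_n = epoch / total_epochs, otherwise use the ground-truth future-frame
    goal. At epoch 0, supervision is fully ground truth; by the final epoch,
    it is fully predicted."""
    p_n = epoch / total_epochs
    return pred_goal if rng() < p_n else gt_goal
```

Since `random.random()` returns values in the half-open interval [0, 1), the endpoints of the schedule are deterministic: epoch 0 always yields the ground-truth goal, and the final epoch always yields the predicted one.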
Summary: This paper aims to address the mismatch between abstract language and the rich content of videos. It proposes dynamic abstraction to represent spatiotemporal latents as a substitute for videos. It generates the dynamic abstraction by learning semantic consistency and visual saliency, and learns the agent policy conditioned on the dynamic abstraction. The empirical results show that the model can learn key video information, capture the correlation between language and dynamic abstraction, and generalize to new tasks.

Claims And Evidence:

1. Figure 1 shows the simplicity of language and the complexity of videos.
2. Figure 5 shows the model can assign high weight to key frames.

Methods And Evaluation Criteria:

1. The fixed window size is less flexible, as also mentioned by the authors. It needs to be chosen according to the different tasks.
2. What exactly is the image encoder? Is it pre-trained from SkillDiffuser or trained from scratch?
3. Equations 4 and 5 compute the similarity between the abstraction of a short clip and the language instruction of a whole task, which might include several subtasks. This seems not very reasonable.
4. The inputs of Video Dynamic Reasoning are different in the training and inference stages. This might cause a performance drop at inference.
5. One input of the action transformer is $g_t$. Where does $g_t$ come from? Is it from the concatenation of $h$
6. It's confusing that the output actions are $a_{t-C+1:t-1}$ but the input frame features are $e_{t-C+1:t-1}$. Should they be autoregressively generated?
7. There are three modules that need training. The whole training pipeline is unclear. Pseudocode might make this clear.

Theoretical Claims: No theoretical claims.

Experimental Designs Or Analyses:

1. Figure 3 shows the results on Franka Kitchen. The success rate of completing 3 and 4 tasks is extremely low. I understand the difficulty of completing multiple tasks.
However, the success rate drops from about 0.4 (2 tasks) to 0.1 (3 tasks).

2. Table 2 shows the instruction generalization, where instructions are different while conveying the same meaning. This should not be a problem when the model is using a large language encoder like T5-XXL, since the language embeddings would be similar.

3. Figure 6 shows the ablation on dynamic abstraction. It would be better to discuss why FrameScorer improves performance considerably (about 0.1) on Kitchen but less (about 0.02) on BabyAI.

4. Figure 7 shows the ablation on the action transformer. It would be better to discuss why D&LG performs worse than dynamic-guided.

5. Figure 8 shows the mutual information. The MI for LISA is extremely low (nearly zero), which does not align with the results (0.015) in the original paper.

Supplementary Material: ABCDE.

1. It lacks the training details and experimental details like hyperparameters (T, C) for different environments.

Relation To Broader Scientific Literature: This paper proposes a solution (dynamic abstraction) for the mismatch between language and videos.

Essential References Not Discussed: No

Other Strengths And Weaknesses:

Strengths:

1. The empirical results show that the performance is improved by the method, and there are plenty of ablation studies to show the effectiveness of the design.
2. The dynamic abstraction is a reasonable way to handle the redundant information in videos.

Other Comments Or Suggestions: None

Questions For Authors:

1. The method and training pipeline are very unclear to me; see the method part.
2. I do not totally agree that text2video (SkillDiffuser) performs worse than Dynamic Abstraction (DynaMind) in such simple tasks. I think text2video is expensive in such a setting.

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1:

Rebuttal: We thank Reviewer VsTD for the feedback. To ensure clarity, some responses are stated directly—we appreciate your understanding.

>The fixed window size is less flexible

- Fixed window sizes are standard practice, used in baselines like LISA and SkillDiffuser, and in some video understanding work.
- Performance is stable under moderate window changes (Fig. 7a), suggesting robustness.
- We acknowledged the limitation (lines 886–888) and plan to explore adaptive horizons.

>What is the image encoder—pretrained from SkillDiffuser or from scratch?

The image encoder is a CNN trained from scratch, not from SkillDiffuser or any pretrained source, ensuring that improvements reflect our own contributions.

>Eqs. 4 & 5 compute similarity between the abstraction of a short clip and the instruction of a whole task—seems questionable.

We do not compute similarity between a short clip and language. Instead, as noted in lines 170–173 and 184–188, it is computed between the full dynamic representation of the entire trajectory and the task-level language instruction.

>Inputs to Video Dynamic Reasoning differ between training and inference, which may impact performance.

We adopt a closed-loop process to reduce the train-test gap, a common strategy in decision-making. Both stages use dynamics abstracted from real observations, mitigating error accumulation. Specifically, during training, inputs come from actual frames; during inference, predicted dynamics guide actions, and real observations are appended iteratively.

>The source of the goal token $g_t$

As noted in lines 271–274 (left) and 220–229 (right), during training, $g_t$ is gradually shifted from the ground-truth future frame to predicted dynamics to stabilize learning. At inference, $g_t$ is taken from the predicted dynamics.

>Confused why actions are $a_{t-C+1:t-1}$ but the inputs are $e_{t-C+1:t-1}$—shouldn't actions be autoregressively generated?

The model is auto-regressive, generating actions step by step during inference.
Following common practice in sequence modeling, we apply parallel supervision over the action sequence during training to improve efficiency and stability.

>Pseudocode might make training clear

We’ve prepared detailed pseudocode but couldn’t include it due to space limits. It will be added in the final version.

>Fig. 3 shows a sharp drop from 0.4 (2 tasks) to 0.1 (3 tasks)

The sharp drop in performance with more tasks reflects the challenge of long-horizon tasks, especially under constrained settings (lightweight model, limited data). As shown in Fig. 3, all baselines struggle, while ours performs better.

>Tab. 2 uses paraphrased instructions for generalization. With T5-XXL, this is less of an issue due to embedding similarity.

- We use the same lightweight pretrained language encoder (DistilBERT) as our baselines, yet achieve better instruction generalization (Tab. 2). This indicates that **success relies not just on the language model, but on the ability to connect language with videos**.
- Our method is resource-efficient and complementary to LLMs (11B for T5-XXL). While LLMs reduce linguistic variation, ours bridges the language–vision gap.

>Fig. 6: Why does FrameScorer help more in Kitchen than in BabyAI?

- The difference stems from the environment: FrameScorer helps in Kitchen, with rich visuals and redundancy, while BabyAI’s simplicity reduces the need for abstraction.
- FrameScorer is one part of our method; the overall performance gains in BabyAI reflect the effectiveness of the other modules.

>Fig. 7: Explain why D&LG performs worse than dynamic-guided

D&LG underperforms due to a mismatch: language encodes long-term goals, while dynamics reflect short-term cues. Their fusion introduces redundant or conflicting signals, hindering decisions—supported by the low mutual information in Fig. 8 (bottom).
>Fig. 8: LISA’s mutual information is nearly zero—inconsistent with the original paper

This is due to environment differences: we compute MI in the more complex Franka Kitchen (lines 406–407), while LISA uses BabyAI. Higher diversity in Kitchen leads to lower MI.

>Supplementary Material lacks details

We will include details to ensure reproducibility.

>Not convinced that SkillDiffuser underperforms DynaMind on simple tasks.

- SkillDiffuser results are taken directly from the original paper, without any modification.
- SkillDiffuser underperforms on simple tasks, where skill reuse is limited. It suits long-horizon tasks but adds unnecessary complexity to simpler ones.
- The two methods are complementary, not competing, and each has its strengths. Our goal is fair comparison under a unified protocol, not to claim superiority.

>Text2video is expensive

Unlike T2V methods that require full trajectory generation, our lightweight model predicts compact dynamics without heavy computation. On Lorel (A800 GPU, batch size 64), it achieves better SR with comparable or lower cost.

-|Trainable Params (M)|GPU Memory (MiB)|SR
-|-|-|-
LISA|7.52|690|40%
SkillDiffuser (T2V)|60.29|1136|43%
Ours|7.84|854|53.6%

---

Rebuttal Comment 1.1:

Comment: Thanks for the reply. I still have some concerns. The main concern is how well the consistency and saliency losses can help to learn good abstractions in complex environments and with complex instructions. The good abstractions are learned solely from the implicit distance functions.

1. In Figure 5, the Frame Score is very low from Frame 30 to 75. Does it mean that $h_i$ from Frame 30 to 75 is meaningless if the sliding window is short?
2. Is it possible that some generally meaningful sub-goals (like "open box") will be discarded because they would appear in different demonstrations from different instructions?

---

Reply to Comment 1.1.1:

Comment: We sincerely thank you for your interest in our work and for your detailed comments.
Below, we provide point-by-point responses to each of your questions.

>The main concern is how well consistency and saliency loss can help to learn good abstractions on complex environments and complex instructions. The good abstractions are learned solely on the implicit distance functions.

We clarify that the abstractions learned by our model do not rely exclusively on implicit distance functions, such as the semantic consistency and visual saliency losses. Instead, **these implicit losses serve as auxiliary objectives within a broader end-to-end supervised framework**. Crucially, the model receives explicit supervision signals directly from executed actions, ensuring that the abstractions learned are practically beneficial for task success, particularly in complex environments and under complex instructions. Specifically, the abstracted dynamic sequences are fed into a dynamic reasoning module, which explicitly learns structured temporal dependencies through autoregressive prediction of future dynamics. This prediction step is directly supervised by mean squared error against ground-truth future dynamics. Importantly, these predicted future dynamics then serve as inputs to the action decision module, whose training is directly supervised by the executed actions. Consequently, the learned abstractions are driven not only by consistency or saliency criteria but are also strongly aligned with practical task performance.

>In Figure 5, Frame Score is very low from Frame 30 to 75. Does it mean that $h_i$ from Frame 30 to 75 is meaningless if the sliding window is short?

- We clarify that the low Frame Scores between Frames 30 and 75 in Figure 5 do not indicate that these frames are meaningless, even when using a short sliding window. **The lower scores simply reflect that these frames contribute less to high-level dynamic reasoning**, which focuses on frames that contribute critically to the global structure and progression of the task.
Since Frames 30 to 75 correspond to a transition phase with minimal visual change, it is reasonable that they receive lower saliency scores. - As discussed in Section 3.3, the low-level action decision module uniformly utilizes all historical frames within the sliding window to capture temporal continuity and contextual dependencies. Consequently, **even frames with low scores play a meaningful role in low-level action prediction**—for instance, by supporting smooth transitions and maintaining semantic coherence. Their lower scores indicate reduced relevance for high-level reasoning, but do not imply irrelevance to the overall system. >Is it possible that some general meaningful sub-goals (like "open box") will be discarded because they would appear in different demonstrations from different instructions? We believe the answer is no. DynaMind is designed to retain general sub-goals like “open box” by learning them as transferable abstract units. This is supported by two key aspects: **(1) Mechanisms for abstracting and preserving shared sub-goals.** - Semantic consistency loss, which aligns dynamic representations with instruction semantics. When similar sub-goals appear under varied phrasings, the model learns to represent them consistently as long as their functional role remains similar. - Visual saliency loss, which ensures that visually meaningful transitions—such as opening actions—are preserved in the representation, even if not mentioned in language. **(2) Experimental results confirm that DynaMind reuses generalizable sub-goals across tasks.** - In the LoReL Sawyer setting, the dataset includes six tasks with shared sub-goal structures (e.g.,reach → grasp → manipulate). Trained only on these simple tasks, DynaMind is evaluated on novel task compositions at test time. It significantly outperforms all baselines (Table 6), indicating successful learning and reuse of general sub-goal representations. 
- In addition, in the BabyAI zero-shot transfer setting (Table 4), DynaMind is trained only on simple navigation tasks and tested on unseen, more complex tasks with different instructions. The model generalizes well by reusing dynamic sub-goal structures to handle these tasks without additional training.
null
null
null
null
null
null
Simple Graph Contrastive Learning via Fractional-order Neural Diffusion Networks
Reject
Summary: This paper introduces a novel augmentation-free GCL framework. Unlike traditional GCL methods that rely on complex augmentations or negative sampling, this framework uses Fractional Differential Equations to generate different feature views. Claims And Evidence: The experimental results demonstrate competitive performance, however, some claims need theoretical justification and empirical validation: 1. The authors claim that the main contribution is introducing FDE-based graph contrastive learning. However, the background on FDE is not well-developed. Specifically, why is FDE a better choice than ODE for GCL? Has FDE been applied to GNNs in previous works? 2. While t-SNE and PCA visualizations suggest improved representation learning, they don't prove that FD-GCL mitigates dimensional collapse better than existing methods. It needs a stronger theoretical analysis. Methods And Evaluation Criteria: The proposed method and evaluation criteria are reasonable, but there is a concern about whether the step size of diffusion significantly impacts the final performance. It would be useful to test the effect of different diffusion depths (T). Theoretical Claims: The paper presents theoretical claims regarding the role of FDEs in generating diverse feature views for contrastive learning. The key theoretical argument is that varying fractional order 𝛼 allows the model to control local vs. global feature mixing, thereby improving representation learning. However, the mathematical justification for this claim is intuitive, so the authors should provide formal derivations or proofs. Experimental Designs Or Analyses: The experiments are well-structured, but there is a concern regarding the completeness: In the ablation study, the choice of different 𝛼 values has a significant impact on model performance. However, the paper only reports results for a limited set of 𝛼 values, raising concerns about whether different combinations might yield different outcomes. 
It remains unclear whether the selected 𝛼 values are optimal across all datasets or if dataset-specific tuning is necessary. Without a more comprehensive exploration of 𝛼 variations, the generalizability of the findings is uncertain. A more systematic analysis, testing a broader range of 𝛼 combinations across multiple datasets, would strengthen the experimental validity. Supplementary Material: I've review the code part in the supplementary material. Relation To Broader Scientific Literature: The paper situates itself within the broader literature on GCL, specifically in the context of augmentation-free contrastive learning and graph diffusion models. Its primary contribution—leveraging FDEs to control local vs. global feature mixing—draws connections to existing work on graph diffusion models based on ODEs. However, while ODE-based methods have been well studied, the application of FDEs to GCL is relatively novel. Essential References Not Discussed: The paper does not compare it to some works in negative-free contrastive learning: [1] Xia, Jun, et al. "Simgrace: A simple framework for graph contrastive learning without data augmentation." Proceedings of the ACM web conference 2022. 2022. [2] Thakoor, Shantanu, et al. "Large-Scale Representation Learning on Graphs via Bootstrapping." International Conference on Learning Representations. Other Strengths And Weaknesses: Please see above all parts. Other Comments Or Suggestions: Please refer to above all parts. Questions For Authors: My questions from all the above sections are summarized here: 1. The authors claim that the main contribution is introducing FDE-based graph contrastive learning. However, the background on FDE is not well-developed. Specifically, why is FDE a better choice than ODE for GCL? Has FDE been applied to GNNs in previous works? 2. While t-SNE and PCA visualizations suggest improved representation learning, they don't prove that FD-GCL mitigates dimensional collapse better than existing methods. 
It needs a stronger theoretical analysis. 3. The proposed method and evaluation criteria are reasonable, but there is a concern about whether the step size of diffusion significantly impacts the final performance. It would be useful to test the effect of different diffusion depths (T). 4. The paper presents theoretical claims regarding the role of FDEs in generating diverse feature views for contrastive learning. The key theoretical argument is that varying fractional order 𝛼 allows the model to control local vs. global feature mixing, thereby improving representation learning. However, the mathematical justification for this claim is intuitive, so the authors should provide formal derivations or proofs. 5. In the ablation study, it remains unclear whether the selected 𝛼 values are optimal across all datasets or if dataset-specific tuning is necessary. Without a more comprehensive exploration of 𝛼 variations, the generalizability of the findings is uncertain. A more systematic analysis, testing a broader range of 𝛼 combinations across multiple datasets, would strengthen the experimental validity. 6. The paper does not compare it to some works in negative-free contrastive learning: [1] Xia, Jun, et al. "Simgrace: A simple framework for graph contrastive learning without data augmentation." Proceedings of the ACM web conference 2022. 2022. [2] Thakoor, Shantanu, et al. "Large-Scale Representation Learning on Graphs via Bootstrapping." International Conference on Learning Representations. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for the insightful comments and suggestions. **W1**. Why FDE for GCL. A core principle of GCL is to generate diverse views, with novelty in how they are constructed. FD-GCL uses neural diffusion-based encoders governed by FDEs, where fractional order $\alpha$ controls diffusion scale—enabling views with varying locality/globality. FDEs generalize ODEs ($\alpha=1$) but offer greater flexibility. Fixed $\alpha=1$ yields less diverse views, weakening contrastive effect. As a generalization, FDEs perform no worse than ODEs while enabling richer view generation, making them a better choice. See **Appendix C and D** for details on FDEs and ODEs. Theoretically, **Thm 2** (formal version of Thm 1) proves that embeddings generated with different $\alpha$ are provably distinct, with contrast increasing as differences in $\alpha$ grows. Empirical results (**Table R2 for Reviewer ttop**) further confirm FDE's advantage over ODE-based methods (GRAND, GraphCON) for all datasets. Additionally, this is the **first** work to apply FDE-based diffusion to GCL, offering a novel and flexible mechanism for view generation via tunable diffusion dynamics, despite prior usages of FDEs in GNNs [R1,R2]. [R1] Kang. et al, Unleashing the potential of fractional calculus in graph neural networks with FROND, ICLR, 2024. [R2] Zhao. et al, Distributed-order fractional graph operating network, Neuips, 2024. **W2**. FD-GCL mitigates dimensional collapse Dimension collapse refers to features being confined to a low-dimensional subspace, which may harm CL performance, though low-dimensional features are not inherently poor. Thus, to fairly compare CL models, model performance remains the primary comparison metric for different datasets. Nevertheless, we have provided evidence that FG-GCL can mitigate dimension collapse. Theoretically **(Thm 2 cf. 
Appendix E)**, we have shown that for small $\alpha$, the generated features tend to belong to a space spanned by a relatively large amount vectors in the spectral domain, which is empirically supported by the PCA visualization (a set of collapsed features should resemble a delta function). Refer to [Fig.(link)](https://limewire.com/d/nubcZ#8VF9FURNHF) for the PCA visualization of PolyGCL (i.e., low-pass and high-pass spectral views), indicates that the number of significant PCA components for FD-GCL (Fig. 2 in our paper) is comparable to that of PolyGCL. **W3**. The effect of different diffusion depths ($T$) As rigorously discussed in **Appendix E**, increasing diffusion depth $T$ generally enhances view diversity. We empirically evaluate the effect of varying $T$ on both homophilic and heterophilic datasets in **Table R5**, showing that larger $T$ values consistently lead to improved performance. The values $T$ for each dataset are reported in **Table 8 in Appendix F.3**. **Table R5. Classification accuracy w.r.t $T$** |T|5|10|15|20 |-|-|-|-|- |Cora|81.50|82.97|83.68|84.68 |Ogbn-arxiv|63.77|66.46|66.48|66.18 |Squirrel|40.67|40.85|57.77|51.08 |Chameleon|60.65|60.74|70.87|73.18 |Cornell|60.97|61.08|61.81|68.38 |Wisconsin|71.57|71.96|73.92|79.02 **W4**. Mathematical analysis of FDEs This claim is theoretically supported by **Thm 2 (cf. Appendix E)**, which proves that embeddings generated by FDEs with different fractional orders are provably distinct, with the contrast between them increasing as the difference in $\alpha$ grows. The rigorous mathematical analysis can be found in Appendix E. The relation between Theorem 1 and the claim in the comment has been explained in Sec.4.2 (2nd paragraph of ''distinct views''). Intuitively, for large $\alpha$, we have shown that the spectral are more concentrated on low-frequency components. 
It is well-known in GSP that generally, a low-frequency signal lacks variation across the entire graph and thus it represents a ''global view''. In addition to the theoretical justification, this claim is also empirically validated. As shown in **Fig. 1 and Appendix G.1**, node features generated by two FDE encoders with different fractional orders exhibit clearly distinct characteristics. Specifically, a smaller fractional order (e.g., $\alpha=0.01$) leads to embeddings with a concentrated core, whereas a larger order (e.g., $\alpha=1$) yields features that are more evenly distributed across the space. **W5**. Tuning of $\alpha_{1}$ and $\alpha_{2}$ See **Reviewer ttop's W4**. **W6**. Lack comparisons with two CL works (e.g., Simgrace and BGRL) Please note that the requested comparisons have already been included: SimGRACE results are in **Table 10**, and BGRL comparisons are in **Table 1 and 2**. Although SimGRACE is specifically designed for graph classification, FD-GCL achieves comparable results on this task. Moreover, FD-GCL surpasses BGRL on node classification across both homophilic and heterophilic datasets, with notable gains on heterophilic datasets (e.g., 28% on Squirrel/Wisconsin, 26% on Texas/Cornell, 5% on Actor/Roman/Arxiv-year).
Summary: This paper proposed a simple and effective augmentation-free graph contrastive learning framework, which uses Fractional Differential Equations induced graph neural diffusion models . By varying the order parameter, this method generates diverse views that capture both local and global graph information, eliminating the need for both complex augmentations and negative samples. It achieves state-of-the-art performance across diverse datasets. Claims And Evidence: The claims are well-motivated and largely supported by theoretical and empirical evidence. Methods And Evaluation Criteria: The success of an augmentation-free approach hinges on two factors: (a) the ability of the encoders to generate high-quality feature embeddings, and (b) the capability of contrasting encoders to produce distinct views of the same input. To address these requirements, this paper propose a novel GCL framework that utilizes neural diffusion-based encoders to generate contrasting views of node features. The proposed methods and evaluation criteria are appropriate and well-aligned with the paper’s goals. Theoretical Claims: I checked all theoretical claims, including proofs in the main paper. Experimental Designs Or Analyses: The experimental designs are reasonable and complete. Supplementary Material: I reviewed all the supplementary material. Relation To Broader Scientific Literature: The work situates itself at the intersection of graph contrastive learning and graph representation learning. The addressed challenge is important for graph contrastive learning, which makes sense. Essential References Not Discussed: Most related and foundational works are well-cited and discussed, but some latest work on unsupervised graph contrastive learning should also be considered. [1] LOHA: Direct Graph Spectral Contrastive Learning Between Low-pass and High-pass Views. 
Other Strengths And Weaknesses: Strengths: S1: The paper is well-written and easy to understand.The proposed framework innovatively integrates Fractional Differential Equations induced graph neural diffusion models with different order parameter as encoders to obtain contrastive views, offering a meaningful solution with practical impact. S2: The evaluation of the proposed method is comprehensive, experiments include performance comparisons and more visualization results. Especially in the process of method design, the rationality of the method is fully verified through rich visualization results. S3: Releasing well-organized source code, allowing reproduction of reported results on the datasets provided. Weaknesses: W1: The contribution of this study is limited and insufficient to meet the standards of ICML. This is mainly because many key technologies are based on existing works. For example, the fractional-order differential operator $D_{\alpha}^t$ , a cricial technology, is designed by Kang et al. (2024) . W2: As shown in Theorem 1, different order parameters will result in the augmented views being as distinct as possible. But why is this beneficial for contrastive learning, lacking sufficient theoretical analysis. And, as shown in Figure 3, why can the discrimination of the encoder be enhanced. W3: For S1 in Section 4.3, the operator $\mathbf Y_l = \mathbf X \mathbf W_l$ typically increases the feature dimension. What is the purpose of increasing feature dimensions? W4: Is there a clear pattern in parameters selection of $\alpha_1$ and $\alpha_2$ for homogeneous and heterogeneous graphs. Since parameters $\alpha_1$ and $\alpha_2$ are very important, providing theoretical selection guidance would improve the quality of this paper. Other Comments Or Suggestions: No, see weaknesses. Questions For Authors: Please see the Weakness part above. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for the insightful comments and suggestions. **W1**. The novelty of FD-GCL A general guiding principle for GCL is to generate views from diverse perspectives, with *novelty lies in how these views are generated*. For example, PolyGCL uses polynomial filters for low-pass and high-pass spectral views, while bandpass graph filters are well studied in GSP and GNN. Analogously, the core novelty of FD-GCL is not merely replacing components in existing augmentation-free or negative-free pipelines, but introducing a new perspective for encoder design: generating distinct views via diffusion dynamics, which aligns with GCL's core principle. FD-GCL uses neural diffusion-based encoders governed by FDEs, where the fractional order $\alpha$ controls the locality/globality of features. By using different $\alpha$ values, FD-GCL produces views with varying diffusion scales. To our knowledge, this is **the first work** to apply diffusion dynamics via FDEs for contrastive learning, offering a novel and flexible *mechanism for view generation through tunable diffusion rates*. Another technical novelty is the rigorous analysis in **Thm 2** (the formal version of Thm 1), which proves that embeddings generated by FDEs with different fractional orders are provably distinct, and that the contrast between them increases as the difference in $\alpha$ values becomes larger. This is the **first** formal mathematical analysis of this property in GCL, despite the established use of FDEs in GNNs [R1,R2]. This theoretical insight is further supported by numerical evidence in **Figure 1 and Appendix G.1**, where we examine the FDEs under two widely separated fractional orders ($\alpha_{1}=0.01$ and $\alpha_{2}=1$). The results clearly show distinct feature distributions: a small $\alpha$ leads to embeddings with a highly concentrated core, while a large $\alpha$ produces more evenly spread features. 
Moreover, unlike BGRL, CCA-SSG, GraphACL and PolyGCL, which rely on either data augmentation or negative sampling, **FD-GCL requires neither**, simplifying usage. AFGRL shares this simplicity but is limited to homophilic datasets. In contrast, FD-GCL is straightforward and effective on both homophilic and heterophilic settings (Table R3). **Table R3. Comparison with state-of-the-art GCL methods** |Method|Argumentation-free| Without negative sampling|Homophilic|Heterophilic| |-|-|-|-|-| |BGRL| ✘|✔|✔|✔| |CCA-SSG|✘|✔|✔|✔| |GraphACL|✔|✘|✔|✔| |PolyGCL|✔|✘|✔|✔| |AFGRL|✔|✔|✔|✘| |FD-GCL|✔|✔|✔|✔| [R1] Kang. et al, Unleashing the potential of fractional calculus in graph neural networks with FROND, ICLR 2024. [R2] Zhao. et al, Distributed-order fractional graph operating network, NeurIPS 2024. **W2.** Lack of theoretical analysis & Fig. 3 explanation The core principle of GCL is to design encoders that generate distinct yet meaningful views. FD-GCL introduces a novel view-generation mechanism via neural diffusion governed by FDEs, where fractional order $\alpha$ controls feature locality versus globality across continuous scales. This mechanism is theoretically grounded, shown in **Theorem 2 (cf. Appendix E)**. We refer to response to **W1** for more details. Fig. 3 shows the unsupervised clustering capability of FDE-based encoders. For each class $c$, we measure clustering quality using the discrimination ratio $r_c = d_c^{\mathrm{inter}}(\text{intra-class distance})/d_c^{\mathrm{intra}}(\text{inter-class distance})$. The higher the ratio, the better the clustering quality. Each curve (one per class) shows that $r_c$ increases and then stabilizes during training. This trend indicates that the encoder gradually enhances inter-class separation while maintaining intra-class cohesion, which is crucial for classification. W3. Purpose of increasing feature dimensions By [R3], increasing feature dimension enhances GCL expressiveness by capturing more complex patterns. 
We use the operator $Y_l = XW_l$ to project features to a higher dimension, and larger dimensions generally improve performance on all graph types (see Table R4). [R3] Xiao et al, Simple and asymmetric graph contrastive learning without augmentations, Neuips, 2023. **Table R4. Classification accuracy w.r.t feature dimension d** |d|128|256|512|1024|2048 |-|-|-|-|-|- |Cora|83.27|84.42|83.28|83.13|83.44 |Citeseer|63.10|64.18|67.78|71.29|73.70 |Pubmed|77.03|79.93|79.95|80.19|80.57 |Computer|82.22|85.48|87.81|88.80|90.13 |Photo|88.66|91.63|92.88|93.41|93.94 |Squirrel|36.82|44.70|53.29|61.85|64.43 |Chameleon|59.69|68.09|71.62|72.28|73.53 |Cornell|58.10|68.64|64.05|68.65|67.58 |Wisconsin|72.74|76.27|71.76|77.64|77.05 |Roman|62.73|65.49|68.78|70.56|71.48 **W4**. Tuning of $\alpha_{1}$ and $\alpha_{2}$ Refer to **Reviewer ttop's W4**. **W5**. Lack of latest work (e.g., LOHA) Please note that LOHA (AAAI 2025, Jan. 2025) is concurrent work per ICML 2025 policy; thus, direct comparison is not required. Also, the code for LOHA is not public, hindering timely reproduction. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for the rebuttal. Indeed, the authors have addressed my concerns in some details, but some theoretical analyses are not convincing enough to validate the effectiveness of the proposed method in contrastive augmentation. --- Reply to Comment 1.1.1: Comment: Thank you for the remark. We will be grateful if you could be more specific regarding which particular parts or assumptions of the analysis you believe require further clarification or additional supporting evidence. We will be glad to provide further details if needed and your feedback will be of great help for us to improve our work.
Summary: This paper proposes Fractional-order Neural Diffusion Networks (FNDN) as a new encoding method for Simple Graph Contrastive Learning (GCL). Unlike augmentation-based GCL approaches that rely on complex data transformations or augmentation-free methods that still require careful encoder design, this work introduces fractional-order differential equations (FDEs) to generate diverse feature views dynamically. The key insight is that the fractional derivative order α controls the extent of local vs. global information captured in node embeddings, allowing different contrastive views without requiring negative samples. Claims And Evidence: Problematic Claims: * The paper claims to propose "a novel way of generating contrastive views in GCL", but augmentation-free and negative-free GCL strategies have already been extensively studied (e.g., BGRL, CCA-SSG, AFGRL, GraphACL). The use of FNDN only replaces an existing technique rather than introducing a fundamentally new learning paradigm. * The claim that FD-GCL is efficient lacks strong empirical backing—while complexity analysis is included, no runtime comparisons on large-scale datasets (e.g., OGB) are provided. * The claim that FD-GCL naturally avoids feature collapse is not thoroughly analyzed—existing augmentation-free GCL methods often mitigate this issue through architectural modifications or regularization rather than diffusion-based methods. Methods And Evaluation Criteria: * The method requires manual tuning of $\alpha_{1}$ and $\alpha_{2}$, and there is no adaptive strategy for selecting optimal values across different datasets. * The scalability of FD-GCL is not well-studied—while theoretical complexity analysis is provided, empirical results on large-scale datasets are missing. Theoretical Claims: Theorem 1 provides a solid spectral analysis of how different fractional orders affect node embeddings, and it aligns with established graph signal processing principles. 
The derivation of the diffusion process is technically sound, but it does not introduce a fundamentally new theoretical framework—fractional diffusion has been previously studied in graph signal processing and neural PDE-based models. No major mathematical flaws were found, but the impact of this analysis on contrastive learning remains unclear beyond providing an alternative view-generation method. Experimental Designs Or Analyses: The experiments are well-designed, though additional scalability validation would be helpful. Supplementary Material: The paper provides additional proofs, definitions of fractional operators, and hyperparameter settings in Appendices C, D, E, and F. I reviewed the mathematical derivations, which are technically sound, but they do not significantly advance the theoretical foundations of contrastive learning. Relation To Broader Scientific Literature: * Graph Contrastive Learning (GCL): The method aligns with augmentation-free GCL techniques (e.g., BGRL, CCA-SSG, GraphACL), but instead of using architectural tricks or spectral filtering, it leverages fractional-order diffusion. * Graph Neural Diffusion Models: Related to ODE-based graph diffusion methods (GRAND, GraphCON) but extends them to fractional derivatives. * Graph Signal Processing (GSP): The use of fractional diffusion has connections to spectral graph theory, though it has been studied before in graph signal reconstruction and filtering. While the paper connects well to existing literature, its contribution is incremental rather than groundbreaking. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: * Avoids Explicit Data Augmentation and Negative Samples: While augmentation-free GCL is not novel, this method provides an alternative mechanism for generating diverse feature views, avoiding traditional graph perturbation methods. 
* Applicability to Both Homophilic and Heterophilic Graphs: The method performs consistently across different graph structures, showing robustness to topology variations. Weaknesses: * Incremental Contribution Rather Than Fundamental Innovation: The main idea is not fundamentally novel, as augmentation-free GCL methods already exist. The paper replaces traditional view-generation mechanisms but does not introduce a new learning paradigm. * Scalability and Efficiency Are Not Demonstrated: The method relies on fractional-order diffusion, which may introduce additional computational overhead. However, the paper does not include a runtime analysis or test on large-scale datasets (e.g., OGB). This raises concerns about its practical applicability. * Manual Hyperparameter Selection: The selection of $\alpha_{1}$ and $\alpha_{2}$ is manual, making the method less adaptive to different datasets. There is no guidance or automated strategy for tuning these parameters. * No Discussion on Alternative Diffusion Models: The paper focuses solely on fractional diffusion but does not compare it to other graph diffusion techniques (e.g., heat diffusion, GRAND, GraphCON). A comparison would strengthen the justification for choosing fractional-order diffusion over other approaches. Other Comments Or Suggestions: * Clarify novelty compared to augmentation-free baselines: The introduction should explicitly differentiate FD-GCL from existing methods like BGRL, CCA-SSG, GraphACL and explain why fractional diffusion is a meaningful improvement rather than just an alternative. * Provide runtime and memory efficiency analysis: Adding computation time comparisons against baseline methods would address concerns about efficiency. * Justify the choice of fractional diffusion over other diffusion strategies: A discussion of how fractional diffusion compares to existing graph diffusion models would strengthen the argument. 
Questions For Authors: Q1: How does fractional-order diffusion compare to other graph diffusion methods (e.g., heat diffusion, Personalized PageRank, GRAND, GraphCON)? Q2: How does FD-GCL prevent feature collapse compared to existing augmentation-free GCL methods? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for the insightful comments and suggestions. **W1**. The novelty of FD-GCL FD-GCL's novelty is not merely replacing components of existing augmentation-free or negative-free pipelines, but introducing a new perspective for encoder design: generating distinct views via diffusion dynamics, aligning with GCL's core principle. **See W1 of Reviewer ZHMy on both architectural and theoretical novelty**. **W2**. Scalability & efficiency of FD-GCL Scalability and efficiency are illustrated by training and running time comparisons on the large-scale Ogbn-arxiv dataset, shown in **Table 4** in the paper and Table R1, respectively. The results confirm that FD-GCL maintains competitive training and running times, highlighting its practical efficiency. Additional evaluations on other large-scale Roman-empire and Arxiv-year datasets (**Table 1 and 2**) further support its scalability, with dataset sizes comparable to benchmarks like PolyGCL. **Table R1. Testing time (sec)**. OOM refers to out of memory on an NVIDIA RTX A5000 GPU (24GB) during training. |Method|Cora|Wisconsin|Ogbn-arxiv| |-|-|-|-| |GraphACL|13.29|38.69|14.59| |PolyGCL|7.46|9.73|OOM| |FD-GCL|2.96|2.93|14.88| **W3**. How FD-GCL mitigates dimensional and feature collapse Dimension collapse refers to embeddings being confined to a low-dimensional subspace. In **Theorem 2 (cf. Appendix E)** , we theoretically prove that using a small fractional order $\alpha_{1}$ mitigates this issue by reducing energy concentration in the spectral domain. Specifically, it shows that $\mathbf{Z}_{\alpha_1}(t) $ is less energy concentrated in the spectral domain, i.e., it has a decomposition $\sum_{1\leq i\leq N} c_i\mathbf{u}_i$ with many large $|c_i|$, meaning the embeddings span a higher-dimensional subspace. This is further supported by PCA results (Fig. 2), where smaller $\alpha$ values preserve significantly more principal components. 
On the other hand, view collapse means generated views converge to similar representations. We interpret the reviewer is referring to view collapse. While existing augmentation-free GCL methods typically rely on architectural changes or explicit regularization to address feature collapse, FD-GCL adopts a fundamentally different approach by leveraging fractional diffusion dynamics. We mitigate this collapse with a regularized cosmean loss (i.e., $\mathcal{L}(\mathbf{Z}_1,\mathbf{Z}_2)= \mathcal{L}_0(\mathbf{Z}_1,\mathbf{Z}_2)+\eta |\langle \mathbf{c}_1, \mathbf{c}_2 \rangle|$) that includes a penalty term on the angle between the dominant directions $\mathbf{c}_1$ and $\mathbf{c}_2$ of embeddings $\mathbf{Z}_1$ and $\mathbf{Z}_2$. This encourages diversity between the views without relying on negative samples. This approach is possible as we have observed that features from each view of FD-GCL tend to have a pronounced dominant component. As demonstrated in **Fig. 5 and Fig. 6 in Appendix F.4**, this regularization ensures stable performance across training epochs. Combined with our theoretical findings, these results provide strong evidence that FD-GCL effectively avoids feature collapse through its diffusion-based framework. **W4**. Tuning of $\alpha_1$ and $\alpha_2$ We refer the reviewers to **Appendix F.3**. Both $\alpha_{1}$ and $\alpha_{2}$ are tunable within the range $(0,1]$. Motivated by the theoretical insights in Thm 1 (or Thm 2), which suggest that a larger difference between $\alpha_2$ and $\alpha_1$ enhances the contrast between views, we adopt a simple yet effective strategy: we fix $\alpha_{2}=1$ to maintain a consistent global view and tune $\alpha_{1}$ over the range $(0,1]$ using a grid search. This approach is guided by our analysis and provides a practical, computationally efficient solution for tuning $\alpha_l$. The corresponding values $\alpha_{1}$ and $\alpha_{2}$ for each dataset are reported in **Table 8 in Appendix F.3**. 
We recognize that more adaptive or data-driven methods for selecting fractional orders are promising. To this end, in our future GCL work, we may adopt variable-order fractional derivatives where the derivative orders depend on hidden features. **W5**. Compare with alternative diffusion models. Note that the fractional diffusion model in FD-GCL generalizes other graph diffusion techniques (e.g., GRAND (cf. (3) in Appendix C) and GraphCON (cf. (4) in Appendix C)), which are recovered by setting $\alpha_{1}=\alpha_{2}=1$. Table R2 shows this fixed setting yields poorer results on both homophilic and heterophilic datasets, since it produces less distinct views, weakening the contrastive effect. This highlights the advantage of fractional-order diffusion.

**Table R2. Classification accuracy on different graph diffusion models**.

|Method|GRAND|GraphCON|FD-GCL|
|-|-|-|-|
|Cora|78.09±0.19|76.50±0.10|84.27±0.27|
|Ogbn-arxiv|66.37±0.13|OOM|70.46±0.13|
|Crocodile|63.57±1.01|67.79±0.60|68.99±0.66|
|Wisconsin|61.57±6.21|62.35±5.74|79.22±5.13|
|Arxiv-year|47.08±0.15|43.93±0.13|47.22±0.13|

--- Rebuttal Comment 1.1: Comment: Thank you to the authors for the rebuttal. I will keep my original score. --- Reply to Comment 1.1.1: Comment: Thank you for reading our rebuttal. Could you please let us know whether it has resolved your concerns regarding the paper? If you have any additional questions, we would be happy to provide further clarification if needed.
Enhancing Parallelism in Decentralized Stochastic Convex Optimization
Accept (poster)
Summary: This paper presents Decentralized Anytime SGD, a decentralized optimization algorithm that is based on Anytime SGD. The authors present the convergence analysis of Decentralized Anytime SGD. Decentralized Anytime SGD achieves linear speedup and has a better sample complexity than that of D-SGD in the convex setting. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: The reviewer has checked the theoretical claims and proofs in this paper and has not found any fatal issues. Experimental Designs Or Analyses: There are no experiments in this paper. However, as a decentralized optimization algorithm, it is essential to present numerical experiments comparing the proposed algorithm with other classical decentralized algorithms (including D-SGD and more), which would validate the practical performance of the proposed algorithm. Supplementary Material: NA Relation To Broader Scientific Literature: NA Essential References Not Discussed: NA Other Strengths And Weaknesses: **Strengths** 1. Presents a proof sketch of the convergence rate. 2. Achieves linear speedup. **Weakness** 1. Lack of analysis of the transient complexity. Other Comments Or Suggestions: Linear speedup is a significant characteristic of decentralized algorithms. Although the proposed algorithm achieves linear speedup, the authors fail to highlight it and instead state the convergence result in terms of the sample complexity $N$. The reviewer suggests that the authors add more discussion on linear speedup and highlight it in the theoretical result. Questions For Authors: 1. Can the authors present additional experiments comparing the proposed algorithm to other decentralized algorithms? 2. Can the authors provide any discussion on linear speedup and transient complexity? 3. Can the symmetric assumption on the gossip matrix $P$ be relaxed? Why or why not?
The reviewer would like to update the final rating according to the responses to these weaknesses and questions, as well as the experimental performance. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their time and feedback. We address the reviewer’s questions individually: **Experiments**: We have included experiments evaluating our method on both a synthetic, convex least squares problem and non-convex neural network training. We refer the reviewer to our response to **Reviewer 3nY6** for a discussion of the results. **Linear speedup and transient time**: We will add a discussion about the transient complexity in the revised version. It can be inferred from Theorem 4.1 that the transient time for our method is $T\geq\mathcal{O}(M/\rho^2)$; this improves upon D-SGD by a factor of $M^2$. For example, this implies a transient complexity of $\mathcal{O}(M^5)$ for a ring and $\tilde{\mathcal{O}}(M)$ for a static exponential graph. Should the reviewer have any further comparisons in mind, we would be happy to incorporate them into our text. **Symmetric gossip matrix assumption**: Thank you for raising this insightful point. Our analysis relies on the contraction property stated in Property 2.6 (Eq. (2)), which holds when the communication matrix is symmetric and doubly stochastic. This assumption is standard in the literature and has been used in many prior works, including [1,2,3,4]. We acknowledge that there is a growing body of work analyzing more general communication matrices—such as asymmetric and/or only row-/column-stochastic matrices—e.g., [5,6,7]. Our goal in this work was to provide a clean and interpretable analysis under a widely adopted and well-studied assumption. We believe our results open the door to extending the analysis to more general settings with non-symmetric or non-doubly-stochastic matrices. We will include a discussion of this direction in the 'Conclusion and Future Work' section. [1] Lian et al., “Can decentralized algorithms outperform centralized algorithms? 
a case study for decentralized parallel stochastic gradient descent”, '17 [2] Tang et al., “Communication compression for decentralized training”, '18 [3] Koloskova et al., “A unified theory of decentralized sgd with changing topology and local updates”, '20 [4] Koloskova et al., “An improved analysis of gradient tracking for decentralized machine learning”, '21 [5] Assran et al., “Stochastic gradient push for distributed deep learning”, '19 [6] Pu & Nedic. “Distributed stochastic gradient tracking methods”, '21 [7] Lu & De Sa. “Optimal Complexity in Decentralized Training”, '21 --- Rebuttal Comment 1.1: Comment: The reviewer has no additional problems and decides to update the rating. --- Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer for positively updating their evaluation and for their constructive feedback, which helped us further strengthen our paper.
Summary: The paper proposes Decentralized Anytime SGD — a novel algorithm for decentralized optimization. The algorithm is based on the Anytime SGD algorithm proposed by Cutkosky (2019). The paper provides the convergence rate of the method for convex functions, showing improvement over D-SGD in the middle convergence term, as well as showing convergence for the last iterate averaged across the nodes, instead of the previously used average of losses over all the iterates. ## update after rebuttal I would like to thank the authors for their response and for adding the experiments. While some of my concerns have been resolved, others remain, and therefore I keep my score. - I disagree with the authors that their method always improves over the baselines theoretically. For example, when the data heterogeneity term is large, Gradient Tracking is expected to have better convergence (see Table 2 in this submission). Even so, I believe that the paper provides interesting improvements over the existing decentralized methods in the homogeneous case; however, I would like to see a more rigorous discussion of this. - Given that, I am a bit surprised that the experiments show the opposite of the theory: D-SGD and D^2 do better in the homogeneous case, while DAT-SGD does better in the heterogeneous case. - I believe that the Gradient Tracking method is not orthogonal, but a direct baseline method, and therefore it should be included in the experimental comparison. For example, in the theoretical comparison in Table 2 of this submission, the proposed algorithm DAT-SGD was compared with Gradient Tracking but not with D^2. Thus, I do not understand the choice of D^2 instead of Gradient Tracking as the baseline in the experiments. - For tuning hyperparameters in the experimental comparison, please ensure that the optimal learning rate is not at the end of the grid by extending the grid when necessary. E.g.
in neural network experiments the learning rate was chosen from only 3 values, which makes it very likely that for some experiments the selected learning rate was at the end of the tuned grid. Claims And Evidence: The algorithm is interesting, and it provides an interesting and non-trivial improvement in some cases over prior decentralized SGD algorithms for convex smooth functions. Methods And Evaluation Criteria: For the theoretical part, yes; however, there is no empirical evaluation of the proposed algorithm. Theoretical Claims: I am not sure of the correctness of the proofs: when I started to check them, already the statement of Lemma A.1 seems to have a typo: it should be $\alpha_{\tau-1}$ instead of $\alpha_{\tau}$. Moreover, while checking the proof of Theorem 1 from (Dahan & Levy), I noticed that it uses the iterate $x_{\tau-1}$ for $\tau=0$; however, $x_{-1}$ was never defined. Please clarify these points, as right now the proof seems to be incorrect. Experimental Designs Or Analyses: How does the proposed algorithm compare to D-SGD and GT empirically? Can we see in practice the benefit of the improved convergence rate? Supplementary Material: I started to review the proofs. Relation To Broader Scientific Literature: Has there been any other work on decentralized optimization methods showing last-iterate convergence? Have there been any lower bounds for convex decentralized optimization? How does the provided convergence rate compare to those lower bounds? Essential References Not Discussed: - Other Strengths And Weaknesses: The method is limited to convex functions only. Other Comments Or Suggestions: - Questions For Authors: Could you characterize under which conditions the proposed method improves over prior works? I.e., what are the conditions on $\rho$ and $\zeta$ under which the proposed algorithm improves convergence? Also, could you highlight the case of a homogeneous function with $\zeta=0$ and give a condition on $\rho$?
Can the proposed method be generalized for non-convex smooth functions? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for their time and valuable input. We address the reviewer’s concerns and questions separately: **Correctness of the proof and Lemma A.1**: We divide our answer into 2 parts: - First, the reviewer’s concern about the appearance of the term $x_{-1}$ in the analysis of [1] refers specifically to the equality $\alpha_{\tau}(x_{\tau}-w_{\tau}) = \alpha_{0:\tau-1}(x_{\tau-1} - x_{\tau})$, which is applied for $\tau=0,...,T$ (in Appendix D therein). Although the authors of [1] did not explicitly state this, by definition we have $\alpha_{0:-1}=0$ (also see the first line in the proof of Theorem 1 in [2]; they start at $t=1$ like we do, thus defining $\alpha_{1:0}=0$). Therefore, at $\tau=0$, both sides of this equality trivially evaluate to zero, since $w_{0}=x_{0}$ and $\alpha_{0:-1}=0$. Consequently, despite the formal appearance of the term $x_{-1}$, it can be defined arbitrarily since it is multiplied by zero; thus, the equality (and therefore Theorem 1 in [1]) remains valid. - Second, regarding Lemma A.1, we thank the reviewer for helping us spot a typo; however, we clarify that the typo does not occur within Lemma A.1 itself and does not impact our results in any way. Nevertheless, the typo requires slight adjustments to the text, as we elaborate below. The error appears in Eq. (6) (and similarly in Eq. (16)), where the coefficients should be corrected to: $x_{t+1}=\frac{\alpha_{1:t}}{\alpha_{1:{t+1}}}x_{t}+\frac{\alpha_{{t+1}}}{\alpha_{1:{t+1}}}w_{t+1}$. - With this correction, our Lemma A.1 aligns precisely with Theorem 1 in [1], except for the indexing of iterations—[1] starts indexing from $t=0$, while we start from $t=1$. Therefore, all summations in our analysis (including terms like $\alpha_{1:t}$) begin from $\tau=1$ instead of $\tau=0$. After this correction, Lemma A.1 is accurate and correctly stated in its current form. 
- The typo correction also necessitates a minor update in the definition of $\delta_t$ at Line 594: it should now be $\delta_{t}=\alpha_{t+1}/\alpha_{1:t+1}$ instead of $\alpha_{t}/\alpha_{1:t}$. Importantly, this adjustment does not affect Lemma C.3, since for $\alpha_{t}=t$, we now have $\delta_{t} = 2/(t+2)$ and consequently $\alpha_{t}\delta_{t}=2t/(t+2)$, which still satisfies the condition $\alpha_{t}\delta_{t}\leq 2$ (Lines 903-904). - The only additional proof requiring modification is Lemma B.1, which remains correct once these coefficients are appropriately updated. We hope this clarification resolves any potential confusion. **Experiments**: We have added experimental results and refer the reviewer to our response to **Reviewer 3nY6** for further discussion. In the experiments, we compare our method with both D-SGD and $D^2$ (for the non-convex image classification task; as suggested by **Reviewer 3nY6**), with the latter being “more tolerant to data heterogeneity”. We note that Gradient Tracking (GT) is orthogonal to our method; the tracking mechanism can also be applied to our approach by tracking the gradients at the query points $x_t^i​$. Investigating the effect of GT, both theoretically and practically, is a valuable future direction. **Last-iterate convergence**: To the best of our knowledge, there is no other work in the decentralized setup showing last-iterate convergence. **Lower bounds for decentralized SCO**: The first statistical term, of order $\sigma/\sqrt{MT}$, matches the centralized rate and is unimprovable; see, e.g., [3,4]. While we are not aware of any lower bound for the second term (of order $1/\rho T$ in our rate), which is related to the network topology, our derived parallelism bound of $M\leq\mathcal{O}(\rho\sqrt{N})$ is unimprovable in terms of $N$ (i.e., $\sqrt{N}$), as it matches the centralized case. It remains an interesting open question whether the dependence on $\rho$ can be further improved. 
**Convex analysis**: The reviewer is correct – our analysis is valid for convex functions. We have included experiments on neural network training to demonstrate our method's potential in non-convex optimization scenarios. Establishing convergence bounds for non-convex functions is a non-trivial task we leave for future work. **Improvement w.r.t prior work**: Our proposed method improves over prior work for any value of $\rho$. Note that the second term in our rate, of order $1/\rho T$, also appears in the convergence bounds of D-SGD and GT. However, our analysis removes the term that scales with $1/\rho^{1/3}T^{2/3}$, which limits the achievable parallelism bound. [1] Dahan & Levy, “SLowcal-SGD: Slow Query Points Improve Local-SGD for Stochastic Convex Optimization”, '24 [2] Cutkosky, “Anytime Online-to-Batch, Optimism and Acceleration”, '19 [3] Woodworth et al., “Graph oracle models, lower bounds, and gaps for parallel stochastic optimization”, '18 [4] Woodworth et al., “The Min-Max Complexity of Distributed Stochastic Convex Optimization with Intermittent Communication”, '21
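The corrected Anytime-SGD coefficients discussed above are easy to sanity-check numerically. The following pure-Python check (an illustration, not part of the authors' codebase) verifies, for the weight choice $\alpha_t = t$, the claimed closed forms: $\delta_t = \alpha_{t+1}/\alpha_{1:t+1} = 2/(t+2)$, the condition $\alpha_t\delta_t \leq 2$, and that the corrected update coefficients in Eq. (6) form a convex combination.

```python
# Weight choice alpha_t = t, with cumulative weights alpha_{1:t} = sum_{tau=1}^{t} alpha_tau.
def alpha(t):
    return t

def alpha_cum(t):
    return sum(alpha(tau) for tau in range(1, t + 1))   # equals t(t+1)/2 here

for t in range(1, 500):
    # Corrected definition: delta_t = alpha_{t+1} / alpha_{1:t+1}
    delta_t = alpha(t + 1) / alpha_cum(t + 1)
    assert abs(delta_t - 2 / (t + 2)) < 1e-12            # closed form 2/(t+2)
    assert alpha(t) * delta_t <= 2 + 1e-12               # alpha_t * delta_t = 2t/(t+2) <= 2
    # Corrected Eq. (6): x_{t+1} = (alpha_{1:t}/alpha_{1:t+1}) x_t + (alpha_{t+1}/alpha_{1:t+1}) w_{t+1}
    # is a convex combination, so its coefficients sum to 1.
    coef_sum = alpha_cum(t) / alpha_cum(t + 1) + alpha(t + 1) / alpha_cum(t + 1)
    assert abs(coef_sum - 1) < 1e-12
```

All assertions pass, consistent with the clarification that the typo is confined to Eq. (6)/(16) and the definition of $\delta_t$, and does not affect the stated results.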
Summary: The paper introduces Decentralized Anytime SGD (DAT-SGD) to enhance parallelism in decentralized stochastic convex optimization (SCO). Main Findings DAT-SGD extends the parallelism threshold to O(ρ√N), matching centralized learning, while prior decentralized methods were limited to O(ρ¹/²N¹/⁴). Main Results The algorithm achieves an improved error bound of O(σ/√MT + (√σ+√ζ)/ρT + 1/T), enabling efficient large-scale decentralized training. Main Algorithmic Idea DAT-SGD builds on Anytime SGD, using averaged iterates and gossip averaging to reduce consensus bias, improving convergence and scalability in decentralized networks. Claims And Evidence: The paper’s claims are well-supported by rigorous theoretical analysis and comparisons with prior work. It effectively demonstrates that DAT-SGD improves parallelism to O(ρ√N), surpassing previous decentralized methods. The claim that DAT-SGD mitigates consensus distance is backed by mathematical proofs, showing reduced model divergence through averaged iterates. Additionally, the improved error bound provides strong evidence for faster and more stable convergence. However, the paper lacks empirical validation. Experimental results comparing DAT-SGD with existing decentralized methods would further strengthen the claims and demonstrate its real-world applicability. Methods And Evaluation Criteria: The proposed DAT-SGD method is well-grounded in theoretical analysis, focusing on improving parallelism in decentralized stochastic convex optimization. The convergence bounds and parallelism limits provide strong analytical validation. However, the paper lacks empirical evaluation, which is crucial for assessing real-world performance. Including experiments on benchmark datasets and various network topologies would strengthen the evaluation. 
While the theoretical framework is solid, practical testing would provide a more comprehensive understanding of scalability, robustness, and efficiency in real decentralized learning environments Theoretical Claims: The proofs in Sections 4.3 and 4.4 were reviewed, particularly the proof sketches outlining the convergence analysis and bias reduction in DAT-SGD. The arguments appear logically sound, with a clear derivation of key results such as the improved parallelism bound O(ρ√N). The analysis effectively builds on Anytime SGD and consensus distance reduction. No major issues were found Experimental Designs Or Analyses: The paper does not include an experimental section, making it difficult to validate the practical effectiveness of DAT-SGD. Supplementary Material: The supplementary material primarily consists of proofs for various lemmas supporting the theoretical claims in the main paper. A general review of these proofs was conducted, and no obvious errors were identified. The arguments appear logically structured and consistent with the main theoretical results. Relation To Broader Scientific Literature: This paper builds on two key works: Koloskova et al. (2020) and Cutkosky (2019). Koloskova et al. (2020) developed a unified theory for decentralized SGD, addressing topology changes and local updates but with limited parallelism scalability. DAT-SGD improves upon this by enhancing parallelism to O(ρ√N) while maintaining strong convergence guarantees. Cutkosky (2019) introduced Anytime SGD, which leverages averaged iterates for stable updates. The authors extend this idea to decentralized settings, using it to mitigate consensus distance and improve statistical efficiency. Essential References Not Discussed: NA Other Strengths And Weaknesses: The primary weakness of the paper is the lack of an experimental section, making it difficult to assess the practical effectiveness of DAT-SGD. 
Without empirical validation, it remains unclear how the method performs in real-world decentralized learning scenarios. Benchmark experiments on different network topologies and datasets would significantly strengthen the paper. Other Comments Or Suggestions: NA Questions For Authors: Do you plan to include experiments in an extended version or supplementary material? If there are ongoing or planned experiments, providing preliminary results or an outline of the experimental setup would help in evaluating the method’s real-world applicability. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We appreciate the reviewer’s constructive feedback. It appears that the reviewer’s primary concern is the lack of experimental results. In response, we have included experiments evaluating our method on both a synthetic convex problem and non-convex neural network training. We refer the reviewer to our response to **Reviewer 3nY6** for a detailed discussion of the results.
Summary: The paper studies an anytime variant of decentralized SGD. It achieves bounds allowing a larger number of nodes to successfully team up in decentralized training. It does so by using gradients at averaged query points, thus improving the consensus distance and hence convergence under a large number of nodes, which is a valuable contribution. Claims And Evidence: The paper improves the convergence results for decentralized SGD, and also gives a last-iterate convergence result, both being interesting additions to the understanding of such methods. The algorithmic change is simple & elegant and has no implementation downsides, yet leads to the significantly improved convergence results mentioned. Methods And Evaluation Criteria: Yes, this is a theory paper. Theoretical Claims: See claims & evidence above. In terms of presentation, the proof sketch was very useful. I could not check the full proof in detail, but overall the approach looks plausible and appropriate. Experimental Designs Or Analyses: No experimental results are provided, unfortunately. I know this will sound like a typical ICML reviewer cliché, but I think some experiments here would improve the value of the paper. The claim of higher tolerance to a larger number of nodes is very clear and well supported by theory, so it would really add value to the work to verify this phenomenon on large graphs in simple settings, with the competing methods that you already compare theoretically. In addition, several methods which are more tolerant to data heterogeneity could be included in the comparison (e.g. D^2). This would not be very hard to do, as many good codebases simulating D-SGD are available by now. UPDATE: I appreciate that the authors have now added experiments along the main research aspects, which I think adds value to the paper. Given this, I have upgraded my rating to 'accept'.
Supplementary Material: I have not checked the newly submitted code repository. Relation To Broader Scientific Literature: Looks appropriate. Essential References Not Discussed: - Other Strengths And Weaknesses: - Other Comments Or Suggestions: - Questions For Authors: See above. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for acknowledging our contributions and for the positive feedback. As suggested, we provide experiments to evaluate our method, including a synthetic least squares problem and an image classification task. All experiments are run with 3 random seeds, and we report the average performance. Results are available in the following anonymized GitHub repo; we recommend downloading the repo for optimal examination: https://anonymous.4open.science/r/DAT-SGD-figures-4378 **Synthetic Least Squares**: For machine $i\in[1,...,M]$, the local objective is $f_i(x)=\frac{1}{2}\lVert A_ix-b_i\rVert^2$, with $A_i\in\mathbb{R}^{d\times{d}}$ drawn from $\mathcal{N}(0,I)$. The targets are given by $b_i=A_i(x^\sharp-\delta_i)$, where $x^\sharp\sim\mathcal{N}(0,I/d)$ is sampled once per run and $\delta_i\sim\mathcal{N}(0,\zeta^2 I/d)$ introduces heterogeneity. To incorporate stochasticity, we add noise $\xi\sim\mathcal{N}(0,\sigma^2 I/d)$, yielding the noisy gradient $\nabla f_i(x)+\xi$. We compare our method with D-SGD over ring, torus, and exponential graph (for which $1/\rho=\mathcal{O}(\log{M})$) topologies, varying $\sigma,\zeta\in[1,10]$, number of machines $M\in[4,9,25,49,100]$, and $d=50$. For each run, we tuned the learning rate via a grid search over $\eta\in[0.1,5e-2,1e-2,5e-3,1e-3,5e-4,1e-4]$. In ‘least_squares_parallelization.pdf’, we plot final errors ($\frac{1}{M}\sum_{i=1}^{M}{\lVert x_T^i - x^*\rVert^2}$ and $\frac{1}{M}\sum_{i=1}^{M}{\lVert w_T^i - x^*\rVert^2}$ for DAT-SGD and D-SGD, respectively) vs the number of machines for varying $\sigma$ and $\zeta$ and across topologies. Colors represent different topologies; line-style denotes the method (solid:DAT-SGD, dashed:D-SGD). For D-SGD, performance deteriorates as $M$ increases, and more significantly for less connected graphs (lower $\rho$). 
For ring, this degradation occurs from $M=4$, while for torus and exponential topologies performance is flat between $M=4$ and $M=9$ and degrades afterwards. In contrast, our method improves as $M$ grows: for torus and exponential topologies performance steadily improves, while for ring, it improves up to $M=25$ before deteriorating in a trend similar to that of D-SGD. This suggests that for some $M$ between 25 and 49, the network-related convergence term ($1/\rho T=\mathcal{O}(M^2/T)$ for ring) becomes dominant. Overall, this figure aligns with our theoretical findings: DAT-SGD enables performance improvement for larger $M$. We provide the complete convergence curves for different topologies and $M\in[9, 25,100]$ in ‘least_squares_curves_X.pdf’, where X denotes the topology (ring/torus/exponential). **Image Classification with a Neural Network**: We conduct experiments on Fashion MNIST using LeNet, comparing DAT-SGD with D-SGD and $D^2$ [1]. For DAT-SGD and D-SGD, we use momentum with $\beta=0.9$. Data is distributed among machines using a Dirichlet distribution with parameter $\alpha$ to control heterogeneity [2]. Experiments are performed on both a ring topology and the Base-2 Graph [3]-a time-varying, state-of-the-art topology for decentralized learning. Learning rates are tuned over $\eta\in[0.1,0.01,0.001]$. Colors represent methods and line styles indicate topology (solid:ring, dashed:Base-2). Unlike the convex least squares setting, this task is non-convex. Following the heuristic proposed by [4], we adopt a momentum-like Anytime update: $x_{t+1}=\gamma_t x_t+(1-\gamma_t)w_t$, with a fixed $\gamma_t=0.9$. Their work shows this enhances training stability and adaptability in non-convex landscapes. In ‘fashion_mnist_parallelization.pdf’, we plot final accuracy after 200 epochs vs $M$ for ring topology with heterogeneous data ($\alpha=0.1$). 
Our method outperforms the baselines; in addition, the largest accuracy drop for DAT-SGD occurs between $M=8$ and $M=16$, while D-SGD and $D^2$ degrade most between $M=4$ and $M=8$, demonstrating our increased parallelism claim. In ‘fashion_mnist_curves.pdf’, we show test accuracy vs epochs for $M\in[8,16]$ and $\alpha\in[0.1,10]$ (heterogeneous and homogeneous setups). In the heterogeneous case, our method outperforms baselines consistently over both topologies, with ring topology performance matching ($M=8$) or nearly matching ($M=16$) the baselines on the well-connected Base-2 graph. Conversely, in the homogeneous case, D-SGD and $D^2$ achieve better performance, motivating further study of our anytime averaging heuristic in non-convex scenarios. [1] Tang et al., “D^2: decentralized training over decentralized data”, ‘18 [2] Hsu et al., “Measuring the effects of non-identical data distribution for federated visual classification”, ‘19 [3] Takezawa et al., “Beyond exponential graph: Communication-efficient topologies for decentralized learning via finite-time convergence”, ‘23 [4] Dahan & Levy, “Do stochastic, feel noiseless: stable stochastic optimization via a double momentum mechanism”, ‘25
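For reference, the synthetic least-squares construction described in the experiments above can be sketched in a few lines of pure Python. This is an illustrative reconstruction from the description, not the authors' experiment code; it checks the defining property that with $\sigma=0$ the gradient vanishes exactly at each machine's local minimizer $x^\sharp-\delta_i$.

```python
import random

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def make_local_problem(d, zeta, x_sharp, rng):
    """Machine i's objective f_i(x) = 0.5 * ||A_i x - b_i||^2, with
    A_i ~ N(0, 1) entrywise and b_i = A_i (x_sharp - delta_i)."""
    A = [[rng.gauss(0.0, 1.0) for _ in range(d)] for _ in range(d)]
    delta = [rng.gauss(0.0, zeta / d ** 0.5) for _ in range(d)]   # heterogeneity
    x_star_i = [xs - dl for xs, dl in zip(x_sharp, delta)]        # local minimizer
    b = matvec(A, x_star_i)
    return A, b, x_star_i

def stochastic_grad(A, b, x, sigma, rng):
    """Noisy gradient A^T (A x - b) + xi, with xi ~ N(0, sigma^2 I / d)."""
    d = len(x)
    r = [ri - bi for ri, bi in zip(matvec(A, x), b)]
    g = [sum(A[k][j] * r[k] for k in range(d)) for j in range(d)]
    return [gj + rng.gauss(0.0, sigma / d ** 0.5) for gj in g]

rng = random.Random(0)
d = 5
x_sharp = [rng.gauss(0.0, 1.0 / d ** 0.5) for _ in range(d)]
A, b, x_star = make_local_problem(d, zeta=1.0, x_sharp=x_sharp, rng=rng)
# With sigma = 0 the gradient is exactly zero at the local minimizer.
g = stochastic_grad(A, b, x_star, sigma=0.0, rng=rng)
assert max(abs(v) for v in g) < 1e-9
```

In the actual experiments each of the $M$ machines gets its own $(A_i, b_i)$ drawn this way, and the methods are run with gossip averaging over the chosen topology.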
Exploring and Mitigating Adversarial Manipulation of Voting-Based Leaderboards
Accept (oral)
Summary: This paper focuses on the adversarial manipulation of voting-based LLM leaderboards, e.g., Chatbot Arena. Intuitively, keeping the model's response anonymous is essential to ensure the integrity of the leaderboard. However, this paper demonstrated that an adversary can efficiently de-anonymize the responses and thus can upvote/downvote some specific models. This paper discusses two "target model detector", namely the identity-probing detector and the training-based detector. The experiments show that both detectors can separate the target model from others, which efficiently breaks the anonymization. This paper also discusses some possible mitigations of such attacks for the purpose of further enhancing the robustness of voting-based LLM leaderboards. Claims And Evidence: In the statement of contributions (Lines 73-82, left), the claims include: 1. The users can break model response anonymity with high probability, 2. The estimated votes to boost or reduce a model's ranking is "a few thousand" 3. A cost model and some potential mitigations are discussed. Generally speaking, I think the claims are well supported. However, I find the evidence for the first claim is not convincing enough. See the Questions For Authors part. Methods And Evaluation Criteria: The methods to de-anonymize the model response include 1. Identity-probing detector 2. training-based detector While the evaluation of the identity-probing detector is straightforward, I am not sure if I have fully understood the training-based detector, see the questions in the Questions For Authors part. Theoretical Claims: This paper does not include theoretical analysis. Experimental Designs Or Analyses: I have checked the soundness/validity of the experimental designs or analyses in this paper. Supplementary Material: I have scanned the whole supplementary material. Relation To Broader Scientific Literature: 1. 
According to the related work section, this paper seems to be the first work focusing on the adversarial manipulation of voting-based LLM leaderboards. The related works discussed in Sections 5 and A are not directly related to the topic of the present paper. 2. The setting of this paper is similar to jailbreaking attacks against LLMs. See the Questions For Authors part. Essential References Not Discussed: The following two methods are related to the similar topic of "LLM identification". 1. Gubri, M., Ulmer, D., Lee, H., Yun, S., and Oh, S. J. TRAP: Targeted Random Adversarial Prompt Honeypot for Black-Box Identification. In Annual Meeting of the Association for Computational Linguistics (ACL). ACL, 2024. 2. Jin, H., Zhang, C., Shi, S., Lou, W., and Hou, Y. T. ProFLingo: A Fingerprinting-based Copyright Protection Scheme for Large Language Models. CoRR abs/2405.02466, 2024. Could the authors please discuss the relationship between the present paper and these two papers and the references therein? Other Strengths And Weaknesses: **Strengths** 1. Very practical. As mentioned in Line 036, the authors claimed that they work with the Chatbot Arena developers and have enhanced the robustness of the voting-based leaderboards based on their analysis. Other Comments Or Suggestions: Typo on Line 104 (right): its name
As the detection accuracies in Table 2 are all >95% in the best cases, I suppose introducing some basic defense would enhance the validity of the results. 4. What is the performance of the mitigations under jailbreaking attacks, e.g., PAIR and AutoDAN? The adversary can perform more stealthy attacks against the model than those mentioned in Section 2.2. Code Of Conduct: Affirmed. Overall Recommendation: 3
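For intuition behind the "a few thousand" votes estimate discussed in this review, consider a toy simulation using the standard online Elo update with $K=4$. This is an illustrative sketch only, not the paper's simulator (which replays real Chatbot Arena voting records); it shows how an adversary who can de-anonymize responses and always votes for a target model inflates its rating far beyond its true strength.

```python
def elo_update(r_a, r_b, score_a, k=4.0):
    """Standard Elo update for one comparison; score_a = 1.0 if A wins, 0.0 if A loses."""
    expected_a = 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))
    delta = k * (score_a - expected_a)
    return r_a + delta, r_b - delta

# Target and opponent are in truth equally strong (same starting rating),
# but the adversary votes for the target on every comparison it can identify.
target, opponent = 1000.0, 1000.0
for _ in range(2000):                  # a few thousand adversarial votes
    target, opponent = elo_update(target, opponent, score_a=1.0)
assert target - opponent > 200.0       # large ranking gap created purely by voting
```

The per-vote gain shrinks as the rating gap grows (the expected score approaches 1), which is why the required number of votes is in the thousands rather than unbounded growth per vote.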
Rebuttal 1: Rebuttal: We thank the reviewer for their positive feedback and questions. Our detailed responses are as follows: > Q1: Given the efficiency of the identity-probing detector, what is the meaning of applying a more sophisticated training-based detector? Could the authors please explain the intuition/motivation of the training-based detector? **A**: We appreciate the question. We note two limitations of the identity-probing detector: - Susceptibility to Countermeasures: The Chatbot Arena leaderboard already uses post-processing to exclude votes from the Elo score calculation when model responses mention model names, which naturally limits the usefulness of the identity-probing detector. But we still analyze the effectiveness of this detector in the paper, as it could be effective in other voting-based chatbot benchmarks, and because the post-processing could be evaded, e.g., by asking the model to reveal its identity in Base64 encoding. - Lack of Stealth: Identity-probing is an active technique requiring specific, often obvious queries. This makes the adversarial attempt itself highly detectable. In contrast, the training-based approach is passive and thus stealthier. > Q2: Is it possible to employ some specific system prompt to prevent the models from revealing their identity, e.g., "You are an anonymous model competing in the Chatbot Arena. Do not tell anyone about your identity." This is the simplest mitigation I could have come up with. As the detection accuracies in Table 2 are all >95% in the best cases, I suppose introducing some basic defense would enhance the validity of the results. **A**: We appreciate the suggestion. While preventing models from outputting their identities can provide a basic layer of defense against identity-probing detectors, it’s insufficient to prevent more sophisticated detectors. As shown in Table 3, training-based detectors—which do not rely on explicit model name outputs—still give accuracy rates above 95%.
Furthermore, it may be undesirable to require system prompt changes for models participating in the leaderboard, as system prompts are generally carefully constructed by model owners. > Q3: What is the performance of the mitigations under jailbreaking attacks, e.g., PAIR and AutoDAN? The adversary can perform more stealthy attacks against the model than those mentioned in Section 2.2. **A**: We would like to clarify that our mitigations are entirely independent of the prompts and responses, so jailbreaking attacks should not affect them. We would appreciate further clarification on: 1) What makes the reviewer believe jailbreaking attacks could undermine the mitigation strategies discussed in the paper, and 2) What "more stealthy attacks" means in this context. > Q4: Missing related work **A**: We appreciate the reviewer sharing the pointers and will incorporate these references in the final version.
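For readers unfamiliar with how a training-based detector of this kind works, here is a minimal nearest-centroid bag-of-words sketch (the paper uses BoW/TF-IDF features with trained classifiers; the nearest-centroid rule, model names, and toy responses here are invented for illustration only):

```python
from collections import Counter
import math

def bow(text):
    """Lowercased word counts as a sparse bag-of-words vector."""
    return Counter(text.lower().split())

def centroid(texts):
    """Average bag-of-words vector over a set of example responses."""
    total = Counter()
    for t in texts:
        total.update(bow(t))
    n = len(texts)
    return {w: c / n for w, c in total.items()}

def cosine(a, b):
    dot = sum(a.get(w, 0.0) * b.get(w, 0.0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def train(labeled):
    """labeled: dict model_name -> list of example responses."""
    return {name: centroid(texts) for name, texts in labeled.items()}

def predict(centroids, response):
    """De-anonymize a response by nearest centroid in BoW space."""
    v = bow(response)
    return max(centroids, key=lambda name: cosine(centroids[name], v))

# Toy data: two hypothetical models with distinct verbal tics.
data = {
    "model_a": ["certainly here is the answer", "certainly i can help with that"],
    "model_b": ["sure thing here you go", "sure thing happy to help"],
}
detector = train(data)
print(predict(detector, "certainly let me help"))  # model_a
```

The point of the sketch is that even surface word statistics separate models, which is why a simple passive detector is hard to defend against with system-prompt instructions alone.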
Summary: It has become common for LLMs to be evaluated subjectively in crowd-sourced "arenas", which usually use elo-based scoring based on user preferences. The authors study this voting-based evaluation setting, and find that they are susceptible to adversarial manipulation through a two-step attack: (1) de-anonymizing model responses with high accuracy, and (2) selectively voting for or against a target model to manipulate the rankings. They demonstrate that this attack is feasible and cheap, and then explore mitigations. Claims And Evidence: The claim that model responses can be reliably distinguished by malicious parties is supported by thorough empirical experimentation, across many models. An accuracy of >95% can be achieved using extremely simple supervised learning methods. The authors verify the cost-related claim by running simulations on a system emulating Chatbot Arena. Methods And Evaluation Criteria: The methods seem appropriate and well-designed. The paper targets an attack on a real-world system: Chatbot Arena. The ultimately convincing evaluation would be to attack the actual system, and show that such an attack is feasible in the real world. The authors instead evaluate on a *simulation* of Chatbot Arena, based upon real-world voting records from Chatbot Arena. This seems like a proper evaluation setup to me. Theoretical Claims: The paper does not necessitate any proofs or theoretical claims. Experimental Designs Or Analyses: The experimental designs and analyses appear sound to me. Supplementary Material: I examined the details of the simulation in appendix D.4. It may be beneficial for the authors to add some more details as to how the simulation is set up, and give some assurances that it is exactly emulating the real-world Chatbot Arena system (or acknowledge any differences). 
Relation To Broader Scientific Literature: This work is related to the body of literature studying the validity of benchmark evaluation numbers, which is an important field, as new models are judged based on the evaluation numbers that they achieve. Essential References Not Discussed: I don't know of any references that are relevant and missing. Other Strengths And Weaknesses: Strengths: - The paper is very well written and organized, and it is for the most part quite easy to follow - The topic is very important. Every major model release highlights voting-based elo scores. Weaknesses: - Sections 4.2.3 and 4.3 could be revised to be clearer. - I think the paper would benefit from a rewriting of these sections, as they were unclear to me the first time I read them. - Specifically, 4.3 should clarify that it is experimenting based off of the proposed defenses in 4.2.3 (and has nothing to do with 4.2.1, 4.2.2, 4.2.4, if my understanding is correct). - 4.2.3 can be rewritten to more clearly present the attacker's strategy, and the defender's strategy. The attacker's strategy in scenario (1) informs the defender's strategy in scenario (2), and this would be a clearer presentation. Other Comments Or Suggestions: Typos: - Line 19: "two randomly selected and models" - Line 267: "An passive attacker" - Line 341: missing punctuation Questions For Authors: - For the "perturbation" defense in 4.2.3, could the attacker simply create multiple accounts, aggregate the perturbed numbers across the many accounts, and then use the average? Code Of Conduct: Affirmed. Overall Recommendation: 4
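Since the review centers on elo-based scoring, a sketch of the textbook Elo update may help ground why selective voting moves rankings: each vote shifts ratings by a bounded amount, so manipulation is a matter of vote volume. (Illustrative only; the Arena's production ranking procedure may differ, e.g. by fitting a Bradley-Terry model to all votes rather than updating sequentially.)

```python
def expected_score(r_a, r_b):
    """Probability that A beats B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def elo_update(r_a, r_b, outcome, k=32.0):
    """outcome: 1.0 if A wins, 0.0 if B wins, 0.5 for a tie."""
    e_a = expected_score(r_a, r_b)
    return r_a + k * (outcome - e_a), r_b + k * ((1.0 - outcome) - (1.0 - e_a))

# A burst of adversarial "A wins" votes steadily inflates A's rating,
# though each successive win contributes less as the gap widens.
r_a, r_b = 1000.0, 1000.0
for _ in range(100):
    r_a, r_b = elo_update(r_a, r_b, 1.0)
print(round(r_a), round(r_b))
```

Note the update is zero-sum, so downvoting a competitor and upvoting one's own model are two faces of the same lever.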
Rebuttal 1: Rebuttal: We thank the reviewer for their positive feedback and questions. Our detailed responses are as follows: > Q1: Sections 4.2.3 and 4.3 could be revised to be clearer…Specifically, 4.3 should clarify that it is experimenting based off of the proposed defenses in 4.2.3 (and has nothing to do with 4.2.1, 4.2.2, 4.2.4, if my understanding is correct). 4.2.3 can be rewritten to more clearly present the attacker's strategy. **A**: We appreciate the reviewer’s suggestion of reorganizing the defense section (Sec 4). We apologize for the confusion caused by presenting the results for malicious user identification in Sec 4.3 without a clear pointer to Sec 4.2.3. We will reorganize these sections in the final version. > Q2: For the "perturbation" defense in 4.2.3, could the attacker simply create multiple accounts, aggregate the perturbed numbers across the many accounts, and then use the average? **A**: No, please note that only one permuted table is released by the system, so the attacker cannot average across accounts or detect the permutations. > Q3: Typos. **A**: Thanks! We will fix them in the final version.
Summary: This submission examines the susceptibility of voting-based LLM assessment platforms to adversarial interference, particularly emphasizing Chatbot Arena, a prominent platform that ranks language models according to human preferences. The primary contributions of the paper are: (1) Evidence that users can effectively compromise model response anonymity with high accuracy (>95%) employing basic classification methods; (2) Simulation-based estimations indicating that a few thousand adversarial votes can substantially modify a model's ranking on the leaderboard; (3) Formulation of a cost model for the attack and the proposal of various mitigation strategies to enhance the expense and complexity of such attacks; (4) Collaboration with the Chatbot Arena developers to execute these mitigations, exemplifying a responsible disclosure methodology in security research. The authors assert that their findings are relevant to any voting-based ranking system, not only Chatbot Arena, underscoring a wider issue for platforms dependent on human preferences for assessment. ## Update after rebuttal The authors provided detailed discussions with additional experiments to address my concerns. I appreciate their effort and would like to retain my original score to accept this submission. Claims And Evidence: The claims made in the submission are generally well-supported by empirical experiments. The claim that model responses may be de-anonymized is supported by Figure 3, which presents detection accuracy across various prompts and models. The claim that the quantity of votes might influence the leaderboard is supported by simulations utilizing actual voting data shown on page 16. Tables 4 and 5 include comprehensive estimates of the votes and interactions necessary to alter the ranks of both high-ranked and low-ranked models. Methods And Evaluation Criteria: The methodologies and assessment standards are effectively aligned with the objectives of the submission.
The procedures and criteria included in the submission are suitable for evaluating the proposed approach. Utilizing accuracy metrics is appropriate for evaluating detector performance. The authors show the separability of model responses by principal component analysis of bag-of-words characteristics, successfully demonstrating "how different models respond to the same prompt" (Figure 2). The suggested mitigations are evaluated based on their efficacy in increasing the cost of the assault. The authors assess the security benefits and potential drawbacks of each mitigation, including the distributional changes that may occur when authentication is required. The simulation technique for determining the necessary votes to affect the ranking is meticulously designed. The author delineates clear objectives for the attack: "Up(M, x)" and "Down(M, x)" (page 5). The simulations account for the attacker's detection accuracy and behavior when the target model is not identified. Theoretical Claims: The paper does not make extensive theoretical claims or provide formal proofs, as it is primarily an empirical study focused on demonstrating and mitigating a practical vulnerability. Experimental Designs Or Analyses: The experimental designs and analyses specified in the submission are meticulously crafted. The de-anonymization assessments are meticulously designed, scrutinizing both identity-probing and training-based detectors across a diverse range of prompts and models. The authors performed ablation studies to assess the impact of different design selections on detector efficacy. The authors examine several feature types and conclude that "basic text features such as BoW and TF-IDF achieve remarkably high detection accuracy, with BoW surpassing 95% in multiple cases" (page 4). The simulation studies aimed at estimating the votes necessary to affect the leaderboard include real voting data from Chatbot Arena, hence enhancing the credibility of the results. 
The authors analyze many scenarios and provide detailed findings in Tables 4 and 5. Supplementary Material: The paper includes several appendices that provide additional details and analyses that support the main findings. I have reviewed these supplementary materials, which include: (1) Appendix A: Related Work, which provides a more comprehensive discussion of LLM evaluation methods; (2) Appendix B: Discussion, which discusses additional considerations for the attack, such as the advantage that model owners have in upvoting their own models versus downvoting competitors; (3) Appendix D: Experimental Details, which provides comprehensive information about the experimental setup, including the list of models used, the prompts for embedding visualization, and details about the training-based detector. Relation To Broader Scientific Literature: The paper places its contributions within the broader scientific literature on voting-based systems, and security vulnerabilities. The paper connects to the literature on voting-based systems and their security vulnerabilities. The authors note that "voting-based systems are frequently used in security relevant scenarios, such as for malware identification [1] or for content validation [2]" and that "attacks on these systems are well studied [3]" (page 7). They also discuss reputation systems as a common approach to securing these systems, citing work by [2] and [4]. The de-anonymization task is related to authorship attribution and model detection, as the authors acknowledge: "Our primary attack involves training a classifier that can identify which language model system produced a given generation. This task is related to the much older task of authorship attribution—identifying the authors of anonymous (but human-written) works of writing [5,6]". Ref: [1] VirusTotal Documentation, 2024. [2] The eigentrust algorithm for reputation management in p2p networks. In WWW, 2003. 
[3] A survey of attack and defense techniques for reputation systems. In CSUR, 2009. [4] Towards Tracking-Resistant anonymous reputation. In NSDI, 2016. [5] Authorship attribution in the era of llms: Problems, methodologies, and challenges. In arXiv, 2024. [6] De-anonymizing text by fingerprinting language generation. In NeurIPS, 2020. Essential References Not Discussed: N.A. Other Strengths And Weaknesses: N.A. Other Comments Or Suggestions: N.A. Questions For Authors: - The proposed technique demonstrates high accuracy in de-anonymizing model answers; nonetheless, LLMs are regularly updated. In what manner may the efficacy of your detection techniques be altered when models undergo updates and fine-tuning? - The simulations concentrate mostly on singular opponents; however, how may the vulnerability escalate with coordinated assaults from numerous adversaries? - As leaderboards expand to encompass additional models, does the efficacy of this defensive strategy alter? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your positive feedback and questions. Our detailed responses are as follows: > Q1: The proposed technique demonstrates high accuracy in de-anonymizing model answers; nonetheless, LLMs are regularly updated. In what manner may the efficacy of your detection techniques be altered when models undergo updates and fine-tuning? **A**: We appreciate the question. We first note that updates to most models via API are relatively infrequent. Assuming that there are (infrequent) updates to the model, the detector efficacy post-update depends on the adversary: - When the adversary controls the target model, the adversary can easily retrain its detector to reliably detect the new model. - When the adversary does not control the target model, our experiments with llama-3-8b-instruct and gemma-2-27b-it suggest that the detector is relatively robust to model changes: as shown in Table 1, detection accuracy remained above 90% even after several hundred SFT steps on these models. Table 1: The original detector’s accuracy on the SFTed model with different steps

| # SFT steps on lmsys-chat-1m | Model: llama-3-8b-instruct | Model: gemma-2-27b-it |
|------------------------------|----------------------------|-----------------------|
| 0 | 95.4 | 96.3 |
| 100 | 95.2 | 95.6 |
| 200 | 94.7 | 95.1 |
| 500 | 94.1 | 94.7 |

> Q2: The simulations concentrate mostly on singular opponents; however, how may the vulnerability escalate with coordinated assaults from numerous adversaries? **A**: We appreciate the question. On voting-based leaderboards, users typically evaluate models through head-to-head comparisons. We're curious to understand what the reviewer meant by "numerous adversaries" and would appreciate more context. > Q3: As leaderboards expand to encompass additional models, does the efficacy of this defensive strategy alter? **A**: Thank you for pointing this out.
Yes, the strategy's effectiveness will change as the number and type of models on the leaderboard evolve. Newer models might have different performance characteristics or vulnerabilities, impacting the defense's relative success. However, as discussed in the paper, each defense involves a utility-cost trade-off. This allows system designers to analyze its effectiveness under current conditions and adjust parameters to meet system requirements at the time of deployment. We will clarify this in the paper.
From Mechanistic Interpretability to Mechanistic Biology: Training, Evaluating, and Interpreting Sparse Autoencoders on Protein Language Models
Accept (spotlight poster)
Summary: This paper shows that SAEs can be used to better understand PLMs. They show that the features are interpretable via a series of case studies, including some clean histograms of activating examples. They use the SAEs to make high level observations about the PLMs, such as by categorizing the feature activation styles (point, short motif, etc), and studying the variation over the course of layers. Finally, they investigated probing for properties of proteins, using mean pooling, both on the residual stream and the feature activations. There was comparable performance suggesting that key information has not been lost by the SAE. Further, they find that the features used by a probe are often interpretable and find related concepts. Some of these relationships are not obvious but were discovered in prior research (?) suggesting SAEs have the potential to generate new hypotheses to validate. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: N/A Experimental Designs Or Analyses: Yes Supplementary Material: No Relation To Broader Scientific Literature: Covered well by the paper Essential References Not Discussed: No Other Strengths And Weaknesses: See below Other Comments Or Suggestions: ## Major Comments *The following are things that, if adequately addressed, would increase my score* 1. This was a delightful paper! I'm now fairly convinced that PLMs learn interpretable concepts and that SAEs have the potential to be a valuable tool for scientific discovery by extracting them. Interpretability for scientific discovery has long been a dream of the field, and this seems like a step in the right direction! This is a clear accept and I am open to a strong accept if the concerns below are addressed - Caveat: I work in interpretability, but not with PLMs, so my assessment of the novelty there is shaky. Ditto, I have not carefully checked the PLM related details of the work. 2. 
The main problem with the paper, in my opinion, is that it rests too much on qualitative case studies. These are great and add a lot of flavour and detail to the work, but are vulnerable to potential cherry picking. Assessing what I see as your key contributions for this: 3. Claim: SAE features are interpretable - This is OK. I think figure 3a is great, and I love the InterProt visualisations, but these are equally consistent with 5% of features being interpretable and 95% being uninterpretable - The main fix I would recommend is randomly choosing 100 features and for each one trying to interpret the top activating examples and just rating each for whether there's a consistent pattern. As done in eg the Gated Sparse Autoencoders paper 4. Claim: Features in different layers have different types of activation patterns - This seems well supported to me. Table 2 was very helpful and clear and seems reasonable. The area plot in 3c is great! 5. Probing - I struggled to follow the exact point you were trying to make here. The claims I took away from this were: - Probing still works on SAE activations, therefore they haven't broken anything or lost key info (but it need not be interpretable) - Given a labelled dataset for some concept, we can find related features by looking at large coefficients after training a probe - I found it a bit odd that you were using dense probes on the SAE activations, rather than sparse probes (which is what I consider to be standard - see eg Finding Neurons In A Haystack for discussion of different methods).
But scoped into the two claims above this is OK, as the probe is just a method to find key latents - Note: I predict that the probing is an unnecessary complication and that you could find similarly good features by looking for those with the highest mean difference or difference in how often they activate between the positive and negative labels - It would be even better if you took random concepts (from some external list of suitable datasets, filtering for those that a dense linear probe works well for) and looked at the associated SAE latents, and said how often there's an interpretable connection (and how often it's trivial vs non-trivial) 6. I don't understand the "steering with a family feature changes it least at evolutionarily conserved points for that family". What exactly is the computation being done here? Shouldn't the pLM already be confident in the conserved points, so it's harder to shift? Have you compared to a control like steering with a feature of a different family? - An alternative experiment would be to look at the log prob of the relevant token and do gradient based attribution from each SAE feature to that log prob - ie, take the gradient of that log prob with respect to each feature, elementwise multiply by the activations, and add it up across the sequence dimension (basically a saliency map over SAE features). Family features should have high attribution in their families and not otherwise, according to this hypothesis. I recommend excluding the first token when doing this, as ablating SAE features is not a valid operation there (it's constant) and will often mess with your results 7. There may not be time in rebuttals, but I think your paper would be even stronger with causal evidence for the interpretable role of SAE features.
- For example, showing that when you ablate a feature, the times when the correct next log prob decreases the most are interpretable - Or showing that you can steer the model in predictable ways by adding it into unrelated contexts - One useful technique for exploring this would be gradient based attribution (approximating the effect of an ablation) of each SAE feature at each token to the next log prob at some interesting token - you could visualise this and see which tokens seemed most relevant and where, and if this matched your hypotheses - If you do this, be careful about hindsight bias! Eg do blinding, where you get shown 5 features, of which one had the highest attribution, and have to predict which one it was ## Minor Comments *The following are unlikely to change my score, but are comments and suggestions that I hope will improve the paper, and I leave it up to the authors whether to implement them. No need to reply to all of them in the rebuttal* 1. When, eg in Figure 3a, you argue that a feature has some explanation, you need to also check for false negatives - things in the beta-lactamase family that it doesn't fire on. Otherwise it could be much more specific, eg activating on every other token in beta-lactamase - This is a minor comment because basically every other SAE paper also fails on this point, and I expect it to in fact be more specific than that explanation 2. I am very familiar with SAEs but not PLMs. If you want people like me to be able to engage well with your work, having a short appendix primer on PLMs (or linking externally to one) defining key things would help a lot, eg what ESM-2 was trained on, how things are tokenized, what protein jargon like residues & secondary structures mean, etc.
You could test this by training an SAE with smaller dictionary size and seeing if an alpha helix feature forms 5. Mammalian cell expression seems very exciting - this doesn't change my assessment of the paper as it lacks any real exploration but I think it's reasonable to leave out of scope. I'd love to see the follow up paper though! Questions For Authors: See above Code Of Conduct: Affirmed. Overall Recommendation: 4
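The gradient-based attribution this review proposes (gradient of a chosen log prob w.r.t. each SAE feature, elementwise multiplied by the activations, summed over the sequence dimension, excluding the first token) reduces to a short computation once the tensors are in hand. A toy sketch with precomputed values (in practice both activations and gradients come from a backward pass through the pLM; shapes and numbers here are invented):

```python
def feature_attribution(acts, grads, skip_first_token=True):
    """Approximate each SAE feature's effect on a chosen log prob.

    acts, grads: [seq_len][n_features] nested lists. Attribution for
    feature j is the sum over tokens of grads[t][j] * acts[t][j] —
    a saliency map over SAE features. The first token is excluded by
    default, as the review recommends.
    """
    start = 1 if skip_first_token else 0
    n_features = len(acts[0])
    attr = [0.0] * n_features
    for t in range(start, len(acts)):
        for j in range(n_features):
            attr[j] += grads[t][j] * acts[t][j]
    return attr

# Toy example: 3 tokens, 2 features. Feature 1 is active where the
# gradient is nonzero (after the first token), so it dominates.
acts = [[5.0, 0.0], [0.0, 2.0], [0.0, 3.0]]
grads = [[9.0, 0.0], [0.0, 1.0], [0.0, 1.0]]
print(feature_attribution(acts, grads))  # [0.0, 5.0]
```

Under the family-feature hypothesis, running this per protein should give family-specific features high attribution within their families and near-zero attribution elsewhere.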
Rebuttal 1: Rebuttal: We thank the reviewer for their suggestions and enthusiasm for our work. We agree that one of the limitations of the paper is a heavy reliance on qualitative measures of feature interpretability. Towards making feature analysis more quantitative, we introduced the family specificity and activation pattern categorizations. In our experience, family specific-features are interpretable (can be interpreted as a specific sequence motif in a specific family), so the number of seemingly interpretable features is lower bounded by the number of family-specific features. But these measures admittedly don’t truly measure interpretability, and we would love to include results for human raters, and compare it to the ESM baseline. As done in the Gated SAE paper, we will conduct a blinded human rater experiment where our 5 raters who are familiar with protein biology will assess the interpretability of an SAE latent and ESM baseline as being interpretable (yes/maybe/no). We will include the results in a revision. Your interpretation of our goal in the probing section is correct; we are aiming to show that SAE representations haven’t lost key information (as demonstrated by task performance) and that the highest weighted latents correspond to features which makes biological sense with respect to the task. You are right that we could identify relevant features using a simpler method. Our motivation for probing, besides simply identifying relevant features for a task, was to demonstrate how SAE embeddings could be used as a drop-in replacement for dense ESM embeddings, without a significant hit to performance. Since in the pLM literature, linear probing is typically used to measure downstream task performance [1], we opted to do the same to make a head-on comparison. Regarding steering family specific latents, you make an excellent point. 
We will add a control where we steer a different (not family specific) latent to better demonstrate that family specific features are more important for conserved regions than other features. We will include this in the camera ready version. This is our attempt at getting more "causal" evidence to interpret SAE latents. As touched upon in our future directions section, getting interpretable steering results using SAE latents beyond simple amino acid specific features has remained challenging, and is a direction we will continue to explore in the future. In our calculation of the F1 score for family specificity, we also include sequences which do not activate the latent of interest. We believe this should help account for the false negative problem you raise. To make our work more accessible to the broader mechanistic interpretability community, we will add a primer to introduce protein language models and some domain specific terms we often use in the paper. There have been some good reviews recently published which we can link as well [2]. And we're happy you like the name of our project! [1] https://www.biorxiv.org/content/10.1101/2024.02.05.578959v2 [2] https://www.nature.com/articles/s41587-024-02123-4 --- Rebuttal Comment 1.1: Comment: Thanks! Those sound like great changes to the paper. I will maintain my score, but I think this is solid work that should obviously be accepted and plausibly spotlight
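The "simpler method" conceded in this exchange (ranking latents by mean activation difference between positive- and negative-labeled sequences, rather than training a probe) is only a few lines. A sketch on toy mean-pooled SAE activations (function name and data invented for illustration):

```python
def mean_diff_ranking(pos, neg):
    """Rank SAE latents by mean activation difference between classes.

    pos, neg: lists of mean-pooled activation vectors [n_latents] for
    positive- and negative-labeled sequences. Returns latent indices
    sorted by descending (mean over pos) - (mean over neg).
    """
    n = len(pos[0])
    mean_pos = [sum(v[j] for v in pos) / len(pos) for j in range(n)]
    mean_neg = [sum(v[j] for v in neg) / len(neg) for j in range(n)]
    diffs = [mean_pos[j] - mean_neg[j] for j in range(n)]
    return sorted(range(n), key=lambda j: diffs[j], reverse=True)

# Toy data: latent 2 fires mostly on positive examples, so it ranks first.
pos = [[0.1, 0.0, 2.0], [0.0, 0.1, 1.8]]
neg = [[0.2, 0.1, 0.0], [0.1, 0.0, 0.1]]
print(mean_diff_ranking(pos, neg))  # [2, 1, 0]
```

Unlike a dense probe, this assigns each latent a score independently, which keeps the resulting feature list directly interpretable; the probe remains useful as a head-to-head comparison against dense ESM embeddings.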
Summary: The authors train SAEs on ESM-2 (a large protein language model), characterize the discovered features, and use these organized features to better understand how ESM-2 learns protein representations. They also develop a visualization tool, and find SAE features that correspond to known properties such as thermostability and subcellular localization. Claims And Evidence: Overall, the claims are well-supported by evidence. I would prefer to shrink the claims from "pLMs" in general to ESM-2 unless the authors analyze more than one pLM (which they do not, as far as I can tell). For example, "we determine that pLMs use a combination of generic features and family-specific features to represent a protein." (abstract, lines 024-026). While I agree that SAEs are likely to apply to multiple pLMs, I think all SAE+pLM work has used ESM-2 (the authors note concurrent work, Simon and Zou 2024, also analyzes ESM-2). This is noted in the Limitations section already. Minor: the authors claim that linear probes on SAE features are more reliable than linear probes on ESM features. This doesn't really appear true for Mammalian cell expression (Figure 4, lower right). Methods And Evaluation Criteria: The methods and evaluation are appropriate. The authors are not proposing any new algorithmic innovation; they apply TopK SAEs to a pLM (ESM-2). Their evaluations are qualitative in nature because they are interested in qualitative understanding of how useful SAEs are in the context of pLMs. Theoretical Claims: There are no theoretical claims. Experimental Designs Or Analyses: The experimental design is good. In the related work, the authors argue that their work focuses on coevolution and scientific discovery. I don't see any experiments focusing on coevolution; am I misunderstanding? Supplementary Material: I did not review the supplementary material.
Relation To Broader Scientific Literature: The authors fairly position their work in the landscape of interpreting pLMs, using SAEs on pLMs, and dictionary learning methods on biological models. However, it's unclear to me how SAEs improve on prior work. Essential References Not Discussed: No essential references are missing. Other Strengths And Weaknesses: This paper applies SAEs to pLMs. It is one of the first works to do so. This is undoubtedly a strength. Unfortunately, the positioning (or lack thereof) makes it challenging for me (a non-biologist) to understand why the results are impactful. I would improve my score if the authors can explain why the results are significant from a biology perspective. Do SAEs offer a path towards answering questions that were previously unanswerable? Are SAEs significantly cheaper, or more scalable, or easier to tune than other interpretability methods? It's not clear to me why I should care about SAEs+pLMs---I'm sure there's a reason, but this current iteration does not convince me to care. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed reading and thoughts. We acknowledge that the lack of other pLMs in our analysis beyond ESM-2 means that we should not make general conclusions about all pLMs. We intend to update the text to reflect this, replacing pLM with ESM-2 to narrow the scope of the results. As the reviewer points out, the performance of SAE probes on the mammalian expression task lags behind the ESM probes. In the text, we say that “linear probes on SAEs achieve performance similar to their ESM embedding baselines across all layers”. We intend to update the text to qualify this statement, that they achieve similar performance on most tasks. > In the related work, the authors argue that their work focuses on coevolution and scientific discovery. I don't see any experiments focusing on coevolution; am I misunderstanding? This is a good point, we could have been more specific. By coevolution, we refer to our analysis of features which seem to correspond to protein family-specific motifs. We would like to expand on why we view our results as significant from a biological perspective. Understanding biological sequences such as protein sequences remains a difficult problem. Traditionally, biologists have used tools such as multiple sequence alignments and statistical models to identify local patterns in protein sequences. Recently, protein language models trained on millions of sequences across evolution have been successful in providing useful protein representations. Embeddings from pLMs are often used as input to simple models (e.g. linear regression) on downstream tasks such as protein structure and variant effect prediction. Yet as these representations are uninterpretable, it is not easy to identify what features the pLM has learned which contribute to task performance. We advance on prior work by demonstrating SAEs can help reveal what ESM-2 learned features are important for downstream task performance. 
For example, by probing on SAE latents, we can identify that amino acid composition is an important predictor for thermostability. This may allow a biologist to gain insight into novel sequence determinants of a protein property which would have required manual feature crafting previously. As representations from ESM are widely used, we offer SAE representations as a drop-in replacement with (usually) similar performance as the ESM representations but with interpretable dimensions. We hope this clarifies the advantage of SAE-derived representations. --- Rebuttal Comment 1.1: Comment: Thank you for your further explanation. I think what I am still stuck on is language like this: This may allow a biologist to gain insight into novel sequence determinants of a protein property which would have required manual feature crafting previously. It seems that after applying SAEs to PLMs, this work has not demonstrated that there are new capabilities that were previously inaccessible without SAEs applied to PLMs. While I appreciate the technical difficulty of applying SAEs to PLMs and the novelty associated with being one of the first works to do so, I will leave my score as a 3 because I do not feel that your work demonstrates meaningful advances in capabilities. What does an SAE trained on a PLM actually unlock? I am, however, happy for this work to appear in ICML if other reviewers are excited about it.
Summary: The paper investigates the interpretability of protein language models by training sparse autoencoders on pLM latents (in particular from ESM2). The goal is to extract and analyze features that pLMs use to represent protein sequences, with the broader aim of linking these features to biological properties. The authors develop a visualization tool, InterProt, to examine the learned features and categorize them based on activation patterns and specificity to protein families. They find that a large subset of latents are family-specific, but some are generic and pick up things like intrinsically disordered regions, motifs and known heuristics (e.g. glutamine count for thermostability), down to single residues (prolines at helix boundaries). The authors train linear probes on SAE embeddings and compare their predictive performance to standard ESM embeddings across several downstream tasks. They demonstrate that SAEs can uncover biologically relevant features such as nuclear localization signals and thermostability determinants. The study also explores how SAE hyperparameters influence feature extraction, showing that increased sparsity leads to more family-specific features. Overall, the paper provides a framework for using SAEs to interpret pLMs and proposes that such models can facilitate biological discovery by identifying novel functional patterns in protein sequences. Claims And Evidence: The main claim is that pLMs do not merely memorize protein sequences but instead encode meaningful biological patterns. The authors argue that training SAEs on pLM activations allows them to reveal such patterns in a structured way that is easier to interpret. The claim that follows from the above is that these SAE-derived features are useful for biological discovery, as they can highlight functional determinants of protein properties that might otherwise be hidden in black-box model representations. 
The claims are reasonably well supported by experimental results: the authors train autoencoders on different layers of ESM-2 and develop the InterProt tool to inspect the learned features. They show that some latent dimensions correspond to biological concepts, e.g. secondary structure, conserved sequence motifs, and biochemical patterns, and that many of these features activate within specific protein families. Furthermore, it is shown that adjusting SAE hyperparameters (sparsity, expansion factor) influences the number of family-specific features extracted. Nevertheless, there are some remarks with respect to the claims made: 1. As a result, the authors suggest pLMs encode biological knowledge beyond simple memorisation. In my opinion this is not entirely accurate. The interpretable latents are a compelling finding but, e.g., the presence of protein family-specific latents could be interpreted as evidence of memorization (as opposed to true generalization) and so does not necessarily mean the model has learned fundamental biological principles rather than statistical correlations present in training data. Such a limitation, however, is alluded to by the authors in the discussion section. 2. Whether SAEs truly offer a systematic path to understanding pLMs or if they simply provide another layer of abstraction that still requires human intuition to decode remains to be determined, not just in the biology domain but also in the wider ML space. Furthermore, their susceptibility to different hyperparameter settings is known (and something the authors also look at in the paper). The recent scrutiny by several works in the literature also calls them into question. I would have liked to see this addressed in a bit more detail with some further thought on other techniques from the mech interp literature that could be used for pLMs. Methods And Evaluation Criteria: The methods and evaluation criteria are well matched to the problem. 
SAEs have been widely studied by the community, in particular to interpret activations from LLMs, so their application to pLMs makes sense in a similar fashion to how language modelling tasks were applied to protein sequences in the first place. Protein language models have shown strong predictive power for biological tasks but indeed their internal representations have not been as well studied, and this paper as well as a small number of recent works aim to fill this gap. Linear probing to evaluate feature importance is aligned to this goal from the perspective of identifying which features can be most predictive in several downstream tasks. One remark regarding mean pooling: in this case, mean pooling for protein-level tasks can obscure important sequence-specific details, e.g. where functional determinants are localized to specific residues. While it simplifies model inputs, it may limit the ability of the probes to capture fine-grained sequence information. The authors do outline that there could be further work in this direction. Theoretical Claims: not applicable Experimental Designs Or Analyses: Experimental design and analysis are generally well structured and the experiments the authors choose to conduct make sense in this context. The only remark I have here relates to specific choices of hyperparameters in each setting. In this paper we are dealing with SAEs, which, as mentioned earlier in this review, and as the authors themselves acknowledge, are sensitive to different hyperparameter choices. The authors do explore this direction a bit, but in my opinion could have done so in a bit more detail. The authors do not provide enough detail on their hyperparameter selection strategy. Due to the biases that could be introduced by this step, this hinders reproducibility. 
The use of logistic regression for classification tasks and ridge regression for thermostability prediction is a reasonable choice, as these methods provide interpretable coefficients that can be analyzed to identify important features. The datasets used for evaluation are well-chosen and appropriate for the study. Secondary structure prediction, subcellular localization, thermostability, and mammalian cell expression are all biologically meaningful tasks. The classification of SAE latents into different categories based on activation patterns and family specificity is useful for interpretability, but it is based on heuristics. The threshold of F1 > 0.7 for defining family-specific features is somewhat arbitrary. Supplementary Material: Yes. I reviewed the supplementary material in full. It contains some detail on the linear probing, additional latent visualizations, classification criteria for SAE features, and the effect of SAE hyperparameters on family-specific features and activation patterns. In general, the feeling is that there could be more content in the supplementary material, in particular for a study of this nature where several aspects of the resulting model could be analysed in a considerable amount of depth. As mentioned, there could be more detail on the hyperparameter selection process - supplementary material shows results from varying k and latent dimension size, but only to assess the impact on the downstream results. For the classification scheme of the SAE latents, the supplementary material defines the classification thresholds, but again, a more formal clustering analysis or statistical validation would make these definitions more rigorous (instead of the fixed F1 score threshold). The authors include some additional latent visualizations - these are helpful but it would be useful to see more quantitative comparisons and a more systematic evaluation (e.g. enrichment analysis comparing latents with functional annotations). 
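As an illustrative aside, the probing setup the review describes — mean-pool each protein's per-token SAE activations into one vector, then fit a regularized linear model and inspect its coefficients — can be sketched as follows. This is a minimal sketch on synthetic data with hypothetical dimensions (32 latents, 100 proteins) and a closed-form ridge solver; it is not the paper's implementation.

```python
import numpy as np

def mean_pool(token_acts):
    # token_acts: (seq_len, n_latents) SAE activations for one protein
    return token_acts.mean(axis=0)

def ridge_fit(X, y, lam=0.1):
    # closed-form ridge regression: w = (X^T X + lam*I)^{-1} X^T y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

rng = np.random.default_rng(0)
# synthetic stand-in: 100 proteins of varying length, 32 SAE latent dims
proteins = [rng.normal(size=(int(rng.integers(50, 200)), 32)) for _ in range(100)]
X = np.stack([mean_pool(p) for p in proteins])
true_w = np.zeros(32)
true_w[3] = 2.0  # pretend latent 3 drives the property (e.g. thermostability)
y = X @ true_w + 0.01 * rng.normal(size=100)
w = ridge_fit(X, y)
top_latent = int(np.argmax(np.abs(w)))  # coefficient inspection recovers latent 3
```

The point of the sketch is the interpretability argument: because the probe is linear over named latents, the largest-magnitude coefficient directly identifies which latent predicts the property.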
Relation To Broader Scientific Literature: This paper builds upon research in protein language models and mechanistic interpretability. The field of mechanistic interpretability has seen increasing interest due to the advent of LLMs, and techniques such as sparse coding and dictionary learning have been applied to them to identify human-interpretable features (see e.g. [1]). The idea of applying techniques from the interpretability literature to interpret pLMs is not in itself new but it is timely and a promising area of research gathering increasing interest from the community (see e.g. [2, 3, 4]). Perhaps the most related work is InterPLM [3], which uses SAEs to identify human-interpretable features and correlate them with biological concepts such as binding sites, structural motifs and functional domains. [1] https://arxiv.org/abs/2309.08600 [2] https://arxiv.org/abs/2411.06090 [3] https://arxiv.org/abs/2412.12101 [4] https://arxiv.org/abs/2502.09135 Essential References Not Discussed: not applicable Other Strengths And Weaknesses: One of the main strengths of this paper is its originality in applying SAEs to interpret pLMs. As mentioned earlier, this is a relatively new field and a great effort from the authors to test various properties of pLMs through this machinery. The introduction of InterProt as a tool for visualizing these learned features adds practical value. The paper is clearly written and well structured. There is a concern with respect to novelty, as there have been other works from the literature doing several similar things, and by itself this work is not presenting a new method but applying known techniques to pretrained models. Furthermore, as mentioned earlier, the paper could benefit from more content and detail (in particular in the supplementary section) supporting their findings and the main thesis of the paper. There are some potential concerns with respect to statistical validation (e.g. 
F1 threshold) and the results would be stronger if there were quantitative comparisons between extracted features and known biological annotations. In the downstream tasks, evaluations demonstrate that SAE embeddings are predictive but there is no ablation analysis to determine whether specific latents are directly responsible for performance improvements. Finally, there is some concern with respect to generalisability - the experiments focus on a single pLM (it is unclear what results other pLMs would yield) and dataset. Other Comments Or Suggestions: not applicable Questions For Authors: not applicable Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the thorough review and many good suggestions. We agree with the reviewer’s points around the limitations of SAEs. The presence of a large number of family-specific features suggests that SAEs do indeed learn, or memorize, MSAs. Furthermore, the activation pattern of family-specific features supports the hypothesis proposed by Zhang et al. [1] that pLMs store evolutionary statistics via common motifs. Although one cannot conclude from the analysis of SAE features that pLMs have learned any biophysics (as Zhang et al. suggest, they likely do not), the presence of non-family-specific features does present evidence that ESM2 has learned some patterns beyond memorizing common motifs within protein families. For example, features corresponding to generic helices (L16/2611), hydrophobic core residues (L8/2064), and nuclear localization signals (L28/2375) suggest that pLMs do contain generic notions not specific to a protein family. Our mental model is that pLMs not only store co-evolutionary information from MSAs but also some useful statistical interpolations between them. As is common in interpretable ML, understanding SAE latent representations still relies heavily on human intuition. In the case of pLM SAEs, we benefit from established biological knowledge—such as secondary structure and known motifs—which gives us useful reference points. It's encouraging that many SAE latents align with these expected features. However, our claims would be stronger with a quantitative comparison showing that SAE latents are more interpretable than an ESM baseline. To that end: - Though using a smaller model in the ESM family and with a different architecture, Simon et al. [2] showed that more SAE latents map to SwissProt annotations compared to ESM neurons. For a more comprehensive comparison, see our response to reviewer 5Na5. 
- In A.7., we show some analysis of linear probe weight distributions that weakly suggest SAE probe coefficients to be more interpretable. - As done in the Gated SAE paper, we will conduct a blinded human rater experiment where 5 raters who are familiar with protein biology will assess whether an SAE latent or an ESM baseline is interpretable (yes/maybe/no). We will include the results in a revision. Recent criticism of SAEs points to the limitations of LLM-based auto-interpretation techniques [3], which were not used in our work. It has also been found that SAEs trained on the same data learn different features [15]. Many other mechanistic interpretability techniques have been applied to pLMs. For example, attention matrices have been shown to resemble pairwise contact maps ([4], [5], [6]), and contain information about motifs such as signal peptides [7], which are also represented as SAE features. More recent methods such as model diffing [8], trans-coders/cross-coders ([9], [10]), and other SAE architectures ([11], [12]) are promising future directions for this work. We agree that mean-pooling SAE embeddings when performing downstream probes has important limitations. For the presence of a short functional motif or residue-level signal, max-pooling will likely yield clearer results. We would also like to try more sophisticated pooling methods such as aggregating via optimal transport [14] in the future. For the definition of family-specific features, we set the F1 threshold to 0.7 because we observed there to be a significant dropoff in F1 score below that point. We will update the supplement with a figure demonstrating this. Finally, we agree with the suggestion to include more quantitative evaluations of SAE features. Doing so across different hyperparameter schemes would additionally enable a more principled hyperparameter selection strategy. 
Our initial hope was to achieve this via our feature categorization analysis but only observed small effects across different hyperparameters (A.4). We will evaluate our probes across SAEs trained with different hyperparameters. Specifically, a range of SAE latent hidden dimensions and a range of k values. We hope this will help demonstrate the robustness of our results. [1] https://tinyurl.com/m8a5ratb [2] https://tinyurl.com/3xsfefaj [3] https://arxiv.org/abs/2501.17727 [4] https://arxiv.org/abs/2404.16014 [5] https://arxiv.org/abs/2006.15222 [6] https://tinyurl.com/yja6xyhf [7] https://tinyurl.com/mub4tuy5 [8] https://tinyurl.com/4hwm4tp8 [9] https://arxiv.org/abs/2406.11944 [10] https://tinyurl.com/4hwm4tp8 [11] https://arxiv.org/abs/2404.16014 [12] https://arxiv.org/abs/2407.14435 [13] https://tinyurl.com/yc2memtc [14] https://tinyurl.com/yc8j3fzt [15] https://arxiv.org/abs/2501.16615 --- Rebuttal Comment 1.1: Comment: Thank you for this detailed response. I am happy for this work to appear at ICML given the stated additions to the paper.
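The family-specificity criterion discussed in this thread — a latent counts as family-specific when its best per-family F1 score exceeds a threshold such as 0.7 — can be sketched as follows. The function names and toy data here are illustrative assumptions, not the authors' code.

```python
def best_family_f1(active, families):
    # active: whether the latent fires on protein i; families: family label per protein
    best = 0.0
    for fam in set(families):
        tp = sum(a and f == fam for a, f in zip(active, families))
        fp = sum(a and f != fam for a, f in zip(active, families))
        fn = sum((not a) and f == fam for a, f in zip(active, families))
        denom = 2 * tp + fp + fn
        if denom:
            best = max(best, 2 * tp / denom)  # F1 = 2TP / (2TP + FP + FN)
    return best

def is_family_specific(active, families, threshold=0.7):
    return best_family_f1(active, families) > threshold

fams = ["kinase", "kinase", "globin", "globin", "globin"]
specific = [True, True, False, False, False]   # fires exactly on the kinases
scattered = [True, False, True, False, True]   # fires across families
```

Here `specific` reaches F1 = 1.0 on the kinase family and is classified as family-specific, while `scattered` never exceeds the 0.7 threshold for any family.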
Summary: This paper studies sparse autoencoders trained on the protein language model ESM-2. They find that the SAEs contain a variety of generic and family-specific features, as well as features that can be used to identify sequence determinants of properties such as thermostability and subcellular localization. They also provide an interactive visualization tool to help in the labeling or interpretation of SAE features. Finally, they explore the impact of SAE hyperparameters and ESM layers on the features learned by their SAEs. Claims And Evidence: The claims made in this paper are supported by clear evidence, with none being problematic. Please see more on the methods and evaluation criteria for parts that can be strengthened. Methods And Evaluation Criteria: The proposed datasets are intuitive for the task; however, the evaluation criteria are not very clear. In particular, given that one of the main claims of this paper is that SAEs can help to interpret pLMs, more could have been said about how interpretation was performed and the agreement between different labelers of the SAE features. Ideally, the proposed visualization tool, InterProt, would have been anonymously provided with the submission. Alternatively, some examples of most-activating sequences for a few different latents could be provided. Theoretical Claims: This work is mainly empirical, and thus no proofs were provided. Experimental Designs Or Analyses: The soundness and validity of all experimental analyses were checked, in particular for Sections 4.3, 4.4, and 5.2. Note that much of the analysis was highly qualitative in nature, and thus could not be rigorously checked without access to the proposed visualizer. Supplementary Material: All materials in the Appendix (pages 11-15) were reviewed along with the main paper. 
Relation To Broader Scientific Literature: The key contributions of this paper are mainly related to literature in mechanistic interpretability that explore how sparse autoencoders can be trained on LLMs and VLMs to understand the various features encoded by models. Given the growing popularity of SAEs, recent works have proposed applying them to scientific models, including similar concurrent work by Simon and Zou (2024) that trains on pLMs, SAE-Rad that trains on a medical VLM (Abdulaal et al, 2024), and more. By attempting to explain models in an unsupervised manner, SAEs have the potential to discover novel concepts that humans may not have been able to explain before. This paper validates the utility of SAEs/dictionary learning for understanding what features pLMs may be leveraging to perform downstream tasks, with novel findings on the differences between layers of pLMs and the organization and prevalence of various features. Thus, this paper's main contribution with respect to prior literature is evaluating if SAEs are interpretable or useful for this previously unexplored modality. Essential References Not Discussed: While this paper cites Simon & Zou (2024) as a concurrent work and notes the differences between the two works, it would be useful if more details were provided on the similarities and differences between the results of each paper, even if the models explored between the two are different. Did the authors of the other paper find similar breakdowns for each type of latent? Were there notable discrepancies between the conclusions drawn in the two papers? I believe further expanding on this discussion would strengthen this paper and highlight the potential generalizability and reproducibility of the claims made. 
Other Strengths And Weaknesses: Other strengths: - The main strength of this paper is the ability to essentially create concept bottleneck models on top of pLMs by training probes on top of SAE features, thus being able to explain how pLMs are able to solve various downstream tasks and coming up with potential hypotheses for the underlying mechanisms of those tasks. - The proposed visualization tool also seems like a significant contribution of this work, but is currently unverified as it was not submitted with the paper for review. Other weaknesses: - It is not clear exactly what the use of SAEs unlocked in this paper. I think the authors could include more discussion of what new scientific understanding of protein modeling / pLMs was gained specifically as a result of using SAEs that would not have been found through rigorous evaluation of the base model or by probing it more traditionally. - I wonder how SAEs compare against a prototype-based model, such as a ProtoPNet, that directly relies on canonical examples of features rather than learning relevant latent directions that must be labeled post-hoc to understand. While potentially outside the scope of this work, comparison against other methods would strengthen the work and provide stronger evidence for why SAEs are a reasonable and efficient method for understanding pLMs. Other Comments Or Suggestions: In the "future directions" section, the authors note that "training on task-relevant sequences may yield more function-specific latents." I would suggest they look at [1], which provides evidence for this behavior in task-specific SAEs. [1] Makelov, Aleksandar, George Lange, and Neel Nanda. "Towards principled evaluations of sparse autoencoders for interpretability and control." arXiv preprint arXiv:2405.08366 (2024). Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for these thoughtful comments and suggestions. We created an anonymous link for our InterProt visualizer at http://icml.interprot.com and hope that it can provide context on our manual interpretation process and showcase some interpretable features. To support our claims around interpretability, we agree with the need for more human-labeled data and quantitative analysis. As done in the Gated SAE paper (Rajamanoharan et al.), we will conduct a blinded human rater experiment where 5 raters who are familiar with protein biology will assess whether an SAE latent or an ESM baseline is interpretable (yes/maybe/no). We will include the results in a revision. We hope that this can provide more robust evidence on the value added by SAE compared to directly analyzing the base model. We agree with the lack of details on how our work differs from Simon et al. and plan to include more information in our revision. The key differences are as follows: - We trained TopK SAEs while Simon et al. used ReLU SAEs. - We used the larger 650M-parameter variant of ESM-2, compared to the 8M variant used by Simon et al. The 650M-parameter model is far more widely used. - We propose a framework for systematically categorizing latents by family-specificity and activation patterns; Simon et al. does not do latent categorization. We also evaluate the effect of different SAE hyperparameters on these feature categorizations. We use linear probes on four downstream tasks to extract interpretable features with the goal of enabling scientific discovery. Simon et al. focuses on demonstrating the feasibility of SAEs on pLMs and performs quantitative interpretability comparisons between ESM and SAE. - We share an open-source visualizer, InterProt, with data visualizations aimed at manual feature interpretation. 
For example, each SAE latent is displayed with a collection of activating sequences across different activation ranges, an indication of whether they cluster within specific protein families, and options to align them. InterProt also enables searching a sequence across all latents, a feature that has enabled discovery of interesting, interpretable latents starting from a protein of interest. Simon et al. also provides a visualizer, though it focuses more on displaying activation distributions of each latent and whether it has been linked to any Swiss-Prot concepts. Simon et al. proposed a method to automate the interpretation of SAE features using an LLM. We did not explore this approach. We see models like ProtoPNet with built-in interpretability as an exciting and complementary direction to post-hoc interpretation (of usually larger and more performant pLMs) via SAEs. Concept bottleneck models have also been applied to proteins, and have shown competitive performance with pLMs [2]. While we agree that a comparison of SAEs to these methods is outside the scope of this work, we plan to add a discussion of these references to our Related Works section. Finally, we thank the reviewer for the reference on the previous work by Makelov et al. on the effects of training data distribution, a direction we plan to explore in follow-up work. [1] https://arxiv.org/abs/2404.16014 [2] https://arxiv.org/abs/2411.06090
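As a rough illustration of the TopK SAE architecture mentioned in this rebuttal — only the k largest encoder pre-activations survive for each token — here is a minimal forward pass. The dimensions, random weights, and function name are hypothetical stand-ins, far smaller than the trained models discussed above.

```python
import numpy as np

def topk_sae_forward(x, W_enc, b_enc, W_dec, b_dec, k):
    pre = x @ W_enc + b_enc                 # encoder pre-activations over all latents
    z = np.zeros_like(pre)
    idx = np.argsort(pre)[-k:]              # indices of the k largest pre-activations
    z[idx] = np.maximum(pre[idx], 0.0)      # ReLU on the surviving latents
    recon = z @ W_dec + b_dec               # decode back to the model dimension
    return z, recon

rng = np.random.default_rng(0)
d_model, d_sae, k = 16, 64, 4               # toy sizes for illustration only
W_enc = rng.normal(size=(d_model, d_sae)) / np.sqrt(d_model)
W_dec = rng.normal(size=(d_sae, d_model)) / np.sqrt(d_sae)
b_enc, b_dec = np.zeros(d_sae), np.zeros(d_model)
x = rng.normal(size=d_model)                # one token's residual-stream activation
z, recon = topk_sae_forward(x, W_enc, b_enc, W_dec, b_dec, k)
```

The sparse code `z` has at most k nonzero entries per token; training minimizes the reconstruction error between `recon` and `x`, which is what makes the surviving latents candidates for interpretable features.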
LARM: Large Auto-Regressive Model for Long-Horizon Embodied Intelligence
Accept (poster)
Summary: In the open-world environment of Minecraft, this paper proposes the Large Auto-Regressive Model (LARM), which leverages the instruction-following and generalization capabilities of large language models to construct a Minecraft agent. Additionally, the paper introduces Referee RL to provide immediate feedback for training LARM. Experimental results on MineDojo and Mineflayer environments show that LARM outperforms baselines. ## update after rebuttal I have carefully read all the reviewers' comments as well as the authors' rebuttal. Most of my concerns can be addressed through a thorough revision of the manuscript. However, my primary concern remains with the evaluation on the MineRL environment. While I appreciate the additional experiments provided by the authors, I find the following issues problematic: 1) I am surprised that the authors were able to adapt their method to the MineRL environment, train the LARM model, and complete evaluations on 200 tasks (each with 30 trials) within just one day. 2) Due to the significant lack of implementation details and the experiments on MineRL, the paper requires substantial revision. So I keep the scores. Claims And Evidence: One of the key claims of this paper is leveraging the advantages of RL methods and LLMs while mitigating their limitations. Regarding the slow inference speed of LLMs, the paper asserts that LARM achieves an inference speed of 0.58 seconds per inference. However, the paper does not provide a comprehensive experimental comparison for inference speed. Moreover, this speed does not meet the 20Hz inference requirement of MineDojo. Methods And Evaluation Criteria: 1. Table 1 presents experiments conducted on MineDojo; however, the number of evaluated tasks is too limited, significantly fewer than the MineDojo benchmark, which includes 1,581 tasks for Programmatic Tasks. 2. There is a lack of comparison to some powerful baselines on MineDojo, such as DEPS [1], Jarvis-1 [2], MP5 [3], etc. 
[1] Wang, Zihao, et al. "Describe, explain, plan and select: Interactive planning with large language models enables open-world multi-task agents." NeurIPS 2023. [2] Wang, Zihao, et al. "Jarvis-1: Open-world multi-task agents with memory-augmented multimodal language models." TPAMI 2024. [3] Qin, Yiran, et al. "Mp5: A multi-modal open-ended embodied system in minecraft via active perception." CVPR 2024. Theoretical Claims: No obvious errors were found in the theoretical claims. Experimental Designs Or Analyses: As stated in the Methods and Evaluation Criteria section, the paper lacks a sufficient number of evaluation tasks and up-to-date baselines. Supplementary Material: The authors have provided a demo video in the supplementary materials. I confirm that I have reviewed all supplementary materials. Relation To Broader Scientific Literature: The core objective of this paper is to leverage large language models to generate appropriate skills (code) for interacting with the environment to complete tasks. The fundamental idea is derived from prior work, Voyager. The reviewer did not find significant innovations in terms of model architecture, training methods, or skill implementation. However, the reward design approach may contribute to the community. Essential References Not Discussed: The paper does not discuss comparisons between the proposed LARM and key baselines [1] [2] [3] [4] in current Minecraft research. The authors need to clarify the differences or advantages of LARM compared to the aforementioned agents. [1] Wang, Zihao, et al. "Describe, explain, plan and select: Interactive planning with large language models enables open-world multi-task agents." NeurIPS 2023. [2] Wang, Zihao, et al. "Jarvis-1: Open-world multi-task agents with memory-augmented multimodal language models." TPAMI 2024. [3] Qin, Yiran, et al. "Mp5: A multi-modal open-ended embodied system in minecraft via active perception." CVPR 2024. [4] Li, Zaijing, et al. 
"Optimus-1: Hybrid multimodal memory empowered agents excel in long-horizon tasks." NeurIPS 2024. Other Strengths And Weaknesses: Strength 1. Using a large language model to provide rewards is an interesting idea. Although the reward implementation in this paper is relatively simple, it offers new insights into reward design in the field of reinforcement learning. Weakness 1. As stated in line 218, the paper provides GPT-4 with an inventory list and information about environmental resources surrounding the agent to generate appropriate rewards. At the same time, this information serves as input to LARM. However, this implementation relies on internally integrated environment APIs, making it an unfair comparison against most existing works. Other Comments Or Suggestions: None. Questions For Authors: 1. As stated in line 271, the paper follows Voyager’s skill update strategy but does not explain how these skills are generated. Does this imply that these skills originate from Voyager or that LARM is based on the Voyager framework? 2. Could the authors provide more implementation details on how skills interact with the environment in MineDojo? As far as I know, the action space in MineDojo consists of low-level actions rather than code. 3. How was the 0.65 success rate in Table 1 obtained? Line 340 states that each task was executed 30 times—how does this translate to a success rate of 0.65? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We believe the Reviewer has significant misunderstandings of this work. In the following, we address the concerns one by one using more precise explanations and sufficient experiments. ## Q1: Inference speed analysis We did not explicitly compare the speed of our method with the counterparts because many of them are based on LLMs deployed on remote servers like GPT-4. Their speeds are largely influenced by network latency, so it is difficult to compare fairly. To conduct a comprehensive speed comparison as suggested, we first report the efficiency metrics of our method as follows:

| Inference Time | Inference Memory | FLOPs | Training Time |
| :-: | :-: | :-: | :-: |
| 0.58s | 3.8G | 614.8G | 42 hours |

Then, we study how the success rates and inference times change when adopting different base LLMs. Due to the reply character limit, please refer to the reply to Q5 of Reviewer 2LZg for results. Regarding the online inference claim: LARM is responsible for high-level scheduling. Each high-level scheduling step corresponds to several seconds of low-level skill execution, and the inference speed of the low-level skill policy is more than 1000 FPS. So the LARM speed of 0.58 seconds per inference meets the online inference requirement. To avoid such misunderstandings, we change the claim in the paper to "meet the speed requirement of online high-level action scheduling". ## Q2: Evaluated task number Actually, we have tested our method on other tasks in MineDojo, and it remains very effective. We do not report results on them because all previous works in MineDojo choose a selection of representative tasks to report results on, although there are 1,581 tasks in MineDojo in total. If we reported results on the other tasks, we would have no baselines to compare against. The tasks reported in Table 1 of the paper follow previous published works. To further show the effectiveness of our method, we add experiments based on a household simulator, VirtualHome, and a real-world robot. 
Refer to the replies to Q1 and Q2 of Reviewer 4ZZV for details. ## Q3: Missing comparison As reminded by the Reviewer, we will add all the missing references to the paper and discuss the relation with them. In the following, we compare success rates with them. As their experiment settings are different from ours, we unify all methods using the setting of DEPS. The success rates on key achievements are as follows:

| Method | Wooden | Stone | Iron | Diamond |
| :-: | :-: | :-: | :-: | :-: |
| MP5 | 0.89 | 0.76 | 0.52 | 0.22 |
| DEPS | 0.80 | 0.69 | 0.17 | 0.02 |
| JARVIS-1 | 0.89 | 0.89 | 0.35 | 0.09 |
| Optimus-1 | 0.98 | 0.92 | 0.47 | 0.12 |
| LARM | 1.00 | 0.97 | 0.57 | 0.20 |

## Q4: Significance from previous works We believe this work shows great differences and advantages over the works mentioned by the Reviewer. * The works mentioned by the Reviewer are also based on LLMs. This means that if the LLM does not have accurate knowledge about the environment, the method fails. By contrast, ours combines an LLM and RL, and both parts contribute to learning new tasks. If the LLM does not have accurate knowledge (i.e., the referee reward is noisy), RL still learns through the environment reward. To show this, we have added experiments using a household simulator and a real robot. Refer to the replies to Q1 and Q2 of Reviewer 4ZZV for results. * The methods mentioned by the Reviewer are all based on a stack of heavy models, meaning slow inference and high deployment cost. However, embodied applications mostly require fast response and deployment on local devices. Our real-robot experiment shows the trained policy can be deployed using the local resources of a robot and achieve good success rates. ## Q5: Unfair comparison The comparison is absolutely fair. Compared with the methods in MineDojo and Mineflayer, our method does not use any more information than they do. If we call an API, the compared methods also call this API to get information. 
The inventory list and environment information used by our method are also used by the other methods.

## Q6: How skills are generated

We use a skill generation pipeline similar to Voyager's, but the overall method framework is very different. Voyager relies entirely on GPT-4, a remote LLM, to complete tasks in an offline way. Ours supports online learning and is lightweight.

## Q7: Skills in MineDojo

Although the basic actions in MineDojo are low-level actions like moving forward a step, almost all previous works in MineDojo use RL policies and rule-based methods to build higher-level skills, like searching for a tree. The Reviewer can refer to the code of Plan4MC to see how these skills are built. Our skills in MineDojo completely follow them, so the comparison is absolutely fair. We do not use any extra information.

## Q8: Success Rate 0.65

We test our own method 30 times to compute the success rate. The 0.65 is the success rate of a compared method and is taken from its original paper.

---

Rebuttal Comment 1.1: Comment: Thank you for the response. However, I still have the following concerns:

## Response to Q1:

Although the reviewer understands that this was an oversight, every claim made in the paper should be rigorous and accurate. The reviewer hopes that the authors will revise this claim accordingly in the revision.

## Response to Q2:

The reviewer appreciates the authors' efforts in conducting experiments in alternative environments. However, as the main experimental setting of this work, the experiments conducted in Minecraft still lack a sufficient number of evaluation tasks. While the reviewer acknowledges the challenges of conducting experiments on the full MineDojo benchmark, it is worth noting that several prior works have been evaluated on significantly more tasks than LARM, such as 71 tasks for DEPS [NeurIPS'23], 67 tasks for Optimus-1 [NeurIPS'24], and 200 tasks for Jarvis-1 [TPAMI'24].
## Response to Q3:

The reviewer appreciates the authors for incorporating the suggested baselines into the revision. However, it is important to note that these baselines should be fairly compared in the main Table 1 of the paper. This raises a concern, as the reviewer observes that LARM underperforms these baselines on many tasks. For example, in the task "harvest stick," LARM achieves a success rate of 0.93, while Jarvis-1 reaches 1.0. For more comparisons, please refer to the original sources mentioned above.

## Response to Q6:

The reviewer finds the authors' claim that "Voyager completely relies on GPT-4, a remote LLM, to complete tasks in an offline way. Ours supports online learning and is lightweight." somewhat unclear. To the best of my knowledge, Voyager generates code (i.e., skills) in an online environment, thereby enabling the continual updating of its skill library. Moreover, the inference phase of Voyager is also conducted in an online setting. Therefore, what is the "offline way" for Voyager?

## Response to Q7:

Thank you for pointing out Plan4MC, which allowed the reviewer to spend time studying and understanding the implementation details of LARM. However, the reviewer suggests that these details be included in the main text or the appendix, as doing so could help reduce potential confusion and save readers considerable time.

The reviewer appreciates the additional experiments and clarifications provided during the rebuttal phase. However, the reviewer does not consider the concerns mentioned above to be "misunderstandings." If these concerns can be adequately addressed, the reviewer is open to reconsidering the score with a positive attitude.

---

Reply to Comment 1.1.1: Comment:

## Response to Q1: Rigorous claims

We greatly thank the Reviewer for all the constructive feedback. We will incorporate all the revised claims and experiments into the paper.
## Response to Q3: Inferior performance in Table 1

For a clearer explanation, it is better to first address the concern in Response to Q3 (inferior performance) and then the one in Response to Q2 (more evaluation tasks).

The performance of LARM in Table 1 of the paper is inferior to some methods like Jarvis-1 because of an **unfair comparison**. Under a fair comparison, LARM outperforms them. The research focus of this work and of all the works mentioned by the Reviewer is high-level skill scheduling. For a fair comparison between two such works, we need to make sure two parts of the experiment setting are the same: the experiment environment and the low-level skills used.

The experiment environment used by the mentioned works like Jarvis-1 is the Minecraft Universe Benchmark. By contrast, the experiment environments used in Table 1 and Table 2 of the paper are MineDojo and Mineflayer, respectively. In different experiment environments, the task settings, agent initial status, and agent atomic actions can differ. This is why we compare LARM with other methods using two separate tables in the paper; within each of Table 1 and Table 2, the compared methods adopt the same environment.

For low-level skills, the skills used in Table 1 of the paper are based on Plan4MC, while the works mentioned by the Reviewer adopt STEVE-1. The low-level skill policy does not always execute a skill successfully, and the failure of any skill execution can cause the high-level scheduling policy to fail the whole task. STEVE-1, adopted by the mentioned works, is much stronger than the Plan4MC skills we employ in Table 1 (STEVE-1 has a higher skill execution success rate). Therefore, the success rate reported by Jarvis-1 is higher than the success rate of LARM reported in Table 1 due to this unfair comparison.
To ensure a fair comparison, we re-test LARM using the same experiment setting as the mentioned works (based on the Minecraft Universe Benchmark and STEVE-1). The full experiment results are reported in the reply to Response to Q2 (the following reply). Under this fair comparison, LARM outperforms the mentioned works like Jarvis-1. We will add all these explanations to the paper.

## Response to Q2: More evaluation tasks

As suggested by the Reviewer, we conduct experiments on 200 tasks following Jarvis-1. As explained in the Response to Q3, we test our method in the Minecraft Universe Benchmark with the low-level skill policy STEVE-1, consistent with the compared methods. The results are reported in the same format as Jarvis-1 (the 200 tasks are grouped into 7 categories and the average success rate of each category is reported). The results of the compared methods come from the Optimus-1 paper. The success rates of all these methods are as follows:

| Category | DEPS | JARVIS-1 | Optimus-1 | LARM |
| :-: | :-: | :-: | :-: | :-: |
| Wood | 0.77 | 0.94 | 0.99 | 1.00 |
| Stone | 0.49 | 0.89 | 0.92 | 0.97 |
| Iron | 0.16 | 0.36 | 0.47 | 0.57 |
| Gold | 0.00 | 0.07 | 0.09 | 0.17 |
| Diamond | 0.01 | 0.09 | 0.12 | 0.20 |
| Redstone | 0.00 | 0.16 | 0.25 | 0.30 |
| Armor | 0.10 | 0.16 | 0.19 | 0.27 |

According to the results, LARM outperforms all the compared methods in all categories. This reveals the advantage of combining LLM and RL over relying solely on LLMs (as the compared methods do). We will add all these descriptions and experiment results to the paper.

## Response to Q6: Offline way of Voyager

We apologize for the unclear explanation. Voyager is offline because its method is no longer in use after the agent starts executing the scheduled skills. Voyager has two phases: exploration and test. In exploration, given a target task, it uses GPT-4 to write the code of skills that control the agent to complete the task.
If the task is not executed successfully, GPT-4 is prompted to revise the code until the task is completed. In the test phase, Voyager employs GPT-4 to decompose the target task into an ordered sequence of the skills generated during exploration. After the task decomposition, the agent executes the code of the scheduled skills one by one. **While executing code, Voyager has no perception, reasoning, or mechanism for revising the code based on new environment observations.** This means that if something unexpected happens during code execution (which happens often), Voyager cannot handle it. This is what "offline way" refers to: the method is no longer in use after the agent starts executing the scheduled skills. By contrast, LARM perceives the environment and reasons about the next skill while executing the target task. LARM can adjust its actions in unexpected situations, and thus outperforms Voyager.

## Response to Q7: Add details

We will add all these details to the paper as suggested by the Reviewer.
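The open-loop vs. closed-loop distinction drawn in the Q6 response can be sketched as follows. This is a minimal illustration with hypothetical `env`/`policy` interfaces, not the actual code of Voyager or LARM:

```python
def open_loop(plan, env):
    """Voyager-style test phase: the skill sequence is fixed before
    execution starts, and new observations are never consulted."""
    for skill in plan:           # plan decided entirely up front
        env.execute(skill)       # no perception or re-planning in between

def closed_loop(policy, env, max_steps=100):
    """LARM-style scheduling: the next skill is re-chosen after every
    execution, conditioned on the latest environment observation."""
    obs = env.observe()
    for _ in range(max_steps):
        skill = policy.next_skill(obs)   # perceive + reason at each step
        if skill is None:                # policy judges the task complete
            break
        env.execute(skill)
        obs = env.observe()              # unexpected outcomes become visible
```

In the open-loop case, a failed or unexpected skill outcome silently derails the rest of the plan; in the closed-loop case the policy can react to it at the next scheduling step.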
Summary: This paper introduces a lightweight LLM-based agent that balances efficiency and generalization for long-horizon tasks. Using Referee RL, which employs a giant LLM for immediate feedback, LARM overcomes reward vanishment in reinforcement learning. Tested in Minecraft, it outperforms previous methods, achieving complex goals like enchanted diamond equipment.

Claims And Evidence: The paper claims that "LARM runs at 0.58 seconds per inference, meeting online inference requirements." However, with an interaction FPS of less than 2, this is insufficient for real-time deployment. The claim should be framed more accurately.

Methods And Evaluation Criteria: The proposed method is highly similar to prior works [2] and [3], which also employ multimodal large models as reward models to assist agent learning. The contributions are incremental rather than groundbreaking. The paper fails to compare LARM with essential baselines, particularly [2] and [3], which are closely related. A fair comparison is necessary to establish the advantages of the proposed approach.

Theoretical Claims: Yes.

Experimental Designs Or Analyses: The method is only evaluated on Minecraft, whereas related works typically demonstrate robustness and generalization across multiple gaming environments [4]. Testing on other domains, including games with incomplete Wiki resources (e.g., Montezuma's Revenge), would strengthen the claims. When testing in Mineflayer, is the skill code generated by GPT-4 or by your fine-tuned models? The paper lacks evaluation on other vision-language models (e.g., Fuyu, BLIP, Qwen2-VL), making it unclear if LARM's improvements generalize across architectures.

Supplementary Material: Yes.

Relation To Broader Scientific Literature: No.

Essential References Not Discussed: This paper does not cite some related works:

[1] Wang et al. Jarvis-1: Open-world Multi-task Agents with Memory-Augmented Multimodal Language Models. T-PAMI 2024.

[2] Jiang et al.
CLIP-Guided Reinforcement Learning for Open-Vocabulary Tasks. ECCV 2024.

[3] Li et al. Auto MC-Reward: Automated Dense Reward Design with Large Language Models for Minecraft. CVPR 2024.

[4] Wang et al. OmniJarvis: Unified Vision-Language-Action Tokenization Enables Open-World Instruction Following Agents. NeurIPS 2024.

Other Strengths And Weaknesses: The method relies heavily on the Minecraft Wiki for learning. However, the paper does not quantify how much this pretraining improves performance over the original model, leaving a significant gap in evaluation. Additionally, the preprocessing details of the Minecraft Wiki corpus are missing, making replication difficult.

Other Comments Or Suggestions: See weaknesses.

Questions For Authors: See weaknesses.

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1: Rebuttal: We address the concerns of the Reviewer one by one in the following. The paper will be revised accordingly.

## Q1: Real-time inference

The 0.58 seconds per inference is the inference time of the high-level scheduling policy. Each high-level scheduling step corresponds to several seconds of low-level skill execution, and the inference speed of the low-level skill policy is more than 1000 FPS. Therefore, we claim that our method meets the online inference requirement. To avoid misunderstanding, we have changed the claim in the paper to "meet the speed requirement of online high-level action scheduling".

## Q2: Similar to two previous works

The two works mentioned by the Reviewer are both about low-level action learning, while ours is about high-level scheduling. The motivation, method, and results are all very different:

* CLIP RL: This work uses CLIP attention maps to replace text instructions, providing an invariant representation for the policy to approach different objects. It does not share significant similarity with ours, which combines LLM and RL.
* Auto MC-Reward: This work relies on LLMs to write code that generates reward functions. Its idea can be applied to learning low-level skills whose reward functions can be defined explicitly. However, in many tasks (like the real robot block-building task in the reply to Q2 of Reviewer 4ZZV), the rewards are too subjective and complex to define explicitly, so this work is inapplicable. By contrast, our method does not have this limitation.

As suggested by the Reviewer, we add a comparison with these two works. For a fair comparison, all three works adopt the training and test settings of Auto MC-Reward.
The success rates on different key achievements are as follows:

| Method | Wood | Stone | Iron | Diamond |
| :-: | :-: | :-: | :-: | :-: |
| CLIP RL | 0.64 | 0.23 | 0.02 | 0.00 |
| Auto MC-Reward | 0.85 | 0.78 | 0.63 | 0.29 |
| LARM | 0.98 | 0.96 | 0.81 | 0.56 |

## Q3: Test in more environments

As suggested by the Reviewer, we have tested our method in more environments, including VirtualHome (a household activity simulator) and the Cobot Magic robot (a robot in the real world). Due to the reply character limit, please refer to the replies to Q1 and Q2 of Reviewer 4ZZV for experiment details and results. We believe these experiments sufficiently confirm the practical value of our work.

## Q4: How skills are generated

The skills are generated by GPT-4. Our fine-tuned model is used for online high-level action scheduling; it does not serve as the referee model.

## Q5: Generalization to more LLMs

Our method can be generalized to different LLMs. As suggested by the Reviewer, we add experiments that test different LLMs as the referee and as the LARM base model. First, we study the success rates when adopting referee LLMs of different sizes, including TinyLLaVA-3.1B, Fuyu-8B, Llama3-8B, and Qwen2-14B. Notably, these models do not always provide a correct referee reward, so the reward quality differs. The success rates are as follows:

| Referee | Stick | Wooden | Stone | Iron |
| :-: | :-: | :-: | :-: | :-: |
| TinyLLaVA-3.1B | 0.73 | 0.50 | 0.17 | 0.00 |
| Fuyu-8B | 0.87 | 0.57 | 0.27 | 0.03 |
| Llama3-8B | 0.90 | 0.67 | 0.33 | 0.13 |
| Qwen2-14B | 0.90 | 0.67 | 0.37 | 0.20 |
| GPT-4o | 0.93 | 0.70 | 0.40 | 0.27 |

According to the results, we observe that LLMs with more parameters generally lead to better performance. The key factor is the quality of the generated referee reward. Refer to the replies to Q1 and Q4 of Reviewer xu7T for a more in-depth analysis of the reward noise tolerance of LARM.
Then, we study replacing the LARM base model (TinyLLaVA-3.1B) with Fuyu-8B and Qwen2-14B. The success rates and inference times are as follows:

| Base Model | Stick | Wooden | Stone | Iron | Inference Time |
| :-: | :-: | :-: | :-: | :-: | :-: |
| TinyLLaVA-3.1B | 0.93 | 0.70 | 0.40 | 0.27 | 0.58 |
| Fuyu-8B | 0.97 | 0.77 | 0.43 | 0.30 | 1.19 |
| Qwen2-14B | 1.00 | 0.80 | 0.47 | 0.33 | 2.88 |

According to the results, replacing the base model with a larger one generally improves performance, but the inference time also increases significantly.

## Q6: Missing reference

As suggested, we will add all these references to the paper and discuss their relation to our work.

## Q7: Wiki Pre-train effect

The Wiki pre-training improves the model's convergence speed and success rates on different tasks significantly. We have in fact reported the success rate gains from Wiki pre-training in Table 4 of the paper. For the convenience of review, we present the results in the following table. Besides success rates, we also report the number of exploration iterations needed to reach convergence.

| Wiki Pre-train | Exploration Iterations | Stick | Wooden | Stone | Iron |
| :-: | :-: | :-: | :-: | :-: | :-: |
| No | 8500 | 0.83 | 0.57 | 0.33 | 0.13 |
| Yes | 5000 | 0.93 | 0.70 | 0.40 | 0.27 |

The Wiki data preprocessing pipeline is the same as in MineDojo.
Summary: This paper focuses on long-horizon embodied intelligence, specifically Minecraft tasks. Previous works generally rely on the strong generalization of giant LLM agents, since the performance of lightweight LLMs such as LLaVA-7B is limited. However, this requires huge computing resources. In this paper, the authors aim to combine the advantages of both RL methods and LLM methods while avoiding their shortcomings. To achieve this, they first propose the Large Auto-Regressive Model (LARM), whose main body uses the same lightweight LLM as TinyLLaVA. LARM is equipped with basic knowledge about the game it is playing through pre-training on numerous Wiki webpages. It predicts the next action to perform in an auto-regressive manner, taking environmental observations as input. To train LARM, this paper introduces referee RL instead of traditional RL, which would suffer from reward vanishment during long-horizon embodied exploration. The core idea is to employ a referee (such as a giant LLM) to provide immediate feedback about whether the just-performed action makes a positive contribution toward realizing the final target.

Claims And Evidence: Yes. This paper provides solid experiments to validate its claim: using a lightweight LARM for long-horizon embodied tasks with referee RL as the training method.

Methods And Evaluation Criteria: Yes. This paper carries out experiments on MineDojo and Mineflayer, which are commonly used to assess an embodied agent's ability on long-horizon tasks. The authors also provide extensive comparisons with previous SoTA methods.

Theoretical Claims: Yes, they are correct.

Experimental Designs Or Analyses: Yes, the experiments are extensive.

Supplementary Material: Yes, all parts, including the demo video of finishing the final task.

Relation To Broader Scientific Literature: The method proposed in this paper could potentially be adopted to promote AI agents in game environments.

Essential References Not Discussed: No.
Other Strengths And Weaknesses: This paper uses a tiny LLM to finish the long-horizon diamond task. Compared with previous works that rely on giant LLMs (e.g., GPT-4o) or fine-tune larger LLMs (e.g., 70B LLaMA), the LARM proposed in this paper is time-efficient, taking only 40 GPU hours on a single RTX 4090 GPU.

Other Comments Or Suggestions: It would be better if the authors could provide results beyond the Minecraft environment. For example, have the authors tried their method in simulated household environments like VirtualHome, or even in the real world? That would show more value compared to the current block-style Minecraft, considering the rules of Minecraft are highly structured.

Questions For Authors: No additional questions.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We have tested our method in more environments as suggested; the details are below. The paper will be revised accordingly.

## Q1: Experiment in a household simulator

We thank the Reviewer for this suggestion. As suggested, we conduct experiments in VirtualHome to further validate the effectiveness of our method. We design 10 tasks, each requiring 4 to 6 steps of low-level skill execution. An example of the designed tasks is "watch TV". To complete this task, the agent needs to complete the following five steps in sequence: (1) walk to the TV; (2) turn on the TV; (3) walk to a sofa; (4) sit on the sofa; (5) watch TV. An anonymous video showing how LARM performs this "watch TV" task is provided here: [VirtualHome Video](https://drive.google.com/file/d/1Dp0ivtO3e-8xa6fsBtbQD7wLtSl6cYtp/view?usp=sharing).

In this experiment, we remove the Wiki data pre-training step for LARM to show that our algorithm also works well without it. We employ Qwen-VL-Max as the referee model. Notably, Qwen-VL-Max does not always provide a correct referee reward (about a 10% error rate). Therefore, this experiment shows the robustness of our method under noisy rewards. Refer to the replies to Q1 and Q4 of Reviewer xu7T for a more in-depth analysis of the robustness of our method under noisy rewards.

In this experiment, we compare our method with classic RL methods, including deep Q-learning, TRPO, and PPO, to reveal how much efficiency is gained by combining RL and LLM with our method. The input to the policies includes the task target description, the agent's initial state, and historical actions. As VirtualHome provides functions to execute each low-level skill, we do not need to implement low-level policies as in Minecraft. For each method, we report the average number of exploration episodes the policy needs to complete the designed tasks for the first time (all actions are predicted without random exploration).
The results are as follows:

| Method | Deep Q-learning | TRPO | PPO | LARM |
| :-: | :-: | :-: | :-: | :-: |
| Average Explorations | 4876.3 | 3925.8 | 3766.0 | 85.4 |

The above results sufficiently show the efficiency advantage of LARM. For tasks needing longer action chains (tens to thousands of steps are needed for the tasks in Minecraft), classic RL methods like deep Q-learning cannot complete a task even once after millions of exploration attempts. By contrast, LARM learns to complete the task efficiently.

## Q2: Experiment in the real world

As suggested by the Reviewer, we design an experiment in the real world to further validate the practical value of our method. In this experiment, we train a LARM policy to control a robot to stack blocks. Specifically, several blocks of various shapes are randomly placed on a table. The LARM policy is trained with our proposed referee RL algorithm to stack the blocks into a building resembling a house. To complete this task, the policy needs to learn to select suitable blocks and decide the relative positions at which to place them. The basic skills are implemented based on imitation learning, where we adopt the robotic manipulation algorithm VIRT to grasp and place blocks. We use Qwen-VL-Max as the referee model. It judges whether the building is becoming more like a house based on image observations. Notably, we do not explicitly define what a house should look like, so there are multiple ways of stacking the blocks to resemble a house. Correspondingly, the referee reward provided by the referee model is noisy. Interestingly, after 100 iterations of random exploration, the policy learns to select and stack different blocks into a house.
Anonymous videos of two block-stacking examples are provided here: [Robot Stack Blocks 1](https://drive.google.com/file/d/15dB0SLVZBckz6WzJfIEq7GKv7n_Mugdg/view?usp=sharing) and [Robot Stack Blocks 2](https://drive.google.com/file/d/1SofuQErTk8qn51ChDwSVmTTklY9WpG5X/view?usp=sharing). From the results of this experiment, we can conclude that (1) our proposed method can be applied to a real robot task, and (2) by combining LLM and RL, our method can derive policies that are able to perform creative tasks.

---

Rebuttal Comment 1.1: Comment: Thank you for your reply, and sorry that I made a mistake by using "official comment" instead of "rebuttal comment". I do appreciate the experiments the authors carried out in additional environments like VirtualHome and the real world. For now I hold a positive attitude toward this work and will make a final decision after the discussion with other reviewers.
Summary: The paper introduces LARM (Large Auto-Regressive Model), a lightweight LLM-based embodied agent designed for long-horizon decision-making in open-world environments. LARM is built on a lightweight auto-regressive model (fewer than 5B parameters) and directly predicts actions instead of generating text like traditional LLMs, enabling faster inference in real-time settings. The paper identifies the reward vanishment problem in classic RL, where long-horizon credit assignment becomes ineffective. To address this, the authors propose Referee RL, a technique where a giant LLM (GPT-4) provides immediate feedback on the quality of actions, distilling generalizable knowledge into LARM without human supervision. Claims And Evidence: LARM balances efficiency and generalization by combining RL’s efficiency with LLM’s reasoning (Figure 1). It directly predicts actions instead of generating text, enabling faster inference. LARM outperforms RL and LLM-based baselines (MineAgent, Plan4MC, LLaMA-Rider, RL-GPT), achieving higher success rates in Minecraft tasks and becoming the first AI to craft enchanted diamond equipment (Tables 1 & 2). Referee RL mitigates reward vanishment, providing immediate feedback via GPT-4, improving long-horizon learning stability (Equation 5, Algorithm 1). Ablations confirm that removing Referee RL hurts performance (Table 3). Methods And Evaluation Criteria: Reasonable methods and evaluation criteria. LARM uses Referee RL, where GPT-4 provides auxiliary rewards to address reward vanishment in long-horizon tasks. Built on TinyLLaVA-3.1B with LoRA, it directly predicts actions instead of generating text, enabling faster inference. Pretrained on a 34GB Wiki dataset for better long-horizon planning. Evaluation Setup: Tested in: MineDojo (open-ended AI) and Mineflayer (API-based Minecraft). Metrics: Task success rates, inference speed (0.58s per step on RTX 4090). 
Theoretical Claims: LARM identifies the "reward vanishment" problem in long-horizon RL, where TD errors approach zero over time, making standard RL inefficient for credit assignment (Equation 5). Referee RL mitigates this issue by injecting auxiliary rewards from GPT-4, providing immediate feedback on action quality, which stabilizes long-horizon learning (Algorithm 1, Figure 2). GAE formulation (Equation 6) supports the claim that standard PPO struggles with long-horizon tasks, while Referee RL preserves meaningful TD errors, enabling more effective optimization. Discussions: Missing formal convergence analysis – The paper does not provide a proof that Referee RL leads to stable policy updates over time. Experimental Designs Or Analyses: -Strengths: Comprehensive evaluation of LARM across multiple long-horizon tasks in MineDojo and Mineflayer, demonstrating its effectiveness in open-ended embodied AI. Comparison with RL-based and LLM-based baselines (MineAgent, Plan4MC, LLaMA-Rider, RL-GPT, Voyager, AutoGPT, STEVE) ensures fair benchmarking (Tables 1 & 2). -Weaknesses and Missing Analyses: Referee RL is the key contribution, but its impact is not fully analyzed. The paper shows that GPT-4-based referee feedback improves LARM, but does not explore alternative reward shaping methods (e.g., inverse RL, reward relabeling). More experiments are needed to analyze how different referee models (e.g., smaller LLMs vs. GPT-4) affect policy training and success rates. How sensitive is Referee RL to noisy or incorrect feedback? The paper does not test scenarios where the referee gives suboptimal or misleading rewards. Supplementary Material: Video explains the details. Relation To Broader Scientific Literature: LARM contributes to RL-LLM hybrid learning. Key novelty: Referee RL introduces LLM-based auxiliary rewards to mitigate reward vanishment in long-horizon RL. 
LARM is an example of multi-modal input models for direct action prediction, showing how LLMs can enhance policy learning without text-based prompting. What’s missing? Comparison to hierarchical RL approaches Discussion on alternative reward shaping methods – hindsight experience replay, etc. Relation to decision transformers or sequence modeling in RL Essential References Not Discussed: Decision Transformer (DT) (Chen et al., 2021) is not referenced or discussed. -DT treats reinforcement learning as sequence modeling, using autoregressive token prediction similar to LARM’s auto-regressive action selection. -Why it matters: Comparing LARM’s LLM-based policy with DT’s transformer-based decision-making would clarify how Referee RL improves credit assignment compared to DT’s return-conditioned training. Hierarchical Reinforcement Learning (HRL) methods are not discussed. -LARM avoids hierarchical task decomposition by relying on an LLM-based policy, but methods like FeUdal Networks (Vezhnevets et al., 2017) or Option-Critic (Bacon et al., 2017) explicitly structure long-horizon tasks with sub-goals. Other Strengths And Weaknesses: -Strengths Novel approach to handling reward vanishment – LARM introduces Referee RL, where a GPT-4-based referee provides auxiliary rewards, improving long-horizon credit assignment. Combines RL and LLM advantages while mitigating their drawbacks – Unlike task-specific RL agents or slow, text-generating LLM agents, LARM achieves both efficiency and generalization by directly predicting actions (Figure 1). -Weaknesses: The paper is highly specific to VLMs and LLMs, limiting broader RL insights. The system design and evaluation focus heavily on LLM-driven decision-making, but a more general RL perspective (e.g., comparisons with hierarchical RL, multi-step planning, or alternative reward shaping methods) would provide broader applicability. 
Few traditional RL baselines – While the paper compares against LLM-based baselines, it does not compare LARM’s learning efficiency against traditional RL methods beyond PPO. No discussion on whether Referee RL could benefit non-LLM policies – Could a transformer-based decision model or hierarchical RL agent also benefit from Referee RL? No computational efficiency analysis. The paper claims LARM is more efficient than traditional LLM agents, but does not provide FLOP comparisons, memory usage, or training time benchmarks to quantify efficiency gains. Other Comments Or Suggestions: Referee RL needs further analysis. How sensitive is Referee RL to incorrect or noisy feedback? The paper does not analyze how GPT-4 errors affect policy learning. No study on different referee models – The paper assumes GPT-4 is necessary, but smaller LLMs or alternative feedback mechanisms could be explored. Questions For Authors: How does LARM compare to traditional RL methods beyond PPO? The paper primarily compares LARM to LLM-based approaches, but how does it compare to hierarchical RL or model-based RL in long-horizon planning? Impact on Evaluation: If LARM is significantly better than traditional RL, it strengthens the argument for LLM-driven decision-making in embodied AI. How sensitive is Referee RL to incorrect or noisy feedback? The paper assumes GPT-4 always provides accurate auxiliary rewards, but what happens if the referee makes errors or inconsistent judgments? Impact on Evaluation: If Referee RL is highly sensitive to noise, additional robustness measures may be needed. Could a smaller LLM serve as an effective referee? The paper uses GPT-4 as the referee, but has the impact of smaller LLMs (e.g., LLaMA-7B, Mistral-7B) been tested? Impact on Evaluation: If smaller models perform similarly, LARM could be more computationally efficient. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We have addressed all concerns of the Reviewer in the following. The paper will be revised accordingly.

## Q1: Referee RL stable update proof

We thank the Reviewer for this reminder and will add this proof to the paper. Due to the reply character limit, we cannot provide the whole proof here, but we can give a concise description. PPO can be treated as an extension of TRPO, and the stable update of TRPO has been proved. Thus, to prove the stable update criterion of referee RL, we need to focus on the key difference between referee RL and PPO: the reward noise. To imitate this noise, we replace the correct reward with a false reward drawn from a random distribution at a ratio $\sigma$. Empirically, if $\sigma$ exceeds a threshold $\epsilon$, the policy cannot converge stably. We can analyze the value of $\epsilon$ based on the gradient bias analysis in stochastic optimization theory.

## Q2: The impact of referee RL and relation to RL techniques

Referee RL can be applied to various RL algorithms beyond PPO. Referee RL combines the techniques of RL and LLMs to address key problems in both communities, and therefore benefits both. The key problem of RL is its exploration inefficiency; an LLM has general knowledge. Our work shows that an LLM can guide RL and reduce the exploration cost very significantly (see the experiment results in the reply to Q1 of Reviewer 4ZZV). The key problem of LLMs is that they lack knowledge from direct environment interaction. Through RL exploration, we tune a lightweight LLM into a SOTA embodied agent. Our method does not conflict with the RL techniques mentioned by the Reviewer, such as hierarchical RL. They handle the long-horizon exploration problem from different perspectives and can be applied simultaneously.

## Q3: Exploration on more RL methods

The RL methods mentioned by the Reviewer include hierarchical RL, inverse RL, reward relabeling, Decision Transformer, and more RL baselines.
These methods do not conflict with our method and can be applied simultaneously. We will add a discussion of the relation to these methods to the paper. Specifically, our LARM model is for high-level scheduling, and a part of the low-level skills are implemented based on PPO or deep Q-learning, so hierarchical RL has already been used in our work. Inverse RL assumes there are optimal expert demonstrations; however, such demonstrations are unavailable in our studied task. Reward relabeling assumes the historical actions are optimal under the reward function; however, this assumption is often too strong to meet, and the reward is also difficult to define explicitly. We have tried the idea of Decision Transformer, which relies on training on expert demonstration data. To realize this idea, we first collect abundant demonstration data, and the method achieves similar performance to referee RL with less computing cost (4 hours of training); however, expert demonstration data is often unavailable. We have tried replacing the PPO in referee RL with deep Q-learning, and the result comparison is as follows:

| | Stick | Wooden | Stone | Iron |
| :-: | :-: | :-: | :-: | :-: |
| Deep Q-learning | 0.87 | 0.63 | 0.30 | 0.17 |
| PPO | 0.93 | 0.70 | 0.40 | 0.27 |

We can see that PPO-based referee RL gets better performance.

## Q4: Influence of referee reward noise

We have analyzed the influence of reward noise theoretically in the reply to Q1; this part analyzes it using experiments. We randomly modify the reward generated by GPT-4 with a ratio $\sigma$, and the success rates on different tasks are as follows:

| $\sigma$ | Stick | Wooden | Stone | Iron |
| :-: | :-: | :-: | :-: | :-: |
| 0% | 0.93 | 0.70 | 0.40 | 0.27 |
| 10% | 0.93 | 0.67 | 0.30 | 0.17 |
| 30% | 0.77 | 0.33 | 0.17 | 0.07 |
| 50% | 0.50 | 0.13 | 0.00 | 0.00 |

We can observe that (1) stronger noise deteriorates the success rates, and (2) tasks requiring longer execution chains are less noise tolerant.
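For clarity, the noise-injection protocol used in this ablation can be sketched as follows; this is a minimal illustration, where the function names and the uniform noise range are placeholders rather than our exact implementation:

```python
import random

def noisy_referee_reward(base_reward, sigma, low=0.0, high=1.0):
    """With probability sigma, replace the referee's correct reward by a
    draw from a random (here uniform) distribution; otherwise keep it."""
    if random.random() < sigma:
        return random.uniform(low, high)
    return base_reward
```

Setting $\sigma=0$ recovers the clean referee reward, while larger $\sigma$ reproduces the degradation pattern reported in the table above.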
In addition, we design two new experiments using the VirtualHome simulator and a real robot, where the referee LLM sometimes provides incorrect rewards. Refer to the replies to Q1 and Q2 of Reviewer 4ZZV for experiment details and results.

## Q5: Results on smaller referee LLMs

As suggested by the Reviewer, we study the success rates when adopting referee LLMs of different sizes. Due to the reply character limit, refer to the reply to Q5 of Reviewer 2LZg for details.

## Q6: Computing efficiency analysis

The reason we did not provide a quantitative efficiency comparison with previous LLM-based methods is that these methods are often based on large LLMs deployed on remote servers, like GPT-4. We cannot know their specific parameter numbers, FLOPs, etc. What we can be sure of is that their computing costs are enormous, as publicly known, and far exceed ours. We report the efficiency metrics of our method as follows:

| Inference Time | Inference Memory | FLOPs | Training Time |
| :-: | :-: | :-: | :-: |
| 0.58s | 3.8G | 614.8G | 42 hours |
Adversarial Combinatorial Semi-bandits with Graph Feedback
Accept (poster)
Summary: **Edit post-rebuttal: I thank the authors for their feedback, which answered my questions. I maintain my overall positive score.** The submission considers adversarial combinatorial semi-bandits, with additional feedback, ranging from no additional feedback to full-information feedback. The two extreme cases were respectively studied by Audibert, Bubeck and Lugosi (2014, no additional feedback) and Koolen, Warmuth, Kivinen, (2010, full-information feedback). The corresponding optimal regret bounds (up to logarithmic factors) are respectively $\sqrt{K S T}$ and $S \sqrt{T}$, where $S$ is the number of arms played at each round and $K$ is the total number of arms. The present submission interpolates between these two extremes (both in terms of upper and lower bounds), by suggesting a model of graph-determined feedback and by stating optimal regret bounds in terms of the richness of the graph (quantified by the independence number of the graph). The two extremes are given by a graph with inner loops at each node, and no other edge, and by the complete graph. Note that the regret upper bounds also work for a sequence of adversarial feedback graphs. Claims And Evidence: The results are theoretical: I could not spot flaws in the proofs---though their clarity could sometimes be improved, and though large parts of the proofs could be made more concise, by using existing results; see details below. Methods And Evaluation Criteria: The setting considered makes sense. The regret is only handled in expectation, and I think that high-probability bounds would have been appreciated. Theoretical Claims: I checked in details the main two results: Theorem 1.1 and Theorem 1.3 (based on Lemma 1.2, proved in Appendix A, whose proof I did not check in detail). I could generally follow the derivations. Theorem 1.1 - Could you detail (page 4, column 1, line 182) why this assumption comes with no loss of generality? 
It would look more satisfactory to me to rather have some of the $I_j$ contain $n$ elements and some others contain $n-1$ elements, and write the proof in this context
- Page 4, first two thirds of column 2: the arguments could be made more compact (and would avoid resorting to densities) by using the bound by Ménard, Garivier, and Stoltz (2019)
- Page 4, Equation (4): first, $T_0$ is not defined (it is the number of pulls in $I$); second, I think that this is the place where you critically use the definition of $I$ as a maximal independent set, and this must be detailed and commented. This is the crux of the proof, while earlier manipulations were standard
- The remarks stated right before Section 3 are nice and should have been stated before the formal proof, as they help in navigating it

Theorem 1.3
- The beginning of this proof (possibly from page 6, column 1, line 317 to page 7, column 1) could be replaced by an appeal to the bound of Theorem 3 of Audibert, Bubeck and Lugosi (2014), which actually holds for all estimators; i.e., the new part consists of the end of the proof, where, in particular, the negativity of the covariance of the $p^t$ is used

Experimental Designs Or Analyses: N/A Supplementary Material: No, as the proofs of the main claims (except for the proof of Lemma 1.2) are written in the main body of the article. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: The submission discusses all references I expected to find. Other Strengths And Weaknesses: This submission provides an interesting, elegant, and optimal interpolation between existing optimal regret bounds for combinatorial semi-bandits. Most of the proof techniques were known (lower bound proof structure; pre-regret upper bound in terms of the estimates; Lemma D.2 by Alon et al., 2015) but they are well assembled and there seems to be a new ingredient provided by the representation of Lemma 1.2.
On a side note, I particularly appreciated the comments and insights throughout the text; e.g., page 3, column 2, lines 117-125, or page 6, column 1, lines 282-295 Other Comments Or Suggestions: - Page 1, column 1, line 97: I guess this should be $\mathbb{E}[p_a^t]$ and not $\mathbb{E}[v_a^t]$? - Page 1, column 2, line 64: put the definition of $\alpha$ and $\delta$ before using them - Page 2, Lemma 1.2: do we agree that, in some sense, Conv($A$) and the probability distributions over $A$ are the same set? - Page 2, Section 1.3: the concepts of independent subset and dominating subset should be defined, to make the submission self-complete - Page 8: $\delta$ is only used at this stage, its definition could thus be postponed here Questions For Authors: I have no specific question but feedback from the authors on some points raised above (e.g., whether high-probability bounds are easy to obtain; whether indeed the application of Theorem 3 by Audibert, Bubeck, Lugosi, 2014, would indeed save 1 page, etc.) are welcome. **Edit post-rebuttal: I thank the authors for their feedback, which answered my questions. I maintain my overall positive score.** Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: The detailed review and insightful feedback from the reviewer are deeply appreciated. We have made several updates in light of the review in our revision.
- It is worth mentioning that our previous lower bound construction did not lead to the desired trade-off and was wrong. We corrected it by constructing $S$ independent sub-problems via a multi-index $u\in[n]^S$ (instead of $u\in[n]$) and adopted a stopping time argument from [1]. The crux of the lower bound is that the learner has to learn each of the $S$ sub-problems well and independently. As an intuitive reasoning, a good learner tends to pick one arm of each sub-problem, i.e. distributes the pulls roughly evenly, because if he/she picks two, then one of the two must be suboptimal and incurs regret. This new proof will be deferred to the appendix due to the space limit and because it is not the particular contribution of this work. We will provide an intuitive sketch of the proof in Section 2 that includes the remark mentioned by the reviewer at the beginning.
- Regarding the comments on the lower bound: we may assume $\alpha > 3S$, since otherwise $S\sqrt{T}=\Omega(\sqrt{\alpha ST})$ and we are done. Then we only need to consider the independent subset with size $\lfloor \alpha/S\rfloor S=nS\le \alpha$. Note that since $\alpha > 3S$, it holds that $nS\ge \alpha-S\ge \alpha/2$, so the final bound would be the same up to a factor of 2.
- We apologize for the confusion. $T_0$ was a typo and was meant to be $N_0$, which counts the number of pulls outside of the independent subset $I$. So the regret would be $\Omega(N_0)$, since any node outside (while it may be more informative) incurs a constant regret.
- We agree that the beginning of the upper bound proof overlaps with that of the earlier work. Nonetheless, it was included for the sake of rigor because we use an additional truncation on the convex hull.
- The high probability upper bound may be a nontrivial extension and we will leave it to future work. - The comments are carefully addressed in our revision. In particular, there is a surjective mapping from $Conv(\mathcal{A})$ to the probability distributions over $\mathcal{A}$, so in this sense it is easy to obtain a distribution with mean $x\in Conv(\mathcal{A})$. But the requirement of negative correlations in our case is nontrivial to satisfy and, as shown in Section 4.1, is impossible for some $\mathcal{A}$. References: [1] Lattimore, Tor and Kveton, Branislav and Li, Shuai and Szepesvari, Csaba. TopRank: A Practical Algorithm for Online Stochastic Ranking, NeurIPS 2018.
Summary: The authors consider the problem of combinatorial semi-bandit with feedback graphs, where a graph over the $K$ actions may provide the learner with side information during the learning process. By presenting appropriate lower and upper bounds, the authors establish a minimax optimal regret bound for graphs containing self-loops of $\widetilde \Theta(S \sqrt{T} + \sqrt{\alpha S T})$, where $S$ is the size of a combinatorial action and $\alpha$ is the independence number of the underlying feedback graph. For the upper bound, the authors design a natural extension of the OSMD algorithm to the feedback graph setting, with the crucial property that the sampling distributions $p_t$ satisfy a negative correlation property, which allows obtaining the optimal dependence on $S$ for this algorithm. Furthermore, the authors consider scenarios where not all combinatorial actions belong to the available action set, and show that in such settings it may not be possible to satisfy the negative correlation property, and in fact the optimal rates in such settings are worse by a $\sqrt{S}$ factor. The authors also briefly discuss the weakly observable setting and analyze an Explore-Then-Exploit algorithm for this setting with stochastic rewards. ## update after rebuttal: Given the latest response by the authors, they seem to have a reasonably convincing argument showing that if negative correlations are not explicitly enforced in OSMD, the algorithm could incur suboptimal regret. Given the validity of their argument, I think that such a result significantly strengthens the contributions of the paper, and I highly encourage the authors to include it in the final version. I am now inclined to increase my score to "Accept". Claims And Evidence: The authors provide formal proofs for all of their results, contributions and technical claims. The analysis in the paper is clear and easy to follow, with many parts being very similar to analysis from previous related work. 
Methods And Evaluation Criteria: N/A Theoretical Claims: I went over some of the proofs of the theoretical claims in the paper, and found one possible issue which probably does not affect the overall claim, but requires the authors' attention. Specifically, in the upper bound analysis, when bounding the second order term, the authors bound a term of the form $\sum_a \frac{x^t_a}{\sum_{i \to a} x^t_i}$ by an order of $\alpha$ by referring to Lemma 5 of [1]. However, examining the conditions of this lemma, using it as is requires the sum $\sum_a x^t_a$ to be bounded by 1, while in the combinatorial semi-bandit setting it is in fact bounded by $S$. This can be mitigated if the authors modify the optimization domain $Conv_{\epsilon}(\mathcal{A})$ to $Conv_{S \epsilon}(\mathcal{A})$ so that the iterates would satisfy $x^t_a \geq \epsilon \sum_i x^t_i$ as required by the conditions of the lemma. This modification would result in a slightly worse dependence on $\epsilon$ in the overall bound, which will ultimately affect the additive term ($\epsilon SKT$ instead of $\epsilon K T$) and the logarithmic term, and thus would not change the overall claim and contributions. I would appreciate it if the authors could address this issue. Reference: [1] Alon, Noga, et al. "Online learning with feedback graphs: Beyond bandits." Conference on Learning Theory. PMLR, 2015. Experimental Designs Or Analyses: N/A Supplementary Material: I went over some of the supplementary material, with a particular focus on the auxiliary lemmas from previous works and how they are used in the analysis of the authors' upper bound. Relation To Broader Scientific Literature: The authors discuss relevant works from both the bandits with feedback graphs literature and the combinatorial semi-bandit literature. Specifically, they cite Alon et al. ('15), in which optimal regret bounds are established for the non-combinatorial variant of the problem, and Audibert et al.
('14) who presented the OSMD algorithm which achieves optimal regret bounds for combinatorial semi-bandits with no feedback graphs. Essential References Not Discussed: When discussing full-bandit feedback, the authors do not mention previous works that prove that the optimal bounds are of the form $\Theta(S^{3/2} \sqrt{KT})$, shown in [1] for a specific combinatorial action set (namely, multitask bandits) and in [2] for the full action set. While the full-bandit variant is not extremely relevant to this work, since the authors mention this variant and cite some related work, they should also mention the aforementioned papers in which the optimal rates are characterized. References: [1] Cohen, Alon, Tamir Hazan, and Tomer Koren. "Tight bounds for bandit combinatorial optimization." Conference on Learning Theory. PMLR, 2017. [2] Ito, Shinji, et al. "Improved regret bounds for bandit combinatorial optimization." Advances in Neural Information Processing Systems 32 (2019). Other Strengths And Weaknesses: Strengths: * The authors study a well-motivated problem which has not been studied in previous works, namely combinatorial semi-bandits with feedback graphs. * The authors establish optimal (up to logarithmic factors) regret bounds for this problem under mild assumptions on the feedback graph. * The analysis presented by the authors is clear, easy to follow, and in large parts resembles analysis of similar problems from previous works. * The key technical observation made by the authors, namely, that using sampling distributions with negative correlation between arms is essential in order to obtain optimal bounds, seems very interesting to me, and I'm not sure that I've seen anything similar in previous works. The fact that such sampling distributions exist also seems nontrivial. 
* The authors support the fact that the negative correlation property is crucial by exhibiting instances with a limited decision set in which this property doesn't hold, and the optimal regret bound is in fact worse by a factor of $\sqrt{S}$. Weaknesses: * It seems to me that other than the key observation regarding the negatively correlated arms (specifically, Lemma 1.2 and how it is used in the proof of Theorem 3.2), the other parts of the analysis (including the lower bound) use fairly standard techniques which are very common in related previous work. Therefore, I think perhaps this key observation is not highlighted well enough in the current version of the paper, and it is a bit lost among the other parts of the analysis which are fairly straightforward. * The authors mention that extending their results from graphs with self-loops to strongly observable graphs is straightforward without any elaboration. However, in previous works, specifically in Alon et al. ('15), such an extension requires some additional techniques which are not present for graphs with self-loops. I suggest that the authors either elaborate on how such an extension can be performed for their results, or alternatively remove this remark altogether. Other Comments Or Suggestions: * I suggest that the authors further highlight their main technical novelty regarding the negatively correlated arms, as it seems highly nontrivial and without it the rest of the techniques are pretty straightforward extensions from previous works. In that regard, I think that less focus should be given to the lower bound in the main text, as it seems pretty straightforward and doesn't include any highly novel techniques. Instead, the authors could provide further intuition and discussion on the negative correlation property; specifically, I would appreciate a high level intuition of why there must exist such a distribution, as the proof of Lemma 1.2 is quite technical and not so intuitive.
Questions For Authors: 1. I would appreciate it if the authors could comment on the possible issue I mentioned under "Theoretical Claims". 2. Do the authors have an example where if their algorithm OSMD.G is run without the negative correlation condition, then this algorithm can incur regret of the form $\Omega(S \sqrt{\alpha T})$ even when the action set is full? I think that such an example can considerably strengthen the results, as it would imply that explicitly requiring this condition is essential for this algorithm. Given such a result, I may consider increasing my evaluation score of the paper. 3. Regarding the weakly-observable case, have the authors attempted an extension of EXP3.G (Alon et al., '15) to the combinatorial setting? If so, can the authors comment on why such an extension is nontrivial? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's thorough review and comments and have made several updates in our revision in light of the review.
- We appreciate the reviewer spotting the issue of the missing factor $S$ in the log term. We have corrected this in our revision.
- We agree that the lower bound is not the focus of this work (in fact, its proof was wrong as pointed out by Reviewer bn5t; we have corrected the hard instance construction and proof techniques in our revision). We will move the proof to the appendix and only briefly discuss the construction in the main text, in order to keep our focus on the upper bound ideas. Instead, we provide an intuitive reasoning for why such distributions exist, which is essentially the high-level idea of our proof of Lemma 1.2. Since the revision cannot be posted, we will paste this reasoning here for clarity: When $S=1$, any distribution possesses negative correlations. Inductively, let us suppose such distributions exist for $1,2,\dots, S-1$. Then for a fixed target $x\in\mathsf{Conv}(\mathcal{A})$, we can always find an index $i\in[K]$ such that $\sum_{j=1}^{i-1}x_j + cx_i=1$ and $\sum_{j=i+1}^Kx_j + (1-c)x_i=S-1$ for some $c\in[0,1]$. Namely, the target of size $S$ is partitioned into two sub-targets with ranges $[1,i]$ and $[i,K]$, with sizes $1$ and $S-1$ respectively, and with an overlap on index $i$. We can then assign $v_i=0$ with probability $1-x_i$, to the first half $[1,i]$ with probability $cx_i$, and to $[i,K]$ with probability $(1-c)x_i$. To obtain a final size-$S$ solution, we draw $v'$ supported on $[1,i-1]$ with size $0$ or $1$ and $v''$ on $[i+1,K]$ with size $S-1$ or $S-2$, conditioned on the assignment of $v_i$. For any $j_1\in [1,i-1]$, $j_2\in[i+1,K]$, and $i$, any two of them are negatively correlated because, at a high level, the presence of one `reduces' the budget size of the other.
The negative correlations within the first half $[1,i-1]$ and within $[i+1,K]$ are guaranteed by the induction hypothesis of the existence of such distributions for solutions with size less than $S$. Finally, the structure of $\mathcal{A}$ ensures that our pieced-together solution is valid, i.e. lies in $\mathcal{A}$.
- We have removed the assumption on self-loops in $G$ and extended our results to general strongly observable graphs. The extension is done via an adaptation of the efforts in [1] to our reward setting. In the discussion, we will also generalize $\widetilde{\Theta}(S\sqrt{T}+\sqrt{\alpha ST})$ to a subclass of decision sets (including the full set) satisfying this exchange property: for any $u,v\in\mathcal{A}_0$, there exist $i\in u$ and $j\in v$ such that $u-i+j$ and $v-j+i$ remain in $\mathcal{A}_0$. Examples include settings where a learner operates $S$ systems in parallel and chooses one action for each system at each time, such as multi-platform online advertising.
- We agree that a comparison between the algorithms with/without negative correlations would be interesting and have practical value. Our focus is nonetheless on the theoretical understanding of the regret characterization, and thus we have not conducted numerical comparisons. Theoretical analyses of the two algorithms, on the other hand, require a precise understanding of the trajectories of $x^t$ in OSMD-G and may be considered in future work.
- It is definitely possible to adapt EXP3.G to the combinatorial setting for weakly observable graphs. However, we believe the importance lies in figuring out the lower bound in the combinatorial setting (under the full decision set or proper subsets). Straightforward analysis can give bounds like the one we have for ETC in Section 4.2, but they are interesting only if they turn out to be tight.
In particular, in our study of the strongly observable case, preliminary results gave the upper bound $\widetilde{O}(S\sqrt{\alpha T})$ and the lower bound $\Omega(S\sqrt{T} + \sqrt{\alpha ST})$ and we did not know which one was tight. It then turns out they are both tight under different decision set structures. So the interesting question, rather than generalizing EXP3.G, would be the tight characterization under weakly observable graphs, which may again depend on the decision set. We do not have a lower bound beyond the existing $\Omega(\delta^{1/3}T^{2/3})$ for now. Again, we would like to thank the reviewer for the helpful feedback, and we are more than happy to answer any questions. References: [1] Alon, Noga and Cesa-Bianchi, Nicolo and Dekel, Ofer and Koren, Tomer. Online learning with feedback graphs: Beyond bandits. COLT 2015. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. Regarding my second question for the authors, I was not referring to numerical analysis, but rather to a theoretical result possibly establishing an algorithm-specific lower bound of a vanilla version of their algorithm (one which doesn't guarantee negative correlations in the sampling distributions). If the authors can prove that such a variant has a suboptimal regret lower bound, I think it significantly strengthens their result, and would be a very nice addition to this paper as it currently seems that the main technical novelty is this negative correlation property. Having read the authors' rebuttal, I am currently inclined to maintain my score. --- Reply to Comment 1.1.1: Comment: We thank the reviewer's reply and clarification. In fact, we realized it is *possible* to construct a hard instance such that OSMD with positive correlations suffers $\Omega(S\sqrt{\alpha T})$, which gives a good complement to our story. The statement and the idea are as follows: > Consider the full decision set and given $(K,S,\alpha,T)$. 
There is a graph $G$ with $\alpha(G)=\alpha$, a choice of mapping $F$, and a sampling scheme $p^t$ with which the OSMD has a minimax regret lower bound $\Omega(S\sqrt{\alpha T})$, when $S\alpha \le K$ and $ST\ge \alpha^3$.

We consider the negative entropy mirror mapping $F$ (this can be further relaxed). The vanilla OSMD only requires $p^t$ to satisfy the mean condition, denoted by **(M)**. We now choose a type of $p^t$ satisfying **(M)** but leading to positive correlations, and we construct an instance in which this type of OSMD suffers $\Omega(S\sqrt{\alpha T})$. To save space, consider the feedback graphs $G$ and $H$ and the partition in Section 4.1, with cliques $V_1,...,V_{K/S}$ each of size $S$. Let $p^t$ be the following:
- (1). If the target $x^t$ has the same value over each clique, i.e. for each $V_i$ and any $u,v\in V_i$, $x^t_u=x^t_v\equiv x^t(V_i)$, then $p^t$ draws $V_i$ with probability $x^t(V_i)$ (note this is a valid distribution, since $S = \sum_{u\in[K]}x^t_u = S\sum_{i\in[K/S]}x^t(V_i)$).
- (2). Otherwise, use any $p^t$ satisfying **(M)**.

The key observation is that, if the rewards $r^t$ also have the same value over each clique (let's call this property **(P)**), we can show that (2) never happens and thus $p^t$ in (1) always has positive correlations. This claim can be seen in a few steps and an inductive argument:
- The uniform initialization $x^1=\frac{1}{K}\mathbf{1}$ (given by minimizing the negative entropy) satisfies **(P)**.
- Then $p^t$ draws $v^t$ as defined in (1).
- Since $r^t$ satisfies **(P)** and the feedback given by $G$ reveals either another clique in its entirety or none, the constructed reward estimator $\tilde{r}^t$ satisfies **(P)**.
- Then the solution $w^{t+1}$ satisfies **(P)** by the update rule in eq.(6); and the projected solution $x^{t+1}$ satisfies **(P)** under KL projection onto the truncated convex hull.

Inductively, $p^t$ is given by (1) for every $t$.
So when the rewards $r^t$ respects **(P)**, this OSMD reduces to an MAB algorithm running on the cliques $V_1,...,V_{K/S}$ with graph $H$ (similar to the construction in Section 4.1). From the MAB lower bound, we know there is a set of hard instances with rewards $R^t$ that leads to $\Omega(S\sqrt{\alpha T})$, where $R^t_i$ is the *group* reward of the clique $V_i$ in the MAB lower bound. We just split it equally over the clique to obtain $r^t$, namely $r^t_u=\frac{R^t_i}{S}$ for $u\in V_i$, that respects property **(P)** and thus leads to the aforementioned behavior and the desired lower bound. **Note** that the previous work [1] proves bounds for OSMD with negative entropy $F$ and *any* $p^t$ as long as **(M)** is satisfied. This counterexample now shows the structure of $p^t$ is crucial to leverage the presence of additional feedback, so **(M)** is not enough. [1] Audibert, J.-Y., Bubeck, S., and Lugosi, G. Regret in online combinatorial optimization.
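As a concrete illustration of the kind of sampling scheme the negative-correlation discussion in this thread is about, the sketch below implements pivotal (Srinivasan) sampling: given target marginals $x$ with $\sum_i x_i = S$, it draws $v\in\{0,1\}^K$ with $\mathbb{E}[v_i]=x_i$ and $\sum_i v_i = S$, and its inclusion indicators are known to be negatively associated (shown by Dubhashi, Jonasson, and Ranjan for Srinivasan's scheme). This is a standard scheme for the full decision set, not the specific construction of Lemma 1.2:

```python
import random

def pivotal_sample(x):
    """Pivotal (Srinivasan) sampling: given marginals x with an integer sum S,
    draw v in {0,1}^K with E[v_i] = x_i and sum(v) = S, by repeatedly
    resolving a pair of fractional coordinates while preserving all marginals."""
    x = [float(v) for v in x]
    frac = [i for i, v in enumerate(x) if 0.0 < v < 1.0]
    while len(frac) >= 2:
        i, j = frac[0], frac[1]
        a, b = x[i], x[j]
        if a + b <= 1.0:
            # One coordinate drops to 0 and its mass moves to the other;
            # the branch probabilities keep E[v_i] = a and E[v_j] = b.
            if random.random() < a / (a + b):
                x[i], x[j] = a + b, 0.0
            else:
                x[i], x[j] = 0.0, a + b
        else:
            # One coordinate is pushed to 1 and the excess stays on the other.
            if random.random() < (1.0 - b) / (2.0 - a - b):
                x[i], x[j] = 1.0, a + b - 1.0
            else:
                x[i], x[j] = a + b - 1.0, 1.0
        frac = [k for k in frac if 0.0 < x[k] < 1.0]
    return [int(round(v)) for v in x]
```

Intuitively, each pivotal step can only move probability mass away from joint inclusion of the paired coordinates, which is exactly the kind of structure that the mean condition **(M)** alone does not enforce.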
Summary: The paper extends the standard combinatorial semi-bandit problem by incorporating a feedback graph $G$ that allows the learner to observe rewards not just from the arms selected in the combinatorial action but also from their neighbors in $G$. The authors show that the optimal regret scales as $S\sqrt{T} + \sqrt{\alpha ST}$ ignoring logarithmic factors, where $S$ is the combinatorial action size, $\alpha$ is the independence number of $G$, and $T$ is the number of rounds. This result interpolates between the full-information and semi-bandit feedback scenarios. To achieve this regret guarantee, the authors propose an algorithm called OSMD-G that adopts online stochastic mirror descent (OSMD) with unbiased reward estimators for graph feedback and sampling distributions that ensure negative correlations among distinct arms. The authors also provide a regret lower bound of the same order. Claims And Evidence: Most claims in the paper are supported. However, some are stated without providing enough intuition as to why they would be correct. For instance, stating that the results assuming all self-loops in $G$ immediately extend to all strongly observable feedback graphs (e.g., see line 117) is not necessarily true. Even for standard (non-combinatorial) bandits with feedback graphs, extending the analysis to general strongly observable feedback graphs is definitely nontrivial and requires specialized technical results for it (Alon et al., 2015). One may therefore be convinced that it is even more nontrivial to have an analogous extension in the combinatorial setting. Moreover, throughout the paper the authors often overly generalize their claims by stating they hold for "general directed feedback graphs" while, again, they make the assumption of including all self-loops in the graphs considered. Methods And Evaluation Criteria: N/A Theoretical Claims: I examined all the theoretical claims of the submission.
First, I would focus on the proof of Theorem 1.3, i.e., the regret lower bound. Checking the end of the proof, there seems to be some issue with the computation. Line 248 states that the regret $R(\pi)$ of an algorithm $\pi$ satisfies $$ \max_{u \in [n]} \mathbb{E}_u[R(\pi)] \ge \Delta ST \left(1/2 - 8\Delta S\sqrt{T/(3\alpha)}\right) $$ whose right-hand side, replacing the chosen value of $\Delta$, becomes $\frac{1}{32}\sqrt{\alpha ST}\, (1/2 - \sqrt{S/48})$, which is non-positive, and thus vacuous, for $S \ge 12$. Additionally, even before that, line 194 shows a sum where each term presents a KL-divergence $\mathrm{KL}(\mathbb{P}_0(r^t|R^{t-1}) \,\|\, \mathbb{P}_u(r^t|R^{t-1}))$. The issue here is that $\mathbb{P}_0(r^t|R^{t-1})$ and $\mathbb{P}_u(r^t|R^{t-1})$ concern the entire reward vector $r^t$, and thus their KL-divergence is equal to $S \cdot \mathrm{KL}(\mathrm{Bern}(1/4) \,\|\, \mathrm{Bern}(1/4+\Delta))$ (by the chain rule over the coordinates of $r^t$), instead of summing only over $m \in [S]$ such that $a_{m,u} \in N_{\mathrm{out}}(v^t)$. Nonetheless, it is possible that a more careful analysis might avoid this issue. Other than the proof of the lower bound, the other results seem to hold. Experimental Designs Or Analyses: N/A Supplementary Material: I reviewed the entire supplementary material. My only concern is about the proof of Lemma A.1: it seems that some cases for $i,j$ are missing which are not necessarily equivalent to the ones presented at lines 570-589. Even so, this seems not to be an issue with respect to the correctness of the claim. Relation To Broader Scientific Literature: The regret bound for OSMD-G follows directly from the known analysis of OSMD combined with the variance bound for feedback graphs. The only technical contribution here is the design of the distribution $p^t$ to allow a reduction to the standard analysis for the variance with feedback graphs.
The applicability of the results also requires quite restrictive assumptions that are commonly lifted in the related literature, such as the presence of all self-loops in the strongly observable feedback graph, the knowledge of the independence number (NP-hard to compute) for tuning the learning rate, and considering only the full combinatorial action space $\mathcal{A}$. All of these points seem to underline a rather limited contribution of this submission. Essential References Not Discussed: Most of the relevant references are discussed. Other meaningful references on combinatorial bandits to include could be:
- Richard Combes, Mohammad Sadegh Talebi, Alexandre Proutiere, Marc Lelarge. Combinatorial Bandits Revisited. NeurIPS 2015.
- Alon Cohen, Tamir Hazan, Tomer Koren. Tight Bounds for Bandit Combinatorial Optimization. COLT 2017.
- Lukas Zierahn, Dirk van der Hoeven, Nicolò Cesa-Bianchi, Gergely Neu. Nonstochastic Contextual Combinatorial Bandits. AISTATS 2023.

Other Strengths And Weaknesses: ### Post-rebuttal edit: I thank the authors for providing detailed responses to address my concerns about the contents of their submission. I am satisfied with the responses. Provided the authors can guarantee to integrate all the modifications required to correct the issues and address the concerns pointed out in my review and the above discussion, I now have no strong reservations about the acceptance of this paper and I have updated my score accordingly. To respond to the authors' final question about prior work without assuming knowledge of the independence number, here are a few relevant references:
- Kocák, Neu, Valko, and Munos. "Efficient learning by implicit exploration in bandit problems with side observations". NeurIPS 2014.
- Alon, Cesa-Bianchi, Gentile, Mannor, Mansour, Shamir. "Nonstochastic Multi-Armed Bandits with Graph-Structured Feedback". SIAM Journal on Computing, 2017.
- Rouyer, Van der Hoeven, Cesa-Bianchi, and Seldin.
"A Near-Optimal Best-of-Both-Worlds Algorithm for Online Learning with Feedback Graphs". NeurIPS 2022. - Eldowa, Esposito, Cesari, Cesa-Bianchi. "On the Minimax Regret for Online Learning with Feedback Graphs". NeurIPS 2023. Other Comments Or Suggestions: - It is good practice to avoid using a parenthesized citation when it is a subject/object of a sentence (e.g., Koolen et al. (2010) instead of (Koolen et al., 2010) in line 91). - Another good practice is to place footnote numbers after eventual punctuation (e.g., line 118). - A more standard notation for a directed edge uses an ordered pair $(a,i)$ instead of $a \\to i$, as $E$ is a subset of ordered pairs of nodes (e.g., in line 74). - The self-loops assumption is actually a crucial and important assumption, but is only mentioned as footnotes (namely, footnotes 1 and 2). It would be better to state it in the main body. - The type of feedback for which the learner only observed the realized payoff $\\langle v^t, r^t\\rangle$ is often called "full-bandit" feedback. - The algorithm from Alon et al. (2015), called Exp3.G, is described as an explore-then-commit algorithm while it is not (see line 71). - Line 75: time-varying feedback graphs are also studied in Alon et al. (2015). - In the statement of Theorem 1.1, underline that it holds for any directed graph $G$ containing all self-loops instead of an arbitrary one. - $\\overline D$ (with an overline) is used without being defined. Please, provide a definition for it. - In line 243, it would be clearer to specify that (c) holds for the concavity of the square root together with, e.g., Jensen's inequality. - The log factors at lines 336, 383, and 675, following from the application of Lemma D.2, is missing an $S$ multiplicative factor in its argument. - The square power at the expectation at lines 347, 358, and 665 should be inside the square brackets. Typos: - The multiplicative factor $T$ in the regret bound at line 78 should not be there. 
- Line 147: $\mathcal{A}$ instead of $A$. - Line 198: $\Omega$ should actually be $[n]$, right? - Line 190: $a_{m,u}$ instead of $a_{m,u_m}$. - Lines 252 and 277: occurrences of $x$ are missing a superscript $t$. - Line 274: $\mathbb{E}[v] = x^t$ is missing a subscript $v \sim p^t$, as otherwise even $x^t$ is a random variable (as similarly in multiple other places in this paper). - Line 348: same as line 274, but with a missing outer expectation too. - In the statement of Lemma A.1, quantify $S$ before using it in the definition of $\mathcal{A}$. - Line 517: "basis vector" instead of "unit vector". - Line 528: $i>r$ instead of $i \ge r$. Questions For Authors: - Please, address the main concerns pointed out above, if possible. - A minor question: could you be more precise on what you mean by "tabular" contextual bandits (line 81)? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's detailed comments and, most importantly, their pointing out our error in the lower bound. We have the following updates in our revision in light of the review: - We have corrected the lower bound. Specifically, we now construct $S$ independent sub-problems by an $S$-dimensional index $u\in[n]^S$ (while previously they were correlated through a single index $u\in[n]$) and apply the stopping time argument in Theorem 2 of [1] to establish the exploration-exploitation tradeoff for each sub-problem. Regarding the KL chain rule, it is crucial (as in many previous works, e.g. [1, 2]) that our quantity of interest only depends on the observed rewards, so the distribution of the $m$-th sub-problem only differs when the node $a_{m,u_m}$ is observed. This refinement leads to the bound $\sum_{m=1}^S\mathbf{1}[a_{m,u_m}\in N_{out}(v^t)]\cdot \Delta^2$ instead of $S\cdot \Delta^2$. - We have removed the assumption on self-loops for $G$ and extended our results to general strongly observable graphs. The extension is done via an adaptation of the efforts in [3] to our reward setting. Additionally, we have clarified our claim statements to specify that they apply to strongly observable graphs rather than arbitrary graphs. - To address concerns about the practical applicability of our results, we will add a section explaining how $\widetilde{O}(S\sqrt{T} + \sqrt{\alpha ST})$ applies to a subclass of decision sets satisfying the following exchange property: for any $v,u\in\mathcal{A}_0$ there exist $i\in v$ and $j\in u$ such that $v-\{i\}+\{j\}$ and $u-\{j\}+\{i\}$ are in $\mathcal{A}_0$. For example, this subclass includes multi-platform online advertising. - While it is true that $\alpha$ is NP-hard to compute for a general graph, in many practical applications, $\alpha$ (or an upper bound on it) is known due to the problem structure.
For example, in online advertising with the winner's bid revealed [2] or inventory control, the feedback graph is known to have $\alpha=1$, and thus the change from $K$ to $\alpha$ is significant. Beyond practical applications, our contribution also includes a theoretical understanding of the optimal regret characterization in the combinatorial setup, including both the $\widetilde{\Theta}(S\sqrt{T}+\sqrt{\alpha ST})$ characterization on the above subclass of decision subsets and the impossibility result in Section 4.1. - We are grateful to the reviewer for the detailed comments and corrected typos and have carefully addressed them in the revision. - The references mentioned by the reviewer are indeed relevant and have been added to the revision. - By "tabular" contextual bandits, we refer to the contextual bandits without additional structures (e.g. linear contextual bandits) that only treat the contexts as general variables affecting the rewards. Once again, we deeply appreciate the reviewer’s insightful feedback, which has helped us refine and strengthen our work. References: [1] Lattimore, Tor and Kveton, Branislav and Li, Shuai and Szepesvari, Csaba. TopRank: A Practical Algorithm for Online Stochastic Ranking, NeurIPS 2018. [2] Han, Yanjun and Weissman, Tsachy and Zhou, Zhengyuan. Optimal no-regret learning in repeated first-price auctions. Operations Research 2025. [3] Alon, Noga and Cesa-Bianchi, Nicolo and Dekel, Ofer and Koren, Tomer. Online learning with feedback graphs: Beyond bandits. COLT 2015. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for taking the time to reply to my comments and concerns. While some of my remarks and minor doubts were addressed, I still have some concerns regarding some of the points I raised. **KL chain rule in the lower bound:** Thank you for clarifying this part of the derivation in the lower bound. 
Please, observe that you would need to correct the part of the proof at lines 165-196 (column 2) accordingly. **Proposed fix to the lower bound:** It is yet unclear to me how the proposed fix to the lower bound would resolve the issue I raised in my previous review. Constructing environments for any vector $u \in [n]^S$ does not appear to be sufficient. The only change would essentially be that the sum $\frac{1}{n} \sum_{u=1}^n$ would be replaced by $\frac{1}{n^S} \sum_{u \in [n]^S}$, thus the larger variety of environments due to "uncorrelating" the index within each group of arms would be well compensated by the multiplicative $\frac{1}{n^S}$ factor throughout the entire proof. Given this, it is also not clear to me how using the stopping time argument from [1] would help in this regard. Do the authors actually have a convincing argument for solving the issue in the lower bound pointed out in my review? **Self-loops assumption and stated claims:** Thanks for clarifying the required assumption on the graph in your results; it is appreciated as I believe it will help the reader. About lifting the self-loops assumption for strongly observable graphs, I would like to have more details from the authors. Removing such an assumption is shown to be definitely non-trivial in previous works (e.g., [3] as also mentioned by the authors) as it typically requires specialized technical lemmas and a more refined regret analysis. This is even more true in the more general combinatorial setting considered in this work. One should then be careful in verifying that these aspects can indeed be achieved in a more difficult combinatorial setting, *simultaneously* with keeping the other properties of the proposed algorithm. As of now, there are just not enough details to assess whether all these aspects can indeed be achieved and, I reiterate, it could require a non-negligible amount of work.
**Constrained decision sets:** Thank you for clarifying this aspect of your work. I believe your remark on extending the applicability of your results to constrained decision sets satisfying this "exchange" property is interesting and should be included in the paper. I would also explicitly leave the interesting question of extending the results to arbitrarily constrained decision sets to future work. **About prior knowledge of $\alpha$:** Even if there may be some specific applications where the prior knowledge of $\alpha$ may not be a problem, I still believe that this is a limitation of the proposed algorithm. The problem setting considered here is a general combinatorial bandits setting that considers an *arbitrary* feedback graph (containing all self-loops). Hence, the algorithm should be able to efficiently work in the most general case, where the prior knowledge of $\alpha$ is not available. As I already mentioned in my previous review, this assumption is *not* required in recent related work on bandits with feedback graphs and it remains a concerning computational aspect. All things considered, it appears that solving all the issues would in any case require a significant amount of work and changes to the content of the submission. In general, this would often require a resubmission of the work after a major revision, which is why I am currently keeping my initial evaluation. I would be happy to further discuss my remaining doubts with the authors, especially if this may lead to resolving them, and I would also be happy to engage in a discussion with the other reviewers about my comments above. --- Reply to Comment 1.1.1: Comment: We appreciate the reviewer's in-depth response. Indeed, we have made a few changes to validate and improve our results, but the only major change is the lower bound correction. Please see the changes as follows: **Strongly Observable Graphs:** When $S=1$ the problem reduces to MAB, and we need to invoke an adapted version of Lemma 4 in [1].
But when $S>1$ it is straightforward by noting $\sum_{i\in U}x^t_i(\hat{r}^t_i)^2 \le \sum_{i\in U}x^t_i(\frac{\sum_{j\neq i}\mathbf{1}[v^t_j=1]}{\sum_{j\neq i}x^t_j})^2 \le \sum_{i\in U}x^t_i(\frac{S}{S-1})^2 \le 4\sum_{i\in U}x^t_i \le 4S$, where $U$ is the set of nodes with no self-loop. By splitting the sum in eq.(10) into $U$ and $U^c=[K]-U$, we can apply the same upper bound to the latter. To apply Lemma D.2 on $U^c$ (nodes with self-loops), note that for $i\notin U$, $\frac{1}{\sum_{j\in N_{in}(i)}x^t_j} \le \frac{1}{\sum_{j\in N_{in}(i)\land j\notin U}x^t_j}$ and the $\alpha$ of subgraphs is upper bounded by the $\alpha$ of the whole graph. The final bound changes only by a constant. The intuition is that, when $S>1$, every node in $U$ has to be observed, so the problem becomes easier. **Lower Bound Correction:** Due to the space limit here, we kindly point the reviewer to [2]'s Appendix D (referred to as *their*) for some details. The same decomposition follows *their* eq.(6): under any environment $u$, let $\mathbb{E}^{(u)}$ denote the expectation under this environment. Then $\mathbb{E}^{(u)}[\mathsf{R}(\pi)] \ge \frac{\Delta}{2}\sum_{m=1}^S (T - T_m(u_m;\tau_m))$ where $T_m(j;t)$ denotes the number of pulls of arm $a_{m,j}\in I_m$ up to time $t$, $T_m(t)=\sum_{a_{m,j}\in I_m}T_m(j;t)$ the total pulls in $I_m$, and $\tau_m = \min\lbrace T,\min\lbrace t: T_m(t)\ge T\rbrace\rbrace$ the stopping time as in [2]. Let $u_{-m}$ denote $u$ with the $m$-th entry removed and $u^{-m}$ denote $u$ with its $m$-th entry replaced by $0$. As noted before, we only need to bound the Bayes regret $\mathbb{E}_{u\sim [n]^S} \mathbb{E}^{(u)}[\mathsf{R}(\pi)]$ (drawn uniformly).
This is bounded by $$\frac{\Delta}{2}\sum_{m=1}^S\mathbb{E}_{u\sim [n]^S} \mathbb{E}^{(u)}[T - T_m(u_m;\tau_m)] = \frac{\Delta}{2}\sum_{m=1}^S\mathbb{E}_{u_{-m}\sim [n]^{S-1}}\mathbb{E}_{u_m\sim[n]}\mathbb{E}^{(u)}[T - T_m(u_m;\tau_m)].$$ Then we use Pinsker's inequality and KL applied to the *observed* rewards (recall our reward distribution construction in Section 2; now $a_{m,u_m}\in I_m$ is optimal) similar to *their* eq.(8), and get: $$\mathbb{E}^{(u)}[T_m(u_m;\tau_m)]\le\mathbb{E}^{(u^{-m})}[T_m(u_m;\tau_m)]+4\Delta T\sqrt{\mathbb{E}^{(u^{-m})}[T_0]+\mathbb{E}^{(u^{-m})}[T_m(u_m;\tau_m)]}$$ where $T_0$ denotes the pulls outside of the independent set $I=\cup_{m=1}^S I_m$. Again, if $\mathbb{E}^{(u^{-m})}[T_0]\ge \sqrt{\alpha ST}$ for any $m$, we are done, since $\mathbb{E}^{(u^{-m})}[\mathsf{R}(\pi)] \ge \frac{1}{4}\mathbb{E}^{(u^{-m})}[T_0].$ Otherwise, we have for each $m$, by Jensen's inequality, $$\mathbb{E}_{u_m\sim[n]}\mathbb{E}^{(u)}[T -T_m(u_m;\tau_m)]\ge T - \frac{1}{n}\sum_{u_m=1}^n\mathbb{E}^{(u^{-m})}[T_m(u_m;\tau_m)]-4\Delta T\sqrt{\sqrt{\alpha ST}+\frac{1}{n}\sum_{u_m=1}^n\mathbb{E}^{(u^{-m})}[T_m(u_m;\tau_m)]}$$ $$\ge T-\frac{S+T}{n}-4\Delta T\sqrt{\sqrt{\alpha ST}+\frac{S+T}{n}}.$$ The second line follows from the definition of $\tau_m$ such that $\sum_{u_m=1}^nT_m(u_m;\tau_m) = T_m(\tau_m) \le T+S$ since at each time step the pulls increase by at most $S$. When $T\ge \max\lbrace S,\alpha^3\rbrace$ and $n\ge 4$, plugging this back, the Bayes regret is bounded by $$\frac{\Delta}{2}\sum_{m=1}^S\mathbb{E}_{u_{-m}\sim [n]^{S-1}}\left[T - \frac{S+T}{n}- 4\Delta T\sqrt{\sqrt{\alpha ST}+\frac{S+T}{n}}\right]\ge\frac{\Delta ST}{4}-4\Delta^2ST\sqrt{T/n}$$ which gives $\frac{3}{1024}\sqrt{\alpha ST}$ for $\Delta=\frac{1}{64}\sqrt{n/T}=\frac{1}{64}\sqrt{\alpha/(ST)}$.
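For completeness, the constant in the last step above can be verified term by term (the construction uses $n=\alpha/S$, consistent with $\Delta=\frac{1}{64}\sqrt{n/T}=\frac{1}{64}\sqrt{\alpha/(ST)}$):

```latex
\frac{\Delta S T}{4} = \frac{ST}{256}\sqrt{\frac{\alpha}{ST}} = \frac{1}{256}\sqrt{\alpha S T},
\qquad
4\Delta^2 S T\sqrt{T/n}
  = \frac{4}{64^2}\cdot\frac{\alpha}{ST}\cdot S T\cdot\sqrt{\frac{ST}{\alpha}}
  = \frac{1}{1024}\sqrt{\alpha S T}
```

so the difference is $\bigl(\tfrac{1}{256}-\tfrac{1}{1024}\bigr)\sqrt{\alpha ST} = \tfrac{3}{1024}\sqrt{\alpha ST}$, as claimed.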
The key idea is that with this decoupled multi-index $u$ and the introduction of the stopping time, we are allowed to deal with $S$ independent sub-problems, each incurring regret of order $\sqrt{nT}$ (recall each sub-problem $I_m$ has $n$ independent arms). **Knowledge of $\alpha$:** We are not aware of works achieving $\tilde{O}(\sqrt{\alpha T})$ without using $\alpha$ in the parameters in the adversarial case, while it is possible for stochastic rewards (e.g. the algorithm in our Appendix B does not need $\alpha$). We would greatly appreciate it if the reviewer could point us to a reference, so we can include an appropriate comparison. We hope the arguments here are clear, and we are more than happy to answer any questions. While we have made a few changes, the major change lies only in the lower bound, so we kindly ask the reviewer to reconsider the evaluation. [1] Alon, Noga and Cesa-Bianchi, Nicolo and Dekel, Ofer and Koren, Tomer. Online learning with feedback graphs: Beyond bandits. COLT 2015. [2] Lattimore, Tor and Kveton, Branislav and Li, Shuai and Szepesvari, Csaba. TopRank: A Practical Algorithm for Online Stochastic Ranking, NeurIPS 2018.
Summary: The paper studies the problem of semi-bandits, aiming to generalize bandit feedback and the full-information setup into a single framework. The problem is formulated as follows: Given a graph $G=([K],E)$, where $[K]$ is the set of $K$ vertices and $E$ is the set of edges, consider a $K$-armed bandit problem. If the learner selects an arm $a\in[K]$, they receive all the rewards associated with the neighbors of arm $a$ in $G$. When $G$ is a complete graph, this setting corresponds to the full-information setup, and when $G$ only contains self-loop edges, the feedback procedure corresponds to a standard multi-armed bandit problem. In the full-information setup and multi-armed bandits (the two extreme cases), the problem is well understood. The main objective of this paper is to bridge the gap between these two cases by providing upper and lower bounds that characterize the minimax regret based on the properties of $G$. The paper establishes a lower bound and, under the assumption that there exists a probability density function over the decision set with certain correlation properties (outlined in Theorem 3.2), proposes a mirror descent-type algorithm that achieves a matching upper bound. Furthermore, the authors discuss the necessity of the correlation property of the probability density function in the final section of the paper. Claims And Evidence: All claims in the paper have been completely addressed. Methods And Evaluation Criteria: The method and evaluation criteria make sense. Theoretical Claims: I have checked the correctness of the proofs in the main body of the paper. The theoretical claims are well-supported, and the methodology is sound. Experimental Designs Or Analyses: The paper is theoretical, so no experimental design has been considered. Supplementary Material: I did not check the supplementary material.
Relation To Broader Scientific Literature: The contribution of the paper is related to the bandit framework and online machine learning setup, which has been addressed by the authors. Essential References Not Discussed: I do not have in mind any essential references that the authors did not discuss. Other Strengths And Weaknesses: The paper appears theoretically solid. I have some comments regarding the presentation, which I will address in the next section. Other Comments Or Suggestions: 1. I believe the authors should elaborate more on some definitions that were mentioned in the paper. This would enhance the readability of the paper: $S$ on the first page, defined as the size of the combinatorial decisions. It would also be helpful if the authors provided the definition of independent and dominating subsets in Section 1.3. Questions For Authors: 1. The algorithm requires knowledge of $\alpha,$ as it provides the sufficient information the learner needs about the graph $G$. Do the authors have any comments or insights on adaptive methods where the learner has no prior information about $G$? 2. Do the authors have any comments on problem-dependent bounds, such as a first-order regret bound? 3. Have the authors explored other regularizers for the mirror map, such as Tsallis entropy or the log-barrier function? Is there any advantage to using negative entropy, as stated in the current algorithm? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We truly appreciate the reviewer's review and inspiring questions. Please see the following for our updates in light of the review and our thoughts: - We appreciate the reviewer's point on readability, and in our revision we have added a few sentences on $S$ and the fact that $S>1$ corresponds to enlarging the selection budget of the learner relative to the classic multi-armed bandit case. We also add the definition of independent and dominating subsets in Section 1.3. - Q1 is an important question and requires answers from two aspects: 1. Suppose the graph $G$ is given. It is known to be NP-hard to find $\alpha$ for a general $G$. However, one can substitute any upper bound on $\alpha$ into the algorithm and the results remain valid. More importantly, in many applications, $\alpha$ is (perfectly or approximately) known due to the problem structure. For example, in online advertising with the winner's bid revealed [1] or inventory control, the feedback graph is known to have $\alpha=1$, and thus the change from $K$ to $\alpha$ is significant. 2. If $G$ is not known but fixed, it takes the learner $K$ rounds to pull every node to learn $G$ and pay $K$ regret, so our results still hold. Suppose $G$ is not known and is time-varying, with $G_1,...,G_T$ chosen adversarially. When the rewards are adversarial as in our case, [2] shows a lower bound $\Omega(\sqrt{KT})$ for $S=1$, i.e. no graph information can be leveraged, so we do not expect an improvement in our $S>1$ case either. When the rewards are stochastic, [2] shows a bound $\widetilde{O}(\sqrt{\alpha T})$, so it is interesting to see if a similar improvement is possible for the combinatorial case with stochastic rewards. - Q2: To the best of our knowledge, first-order bounds in the presence of feedback graphs are still unexplored even in the multi-armed bandit setting ($S=1$), and we do not have much to say in this even harder combinatorial setting for now.
Meanwhile, under stochastic rewards, instance-dependent bounds are possible, and how to leverage $G$ there would be an interesting future direction. - Q3: We chose to work with negative entropy only because it is relatively common. It is known, however (related to Q2), that the log-barrier leads to first-order bounds in the MAB setting. Thus there may be practical or theoretical advantages if one is interested in first-order bounds. We thank the reviewer for the review and welcome any follow-up questions or suggestions. References: [1] Han, Yanjun and Weissman, Tsachy and Zhou, Zhengyuan. Optimal no-regret learning in repeated first-price auctions. Operations Research 2025. [2] Cohen, Alon and Hazan, Tamir and Koren, Tomer. Online learning with feedback graphs without the graphs. ICML 2016.
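To illustrate the regularizer choice discussed in Q3 above, here is a minimal sketch of one mirror-descent step with the negative-entropy mirror map (which reduces to an exponential-weights update); the learning rate and loss estimates are illustrative placeholders, not values from the paper.

```python
import numpy as np

# One online-mirror-descent step with the negative-entropy regularizer:
# multiply by exp(-eta * loss) and renormalize onto the simplex.
def omd_negentropy_step(x, loss_est, eta):
    """Update a probability vector x given estimated per-arm losses."""
    w = x * np.exp(-eta * loss_est)
    return w / w.sum()

x = np.full(4, 0.25)  # uniform start over 4 arms
x_new = omd_negentropy_step(x, np.array([0.0, 1.0, 0.0, 0.0]), eta=0.5)
# arm 1 suffered a loss, so its probability drops below the others
```

Other mirror maps (Tsallis entropy, log-barrier) would change only the update rule inside `omd_negentropy_step`, which is why the regularizer choice is largely modular in the analysis.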
Transfer Learning for Nonparametric Contextual Dynamic Pricing
Accept (poster)
Summary: The paper studies dynamic pricing problems using transfer learning techniques. The objective is to maximize expected total rewards and to minimize regret. The problem setup is stylized (monopoly scenario, time-homogeneous demand), yet reasonable. Numerical experiments are performed and results are compared to other baseline solutions. Overall, the paper is well-executed. Claims And Evidence: The claims of the paper appear valid and are supported by the numerical evaluation. Methods And Evaluation Criteria: The methods used are suitable and interesting. The regret criterion makes sense. The presentation of the concepts used is dense in some places but overall ok. Theoretical Claims: The theoretical claims and proofs appear solid. However, I did not check all proofs provided in the Appendix in full detail. Experimental Designs Or Analyses: The experimental design is reasonable. Small synthetic examples as well as real-world data based experiments are performed. Results are compared against other state-of-the-art baselines. Results are discussed and insights are inferred. Supplementary Material: The Appendix provides proofs and additional experiments, which I partly reviewed. Relation To Broader Scientific Literature: The literature using related methodologies is sufficiently discussed. Essential References Not Discussed: I do not miss essential references. Other Strengths And Weaknesses: Strengths: - The paper derives analytical results (theoretical bounds). - The provided experiments are convincing. Weaknesses: - Limitations, scalability, and required parameter tuning of the approach could be better discussed. Other Comments Or Suggestions: I suggest discussing limitations, scalability, and required parameter tuning of the approach in more detail. Questions For Authors: I have the following questions: (1) Does the approach scale for larger $d$? (2) What happens if the other product is not similar? (3) Which hyperparameters have to be tuned?
(4) What are the limitations of the approach? (5) Would it be possible to extend the setup to time-dependent demand and/or inventory considerations – as typical for e.g., airline ticket market, accommodation, etc.? Please discuss. After reading the other reviews and the rebuttal, I decided to keep my score. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed and helpful comments. The additional simulation results are given in this anonymized link. https://docs.google.com/document/d/e/2PACX-1vRBfGzJo3ETCltTWfOi_0p4RjLbsUJo6g9z0J-Ckm2m6fL0fahJWSrrptiFwOGCyxhtHNuyHsQP0tOh/pub **Q1** Our theoretical guarantees hold for any $d$. Like most nonparametric works, however, our approach does encounter curse of dimensionality in high-dimensional settings, which can be seen from its regret upper bound. To mitigate this, one can impose additional structures on the reward function $f(x,p)$, where $x=(x^{(1)}, ..., x^{(d)})\in \mathbb R^d$ is the context. For example, one can adopt a sparse additive model $f(x,p)=\sum_{i=1}^d g_i(x^{(i)}, p)$, with sparsity $s \ll d$ (i.e. most $g_i(\cdot,\cdot)$ are zero functions). In this setting, the effective dimension is $s$ rather than $d$, albeit at the potential cost of model misspecification. Following your comment, we will provide a detailed discussion on this aspect. **Q2** First, we note that the context $X$ can encode useful information such as product specifications and consumer features, thus allowing the revenue function $f(x,p)$ to remain roughly stable across different products conditioned on a rich $X=x$. For example, in our real data experiments in Section 5.2, loans differ in their terms (e.g. duration). The source and target data differ in market conditions (i.e. different US regions). Despite these discrepancies, our method still improves performance, showing its robustness. Second, we further test robustness of our method under posterior drift (i.e. when revenue functions are different between source and target). 
Following the setup of Configuration 1 in Scenario 1, we modify the true source revenue function as $f(x, p) = \theta_0' + x^{\top} \theta' p + \tilde{\theta}' p^2,$ where $\theta_0' \sim \mathcal{N}(\theta_0, \sigma^2)$, $\theta' \sim \mathcal{N}(\theta, \sigma^2 I_d)$, and $\tilde{\theta}' \sim \mathcal{N}(\tilde{\theta}, \sigma^2)$, with $\theta_0, \theta, \tilde{\theta}$ being the target revenue function parameters. We set $\sigma = r \cdot \min(|\theta_0|, \|\theta\|_\infty, |\tilde{\theta}|)$ and control the drift severity by varying $r$. Results (Figure 1 in the link above) show our method performs well and is robust under moderate posterior drift. **Q3** One key hyperparameter is the smallest exploration radius $\tilde{r}$. While the regret bound remains valid for any choice of $\tilde{r}$ as discussed in Remark 4.2, the optimality of our TLDP algorithm relies on the specific $\tilde{r}$ given in equation (13). That choice depends on two parameters often unknown in practice: the transfer exponent $\gamma$ and the exploration coefficient $\kappa$. In practice, $\kappa$ can be estimated from the source data, while $\gamma$ can be estimated by measuring the empirical overlap between the source and target covariate distributions using a small portion of the target data (e.g. $\sqrt{n_Q}$ samples). In our synthetic experiments, we used the true values of $\kappa$ and $\gamma$ to compute $\tilde{r}$. To assess robustness, we have conducted additional simulations under Configuration 1 in Scenario 1, using a list of mis-specified values of $(\kappa,\gamma)$, while keeping the true values fixed at $\kappa = 0.6$ and $\gamma = 1$. Results (Figure 2 in the link above) indicate that our method remains robust under moderate misspecification of these parameters. **Q4** As discussed in our response to your previous question, one limitation would be that to achieve optimal regret, our approach requires the knowledge of the unknown quantities $(\kappa, \gamma)$. 
It would be interesting to rigorously extend our method such that it is adaptive to the knowledge of $(\kappa, \gamma)$ and further establish theoretical guarantees for such an algorithm. In the revision, we will discuss more on this aspect of our limitation. **Q5** If the demand (thus revenue) function depends explicitly on time (e.g. seasonality, holiday, weekly effects), we can include time as an additional dimension in the context space. Our partition-based procedure then proceeds similarly and the theoretical guarantees still hold. However, if the revenue function $f(x,p)$ changes over time in a rather arbitrary fashion, we then need to rely on additional procedures such as change-point detection or moving-window estimation to counter the non-stationarity. We conjecture the regret upper bound will inevitably inflate in this case. As for inventory consideration, we conjecture one needs to reformulate the revenue maximization problem into a constrained optimization due to inventory limit. Based on deterministic approximation, Wang et al 2025 (on dynamic pricing with covariates) proposes a dual-based method for dynamic pricing under parametric models. We believe similar ideas can be used to help solve price $p_t$ in our Algorithm 1 under inventory constraint. We will comment on these aspects in the revision.
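To illustrate the posterior-drift perturbation described under Q2 above, here is a minimal sketch of generating the drifted source parameters; all numeric values (dimension, target parameters, drift severity) are illustrative placeholders, not taken from the paper.

```python
import numpy as np

# Sketch of the posterior-drift setup: the source revenue function
#   f(x, p) = theta0' + (x^T theta') p + theta_tilde' p^2
# uses Gaussian-perturbed copies of the target parameters, with noise scale
#   sigma = r * min(|theta0|, ||theta||_inf, |theta_tilde|).
rng = np.random.default_rng(0)
d = 2
theta0, theta, theta_tilde = 1.0, np.full(d, 0.5), -0.5  # illustrative target parameters

r = 0.1  # drift severity
sigma = r * min(abs(theta0), np.abs(theta).max(), abs(theta_tilde))

theta0_s = rng.normal(theta0, sigma)        # theta0' ~ N(theta0, sigma^2)
theta_s = rng.normal(theta, sigma)          # theta'  ~ N(theta, sigma^2 I_d), elementwise
theta_tilde_s = rng.normal(theta_tilde, sigma)

def source_revenue(x, p):
    """Drifted source revenue function at context x and price p."""
    return theta0_s + (x @ theta_s) * p + theta_tilde_s * p ** 2
```

Varying `r` then controls how far the source revenue function drifts from the target one, matching the robustness sweep described in the rebuttal.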
Summary: The authors study the problem of transfer learning in the context of dynamic pricing. Given a pre-collected source dataset, the authors propose an algorithm that exploits such a dataset to learn a partitioning of the joint context-price space in order to propose a price for each user with the goal of maximizing the cumulative revenue. The authors also study a lower bound of the problem and derive, as a consequence, both an upper and a lower bound in the case in which no source dataset is available. --- Post rebuttal: The authors adequately answer my questions. Given that, I increased my score from 2 to 3. Claims And Evidence: The theorems proposed by the authors are supported by proofs. However, my concern is with respect to the assumptions required for the theorems to hold, as I will discuss later. Methods And Evaluation Criteria: The authors propose both a synthetic experiment and an experiment on real data. Regarding the synthetic experiment (i.e., Section 5.1), the simulation setting is crafted to match the assumptions required by the algorithm, which do not seem likely to hold in a real-world scenario, thus subtracting from the representativeness of the experiment. The experiment on real data (i.e., Section 5.2), on the other hand, seems to be much more promising, however some concerns remain, which will be discussed later. Theoretical Claims: I have not thoroughly checked the proof in the appendix. Experimental Designs Or Analyses: I am not completely convinced by the validity of the experiments, but I am looking forward to the discussion with the authors to confirm or clear my concerns. Supplementary Material: Apart from the appendix, no supplementary material (i.e., the code used in the experiments) is provided. 
Relation To Broader Scientific Literature: This work tackles an interesting problem, namely trying to exploit previous data to reduce the number of samples required to learn the pricing strategy of a new product; however, to the best of my understanding, some peculiarities of the specific setting have not been correctly addressed. Essential References Not Discussed: The references cited in this paper are sufficient to understand the motivation and the contributions of this work. To the best of my knowledge, no further references are necessary. Other Strengths And Weaknesses: **Strengths** This work is written in a clear and understandable way, guiding the reader through the authors' reasoning. Also, I believe the problem tackled by the authors to be of practical importance. **Weaknesses** My main concerns are on the problem formulation and the assumptions made in this work. W1) The formulation of the random revenue seems to be incorrect. Although I understand the provenance of such a formulation, I believe it does not fit the dynamic pricing setting, as the seller either observes a revenue of 0 (i.e., the customer does not buy the product) or of $p_t$ (i.e., the customer buys the product), and not a random revenue which can, in general, be in $[0,1]$. The simulation in Section 5.1 shows that a Gaussian is used to model the random revenue, which again does not seem correct. W2) The assumption in Equation (3) (i.e., that the unknown reward functions are identical between the source problem and the target problem) seems too strong to me, and unverifiable in the majority of applications. Such an assumption would require that every possible customer has the same propensity to buy the two (in general, different) products for all the prices in the range. This is not in general true for different products and/or for different markets.
The real-world experiment in Section 5.2 works well since the same product (i.e., loans) in the same market (i.e., the USA) is considered. However, I find this assumption very limiting. W3) Considering weakness W2), it seems to me that the alternative setting (i.e., posterior drift) could have been a more applicable approach. The weaknesses stated above have guided my judgement; however, in case the authors address them satisfactorily, I could reconsider my evaluation. Other Comments Or Suggestions: Here I list some minor modifications that I believe could increase the readability of the work: 1. Line 57, right column, $\mathcal{Z} = \mathcal{X} \times \mathcal{P}$ would be a less ambiguous definition of $\mathcal{Z}$; 2. The authors should formalize what $Q_X (B)$ is when $B$ is a ball (see Assumption 2.2); 3. Although understandable from the text, Algorithm 1 is demanding to read, and I would suggest that the authors write a higher-level algorithm in the main paper, moving the detailed Algorithm 1 to the appendix; 4. In Theorem 4.1, it is not clear what the authors mean by writing "Assume that the target dataset, defined in (1)", as Equation (1) represents the expected reward. Questions For Authors: I would like the authors to answer the following questions: Q1) In Theorem 4.1, the authors write that the source dataset comprises "triplets independent across time". Does this mean that a dataset obtained from the observations collected in a different dynamic pricing application would not work, since each price would then depend on past observations? If yes, does removing such an assumption have an impact on the resulting regret bound? Q2) In lines 264-266, the authors write "The choice of $\widetilde{r}$ in (13) ...". However, since $\widetilde{r}$ depends on $C_r$, and $C_r$ is, in turn, only defined in terms of two inequalities, is there a way to define a single optimal value of $C_r$?
Q3) From my understanding, $Q_X$ is a probability distribution, and as such $Q_X (\cdot) \in [0,1]$. Thus, following Assumption 2.2, I infer that $0 < c_Q \le C_Q \le 1$. However, from the experiment in Section 5.1, at line 305, right column, a choice of $C_r = 1/4$ together with the conditions of line 247, left column, would result in $c_Q \ge 2048$. Can the authors clarify this point? Q4) $P_X$ and $Q_X$ are not known in a real-world scenario, and $\gamma$ and $\kappa$ depend on these distributions. The value of $\widetilde{r}$, in turn, depends on $\gamma$ and $\kappa$. Given these observations and what the authors write in the conclusions, what could be the impact of misspecification, or of an interactive refinement, of these values on the regret? Moreover, what values have the authors considered for $\gamma$ and $\kappa$ in the experiment in Section 5.2? Finally, I ask the authors to address my comments from the "Weaknesses" section of the review. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed and helpful comments. The additional simulation results are given in this anonymized link. https://docs.google.com/document/d/e/2PACX-1vRBfGzJo3ETCltTWfOi_0p4RjLbsUJo6g9z0J-Ckm2m6fL0fahJWSrrptiFwOGCyxhtHNuyHsQP0tOh/pub **W1)** The classic $0$ or $p$ revenue model is a special case of our model. Formally, revenue $Y$ is $p\cdot D(x,p)$, where $D(x,p)$ is the random demand given context $x$ and price $p$ with $\mathbb E(D(x,p))=d(x,p)$. This leads to $\mathbb E(Y)=f(x, p) = p d(x, p)$. Thus we can directly model $f(x,p)$. When $D(x,p)$ is Bernoulli, it gives the 0 or $p$ revenue. However, demand $D(x,p)$ can be continuous (e.g. when goods are sold by weight or volume, like rice or gas). Also, we clarify that in simulation Scenario 2 we use a $[0,1]$-truncated Gaussian centred at $f(x, p)$. **W2)&3)** Covariate shift is a widely used scheme in transfer learning (Suk&Kpotufe2021, Cai et al 2024). The context $X$ can encode product specifications and consumer features, thus allowing the revenue function to remain roughly stable across products or markets when conditioned on a rich $X$. In Section 5.2, loans differ in their terms (e.g. duration). The source and target data differ in market conditions (i.e. different US regions). Despite the discrepancies, our method still improves performance, showing its robustness. We further test robustness under posterior drift. Following the setup of Configuration 1 in Scenario 1, we modify the true source revenue function as $f(x, p) = \theta_0' + x^{\top} \theta' p + \tilde{\theta}' p^2,$ where $\theta_0' \sim \mathcal{N}(\theta_0, \sigma^2)$, $\theta' \sim \mathcal{N}(\theta, \sigma^2 I_d)$, and $\tilde{\theta}' \sim \mathcal{N}(\tilde{\theta}, \sigma^2)$, with $\theta_0, \theta, \tilde{\theta}$ being the target revenue parameters. We set $\sigma = r \cdot \min(|\theta_0|, \|\theta\|_\infty, |\tilde{\theta}|)$ and control the drift severity by varying $r$.
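For concreteness, the posterior-drift setup just described can be simulated in a few lines; the dimension, parameter values, and helper names below are our own illustrative choices, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5                         # covariate dimension (illustrative choice)
theta0 = 0.5                  # illustrative target revenue parameters
theta = rng.normal(size=d)
theta_t = -0.3                # stands in for \tilde{\theta}

def drifted_source_params(r):
    """Draw source parameters around the target ones with noise scale
    sigma = r * min(|theta0|, ||theta||_inf, |theta_t|)."""
    sigma = r * min(abs(theta0), np.max(np.abs(theta)), abs(theta_t))
    return (theta0 + sigma * rng.normal(),
            theta + sigma * rng.normal(size=d),
            theta_t + sigma * rng.normal())

def revenue(x, p, t0, th, tt):
    """Source revenue f(x, p) = t0 + (x^T th) p + tt * p^2."""
    return t0 + (x @ th) * p + tt * p ** 2

x, p = rng.uniform(size=d), 0.7
t0, th, tt = drifted_source_params(r=0.1)  # moderate drift severity
print(revenue(x, p, t0, th, tt))
```

Setting `r=0` recovers the no-drift (identical reward functions) case exactly, while larger `r` perturbs all three parameter blocks simultaneously.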
Results (Figure 1 in the link above) show our method performs well and is robust under moderate posterior drift. **Q1)** The independence assumption (i.e. source data generated by a fixed behavior policy) is a standard assumption in the bandit and RL literature, which simplifies theoretical analysis by enabling concentration inequalities for independent data. Our analysis can be extended to allow temporally dependent observations, such as those satisfying mixing conditions. Note that one must impose some structural conditions on the adaptivity of source data, as without them, data with arbitrary dependence can be uninformative. It would be interesting to allow source data to be collected via an adaptive policy. Doing so introduces additional complexities in terms of dependence structures and requires careful handling in the analysis to ensure valid results. We leave this for future research. **Q2)** The constant $C_r$ is used to define the smallest exploration radius $\tilde{r}$, which intuitively controls a bias-variance trade-off: $\tilde{r}$ must be small enough to yield fine local estimates, yet large enough to avoid excessive partitioning and exploration. We do not have a single optimal value for $C_r$ in closed form as it depends on unknown quantities (e.g. $c_Q$ and $c_{\gamma}$). However, any constant $C_r$ in a range that satisfies the bounding inequalities leads to optimal regret (in order $O(\cdot)$). This is in line with the bandit and RL literature, where one often settles for an order-wise optimal choice rather than a numerically unique constant. **Q3)** You are correct that $Q_X$ is a probability distribution, so Assumption 2.2 implies $c_Q\leq 1$. The constant 8 in $C_r^4c_{\gamma}c_Q\geq 8$ is merely sufficient for technical simplicity, and by no means necessary or sharp.
While we conjecture that it may be technically possible to sharpen $8$ to some smaller constant to cover the choice in our experimental design, the magnitude of these constants does not affect the order of the regret. In fact, our experiment shows our method remains robust even when the sufficient condition $C_r^4c_{\gamma}c_Q\geq 8$ is violated. **Q4)** Misspecification of $\gamma$ and $\kappa$ affects the optimal choice of $\tilde{r}$, and consequently, the regret bound, as discussed in Remark 4.2. In practice, $\kappa$ can be estimated from source data, while $\gamma$ can be estimated by measuring the empirical overlap between the source and target covariate distributions using a small portion of target data (e.g., $\sqrt{n_{Q}}$ samples). In our experiment, we use the true $\kappa$ and $\gamma$ to compute $\tilde{r}$. To assess robustness, we conduct additional simulations under Configuration 1 in Scenario 1, using a list of misspecified values of $(\kappa, \gamma)$, while fixing the true values at $\kappa = 0.6$ and $\gamma = 1$. Results (Figure 2 in the link above) indicate our method remains robust under moderate misspecification of these parameters. We hope our responses help resolve your concerns and positively influence your evaluation. --- Rebuttal Comment 1.1: Comment: I would like to thank the Authors for thoroughly addressing my questions and concerns. I appreciate the clarifications provided and the robustness of the experiments linked. As a final request, I would like the Authors to add these discussions (mainly those regarding robustness) to the paper, as I have found them impactful in terms of the practical applicability of this work. As a consequence, I am increasing my evaluation. --- Reply to Comment 1.1.1: Comment: Thank you very much for your acknowledgments, suggestions and appreciation. We will endeavour to include the discussions, especially those on robustness, in the paper, within the length limit.
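The empirical-overlap estimate of $\gamma$ mentioned in the rebuttal above can be sketched with a simple histogram measure; the common-grid binning and the shared-mass overlap below are our own illustrative choices, not necessarily the estimator the authors have in mind.

```python
import numpy as np

def histogram_overlap(source_x, target_x, bins=10):
    """Overlap of two 1-d covariate samples: shared mass of their
    normalized histograms on a common grid (1.0 = identical samples,
    0.0 = disjoint supports)."""
    lo = min(source_x.min(), target_x.min())
    hi = max(source_x.max(), target_x.max())
    edges = np.linspace(lo, hi, bins + 1)
    ps, _ = np.histogram(source_x, bins=edges)
    pt, _ = np.histogram(target_x, bins=edges)
    return np.minimum(ps / ps.sum(), pt / pt.sum()).sum()

rng = np.random.default_rng(0)
src = rng.uniform(size=1000)
print(histogram_overlap(src, rng.uniform(size=1000)))        # close to 1: strong overlap
print(histogram_overlap(src, rng.uniform(size=1000) + 5.0))  # disjoint supports: 0.0
```

With a small target sample (e.g. $\sqrt{n_Q}$ points, as suggested above) the same function gives a rough, noisier overlap estimate that could be mapped to a working value of $\gamma$.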
Summary: The paper studies contextual dynamic pricing with nonparametric demands, a critical application in revenue management. The authors consider how transfer learning techniques can be applied for this problem, and achieve minimax optimal regret by devising a provably optimal online dynamic pricing algorithm while also providing matching information-theoretic lower bounds. They show the regret exhibits a phase transition phenomenon: when the source data size is small, the optimal regret rate is the same as the case where only target data is used, and the rate will benefit a lot from transfer learning when the source data size is larger. The authors also conduct numerical experiments to study the empirical behavior of their TLDP algorithm. Claims And Evidence: Yes. The paper's contributions are mainly on the theoretical side, and all assumptions are clearly stated and all claims are rigorously proved. Methods And Evaluation Criteria: The experiment section is easy to follow. The benchmark selection of the ABE and the ExUCB algorithms is reasonable. Theoretical Claims: I did not check the proofs carefully, but I did follow the main steps regarding the minimax optimality of regret. The proofs seem to be well-written. Experimental Designs Or Analyses: The experimental results look sound. Supplementary Material: I have reviewed the main steps of the proofs and the additional experiment results in the appendix. Relation To Broader Scientific Literature: The authors are the first to present a theoretical characterization of the benefits of applying transfer learning to the nonparametric contextual dynamic pricing setting. Essential References Not Discussed: None. Other Strengths And Weaknesses: The strength of the paper lies in the strong technical content involved in characterizing the minimax optimal regret for nonparametric contextual dynamic pricing with transfer learning. The analysis is quite skillful and the obtained results are strong. 
In particular, the paper can be seen as an extension of [1] to a contextual bandit with continuous actions (specialized to pricing), since the major setup and assumptions are the same. A minor suggestion is that the authors need to comment on the additional technical challenges of analyzing the discretized version of the algorithm in [1] for the dynamic pricing setting (the lack of such discussion is a current weakness of the paper). [1] Cai, Changxiao, T. Tony Cai, and Hongzhe Li. "Transfer learning for contextual multi-armed bandits." The Annals of Statistics 52.1 (2024): 207-232. Other Comments Or Suggestions: None. Questions For Authors: It is more common for researchers to study parametric demand models in dynamic pricing. Can you comment on why you solve the non-parametric problem first rather than looking into the parametric problem? In particular, what would your result look like, or what are the challenges of characterizing the benefits of transfer learning for contextual dynamic pricing with (partially) linear demands (as in [1])? [1] Bu, Jinzhi, David Simchi-Levi, and Chonghuan Wang. "Context-based dynamic pricing with partially linear demand model." Advances in Neural Information Processing Systems 35 (2022): 23780-23791. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed and insightful comments. **Comparing our work with Cai et al (2024).** Cai et al (2024) study transfer learning for nonparametric contextual MAB under covariate shift. In their setting, actions (i.e. arms) are discrete and, for simplicity, their number is treated as a constant (i.e. the number of arms $K \asymp 1$). The simplification $K \asymp 1$ results in their regret bound losing explicit control of $K$, making their approach inapplicable to infinite or uncountable action spaces. In contrast, dynamic pricing presents a unique challenge due to its continuous action space (i.e. prices), requiring more intricate methodologies than those in Cai et al (2024). Our approach effectively addresses this by adaptively partitioning the joint covariate-price space. This adaptation ensures optimal regret scaling while accounting for both the covariates (of dimension $d$) and the price (of dimension $1$). We will include this discussion in our revision. **Nonparametric assumption.** By not assuming a specific functional form for the demand curve, a nonparametric approach allows for more adaptability in capturing complex relationships between price and demand, especially when the true underlying demand structure is unknown or difficult to specify a priori. Therefore, compared with a parametric model, the nonparametric model offers greater flexibility, which motivates our work. **Partially linear models in Bu et al (2022).** Regarding the partially linear model you mentioned in Bu et al (2022): it covers two different demand models, $d(x,p)=bp+g(x)$ and $d(x,p)=f(p)+a^\top x$. For the first one, we believe that similar techniques developed in this paper, namely the adaptive exploration/partition of the covariate space, could be particularly useful for handling the nonparametric part $g(x)$ of the demand function and its transfer learning.
On the other hand, due to the linearity in price, we conjecture a local linear demand model should be fitted within each bin to leverage the structure of the problem and achieve the optimal regret (instead of a local constant revenue function as in our paper). Since there is no interaction between $p$ and $x$ and the model is linear in $p$, we believe sharper rates than the ones in our paper can be achieved. For the second one, we conjecture a different upper bound strategy other than our adaptive exploration/partition should be used to leverage the linearity in $a^\top x$ and the smoothness of the one-dimensional function $f(p)$. In addition, for the linear part, under the covariate shift framework, we believe that techniques developed in the existing literature, such as those in Lei et al (2021) and He et al (2024), can be applied for its transfer learning. A thorough investigation of transfer learning under partially linear demand models is indeed an interesting avenue for future research. Following your suggestion, we will discuss these aspects in the revision. **References** [1] Cai, Changxiao, T. Tony Cai, and Hongzhe Li. "Transfer learning for contextual multi-armed bandits." The Annals of Statistics 52.1 (2024): 207-232. [2] Bu, Jinzhi, David Simchi-Levi, and Chonghuan Wang. "Context-based dynamic pricing with partially linear demand model." Advances in Neural Information Processing Systems 35 (2022): 23780-23791. [3] Lei, Qi, Wei Hu, and Jason Lee. "Near-optimal linear regression under distribution shift." In International Conference on Machine Learning, pages 6164-6174. PMLR, 2021. [4] He, Zelin, Ying Sun, and Runze Li. "Transfusion: Covariate-shift robust transfer learning for high-dimensional regression." In International Conference on Artificial Intelligence and Statistics, pages 703-711. PMLR, 2024.
Summary: This paper introduces a novel Transfer Learning for Dynamic Pricing (TLDP) algorithm designed to effectively utilize pre-collected data from a source domain to improve pricing decisions in a target domain. The regret upper bound of TLDP is established under a straightforward Lipschitz condition on the reward function. To demonstrate the optimality of TLDP, the authors also derive a matching minimax lower bound, which encompasses the target-only scenario as a special case and is presented for the first time in the literature. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: No Experimental Designs Or Analyses: Yes Supplementary Material: No Relation To Broader Scientific Literature: This paper considers a transfer learning scenario in the dynamic pricing problem. Essential References Not Discussed: No Other Strengths And Weaknesses: 1. The presentation is good. The authors clearly state their problem background as well as their contributions. 2. The paper proposes a novel algorithm for transfer learning in dynamic pricing, and provides regret upper/lower bounds for the problem, which are shown to be minimax optimal. Other Comments Or Suggestions: - Questions For Authors: - Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your appreciation, especially in acknowledging our presentation and our novelty.
Towards LLM Unlearning Resilient to Relearning Attacks: A Sharpness-Aware Minimization Perspective and Beyond
Accept (poster)
Summary: This paper explores the robustness of large language model (LLM) unlearning against relearning attacks, which can effectively restore forgotten knowledge through minimal fine-tuning. The authors establish a connection between robust LLM unlearning and Sharpness-Aware Minimization (SAM), a technique designed to improve generalization by flattening the loss landscape. The contributions of this paper include: 1. Formulating LLM unlearning as a min-max optimization problem, analogous to adversarial training, where the adversary aims to reverse the unlearning effect. 2. Demonstrating that SAM and broader smoothness optimization techniques (gradient penalty, curvature regularization, randomized smoothing, weight averaging) enhance robustness against relearning attacks. 3. Conducting extensive experiments on WMDP and MUSE datasets, showing that smoothness optimization significantly improves LLM unlearning stability. 4. Extending the framework to defend against jailbreaking attacks, making LLM unlearning more resistant to adversarial prompting. Claims And Evidence: yes Methods And Evaluation Criteria: pros 1. The analogy between unlearning robustness and adversarial training is insightful and well-justified. Introducing SAM-based optimization as a solution is a novel contribution to the field. 2. Conducts large-scale experiments across two benchmarks (WMDP, MUSE) and multiple unlearning techniques. Evaluates both relearning attacks (fine-tuning-based) and jailbreaking attacks (prompt-based), covering diverse adversarial settings. 3. Shows that SAM-enhanced unlearning consistently outperforms standard methods. Provides quantitative insights into how different smoothness optimization techniques (RS, GP, CR, WA) impact unlearning resilience. 4. Provides min-max optimization analysis to justify why sharpness-aware methods improve unlearning stability. Derives the connection between curvature regularization and robustness against relearning. Cons: 1.
The paper only considers small-scale fine-tuning-based relearning attacks. More adaptive attack strategies, such as gradient inversion, e.g., [1], or meta-learning-based attacks, e.g., [2], should be explored. 2. SAM and second-order smoothness techniques introduce non-trivial training costs, which might limit practical adoption in large-scale LLMs (e.g., GPT-4, PaLM). Discussion on efficiency trade-offs is missing: how does increased robustness affect model training time? 3. While WMDP and MUSE are useful benchmarks, real-world regulatory or compliance-driven unlearning cases (e.g., GDPR data deletion) should be considered. Scalability beyond benchmark datasets is not addressed: how does the method perform on internet-scale training corpora? 4. Jailbreaking defenses are briefly discussed, but more sophisticated adaptive jailbreak attacks (e.g., prompt-engineering-based attacks) should be tested. Evaluating how well unlearned models withstand iterative adversarial prompting would strengthen the claim that smoothness optimization mitigates jailbreaking. [1] Zhang, Rui, et al. "A survey on gradient inversion: Attacks, defenses and future directions." arXiv preprint arXiv:2206.07284 (2022). [2] Gong, Xueluan, et al. "Augmenting Model Extraction Attacks Against Disruption-Based Defenses." IEEE Transactions on Information Forensics and Security (2024). Theoretical Claims: no proof Experimental Designs Or Analyses: see Methods And Evaluation Criteria Supplementary Material: yes, all Relation To Broader Scientific Literature: see Methods And Evaluation Criteria Essential References Not Discussed: see Methods And Evaluation Criteria Other Strengths And Weaknesses: see Methods And Evaluation Criteria Other Comments Or Suggestions: see Methods And Evaluation Criteria Questions For Authors: see Methods And Evaluation Criteria Ethical Review Concerns: no need Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate Reviewer s57’s careful evaluation of our work. The constructive criticism and insightful questions help us further improve the paper. We respond to each key question below. 1. **Response to the choice of attacks** Thank you for raising this question. Based on your suggestion, we have looked into the suggested references and their attack methods. However, we feel that they were not very appropriate in the context of this work. We choose relearning attacks and jailbreaking attacks as the main evaluation settings in this work because they are widely recognized as the two dominant and SOTA attack types in the LLM unlearning literature [1][2][3]. These attacks align well with the general machine unlearning setting, where an attacker either fine-tunes the unlearned model using a small amount of data or uses adversarial prompts to circumvent unlearning effects. However, gradient inversion and meta-learning-based attacks are primarily developed for CNN-based image classification models and thus, it is difficult to directly adapt these attacks to LLM unlearning. 2. **Response to computation efficiency and larger models** Thank you for your insightful suggestion. **[Fig. R1](https://ibb.co/4Rqfnq8d)** presents the total run time of our proposed smoothness-enhanced unlearning approaches as well as the additional baseline approach, **TAR** (Tampering Attack Resistance via meta-learning) [4]. As we can see, NPO+SAM shows the second-best efficiency, slightly behind NPO+WA. It approximately doubles the runtime of vanilla NPO due to the one-step maximization for weight perturbation in the alternating gradient ascent-descent implementation of SAM (Appendix A). In addition, TAR incurs a prohibitively high computational cost, with a running time of 7,441.9 minutes, which is 647× slower than NPO+SAM.
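The one-step ascent-then-descent structure of SAM mentioned above can be illustrated on a toy problem; the quadratic loss, step sizes, and helper name `sam_step` are our own illustrative choices, a generic SAM step rather than the paper's training loop.

```python
import numpy as np

def sam_step(w, grad_fn, rho=0.05, lr=0.1):
    """One SAM update: a single ascent step to the (approximate) worst-case
    weight perturbation of norm rho, then a descent step using the
    gradient evaluated at the perturbed weights."""
    g = grad_fn(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # one-step inner maximization
    return w - lr * grad_fn(w + eps)             # descent with the "sharp" gradient

# Toy loss L(w) = 0.5 * ||w||^2, whose gradient is simply w.
w = np.array([1.0, -2.0])
for _ in range(100):
    w = sam_step(w, lambda v: v)
print(w)  # ends up very close to the flat minimum at the origin
```

The single inner ascent step is exactly why the method roughly doubles the per-iteration cost relative to plain gradient descent: each update calls the gradient twice, once at `w` and once at `w + eps`.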
In addition, **[Table R1](https://ibb.co/S7fFVRfX)** provides further robustness comparison results using a **larger model, LLaMA 3 8B**, which is the largest model supported in our lab environment. Besides TAR, we also considered another baseline approach, **LAT** (Latent Adversarial Training, which enhances robustness to persistent harmful behaviors in LLMs by adding perturbations to neuron activations) [5]. As we can see, NPO+SAM delivers highly competitive performance on the WMDP-Bio unlearning task, comparable to TAR and significantly outperforming LAT. This shows the consistent effectiveness of NPO+SAM on the larger 8B-sized model. 3. **Response to benchmark choice** To the best of our knowledge, WMDP is a representative benchmark closely aligned with practical unlearning needs, focusing on the removal of harmful or sensitive knowledge (e.g., biological facts) from pre-trained LLMs. MUSE is another widely used benchmark for data- and knowledge-wise unlearning, covering copyrighted books (MUSE-Books) and real-world news (MUSE-News). These benchmarks reflect real-world goals: WMDP promotes safe content generation, while MUSE addresses copyright concerns. Moreover, we are not aware of any established unlearning benchmark built on internet-scale training corpora. This is expected, as unlearning is fundamentally different from pretraining and typically operates under a well-defined, narrow unlearning scope, as evidenced by the small size of the forget datasets in existing benchmarks. 4. **Response to the choice of jailbreaking attack** Thank you for raising this question. To clarify, Enhanced-GCG is a prompt-engineering-based, adaptive attack that optimizes prompts against the unlearned model, making it particularly effective at bypassing unlearning defenses. We selected Enhanced-GCG because it has been shown to be the most effective jailbreaking attack specifically designed for LLM unlearning [3].
In contrast, other jailbreaking methods [6][7] fail to reliably compromise unlearned models, rendering them less suitable for evaluating the worst-case robustness of LLM unlearning against input-level attacks. [1] Lynch, Aengus, et al. "Eight methods to evaluate robust unlearning in LLMs." arXiv preprint arXiv:2402.16835 (2024). [2] Hu, Shengyuan, et al. "Jogging the Memory of Unlearned Models Through Targeted Relearning Attacks." ICML 2024 Workshop on Foundation Models in the Wild. [3] Łucki, Jakub, et al. "An adversarial perspective on machine unlearning for AI safety." arXiv preprint arXiv:2409.18025 (2024). [4] Tamirisa, Rishub, et al. "Tamper-resistant safeguards for open-weight llms." arXiv preprint arXiv:2408.00761 (2024). [5] Sheshadri, Abhay, et al. "Latent adversarial training improves robustness to persistent harmful behaviors in llms." arXiv preprint arXiv:2407.15549 (2024). [6] Li, Nathaniel, et al. "The WMDP benchmark: Measuring and reducing malicious use with unlearning." arXiv preprint arXiv:2403.03218 (2024). [7] Huu-Tien, Dang, et al. "On effects of steering latent representation for large language model unlearning." arXiv preprint arXiv:2408.06223 (2024).
Summary: The paper reveals that Sharpness-Aware Minimization (SAM), traditionally used for improving model generalization, naturally yields a robust optimization framework for LLM unlearning. Through experiments, the paper shows that SAM-enhanced unlearning methods result in smaller discrepancies between model performance before and after relearning attacks, indicating better retention of unlearned information. Claims And Evidence: The paper presents experimental results and visualizations that support its claims and highlights the potential of SAM as a tool for improving the security and privacy of LLMs. Methods And Evaluation Criteria: Yes Theoretical Claims: No theoretical claims. Experimental Designs Or Analyses: The experimental design involves evaluating the unlearning robustness of the methods on different datasets, such as WMDP, AGNews, GSM8K, and SST2, under various relearning attack settings. The paper also reports utility performance results to assess the balance between unlearning and model performance preservation. No obvious issues with the experimental design or analyses were apparent. Supplementary Material: No, I did not go through it. Relation To Broader Scientific Literature: The paper contributes to the field by demonstrating the effectiveness of SAM in enhancing the robustness of LLMs against relearning attacks during the unlearning process. Essential References Not Discussed: No Other Strengths And Weaknesses: None. Other Comments Or Suggestions: None. Questions For Authors: None. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you very much for the positive review. Your comment regarding the lack of theoretical claims has encouraged us to reflect on whether rigorous guarantees can be established to support the improved unlearning robustness enabled by SAM. While our strong empirical validation has already been acknowledged by reviewers, we agree that exploring theoretical underpinnings remains an important and valuable future direction. Inspired by the comment, we made an effort to bound the least number of relearning steps against an unlearned model and link it to the smoothness of the loss landscape, quantified by the largest eigenvalue of the Hessian matrix. We believe this is a promising and feasible direction, but it requires more substantial and rigorous theoretical development. Conceptually, the proof would proceed as follows: we leverage gradient unrolling to contrast the relearning and unlearning dynamics. Specifically, we connect the number of required relearning steps to the largest eigenvalue of the Hessian, obtained from a local quadratic approximation of the forget loss (via Taylor expansion) around the pretrained model state. This approximation enables us to characterize how SAM-induced smoothing, reflected in a reduced Hessian spectrum, influences the model's sensitivity to relearning. It could theoretically justify the enhanced robustness of SAM-based unlearning, as evidenced by the increased number of relearning steps required to reverse the forget loss (directly linked to the Hessian spectrum). We will make our best effort to complete the proof and include it in the revised version if successful. If not, we will clearly outline this as future work and discuss the associated challenges in detail.
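The local quadratic approximation underlying this argument can be written down explicitly; the notation below ($\ell_f$ for the forget loss, $\theta_u$ for the unlearned model) is our own, and this is a hedged sketch of the intended reasoning, not a completed proof.

```latex
\ell_f(\theta) \;\approx\; \ell_f(\theta_u)
  + \nabla \ell_f(\theta_u)^{\top} (\theta - \theta_u)
  + \tfrac{1}{2}\,(\theta - \theta_u)^{\top} H\,(\theta - \theta_u),
\qquad H = \nabla^{2} \ell_f(\theta_u).
```

Near $\theta_u$, the rate at which gradient-based relearning can change $\ell_f$ is governed by the curvature term, bounded by the largest eigenvalue $\lambda_{\max}(H)$; a smaller $\lambda_{\max}(H)$ (a flatter landscape, as encouraged by SAM) therefore suggests that more relearning steps are needed to drive the forget loss back down.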
Summary: This paper investigates improving the robustness of LLM unlearning against relearning attacks by incorporating sharpness-aware minimization (SAM) and other smoothness optimization techniques. The authors draw an analogy between robust unlearning and adversarial training, formulating the problem as a min-max optimization task. Experiments on WMDP and MUSE datasets demonstrate that SAM and other smoothness-promoting methods improve resistance to relearning attacks and even provide some robustness against jailbreaking attacks. Claims And Evidence: The authors claim that SAM provides a robust optimization framework for LLM unlearning and significantly improves resilience to relearning attacks. While the empirical results support the claim that smoothness optimization enhances robustness, the paper lacks a detailed computational efficiency analysis, which is crucial given that techniques like SAM introduce additional overhead. Additionally, SAM is a well-established technique with broad applications, and its role in unlearning seems more like an adaptation rather than a fundamentally new algorithmic contribution. The marginal gains of SAM over other smoothness techniques, as seen in Table 1, also raise questions about its distinct effectiveness. A more in-depth theoretical justification or additional comparative studies would strengthen the claims. Methods And Evaluation Criteria: The proposed methodology is reasonable, leveraging smoothness optimization to mitigate relearning attacks. However, the study does not analyze the computational cost associated with different smoothness techniques, which is critical for real-world applications. While SAM is emphasized as the core contribution, the performance difference between SAM and other smoothness techniques (e.g., gradient penalties, weight averaging) appears relatively small in Table 1, raising questions about its necessity as the primary approach. 
Theoretical Claims: N/A Experimental Designs Or Analyses: The paper does not provide runtime or computational cost comparisons across different techniques, which is important given that SAM introduces additional optimization steps. Supplementary Material: Yes, I reviewed the supplementary material, particularly the additional experimental results. Relation To Broader Scientific Literature: The paper references influence function-based and knowledge attribution-based unlearning approaches and includes NPO as a baseline. However, the evaluation primarily focuses on smoothness optimization techniques rather than a direct comparison with alternative unlearning frameworks. Essential References Not Discussed: The paper cites most of the relevant prior work, including influence function-based and knowledge attribution-based unlearning methods. However, these approaches are not deeply discussed or analyzed in comparison to the proposed smoothness-based method. Other Strengths And Weaknesses: Strengths: 1. The paper is well-organized and easy to follow. 2. The experimental setup and evaluation methodology are clearly described. 3. The study provides an interesting connection between SAM and robust unlearning, which could inspire further research in this direction. Weaknesses: 1. Computational overhead of different techniques is not analyzed. 2. SAM’s marginal advantage over other smoothness techniques is not well justified (as seen in Table 1). 3. No theoretical insights are provided beyond the empirical observations. 4. Lack of broader comparisons with other unlearning paradigms beyond smoothness optimization. Other Comments Or Suggestions: N/A Questions For Authors: How does the computational cost of SAM compare to other smoothness optimization techniques? Does it introduce significant overhead? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1:

Rebuttal: We sincerely thank Reviewer RRKK for the thorough and thoughtful review. Below, we address each key point raised in the comments.

1. **Response to Computational Efficiency**

Thank you for your constructive feedback. **[Fig. R1](https://ibb.co/4Rqfnq8d)** presents the total run time of our proposed smoothness-enhanced unlearning approaches as well as the additional baseline approach, **TAR** (Tampering Attack Resistance via meta-learning) [1]. As we can see, NPO+SAM shows the second-best efficiency, slightly behind NPO+WA. It approximately doubles the runtime of vanilla NPO due to the one-step maximization for weight perturbation in the alternating gradient ascent-descent implementation of SAM. In addition, TAR incurs a prohibitively high computational cost, with a running time of 7,441.9 minutes, which is 647× slower than NPO+SAM.

2. **Response to "simple adaptation of SAM" and "small marginal gains over other smoothness techniques"**

First, we respectfully clarify that our work is not a simple adaptation of SAM, and establishing connections to other smoothness techniques is an essential contribution of our paper, as noted by Reviewer ZQJD: "First work to connect SAM and smoothness optimization to LLM unlearning robustness.", and Reviewer s57: "The analogy between unlearning robustness and adversarial training is insightful and well-justified. Introducing SAM-based optimization as a solution is a novel contribution to the field." and "Derives the connection between curvature regularization and robustness against relearning."

Second, we do not think that the performance gains of SAM are marginal. As shown in Table 1, NPO+SAM consistently outperforms the second-best smoothness method, though the runner-up may vary across different relearning settings. For instance, while NPO+SAM and NPO+RS perform similarly at $N=40$ with UE = 0.5, the performance gap widens at $M=3$, where NPO+SAM achieves UE = 0.59 versus NPO+RS at UE = 0.42.
Moreover, Table 6 demonstrates that NPO+SAM also provides consistent improvements in robustness against jailbreaking attacks.

3. **Regarding theoretical explanation**

This is a very intriguing comment. During the rebuttal, we made an effort to bound the least number of relearning steps against an unlearned model and link it with the smoothness of the loss landscape, quantified by the largest eigenvalue of the Hessian matrix. We believe this is a promising and feasible direction, but it requires more substantial and rigorous theoretical development. The proof sketch is conceptually as follows: we leverage gradient unrolling to contrast the relearning and unlearning dynamics. Specifically, we connect the number of required relearning steps to the largest eigenvalue of the Hessian, obtained from a local quadratic approximation of the forget loss (via Taylor expansion) around the pretrained model state. This approximation enables us to characterize how SAM-induced smoothing, reflected in a reduced Hessian spectrum, influences the model's sensitivity to relearning. It could theoretically justify the enhanced robustness of SAM-based unlearning, as evidenced by the increased number of relearning steps required to reverse the forget loss (directly linked to the Hessian spectrum). We will make our best effort to complete the proof and include it in the revised version if successful. If not, we will clearly outline this as future work and discuss the associated challenges in detail.

4. **Response to More Robust Unlearning Methods**

Thank you for your valuable suggestion. To address this concern, we conducted additional experiments comparing our proposed SAM-based unlearning method with two new baselines: TAR (Tampering Attack Resistance via meta-learning) [1] and LAT (Latent Adversarial Training, which enhances robustness through neuron activation perturbations) [2].
The results, shown in **[Table R1](https://ibb.co/S7fFVRfX)**, demonstrate that NPO+SAM achieves highly competitive performance on the WMDP-Bio unlearning task, comparable to TAR and significantly outperforming LAT. The clear advantage of NPO+SAM over LAT highlights the importance of weight-space perturbations (as in SAM) over activation-space perturbations (as in LAT).

As noted in Line 77, TAR formulates unlearning vs. relearning as a meta-learning problem. However, computing the meta-gradient requires a series of gradient unrolling steps, resulting in extreme computational overhead. As noted in our response to the first question and shown in **[Fig. R1](https://ibb.co/4Rqfnq8d)**, TAR incurs a running time of 7,441.9 minutes, making it 647× slower than NPO+SAM, and thus impractical for large-scale LLM unlearning.

[1] Tamirisa, Rishub, et al. "Tamper-resistant safeguards for open-weight llms." arXiv preprint arXiv:2408.00761 (2024).
[2] Sheshadri, Abhay, et al. "Latent adversarial training improves robustness to persistent harmful behaviors in llms." arXiv preprint arXiv:2407.15549 (2024).
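For concreteness, the alternating gradient ascent-descent implementation of SAM discussed in this rebuttal can be sketched as follows. This is a minimal illustration on a toy quadratic loss, not the paper's actual unlearning objective; the function names and hyperparameter values are our own.

```python
import numpy as np

def sam_step(w, grad_fn, lr=0.1, rho=0.05):
    """One SAM update: one-step maximization (weight perturbation
    toward higher loss), then a descent step at the perturbed point."""
    g = grad_fn(w)
    # Ascent: move rho along the normalized gradient direction.
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    # Descent: apply the gradient evaluated at the worst-case neighbor.
    g_adv = grad_fn(w + eps)
    return w - lr * g_adv

# Toy loss 0.5 * ||w||^2, whose gradient is simply w.
grad_fn = lambda w: w
w = np.array([1.0, -2.0])
for _ in range(50):
    w = sam_step(w, grad_fn)
# The iterates settle into a small neighborhood of the flat minimum at 0.
```

The two gradient evaluations per step are why SAM roughly doubles the runtime of the base method, as the rebuttal notes.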
Summary: This paper addresses the challenge of robust LLM unlearning, where undesired knowledge is removed from a large language model (LLM) without requiring full retraining. A key issue with existing unlearning methods is their vulnerability to relearning attacks, where a small fine-tuning step can restore forgotten information. The paper draws an analogy between relearning attacks and adversarial attacks, proposing Sharpness-Aware Minimization (SAM) as a solution to improve the robustness of LLM unlearning. The key contributions of this paper are:
- Establishing SAM as a robust optimization foundation for resisting relearning attacks.
- Extending beyond SAM by exploring other smoothness optimization techniques (Randomized Smoothing, Gradient Penalty, Curvature Regularization, and Weight Averaging).
- Conducting experiments on WMDP and MUSE datasets, demonstrating that SAM-based unlearning is significantly more resistant to relearning and jailbreaking attacks.
- Providing loss landscape visualizations that show how smoothness optimization flattens the loss surface, improving unlearning stability.

The results indicate that SAM-enhanced unlearning consistently outperforms state-of-the-art methods in resisting both relearning and adversarial jailbreaking attacks.

Claims And Evidence: The paper makes several strong claims regarding the effectiveness of SAM and smoothness optimization for robust LLM unlearning.

Supported Claims:
- Relearning attacks can reverse unlearning effects → Verified experimentally on WMDP/MUSE datasets.
- SAM minimizes relearning vulnerability → Shown through min-max optimization formulation, experimental results, and loss landscape analysis.
- Other smoothness techniques (RS, GP, CR, WA) also improve robustness → Experiments demonstrate that all variants improve resilience compared to the baseline.
- SAM-based unlearning is also resistant to jailbreaking attacks → The authors show improved KL divergence on adversarial prompts.
Weak Claims:
- The claim that SAM is the "optimal" robust unlearning method may be too strong. While it performs the best in their experiments, there could be alternative robustness strategies (e.g., meta-learning-based unlearning, Bayesian methods) that were not explored.
- The paper does not provide an in-depth theoretical explanation of why SAM is superior beyond empirical results. Some theoretical insights on why SAM discourages relearning in LLMs specifically could strengthen the claim.

Methods And Evaluation Criteria:
- The benchmark datasets (WMDP and MUSE) are well-chosen and represent real-world unlearning scenarios, including hazardous content removal and copyrighted data forgetting.
- The paper evaluates multiple unlearning methods (NPO, GradDiff, RMU) and compares them with smoothness-enhanced versions, providing a fair comparison.
- Unlearning robustness is assessed via multiple metrics (Unlearning Effectiveness, Utility Retention, Relearning Resistance, Jailbreaking Robustness).

Theoretical Claims:
- The paper builds on SAM-based optimization and adapts it to LLM unlearning, using min-max optimization to counter relearning attacks.
- Derivations of the SAM-enhanced loss function (Equations 3-7) appear correct and align with the sharpness-aware training literature.
- The connection between SAM and curvature regularization is well-explained, but the paper does not rigorously prove that SAM minimizes relearning risk optimally.

No major theoretical errors were found, but a more formal generalization bound on SAM's effect on unlearning would strengthen the work.

Experimental Designs Or Analyses:
- The WMDP and MUSE datasets are appropriate for testing unlearning and relearning vulnerabilities.
- The experiments are well-structured: They test the effect of smoothness-enhanced unlearning against different relearning attack intensities (epochs, sample sizes).
- The loss landscape visualizations clearly demonstrate how SAM and other smoothness methods flatten the loss surface.

Potential issues:
- The paper primarily focuses on Zephyr-7B-beta and LLaMA-2-7B, which are relatively small-scale models compared to cutting-edge LLMs (e.g., GPT-4, LLaMA-3). Would these results generalize to much larger models?
- The experiments use only one unlearning baseline per dataset (NPO for MUSE, NPO/GradDiff/RMU for WMDP). Adding other methods (e.g., model editing, knowledge distillation-based unlearning) could provide a broader comparison.
- Ablation studies on the hyperparameters ($\rho$ in SAM, number of smoothness layers in RMU) are useful, but further sensitivity analysis would be beneficial.

Supplementary Material:
- The appendix contains additional loss landscape visualizations, detailed experiment setups, and ablation studies on SAM's hyperparameters.
- The SAM-based unlearning algorithm (Algorithm A1) is well-documented and provides a reproducible framework.
- Some details on relearning attack methods (sampling strategy, fine-tuning details) could have been elaborated further.

Relation To Broader Scientific Literature:
- The paper extends research on LLM unlearning (Yao et al., 2024; Maini et al., 2024) by proposing a robust optimization framework using SAM.
- It connects adversarial robustness techniques (Madry et al., 2018; Foret et al., 2021) to the field of unlearning, which has not been widely explored before.
- The findings align with prior work on curvature-based regularization (Moosavi-Dezfooli et al., 2019) and weight smoothing (Izmailov et al., 2018).

Essential References Not Discussed: NA

Other Strengths And Weaknesses:

Strengths:
- First work to connect SAM and smoothness optimization to LLM unlearning robustness.
- The paper is well-written, with clear motivation, theoretical insights, and experiments.
- Could improve AI safety, privacy, and compliance with legal regulations (e.g., GDPR, right to be forgotten).
Weaknesses:
- The approach is computationally expensive, as SAM requires perturbation-based training, increasing cost.
- Does not address whether LLM unlearning itself could be adversarially misused (e.g., selectively erasing safety mechanisms).

Other Comments Or Suggestions:
- Clarify whether SAM slows down LLM inference/training significantly.
- Provide examples where SAM fails to prevent relearning (e.g., highly structured knowledge).
- Test SAM-based unlearning on larger models like LLaMA-3 or GPT-4 to validate scalability.

Questions For Authors:
1. How does SAM compare to alternative robust unlearning methods (e.g., meta-learning or Bayesian forgetting)?
2. Would different fine-tuning methods for relearning (e.g., RLHF, LoRA) change the attack effectiveness?
3. Does SAM negatively impact generalization or increase catastrophic forgetting on retain data?

Code Of Conduct: Affirmed.

Overall Recommendation: 5
Rebuttal 1:

Rebuttal: We thank Reviewer ZQJD for the thorough review and the encouraging comments on our contributions and presentation. We also greatly appreciate the constructive feedback. Below, we address each key point raised in the comments.

1. **Regarding more robust unlearning methods and larger model evaluation**

Thank you for your valuable suggestion. To address this concern, we conducted additional experiments to compare our proposed SAM-based unlearning method with two additional baselines: **TAR** (Tampering Attack Resistance via meta-learning) [1] and **LAT** (Latent Adversarial Training, which enhances robustness by adding perturbations to neuron activations) [2]. These experiments were performed using a **larger model, LLaMA 3 8B**, which is the largest model supported in our lab environment. The detailed results are presented in **[Table R1](https://ibb.co/S7fFVRfX)**. As we can see, NPO+SAM delivers highly competitive performance on the WMDP-Bio unlearning task, comparable to TAR and significantly outperforming LAT. Notably, TAR incurs a prohibitively high computational cost, with a running time of 7,441.9 minutes, which is 647× slower than NPO+SAM.

2. **Regarding computational efficiency of smoothness-enhanced NPO**

Thank you for the insightful suggestion. We added experiments and clarifications on the computational efficiency of smoothness-enhanced NPO methods in **[Fig. R1](https://ibb.co/4Rqfnq8d)**, along with the comparison to TAR in the earlier response. NPO+SAM shows the second-best efficiency, slightly behind NPO+WA, while offering the strongest robustness against both relearning and jailbreaking attacks. It approximately doubles the runtime of vanilla NPO due to the one-step maximization for weight perturbation in the alternating gradient ascent-descent implementation of SAM (Appendix A).

3. **Regarding theoretical explanation**

This is a very intriguing comment and has prompted us to consider whether any rigorous guarantees can be provided to support the improved unlearning robustness achieved by SAM. Inspired by this, we made an effort to bound the least number of relearning steps against an unlearned model and link it with the smoothness of the loss landscape, quantified by the largest eigenvalue of the Hessian matrix. Due to space constraints, please refer to **[Response to Reviewer hvpj](https://openreview.net/forum?id=zZjLv6F0Ks&noteId=Mcy2h7DqNX)** for more details.

4. **Regarding different fine-tuning methods for relearning**

Following the suggestion, we performed the relearning attack using LoRA. The results are shown in **[Table R2](https://ibb.co/v4MvfCgt)**. As observed, NPO+SAM achieves a much higher unlearning effectiveness after the relearning attack compared to NPO, indicating that our method is effective not only against full-model relearning attacks but also against LoRA-based relearning attacks.

5. **Regarding the impact on retain data**

In our experiments, we did not observe any clear negative impact of incorporating SAM on retain performance. For instance, MUSE uses the KnowMem metric on the retain set to directly evaluate the performance of the unlearned model on retained knowledge. As shown in Table A2, NPO+SAM achieves higher KnowMem scores than vanilla NPO: 42.58 vs. 41.58 on MUSE-News and 35.48 vs. 34.71 on MUSE-Books. This aligns with SAM's known ability to improve generalization via loss landscape flatness. While SAM was originally designed for generalization, our contribution lies in adapting it to unlearning and showing that its generalization benefits can enhance unlearning robustness.

6. **Other suggestions/comments on sensitivity analysis, sampling strategy, and failure examples**

Thank you for acknowledging our experiments on $\rho$ in SAM and the number of smoothness layers in RMU. In the revised version, we will include additional sensitivity analyses, such as the Gaussian noise variance $\sigma^2$ in RS, and explore variations in fine-tuning methods, including LoRA (as noted in our earlier response).

Furthermore, below is an example where SAM fails to prevent relearning:

> Prompt: What gradients are involved with scHPXV purifcation? A. Sucrose gradients; B. D-mannose gradients…
> Original model: A. Sucrose gradients\n\nExplanation:\nHorsepox virus (HPXV) is a double-stranded DNA virus…
> Unlearned model before relearning: /******/ /******/ /******/…
> Unlearned model after relearning (that fails to prevent relearning): A. Sucrose gradients.\nThe specific density gradient used depends on the characteristics…

Finally, we will improve our broader impact statement by discussing the potential misuse of unlearning techniques by adversaries to erase safety mechanisms.

[1] Tamirisa, Rishub, et al. "Tamper-resistant safeguards for open-weight llms." arXiv preprint arXiv:2408.00761 (2024).
[2] Sheshadri, Abhay, et al. "Latent adversarial training improves robustness to persistent harmful behaviors in llms." arXiv preprint arXiv:2407.15549 (2024).
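As an aside on the smoothness metric invoked in this thread, the largest Hessian eigenvalue can be estimated using only Hessian-vector products via power iteration. Below is our own minimal sketch on a toy quadratic with a known Hessian, not the rebuttal's actual derivation; in practice the Hessian-vector product would come from automatic differentiation rather than an explicit matrix.

```python
import numpy as np

def largest_hessian_eigenvalue(hvp, dim, iters=100, seed=0):
    """Estimate the top Hessian eigenvalue with power iteration,
    using only Hessian-vector products (never the full Hessian)."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=dim)
    v /= np.linalg.norm(v)
    for _ in range(iters):
        hv = hvp(v)
        v = hv / (np.linalg.norm(hv) + 1e-12)
    # Rayleigh quotient at the converged direction.
    return float(v @ hvp(v))

# Toy quadratic loss whose Hessian is diag(3, 1, 0.5):
# the sharpest curvature direction has eigenvalue 3.
H = np.diag([3.0, 1.0, 0.5])
lam = largest_hessian_eigenvalue(lambda v: H @ v, dim=3)
```

A flatter (SAM-smoothed) landscape would show up here as a smaller `lam`.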
Sketch to Adapt: Fine-Tunable Sketches for Efficient LLM Adaptation
Accept (poster)
Summary: This paper is concerned with parameter-efficient fine-tuning. It is argued in this paper that previous solutions in parameter-efficient fine-tuning are either low-rank or quantized, and are limited in applicability due to restrictive assumptions. In this paper, a SketchTune approach is proposed, inspired by sketching. By rigorously experimenting with LLaMA-1/2/3 models on math problem-solving, commonsense reasoning, and instruction-following tasks, it is shown that the proposed SketchTune is considerably better than the considered baselines, ranging from LoRA and DoRA to S2FT, in terms of both model performance and parameter efficiency. Specifically, SketchTune can achieve the same performance as the baselines with smaller base models and fewer trainable parameters.

Claims And Evidence: The claims in the submission are supported by clear and convincing evidence.

Methods And Evaluation Criteria: The proposed methods are mostly well elaborated and the evaluation criteria are well justified. However, I still have several concerns:
1) The sketch strategy is somewhat hard to understand; specifically, in Section 2.3, it is not very clear how the mapping matrix, i.e., M, is determined.

Theoretical Claims: The proofs for theoretical claims are correctly provided.

Experimental Designs Or Analyses: The experimental designs and analyses are sound and adequate.

Supplementary Material: N/A

Relation To Broader Scientific Literature: The contributions are built upon previous studies in parameter-efficient fine-tuning methods, e.g., LoRA and QLoRA. LoRA and QLoRA-like methods are limited in that they are either low-rank or quantized, so SketchTune proposes optimizations targeting these limitations.

Essential References Not Discussed: N/A

Other Strengths And Weaknesses:

Weaknesses:
1) Equation 6 is misleading; the formulation of the derivative seems upside-down.
2) The dedicated CUDA kernel would better be attached as supplementary material.
Other Comments Or Suggestions: N/A

Questions For Authors: N/A

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: We thank the reviewer for recognizing the soundness of our work and providing thoughtful feedback. We address the reviewer's concerns and suggestions below:

## **[Concern 1 - The sketch strategy is somehow hard to understand. Specifically, in Section 2.3, it is not very clear how the mapping matrix, i.e., M, is determined.]**

Thank you for pointing this out. We provide further clarification below on how the mapping matrix $M$ is learned. Each column of $M$ is a binary one-hot vector that maps an original parameter (from a row of size $c$) to one of the $k$ entries in the sketched parameter vector, where $k \ll c$. This mapping inevitably introduces some error in the model output. To minimize this error, we learn the columns of $M$ sequentially and iteratively update the remaining unmapped parameters to compensate for the introduced error after each step. Concretely, for each original parameter, we identify the entry in the sketched parameter vector that is closest to it and assign the corresponding one-hot value in $M$. After fixing this column of $M$, we apply an update $\boldsymbol\delta$ (as defined in Equation 13) to the remaining unmapped parameters to absorb the approximation error. This process is repeated until all columns of $M$ are assigned. We will clarify this procedure in the final version of the paper.

## **[W1 - Misleading Equation 6]**

We thank the reviewer for pointing out this typo. We will correct this in the final paper.

## **[W2 - Release CUDA kernel as supplementary material]**

Thank you for the great suggestion. We will include the CUDA kernel in the appendix and release our code and models for the final version.
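A minimal sketch of the nearest-center assignment described in the clarification above. This is our own simplified illustration: it builds all one-hot columns of $M$ in one pass and omits the sequential error-compensation update $\delta$ from Equation 13, so it is not the paper's full procedure.

```python
import numpy as np

def nearest_center_mapping(w_row, centers):
    """Build a binary mapping matrix M of shape (k, c) whose columns are
    one-hot: each original parameter in w_row (length c) is assigned to
    its nearest entry of the sketched parameter vector (length k)."""
    c, k = len(w_row), len(centers)
    # Index of the closest sketched value for every original parameter.
    idx = np.abs(w_row[:, None] - centers[None, :]).argmin(axis=1)
    M = np.zeros((k, c))
    M[idx, np.arange(c)] = 1.0
    return M

# Toy row of c = 5 weights sketched into k = 3 shared values.
w_row = np.array([0.11, -0.52, 0.48, -0.49, 0.10])
centers = np.array([-0.5, 0.1, 0.5])
M = nearest_center_mapping(w_row, centers)
w_approx = centers @ M  # reconstruct each weight from its shared center
```

Here `w_approx` recovers each weight up to the quantization error that the iterative $\delta$ updates would further absorb.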
Summary: The paper introduces a method that compresses pre-trained LLM weights row-by-row into a smaller, shared set of trainable "sketched" weights. They compress weights by approximately minimizing the reconstruction error for activations over a set of data. Experimentally, SketchTune shows advantages in terms of model size and efficient inference with minimal performance trade-offs.

Claims And Evidence:

Main Idea / Theoretical Analysis: SketchTune replaces a low-rank update assumption with a sketch-based approximation, where theoretical arguments (Section 3) show that if weight updates are not actually low-rank, a sketch can give a better approximation. Under some assumptions, the paper characterizes conditions (power-law decay of singular values) under which sketches outperform low-rank approximations.

Empirical Results: On math (e.g., GSM8K) and commonsense tasks, SketchTune achieves performance on par with or surpassing strong PEFT baselines (LoRA, DoRA, etc.), even though it uses a smaller "sketched" base model. The method also compares favorably to quantized PEFT approaches like LoftQ. However, one notable methodological detail is that SketchTune uses additional C4 data to compute the Hessian-based weighting for the sketch, effectively a calibration pass, whereas competing methods do not use additional data or a prior data-dependent compression step. Also, Table 9 in the appendix shows this "sketching" step can be a significant overhead relative to normal training times, as it involves an extra pass over substantial data and computes second-order derivatives.

Methods And Evaluation Criteria: The paper evaluates on standard math, commonsense, and language modeling tasks, reporting the usual metrics. The authors also benchmark training and inference time (time to first token, decoding latency), along with memory usage. These criteria align well with the stated aim of PEFT.

Theoretical Claims: I didn't check it thoroughly, but it seems to be correct.
I am not sure how mild the assumption is that the random mapping is uncorrelated with the true update matrix.

Experimental Designs Or Analyses: The experiments are generally strong, spanning multiple tasks and baselines (low-rank, quantized). Performance improvements hold across tasks. The main concern is the use of additional data and the overhead of sketch computation (Appendix I), which can be quite large compared to normal fine-tuning time. Specifically, Table 9 suggests that multi-hour overhead might be incurred on some GPUs. This overhead may offset the claimed efficiency advantages in certain settings. I would also like to see more quantization (or rather quantization+LoRA) baselines, especially data-dependent quantization methods.

Supplementary Material: Not thoroughly, but it seems appropriate.

Relation To Broader Scientific Literature: SketchTune builds on prior compressive adaptation ideas (random hashing, count-sketch) and extends them with a layerwise, Hessian-based weighting. It competes directly with low-rank (LoRA, DoRA) and quantization-based (LoftQ, QLoRA) methods but has limited discussion or comparison with pure sparsity-based fine-tuning.

Essential References Not Discussed: The coverage of low-rank, quantization, and other compressive approaches is fairly thorough. However, methods focusing purely on model sparsity as an alternative – e.g., forcing adapter updates to remain in a sparse subset – only receive brief mention. A more direct empirical comparison would strengthen the position of SketchTune among all "competing" approaches.

Other Strengths And Weaknesses:

Strengths: Principled approach that is complementary to low-rank updates. Strong empirical results vs. both low-rank adapters and quantized fine-tuning. Dedicated GPU kernel designs.

Weaknesses: Requires a calibration pass on data to compute Hessians and sketch the pre-trained model, which can add substantial overhead. Sparse adapters are not thoroughly compared.
Other Comments Or Suggestions: NA

Questions For Authors:
- Time/Compute Overhead: Have you explored sampling a smaller subset of C4 or using fewer Hessian blocks to reduce overhead, and what are the trade-offs in final accuracy?
- Sparse Adapters: Could row-wise sketches be combined or compared directly with a learned sparse subset of weights (like Diff-Pruning)?
- Dynamic Remapping: Would re-mapping columns or recalculating the Hessian mid-training yield significantly better final performance, or do diminishing returns make this impractical?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: Thank you for the thorough review and thoughtful feedback. Below, we address your questions and concerns.

## **[Theoretical Claims]**

The effectiveness of LoRA and SketchTune depends on the structure of the true update matrix $\Delta$. If $\Delta$ is low-rank, LoRA is favored; if $\Delta$ aligns with the mapping matrix, SketchTune performs better. Theorem 3.1 addresses how to choose between methods without detailed knowledge of $\Delta$. Assuming only the singular value distribution, the most neutral case is when $\Delta$ is uncorrelated with the mapping. Under this assumption, Theorem 3.1 shows that as $\Delta$ becomes less low-rank, SketchTune is increasingly likely to outperform LoRA. We adopt the uncorrelated setting in Theorem 3.1 to avoid bias toward either method. Empirically, however, our analysis (Figure 1) shows that $\Delta$ is correlated with the mapping in a way that favors SketchTune.

## **[Experimental Designs 1 - More Data-Dependent Quantization + LoRA Baselines]**

We performed an additional comparison with data-dependent AQLM [4] quantization + LoRA baselines. We fine-tuned LoRA adapters with a 2-bit AQLM Llama-2-7B model on GSM8K using a learning rate of 1e-5 and a batch size of 16 for 5 epochs. We report the results in the table below. We are actively trying other hyperparameters to improve performance.

|Model|Method|Trainable Param (M)|GSM8K|
|-|-|-|-|
|Llama-2-7B|$\text{AQLM}_{\text{2-bit}}$|159.91|2.81|
| |$\text{SketchTune}_{GPR=4}$|21.75|**29.95**|

## **[W1 – Additional Data for Calibration & Sketching Overhead]**

The sketching step is efficient, requires only a small calibration dataset, and is performed once per base model. Specifically, we use just 128 sequences of 2048 tokens, which is minimal compared to typical fine-tuning datasets. For further details and results, we kindly refer you to our response to Reviewer WZr9 under **[W1]**.
## **[W2 - Sparse adapters are not thoroughly compared]**

Tables 1 and 2 in our paper include comparisons with a sparsity-based method, S2FT. Additionally, while we aimed to compare with Diff-Pruning [1], we encountered a challenge: the official implementation does not support LLMs such as Llama. In the table below, we instead provide an additional comparison with SpIEL [2] and SMT [3], two recent sparsity-based PEFT methods, reporting average accuracy across eight commonsense reasoning tasks.

|Model|Method|Base Model (GB)|Trainable Param (M)|Avg Acc.|
|-|-|-|-|-|
|Llama-2-7B|SpIEL|13.48|55.9|78.3|
| |SMT (Best)|13.48|330.9|83.4|
| |$\text{SketchTune}_{GPR=4}$|4.05|87.0|**83.6**|
|Llama-3-8B|SpIEL|16.06|47.2|82.7|
| |SMT (Best)|16.06|202.8|**87.2**|
| |$\text{SketchTune}_{GPR=4}$|5.92|88.1|87.0|

## **[Q1 - Time/Compute Overhead]**

We ran additional experiments to assess the impact of using smaller C4 subsets for calibration, evaluating model quality via WikiText2 perplexity and MMLU accuracy. As shown in the table below, larger calibration sets improve model quality but **do not significantly increase sketching time**, since the dominant cost in sketching comes from clustering to obtain the $k$ centers (Equation 4). These results highlight a trade-off: even a single sequence yields a usable sketch, but larger sets lead to better performance.

|Model|# of Sequences|Sketch Time (min)|WikiText2 Perplexity|MMLU Accuracy|
|-|-|-|-|-|
|Llama-3-8B|128|39.88|6.52|60.56|
| |32|36.62|6.59|60.36|
| |8|32.88|6.65|60.40|
| |1|31.70|7.00|58.58|

## **[Q2 - Sparse Adapters]**

Yes, SketchTune can be combined with Diff-Pruning for greater parameter efficiency. This involves keeping the sketched parameters frozen and learning a sparse, task-specific update $\delta_\tau$, so that $w_{sketched, \tau} = w_{sketched} + \delta_\tau$.
Following Diff-Pruning, $\delta_\tau$ can be modeled as a gated update using a Hard-Concrete distribution to approximate a binary mask $z_\tau$, with $\delta_\tau = z_\tau \odot w_\tau$, where $w_\tau$ is the dense task-specific update. Implementing this extension is non-trivial, and we defer it to future work.

## **[Q3 - Dynamic Remapping]**

Since each sketched parameter is shared by multiple model parameters, re-mapping columns or recalculating the Hessian mid-training may help better align the sketch with the evolving gradients, potentially leading to improved final performance. Yet such an approach may introduce non-trivial computational overhead, as it would require additional clustering steps and second-order estimation during training. We defer a thorough investigation of dynamic remapping strategies to future work.

**References**
1. Parameter-efficient transfer learning with diff pruning
2. Scaling sparse fine-tuning to large language models
3. Sparse Matrix in Large Language Model Fine-tuning
4. Extreme compression of large language models via additive quantization
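For illustration, the Hard-Concrete gate $z_\tau$ mentioned in the Diff-Pruning discussion can be sketched as follows. This is our own toy rendering of the gate sampling from Louizos et al. (2018), not the deferred extension itself; the parameter values ($\beta$, $\gamma$, $\zeta$) and gate logits are assumptions chosen for illustration.

```python
import numpy as np

def hard_concrete_gate(log_alpha, beta=2/3, gamma=-0.1, zeta=1.1, rng=None):
    """Sample an approximately-binary mask z in [0, 1] per weight.
    A stretched sigmoid of logistic noise is clipped so that gates can
    be exactly 0 (pruned) or exactly 1 (kept)."""
    if rng is None:
        rng = np.random.default_rng(0)
    u = rng.uniform(1e-6, 1 - 1e-6, size=np.shape(log_alpha))
    s = 1.0 / (1.0 + np.exp(-(np.log(u) - np.log(1 - u) + log_alpha) / beta))
    return np.clip(s * (zeta - gamma) + gamma, 0.0, 1.0)

# Assumed per-weight gate logits: very negative -> gate closes,
# very positive -> gate opens, near zero -> stochastic.
log_alpha = np.array([-8.0, 0.0, 8.0])
z = hard_concrete_gate(log_alpha)
w_task = np.array([0.3, -1.2, 0.7])  # toy dense task-specific update
delta = z * w_task                    # sparse gated update delta_tau
```

At train time `log_alpha` would be learned with an L0-style penalty so that most gates close, yielding the sparse $\delta_\tau$.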
Summary: The paper proposes SketchTune, which uses a learned sketching algorithm to compress the LLM into a small set of shared sketched parameters and fine-tunes those parameters for adaptation. The proposed approach reduces model size while preserving the pre-trained capabilities of the full model.

## Update after rebuttal

I appreciate the authors' efforts in providing a response, and most of my concerns have been addressed. Accordingly, I will increase my score.

Claims And Evidence: Yes, the paper provides extensive experiments and theoretical analysis to support its claims.

Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria are reasonable.

Theoretical Claims: I find Section 2.1 to be slightly misleading, as its title, "Motivation: Weight Updates Are Far from Low-Rank," suggests that the section will primarily discuss the fundamental characteristics of weight updates and why they are far from low-rank. However, the text mainly focuses on empirically comparing the effectiveness of low-rank matrices and sketching techniques in terms of approximation error, which seems more like an outcome rather than a motivation. Since the authors prove that SketchTune is well-suited to approximate $\Delta$ when $\eta$ is close to 0 and $\Delta$ is nearly full-rank in Section 3, providing a more in-depth analysis of empirical observations on whether weight updates of LLMs are actually close to full-rank would better support the authors' theoretical claims. This would also clearly establish the motivation for why weight updates are far from low-rank.

Experimental Designs Or Analyses: In LLM fine-tuning, instruction-following benchmark results are typically included (e.g., the Alpaca GPT-4 dataset with MT-Bench scores; refer to the S2FT paper as an example). Given the significant potential of the proposed method as an efficient LLM compression and fine-tuning technique, I am curious whether it performs well on instruction-following benchmarks.
Additionally, why do the authors present GPR=4 only in Table 2, while Table 1 includes GPR=1, 2, 4, and 8?

Supplementary Material: I reviewed the supplementary material overall, particularly focusing on the experimental details.

Relation To Broader Scientific Literature: This paper introduces a new research direction that overcomes the limitations of existing PEFT methods based on low-rank or sparse assumptions.

Essential References Not Discussed: I think the authors included the necessary references to support their claims.

Other Strengths And Weaknesses: The authors develop a custom CUDA kernel optimized for the specific operations required by SketchTune.

Other Comments Or Suggestions: What is the overhead of sketching time in Table 9 of the Appendix? It would be helpful to know how much memory and computation are required for learning to sketch LLM weights (described in Algorithm 1).

Questions For Authors: Please refer to my questions for authors in the previous sections.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed review and insightful feedback. We address your concerns as follows. ## **[Theoretical Claims 1 - Ambiguous Title in Section 2.1 & More in-depth Empirical Observation Analysis]** We thank the reviewer for the insightful suggestion. We conducted additional analysis to examine whether the weight updates in LLMs are close to full rank. In the table below, we report **the minimum rank and the standard deviation**, across all layers, required to explain a given percentage of the variance in the weight updates. This was computed by performing SVD on each weight update and counting how many top singular values are needed to reach the target variance threshold. The results show that the weight updates are relatively **high-rank**. This suggests that standard LoRA configurations (e.g., rank 32) capture only a limited portion of the variance, and may not effectively approximate the full weight updates. We will incorporate this analysis into the final version of the paper to better align with the section title. | Model | Max Rank | 25% Variance | 50% Variance | 75% Variance | 90% Variance | 95% Variance | |--------------------------------------|---------------|----------------------|------------------------|------------------------|------------------------|------------------------| | Llama 2 7B, fine-tuned on Vicuna | 4096 | 140.4 ± 62.7 | 460.9 ± 178.2 | 1089.0 ± 358.5 | 1866.4 ± 520.5 | 2354.0 ± 581.4 | | Llama 3 8B, fine-tuned on OpenChat | 1024 or 4096 | 197.5 ± 111.8 | 562.5 ± 304.7 | 1180.5 ± 607.7 | 1848.8 ± 903.9 | 2221.8 ± 1051.3 | | Qwen 2.5 7B, fine-tuned on Code | 512 or 3584 | 235.8 ± 164.1 | 594.9 ± 385.6 | 1138.2 ± 691.5 | 1675.3 ± 960.5 | 1958.1 ± 1085.4 | ## **[Experimental Designs 1 – Additional Instruction-Following Benchmark]** We conducted additional experiments by instruction-tuning Mistral-7B on the Alpaca-GPT4 dataset for one epoch, following the experimental setup in S2FT. 
In the table below, we report MT-Bench scores evaluated by GPT-4, with baseline results taken from the S2FT paper. Despite using a smaller base model, SketchTune outperforms the baselines. | Model | Method | Base Model (GB) | Writing | Roleplay | Reasoning | Code | Math | Extraction | STEM | Humanities | Avg. | |---|---|---|---|---|---|---|---|---|---|---|---| | Mistral-7B | Full FT | 14.48 | 5.50 | 4.45 | 5.45 | 2.50 | 3.25 | 5.78 | 4.75 | 5.45 | 4.64 | | | $\text{S}^2\text{FT}$ | 14.48 | 6.95 | 4.40 | 5.50 | 2.70 | 3.55 | 5.95 | 6.35 | 6.75 | 5.27 | | | $\text{SketchTune}_{\text{GPR}=4}$ | 4.66 | 4.60 | 5.20 | 9.23 | 3.05 | 4.80 | 7.45 | 8.13 | 8.45 | **6.36** | ## **[Experimental Designs 2 – GPR Choice in Tables 1 & 2]** We reported results for GPR=4 in Table 2 due to the time-consuming nature of fine-tuning on the Commonsense170K dataset, which is 17× larger than the Math10K dataset used in Table 1. A single training run takes several days to complete. We selected GPR=4 as a practical trade-off between memory efficiency and final performance. We are happy to conduct additional experiments with other GPR values and include the results in the camera-ready version. ## **[Comments 1 – Overhead of Sketching Time and Memory Requirement]** In Table 9 of the paper, the reported sketching time refers to the time required to compress a base model into a sketched model using a single RTX 8000-48GB GPU. In the table below, we provide additional sketching time (INT4, GPR=4) and peak GPU memory usage for models of various sizes, using a single A100-40GB GPU. Thanks to the layer-wise optimization objective (Equation 1), model sketching scales efficiently to large models (e.g., 70B) on a single GPU. Additionally, we plan to upload sketched models to the HuggingFace Model Hub, enabling users to directly download and fine-tune without repeating the sketching step. 
| Model | Original Size | Sketched Size | Max GPU Memory | Sketching Time | | ------------- | ------------- | ------------- | --------------- | --------------- | | Llama-3.2-3B | 6.43G | 3.18G | 9.92 GB | 20.7 mins | | Llama-3.1-8B | 16.07G | 5.92G | 18.37 GB | 41.62 mins | | Llama-3.1-70B | 141.12G | 40.15G | 28.05 GB | 266.87 mins | --- Rebuttal Comment 1.1: Comment: I appreciate the authors’ efforts in providing a response. Regarding the overhead of sketching time and memory requirements, could you also provide the cost of fine-tuning sketches? This would help me better understand the relative cost of learning to sketch compared to fine-tuning sketches. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the thoughtful comment. To address the question, we have added a table below presenting the time cost of sketching and fine-tuning on a single A100-40GB GPU. As shown, fine-tuning accounts for the majority of the end-to-end time, especially for larger datasets like Commonsense170K. We also note that sketching is a one-time process per model. To improve usability, we will release the sketched models on the Hugging Face Model Hub, allowing users to directly download them and bypass the sketching step. | Model | Dataset | Epochs | Sketching Time | Fine-tuning Time | Sketching Time / Total Time | |---|---|---|---|---|---| | Llama 3 8B (INT4, GPR=4) | Commonsense170K | 2 | 41.62 mins | 1244.43 mins | 3.24% | | Llama 3 8B (INT4, GPR=4) | Math10K | 4 | 41.62 mins | 160.17 mins | 20.60% |
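As an aside for reproducibility: the rank-versus-variance statistics reported in the first response above (minimum number of top singular values needed to explain a given fraction of a weight update's variance) can be computed with a short NumPy sketch. Variable names here are illustrative, not taken from the paper's code:

```python
import numpy as np

def rank_for_variance(delta, thresholds=(0.25, 0.50, 0.75, 0.90, 0.95)):
    """Minimum number of top singular values of `delta` needed to explain
    each target fraction of its total variance (sum of squared singular
    values), as in the rank table above."""
    s = np.linalg.svd(delta, compute_uv=False)   # singular values, descending
    energy = np.cumsum(s ** 2) / np.sum(s ** 2)  # cumulative explained variance
    return {t: int(np.searchsorted(energy, t) + 1) for t in thresholds}

# Toy stand-in for one layer's weight update: a random Gaussian matrix is
# close to full rank, so many singular values are needed at high thresholds.
rng = np.random.default_rng(0)
delta = rng.standard_normal((256, 256))
ranks = rank_for_variance(delta)
```

For the reported tables, `delta` would be one layer's fine-tuned-minus-base weight matrix, with the per-threshold ranks then aggregated (mean ± std) across layers.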
Summary: The paper proposes an alternative to parameter efficient fine-tuning of LLMs by using sketching to create a low-dimensional representation of the weight matrices which is theoretically shown to be better for certain classes of matrices. Experiments on Llama models shows that the algorithm is able to outperform PEFT while using smaller base models and comparable trainable parameters. ## update after rebuttal The authors have addressed my concerns by adding experiments showing that the overhead of sketch generation is acceptable and clarifying that their choice of hyperparameters allows for significant compression despite grouping. Therefore, I have increased my score. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: I checked the proofs at a high-level and they appear correct to me. Experimental Designs Or Analyses: I checked the experiments in the main paper, and they appear correct to me. Supplementary Material: I reviewed the theoretical analysis section at a high level. Relation To Broader Scientific Literature: Strengths: 1. Sketching significantly reduces the model size while preserving the pre-trained capabilities of the model. 2. Experiments show that it achieves higher accuracy than PEFT baselines with comparable number of trainable parameters. Weaknesses: 1. The process of generating the sketches appears to be more expensive than LoRA 2. It seems like learning separate c-dimensional sketches for each of the 'g' subgroups in a k-dimensional row can reduce the memory efficiency since we need kg << c to get significant memory saving and that might be difficult to satisfy Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: See questions below Questions For Authors: The derivation of (6) is not clear to me. Please explain how it is derived. Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful feedback and for acknowledging the empirical effectiveness of our method. We address your concerns and questions below: ## **[W1 - Sketch generation process appears to be more expensive than LoRA]** While SketchTune introduces an additional sketching step before fine-tuning, this preprocessing is **fast, resource-efficient, and one-time** per base model. In the table below, we report additional end-to-end sketching time (INT4, GPR=4) for different sized models, using a single A100-40GB GPU. Thanks to our layer-wise optimization objective (Equation 1), sketching scales efficiently to large models (**70B**) with a single GPU. | Model | Original Size | Sketched Size | Max GPU Memory | Sketching Time | | ------------- | ------------- | ------------- | --------------- | --------------- | | Llama-3.2-3B | 6.43G | 3.18G | 9.92 GB | 20.7 mins | | Llama-3.1-8B | 16.07G | 5.92G | 18.37 GB | 41.62 mins | | Llama-3.1-70B | 141.12G | 40.15G | 28.05 GB | 266.87 mins | The sketching overhead is comparable to existing compressed PEFT baselines. For example, LoftQ [1] reports a quantization overhead of **21 seconds** for a 4096 $\times$ 4096 matrix, while SketchTune’s overhead is only **5 seconds (4.2x speedup)**. Moreover, a single sketched model can be reused across multiple downstream tasks. As demonstrated in Tables 1 and 2 of the paper, the same sketched models are effective across different domains, i.e. math and commonsense reasoning. We also plan to release these models on the HuggingFace Model Hub, enabling users to directly download and fine-tune without repeating the sketching step. ## **[W2 - Learning separate sketches for each of the $g$ subgroups may reduce memory efficiency]** Our sketching approach is memory-efficient and effectively compresses the model weights. 
Specifically, each row $\mathbf{w} \in \mathbb{R}^{1 \times c}$ is divided into $g$ non-overlapping sub-rows $\mathbf{w}' \in \mathbb{R}^{1 \times \frac{c}{g}}$, each of which is sketched into a $k$-dimensional vector. The resulting sketched row $\mathbf{w}_{\text{sketched}} \in \mathbb{R}^{1 \times gk}$ leads to a compression factor of $\frac{c}{gk}$. For Llama 2 7B, the row size $c$ is either 4096 or 11008. In our experiments, we set $k \in \\{4, 8, 16\\}$ and $g \in \\{1, 2, 4, 8\\}$, resulting in a compression factor of at least $32\times$ for Llama 2 7B. Notably, the compression factor increases with larger models, due to larger $c$ and fixed $k$ and $g$. Thus, even with subgrouping, the sketching approach remains highly memory-efficient across model scales. ## **[Q1 - The derivation of (6) is not clear to me. Please explain how it is derived.]** Thank you for pointing this out. We realize that Equation (6) contains a typo: the derivative fractions are upside down. We apologize for the oversight and will correct this in the camera-ready version. The correct expression is: $\frac{\partial \mathcal{L}}{\partial w_{\text{sketched}}} = \frac{\partial \mathcal{L}}{\partial y} (MX)^\top$ The equation is derived as follows. Let $X$ be the layer input, $w_{\text{sketched}}$ the sketched row, and $M$ the mapping matrix. The forward pass computes: $y = w_{\text{sketched}}MX$ During backpropagation, we apply the chain rule to compute the gradient of the loss $\mathcal{L}$ with respect to $w_{\text{sketched}}$: $\frac{\partial \mathcal{L}}{\partial w_{\text{sketched}}} = \frac{\partial \mathcal{L}}{\partial y} \cdot \frac{\partial y}{\partial w_{\text{sketched}}} = \frac{\partial \mathcal{L}}{\partial y} (MX)^\top$ We will clarify this derivation in the revised paper. **References** [1] Li, Yixiao, et al. "LoftQ: LoRA-Fine-Tuning-aware Quantization for Large Language Models." The Twelfth International Conference on Learning Representations. 
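To make the Q1 derivation above concrete, a minimal numerical check (toy shapes, NumPy; purely illustrative, not SketchTune's CUDA kernel) confirms that the chain-rule gradient matches central finite differences:

```python
import numpy as np

# Toy shapes, purely illustrative: a sketched row of size g*k = 8, a mapping
# matrix M (sketch space -> row space) with c = 16, and a layer input X.
rng = np.random.default_rng(0)
gk, c, n = 8, 16, 5
w_sketched = rng.standard_normal((1, gk))
M = rng.standard_normal((gk, c))
X = rng.standard_normal((c, n))

# Forward pass: y = w_sketched M X; take a simple scalar loss
# L = 0.5 * ||y||^2, for which dL/dy = y.
y = w_sketched @ M @ X
dL_dy = y

# Chain rule as in the corrected Equation (6): dL/dw_sketched = dL/dy (M X)^T
grad_analytic = dL_dy @ (M @ X).T

# Central finite differences over each sketched coordinate
eps = 1e-6
grad_fd = np.zeros_like(w_sketched)
for i in range(gk):
    wp, wm = w_sketched.copy(), w_sketched.copy()
    wp[0, i] += eps
    wm[0, i] -= eps
    Lp = 0.5 * np.sum((wp @ M @ X) ** 2)
    Lm = 0.5 * np.sum((wm @ M @ X) ** 2)
    grad_fd[0, i] = (Lp - Lm) / (2 * eps)
```

The two gradients agree to numerical precision, since the loss is quadratic in `w_sketched` and central differences are exact for quadratics up to rounding.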
--- Rebuttal Comment 1.1: Comment: Thank you for addressing all my concerns. I have increased my score.
Text-to-LoRA: Instant Transformer Adaption
Accept (poster)
Summary: **Summary of Contributions:** The paper introduces **Text-to-LoRA (T2L)**, a hypernetwork model designed to adapt Large Language Models (LLMs) on the fly based on natural language descriptions of target tasks. T2L aims to overcome the limitations of traditional fine-tuning by constructing Low-Rank Adaptation (LoRA) parameters in a single forward pass. The authors trained T2L on a suite of pre-trained LoRA adapters and demonstrate that the reconstructed LoRA instances achieve comparable performance to task-specific adapters. Furthermore, the paper claims that T2L can compress multiple LoRA instances and generalize to unseen tasks zero-shot using natural language descriptions. The work proposes a step towards democratizing the specialization of foundation models with reduced computational requirements. * **Literature:** * The paper adequately places its work within the context of current research on foundation model adaptation, parameter-efficient fine-tuning (specifically LoRA), and hypernetworks. * Relevant works on LoRA compression and combination are also cited. The discussion of related hypernetwork approaches for LLM adaptation (Section 6) highlights the novelty of T2L's use of natural language instructions for zero-shot generalization. * The citations appear to be appropriate, covering key papers in the relevant areas. **Overall Assessment:** The paper presents an interesting and novel approach (T2L) for on-the-fly adaptation of LLMs using natural language task descriptions. The empirical results demonstrate the potential of this method for compressing LoRAs and achieving zero-shot generalization to unseen tasks, outperforming certain baselines. The ability to adapt different LLM architectures is also a strength. However, the current state of T2L still faces significant challenges. 
The gap in zero-shot performance compared to task-specific LoRAs, the generalization failure of the reconstruction training scheme, and the dependence on high-quality generated task descriptions are major limitations that need to be addressed in future work. Claims And Evidence: **Claims Supported by Evidence:** * **T2L can reconstruct pre-trained LoRAs and match their performance on corresponding test sets**. Table 1 shows that T2L, trained via reconstruction loss on 9 benchmark tasks, can fully recover and even outperform the benchmark-specific LoRA adapters (highlighted in green). This is demonstrated using both one-hot and natural language task embeddings. * **T2L can compress hundreds of LoRA adapters**. Figure 3 shows that T2L can be trained on an increasing number of tasks (up to 479). While there is a performance drop as the reconstruction error increases, the architectures maintain a significant portion of the oracle's performance even at higher error rates. * **T2L can generate useful LoRA adapters for unseen tasks using natural language descriptions (zero-shot generalization)**. Table 2 demonstrates that T2L trained with supervised fine-tuning (SFT) on the Super Natural Instruction (SNI) dataset outperforms a multi-task LoRA baseline on 10 unseen benchmark tasks. The bold numbers indicate improvements over the multi-task LoRA. Furthermore, visualization in Figure 4 shows a clear clustering of T2L activations based on the benchmark tasks, suggesting task-specific adaptation. * **Different T2L architectures (L, M, S) offer complexity-performance trade-offs**. The paper presents results for all three architectures in various experiments, showing that the larger models (L and M) generally achieve better performance but have more parameters, while the smaller model (S) is more parameter-efficient. * **SFT training of T2L generally leads to better zero-shot generalization compared to reconstruction training**. 
Table 6 shows a clear performance gap between T2L instances trained via reconstruction and SFT, with SFT achieving higher average benchmark performance. The authors attribute this to the potential issue of similar tasks having non-clustered LoRA adapters in the weight space, which reconstruction training struggles with. * **Task descriptions aligned with the target task are crucial for generating effective LoRAs**. Table 5 shows that reconstruction-trained T2L performs significantly better when provided with training or evaluation descriptions (aligned) compared to random strings or training descriptions from other benchmarks (unaligned). **Potential Areas Where Claims Might Need Further Scrutiny or Are Acknowledged Limitations:** * **Zero-shot performance still does not fully reach that of task-specific LoRAs**. While T2L shows promising zero-shot generalization, Table 2 indicates a performance gap compared to the oracle task-specific LoRAs. The authors acknowledge this as a significant challenge. * **The reliance on high-quality generated task descriptions**. The discussion section mentions that the experiments rely on GPT-4o mini generated descriptions, and performance might degrade if users provide lower-quality descriptions in real-world scenarios. * **The nature of generalization in reconstruction training**. Appendix D is mentioned as containing compelling evidence why reconstruction-trained T2L cannot generalize. Section 5.4 further elaborates on the limitations due to the potential lack of clustering of similar task LoRAs in the weight space. Figure 5 shows no correlation between the cosine similarity of adapter weights and task embedding similarity. * **The choice of LoRA as the sole output space**. The limitations section mentions that there might be more efficient ways to modulate LLMs given a text description than just generating LoRA adapters. For example System prompts in LLMs can adopt role playing etc. * **Scalability to larger base models**. 
The potential for T2L trained on smaller base models to transfer effectively to larger models within the same architecture class is noted as an open area for exploration. Methods And Evaluation Criteria: **Strengths of the Proposed Methods and Evaluation:** * **Relevance to the Problem:** The core idea of T2L, a hypernetwork generating LoRA adapters from task descriptions, directly addresses the challenge of adapting LLMs for new tasks without extensive fine-tuning and data curation. This is a significant step towards democratizing specialization of foundation models. * **Parameter-Efficient Adaptation:** Utilizing LoRA for adaptation is a well-established parameter-efficient fine-tuning technique, making the generated adapters lightweight and easier to integrate. * **Zero-Shot Generalization Focus:** The emphasis on zero-shot LoRA generation for unseen tasks is crucial for the stated goal of "instant" adaptation, as it aims to bypass the need for task-specific datasets. * **Comprehensive Evaluation Benchmarks:** The paper uses a diverse set of 10 widely used benchmarks covering a variety of LLM capabilities such as reasoning (Arc), math (GSM8K), science (Arc, OpenBookQA), coding (HumanEval, MBPP), and general knowledge (BoolQ, Hellaswag, PIQA, Winogrande). This broad coverage provides a good initial assessment of T2L's generalization abilities across different types of tasks. * **Comparison to Relevant Baselines:** Evaluating against task-specific LoRAs, multi-task LoRA, and other zero-shot methods like Arrow Routing provides a context for understanding T2L's performance relative to existing approaches. * **Ablation Studies:** The paper includes several ablation studies that examine the impact of different T2L architectures, training schemes (reconstruction vs. SFT), task embedding models, and task descriptions. These studies offer valuable insights into the factors influencing T2L's performance and robustness. 
In conclusion, the proposed methods and evaluation criteria in the "Text-to-LoRA" paper provide a strong foundation for demonstrating the potential of language-based instant adaptation of LLMs. The use of relevant techniques (hypernetworks, LoRA), a diverse set of benchmarks, and thorough ablation studies supports the main claims of the paper. However, a more critical perspective highlights potential limitations regarding the representativeness of benchmarks for all real-world adaptation needs, the reliance on high-quality task descriptions, the scope of LoRA adaptation, and the practical implications of "instant" adaptation. Future work could explore the applicability of T2L to a broader range of tasks and foundation models, investigate alternative adaptation techniques, and consider more nuanced evaluation metrics relevant to diverse adaptation scenarios. Theoretical Claims: The paper do not contain any explicit theoretical claims that are accompanied by mathematical proofs. Experimental Designs Or Analyses: **1. LoRA Compression Experiment (Section 4.1 and Table 1, Figure 3):** * **Design:** This experiment aims to determine if T2L can recover the performance of task-specific LoRAs through reconstruction training. It trains task-specific LoRAs (oracles) on benchmark tasks and then trains T2L to distill these LoRAs using either one-hot or natural language task embeddings. The performance of the reconstructed LoRAs is then compared to the oracle LoRAs on the respective benchmark test sets. Figure 3 further explores the impact of increasing the number of training tasks on the reconstruction error and relative performance. * **Soundness/Validity:** * The use of **benchmark-specific LoRAs as "ground truth" for distillation** is a reasonable approach to assess T2L's ability to compress and reproduce the functionality of individual adapters. 
* Comparing performance with **both one-hot and natural language embeddings** helps to understand the importance of semantic task representation for reconstruction. * The analysis of performance with **increasing reconstruction error** (Figure 3) provides insights into the lossy compression capabilities of T2L. * **Potential Issues:** * In the Table 1 experiment, T2L indirectly sees the benchmark tasks during training as it learns to distill benchmark-specific LoRAs. This makes it less of a pure "zero-shot" scenario for these specific benchmarks in this particular experiment, although the goal here is compression, not generalization to completely unseen tasks. * The **hypothesis that the performance gain over oracle LoRAs on some benchmarks (PIQA, WG) comes from lossy compression acting as regularization** is an interesting interpretation but could benefit from further investigation to confirm this mechanism. **2. Zero-Shot LoRA Generation Experiment (Section 4.2 and Table 2):** * **Design:** This experiment investigates T2L's ability to generate useful LoRA adapters for unseen tasks. T2L is trained with Supervised Fine-Tuning (SFT) on 479 SNI tasks using natural language task descriptions. The generated LoRAs are then evaluated on 10 held-out benchmark tasks for which T2L has not seen specific LoRAs during training. The performance is compared against a base model, prepending task descriptions, 3-shot ICL, average LoRA, multi-task LoRA, and Arrow Routing. * **Soundness/Validity:** * The **use of held-out benchmark tasks** is crucial for assessing zero-shot generalization. * The **comparison with a comprehensive set of baselines** provides a good context for evaluating the effectiveness of T2L-generated LoRAs. * Averaging performance over **three different instances of task descriptions** for each benchmark helps to account for the variability in description quality and their impact on T2L. 
* **Potential Issues:** * As we discussed previously, the **representativeness of the chosen benchmarks** for all possible "unseen tasks" might be a limitation. * The performance comparison with Arrow Routing is noted to be **indirect due to differences in the set of LoRA adapters and training tasks**, as well as potential differences in evaluation prompts. This makes direct conclusions about superiority challenging. **3. Ablation Studies (Section 5):** * **Increasing Training Compute (Section 5.1 and Table 3):** This study examines the scalability of T2L by varying the number of training tasks while proportionally scaling the training budget. The results suggest that increasing training data generally improves zero-shot performance. This design seems sound for investigating the impact of scale. * **Task Embedding Models (Section 5.2 and Table 4):** This ablation compares the zero-shot performance of T2L trained using two different task embedding models (gte-large-en-v1.5 and Mistral-7B-Instruct). The comparable performance across models suggests robustness to the embedding method. This is a well-designed experiment to test the dependency on a specific embedding technique. * **Varying Task Descriptions (Section 5.3 and Table 5):** This experiment investigates the impact of different types of input task descriptions (train, eval, random, train (random)) on the performance of reconstruction-trained T2L. The significant performance drop with unaligned descriptions highlights the importance of task description relevance. This ablation effectively demonstrates the sensitivity of the method to the quality and alignment of the input. * **Training Schemes (Section 5.4 and Table 6):** This ablation directly compares the zero-shot performance of T2L trained via LoRA reconstruction versus Supervised Fine-Tuning (SFT) with roughly equal training time. 
The significantly better performance of SFT-trained T2L supports the authors' hypothesis about the limitations of reconstruction for generalization. This is a crucial experiment for understanding the optimal training strategy for T2L's zero-shot capabilities. **Overall Considerations:** * The paper generally employs **standard experimental practices** in the field, including the use of benchmark datasets, comparison with relevant baselines, and ablation studies. * The authors are **transparent about certain limitations**, such as the indirect comparison with Arrow Routing and the potential impact of real-world task descriptions. * The **reliance on generated task descriptions** is a key aspect of the approach. While the use of a powerful LLM for generation aims to ensure quality, the potential for variability and misalignment in real-world scenarios remains a factor to consider. The ablation on varying task descriptions (Section 5.3) directly addresses this. * The **scope of evaluation primarily focuses on NLP benchmarks**. While diverse within that domain, the applicability and evaluation of T2L for tasks beyond these benchmarks (as also noted in our previous discussion) would provide a more comprehensive understanding of its capabilities. Supplementary Material: No Relation To Broader Scientific Literature: * **Parameter-Efficient Fine-Tuning (PEFT):** The paper directly addresses the limitations of traditional fine-tuning, which requires expensive and lengthy training. **T2L leverages Low-Rank Adaptation (LoRA), a prominent PEFT technique, but innovates by generating LoRA adapters on the fly.** This contrasts with the typical approach of training separate LoRA adapters for each downstream task. * **Hypernetworks for Neural Network Adaptation:** T2L falls under the umbrella of hypernetworks, which are neural networks that generate parameters for other networks. 
**The paper builds upon the idea of using hypernetworks for adaptation but distinguishes itself by using natural language task descriptions as input to the hypernetwork.** Prior work in this area often used learned task identifiers or relied on more constrained input formats. T2L's ability to generate task-specific LoRAs from natural language represents a significant step towards more flexible and user-friendly adaptation. * **Zero-Shot Learning:** T2L's capability to generate effective LoRA adapters for completely unseen tasks based solely on their natural language descriptions connects to the field of zero-shot learning. **While previous work explored zero-shot adaptation using hypernetworks in limited contexts (e.g., English dialects), T2L demonstrates task-wise zero-shot generalization across a diverse set of NLP benchmarks.** * **Meta-Learning:** The training of T2L, especially through supervised fine-tuning on a distribution of downstream tasks, can be seen as a form of meta-learning. **T2L learns a general adaptation mechanism from a variety of tasks, enabling it to quickly adapt to new tasks at inference time.** This aligns with the goal of meta-learning to learn how to learn. * **Prompt Engineering and In-Context Learning (ICL):** The paper compares T2L against baselines like prepending task descriptions and few-shot ICL. **While these methods also aim to adapt LLM behavior, T2L offers a different approach by directly modifying the model's parameters through generated LoRA adapters, potentially providing more control and efficiency.** The comparison highlights the strengths of T2L in scenarios where in-context examples might be limited or costly. * **Compression of Adapters:** The paper also investigates the ability of T2L to compress pre-trained LoRA adapters. This relates to the growing body of work on efficiently serving and deploying large numbers of adapters. 
**T2L offers a method to implicitly compress multiple adapters into a single hypernetwork, allowing for on-demand generation of task-specific parameters.** In summary, **T2L advances the state of the art by combining the efficiency of LoRA with the flexibility of hypernetworks and the generalizability of natural language instructions to achieve instant, zero-shot adaptation of large language models.** It moves beyond task-specific fine-tuning and learned task identifiers, offering a more intuitive and broadly applicable approach to specializing foundation models. Essential References Not Discussed: None Other Strengths And Weaknesses: **3. Pros and Cons:** * **Pros:** * **Enables on-the-fly adaptation of LLMs based on natural language task descriptions**. * **Offers a computationally inexpensive way to generate task-specific LoRA adapters** (single forward pass). * **Demonstrates the potential for zero-shot generalization to unseen tasks**. * **Can compress hundreds of pre-trained LoRA adapters**. * **Outperforms multi-task LoRA and Arrow Routing in zero-shot evaluation**. * **Shows robustness to different task embedding models**. * **Generated LoRAs for semantically similar tasks cluster together in the activation space of T2L (SFT-trained)**. * **Applicable to different base LLM architectures** (Mistral, Llama, Gemma). * **Cons:** * **Zero-shot performance does not yet match that of task-specific LoRA adapters**. * **Reconstruction-trained T2L fails to generalize to unseen tasks**. * **Performance depends on the quality and alignment of the natural language task descriptions**. * **Relies on generated task descriptions for training and evaluation, which might not reflect real-world user inputs**. * **Compression of LoRAs is lossy**, leading to a potential drop in performance compared to the original adapters, although the paper shows recovery and even outperformance in some cases potentially due to regularization. 
* **Limited to LoRA as the output space**, and other potentially more efficient modulation techniques are not explored. Other Comments Or Suggestions: * **Minor Concerns:** * The quality of writing could be improved in certain sections for better clarity and conciseness. * Further exploration of the computational overhead of T2L itself (hypernetwork size and inference time) compared to storing and using individual LoRA adapters could be beneficial. **Detailed Evaluation:** * **Novelty, Relevance, and Significance:** * The idea of using a hypernetwork to directly generate LoRA adapters from task descriptions is **novel**. This approach potentially offers a more efficient and flexible way to adapt LLMs compared to fine-tuning individual LoRA adapters for each task. * The problem of efficiently adapting foundation models to specific tasks is **highly relevant**. Traditional fine-tuning is computationally expensive and requires task-specific datasets and hyperparameter tuning. T2L's language-based adaptation with minimal compute could be beneficial for practitioners. * The claimed ability to zero-shot generalize to unseen tasks is **significant**. If validated robustly, this could significantly reduce the need for task-specific data and training. The compression of multiple LoRAs into a single hypernetwork also presents a practical advantage. * However, the level of novelty in hypernetwork architectures for adaptation, as the paper acknowledges related works, might be incremental rather than revolutionary. The core novelty lies in applying this technique specifically to generate LoRA adapters from natural language task descriptions for zero-shot generalization. * **Soundness:** * The paper presents empirical evidence through various experiments. The reconstruction training results (Table 1) suggest that T2L can indeed recover the performance of trained LoRAs on seen tasks. 
* The zero-shot LoRA generation results (Table 2) show that SFT-trained T2L improves over a multi-task LoRA baseline, indicating that it can generate useful adapters for unseen tasks. The comparison with Arrow Routing also provides some context. * However, the paper also acknowledges that T2L **does not fully bridge the performance gap with task-specific LoRAs in a zero-shot manner**. This is a crucial limitation that needs to be considered when evaluating the soundness of the claims about achieving task-specific performance. * The finding that reconstruction-trained T2L fails to generalize to unseen tasks (Section 5.4) raises questions about the underlying adaptation mechanisms and the effectiveness of distillation in this context. The explanation regarding the lack of clustering of similar LoRAs in the weight space (Appendix D and Figure 5) provides a plausible reason but highlights a fundamental challenge. * The ablation studies, such as the impact of task embedding models (Table 4) and varying task descriptions (Table 5), add to the soundness by exploring the robustness of the approach. The scaling experiments (Table 3 and Figure 1) also provide insights into the relationship between training data and performance. * The use of generated task descriptions (Appendix K and Figure 7) is interesting but also introduces a dependency on the quality and consistency of these generated descriptions, which might not always be guaranteed in real-world scenarios. * **Quality of Writing/Presentation:** * The paper is generally well-structured and clearly presents the proposed method and experimental results. The use of figures and tables aids in understanding the concepts and findings. * The conceptual overview of T2L training (Figure 1) and the architectural variations (Figure 2) are helpful. 
* However, some parts of the paper, especially the appendices detailing the architectures and hyperparameters, could be more concisely integrated into the main body or presented more clearly for better readability. * While the language is mostly professional, there are instances where more precise phrasing could be used to avoid overstating the capabilities of T2L, especially regarding its zero-shot performance compared to task-specific LoRAs. Questions For Authors: 1. Authors found that a T2L trained via reconstruction fails to generalize to unseen tasks (Section 5.4). Given the analysis in Appendix D and Figure 5 suggesting that LoRAs of similar tasks are not necessarily close in the weight space, what are your hypotheses or planned future research directions to bridge this gap and potentially enable better generalization for reconstruction-trained T2L? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > - Zero-shot performance does not yet match that of task-specific LoRA adapters. > - Reconstruction-trained T2L fails to generalize to unseen tasks. > - Performance depends on the quality and alignment of the natural language task descriptions. > - Relies on generated task descriptions for training and evaluation, which might not reflect real-world user inputs. > - Compression of LoRAs is lossy, leading to a potential drop in performance compared to the original adapters, although the paper shows recovery and even outperformance in some cases potentially due to regularization. > - Limited to LoRA as the output space, and other potentially more efficient modulation techniques are not explored. We explicitly acknowledge these concerns as limitations of the current implementation of T2L and discuss them thoroughly in the paper. To further address these concerns, we provide additional clarifications here. --- > Zero-shot performance does not yet match that of task-specific LoRA adapters. Task-specific LoRAs are the performance ceiling. While T2L does not fully reach that ceiling, it gets very close for multiple tasks. We think of T2L as a first step towards efficient test-time adaptation of LLMs by providing compelling evidence that hypernetworks can effectively modulate modern frontier LLMs. We refer the reviewer to our response to reviewer sP2K for a potential performance improvement of T2L. --- > Performance depends on the quality and alignment of the natural language task descriptions. We provide a concrete failure-case analysis in Table 5 in Section 5.3 and in our response to reviewer sP2K. However, we believe that using an LLM for adjusting the description alignment could effectively sidestep the main failure case of T2L. 
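For concreteness, the core T2L mechanism discussed throughout this review and rebuttal (a single hypernetwork forward pass that maps a task-description embedding to LoRA matrices) can be sketched as follows. This is a minimal NumPy illustration with tiny, hypothetical shapes and a single linear map standing in for the learned hypernetwork; it is not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny, hypothetical sizes: task embedding dim, base weight shape, LoRA rank.
EMB, D_OUT, D_IN, RANK = 64, 256, 256, 4

# The "hypernetwork" here is just one linear map from the task embedding to
# the flattened LoRA factors; the real T2L uses a learned MLP per module.
W_hyper = rng.standard_normal((EMB, RANK * (D_IN + D_OUT))) * 0.01

def generate_lora(task_emb):
    """One forward pass: task embedding -> low-rank (A, B) LoRA factors."""
    flat = task_emb @ W_hyper
    A = flat[: RANK * D_IN].reshape(RANK, D_IN)
    B = flat[RANK * D_IN:].reshape(D_OUT, RANK)
    return A, B

A, B = generate_lora(rng.standard_normal(EMB))
delta_W = B @ A  # rank-RANK update that can be merged into a base weight
```

Generating an adapter is then a handful of matrix multiplies per module, which is why the rebuttal's cost accounting attributes almost all adaptation FLOPs to embedding the task description rather than to the hypernetwork itself.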
--- > Compression of LoRAs is lossy, leading to a potential drop in performance compared to the original adapters, although the paper shows recovery and even outperformance in some cases potentially due to regularization. We agree with the reviewer that lossy compression is not a strict disadvantage of T2L, as we show in the experiments that lossy compression of task-specific LoRAs can improve performance in some cases, potentially due to regularization. --- > Limited to LoRA as the output space, and other potentially more efficient modulation techniques are not explored. We agree that other potentially more efficient modulation techniques might exist. Since our focus in this work is on generating LoRA due to its widespread use, we leave this investigation for future work. We hope that the reviewer's concerns are adequately addressed by our response and the discussion provided in the paper. --- > Authors found that a T2L trained via reconstruction fails to generalize to unseen tasks (Section 5.4). Given the analysis in Appendix D and Figure 5 suggesting that LoRAs of similar tasks are not necessarily close in the weight space, what are your hypotheses or planned future research directions to bridge this gap and potentially enable better generalization for reconstruction-trained T2L? As mentioned by the reviewer, we explained our hypothesis in Appendix D that LoRAs of similar tasks are not necessarily close in the weight space, leading to memorization or overfitting of the reconstruction-trained hypernetwork. A potential fix for this problem could be replacing the reconstruction loss. A strong candidate would be contrastive learning based on the similarity of tasks in the semantic space as opposed to the weight space. Utilizing contrastive learning would lead to latent representations of tasks that cluster similar tasks together, potentially leading to better generalization. We leave this investigation for future work.
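The contrastive idea proposed in the last answer could, for instance, take the form of an InfoNCE-style objective that pulls embeddings of semantically similar tasks together while pushing others apart. Below is a hedged NumPy sketch of such a loss; the function name, batch shapes, and temperature are purely illustrative, not a proposed implementation.

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """InfoNCE: row i of `positives` is the positive for row i of `anchors`;
    all other rows in the batch serve as negatives."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = (a @ p.T) / temperature              # (N, N) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))           # cross-entropy on diagonal

rng = np.random.default_rng(0)
z = rng.standard_normal((8, 16))                  # toy task embeddings
loss_matched = info_nce(z, z + 0.01 * rng.standard_normal((8, 16)))
loss_random = info_nce(z, rng.standard_normal((8, 16)))
```

Minimizing such a loss would shape the latent space by semantic task similarity rather than by proximity of the LoRA weights, which is the clustering property the rebuttal hypothesizes would help generalization.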
Summary: The authors explore whether it is possible to represent a set of LoRA parameters as a task embedding (T2L). This allows many pre-trained LoRAs to be compressed, and the approach could potentially generalize to new unseen tasks. They show it is possible to generalize to unseen tasks in this way. They analyze the generated LoRAs and find that they tend to cluster when generated from a generation network trained end-to-end on the seen tasks. Claims And Evidence: * Claim: T2L can efficiently encode hundreds of LoRA adapters. This is effectively shown in Figure 3. Interestingly, while performance decreases with a reconstruction loss (where task-specific LoRAs are reconstructed), it increases with end-to-end tuning with an SFT objective (Table 3). * Claim: T2L can generalize to new unseen tasks. Table 2 shows this, though with so many training tasks, it's hard to tell if the benchmark tasks are OOD. In fact, Figure 5 shows that task similarity is an important factor for high performance, so the method may struggle on truly OOD tasks. * Claim: T2L clusters parameters in a meaningful way. This is shown in Figure 4, showing that different text specifications of tasks do cluster together in hypernetwork space. Methods And Evaluation Criteria: Yes, both methods and evaluation make sense. Theoretical Claims: There are no theoretical claims. Experimental Designs Or Analyses: Soundness and validity are strong. Supplementary Material: I reviewed the entire supplementary material (Appendix). Relation To Broader Scientific Literature: This builds on prior work showing that LoRA is an effective PEFT method, and that hypernetworks can effectively generate neural network parameters. Essential References Not Discussed: HyperDreambooth [1] is an interesting paper to cite; it generates LoRA parameters for personalization of generative models. [1] Ruiz, Nataniel, et al. "Hyperdreambooth: Hypernetworks for fast personalization of text-to-image models." 
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024. Other Strengths And Weaknesses: Strengths: It is a novel and interesting method! Weaknesses: There is little ablation of the task embedder; for example, do better reasoning models generalize better? There is little analysis of OOD tasks or failure cases: does the hypernetwork fail in some cases? The gains over Hyperdecoders are marginal, at least on these benchmark tasks. Can the authors demonstrate an instance where the text steerability directly changes the output generation in a way that motivates the difference (qualitative examples are fine)? Other Comments Or Suggestions: An interesting direction! I'd like to see future work look into OOD generalization of the hypernetwork; does it generate very bad parameters in parts of the task distribution it has not seen? Also, qualitative examples would give the reader a sense of the properties of the model, beyond just benchmark performance, and would be appreciated. Questions For Authors: See weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: > “HyperDreambooth [1] is an interesting paper to cite” We thank the reviewer for bringing the paper to our attention and will include this prior work in the camera-ready version. --- > “Little ablation of the task embedder? Do better reasoning models generalize better?” We agree with the reviewer that, in general, a better representation of the task description, e.g., a representation from reasoning models, results in better performance. However, in our simplified implementation, we use a fixed-length vector to represent the task description (as opposed to a sequence of activations) and do not expect significant improvement from better embedder models. We hypothesize that such a compact representation would be restrictive, limiting the potential improvement from better representations. We suspect that using a more powerful architecture, as opposed to a simple feedforward architecture, would allow the hypernetwork to fully utilize better representations. For example, using the Perceiver architecture for aggregating information from the token activations directly. We leave this investigation for future work. Still, thanks to this simplified representation, in Table 4, Section 5.2, we show that T2L is largely invariant to the embedding model used. --- > “Little analysis of OOD tasks, or failure cases” Due to the response length limit, we refer the reviewer to our response to reviewer 2rJt for the clarification of the OOD tasks. Regarding the failure cases, we include some failure cases in the experiments as follows: unaligned task descriptions (Table 5 in Section 5.3) and random strings as task descriptions (Table 5 in Section 5.3). Overall, we show that using task descriptions that do not correspond to the task will reduce the performance of T2L significantly. We believe that using an LLM for adjusting the description alignment could effectively sidestep the main failure case of T2L. 
Additionally, we provide a new qualitative result here to show another failure case: Low-quality descriptions. ### Input ``` Toulouse has twice as many sheep as Charleston. Charleston has 4 times as many sheep as Seattle. How many sheep do Toulouse, Charleston, and Seattle have together if Seattle has 20 sheep? ``` ### Base model response (incorrect) ``` Great question! Let's break it down step by step: 1. We know that Seattle has 20 sheep. 2. Charleston has 4 times as many sheep as Seattle, so Charleston has 20 * 4 = <<20*4=80>>80 sheep. 3. Toulouse has twice as many sheep as Charleston, so Toulouse has 80 * 2 = <<80*2=160>>160 sheep. 4. To find the total number of sheep in all three cities, we add the number of sheep in each city: 160 (Toulouse) + 80 (Charleston) + 20 (Seattle) = <<160+80+20=240>>240 sheep. So, Toulouse, Charleston, and Seattle together have 240 sheep. ``` ### Generated LoRA w/ (low-quality) description=``solve this task please’’ (incorrect) ``` Charleston has 4 times as many sheep as Seattle, and Seattle has 20 sheep, so Charleston has 20 * 4 = 80 sheep. Toulouse has twice as many sheep as Charleston, so Toulouse has 80 * 2 = 160 sheep. Toulouse, Charleston, and Seattle have sheep together: 160 + 80 + 20 = 240 sheep. ``` ### Generated LoRA w/ description=``This task challenges your problem-solving abilities through mathematical reasoning. You must carefully read each scenario and systematically work through the data to compute the final outcome.’’ (correct) ``` Toulouse has twice as many sheep as Charleston, so Toulouse has 2 * Charleston = 2 * 4 = 8 times as many sheep as Seattle. Seattle has 20 sheep, so Toulouse has 8 * 20 = 160 sheep. Charleston has 4 times as many sheep as Seattle, so Charleston has 4 * 20 = 80 sheep. Toulouse, Charleston, and Seattle have sheep together = 160 (Toulouse) + 80 (Charleston) + 20 (Seattle) = 260 sheep. 
``` ### Generated LoRA w/ description=``Use your programming skill to think algorithmically, logically and systematically. Carefully read each scenario and systematically work through the data to compute the final outcome. Use your math skill to calculate correctly.’’ (correct) ``` let me check the relationships between the number of sheep in each city: 1. Toulouse has twice as many sheep as Charleston. 2. Charleston has 4 times as many sheep as Seattle. From point 2, we know that Charleston has 4 * 20 = 80 sheep. From point 1, Toulouse has 2 * 80 = 160 sheep. So, Toulouse, Charleston, and Seattle have a total of 160 + 80 + 20 = 260 sheep. ``` --- > “Can the authors demonstrate an instance where indeed the text streerabiliity directly changes the output generation in a way, to motivate the difference (qualitative examples are fine)?” We provide an example of steerability of T2L in the response to the previous question. Note that the last two examples are both correct but have different answer styles thanks to different descriptions. We remark that Hyperdecoders cannot exhibit such steerability as it uses the problem instance as the input to the hypernetwork. --- Rebuttal Comment 1.1: Comment: Thank you to the authors. I am satisfied with the rebuttal and upgrade my score.
Summary: This paper proposes the T2L architecture and training methods to generate task-specific LoRA parameters from task embeddings. The authors claim that their approach enhances zero-shot performance by enabling on-the-fly adaptation through a single forward pass of a pretrained hypernetwork. Claims And Evidence: Yes, the authors conducted multiple experiments to support their claims regarding the capability of zero-shot on-the-fly adaptation. Methods And Evaluation Criteria: I see no explicit problem. However, I would like the authors' comments on some of my concerns, which are described in the Experimental Designs or Analyses section and the Other Strengths and Weaknesses section. Theoretical Claims: The paper primarily relies on empirical evidence to support its claims. While additional theoretical analysis could strengthen the work, the assumptions and heuristics behind their approach and architecture design appear reasonable. Experimental Designs Or Analyses: For this approach to be practical, it should be able to handle out-of-distribution tasks that the model has never encountered or experienced anything similar to during pre-training. While the authors removed 10 tasks from a set of 500 to prevent data contamination in the evaluation benchmark datasets, I am curious whether there exist not identical but similar tasks in the training datasets compared to those in the evaluation benchmarks. It would be helpful if the authors could provide specific examples demonstrating their approach's effectiveness on challenging OOD tasks, supported by detailed evidence. Supplementary Material: I reviewed the supplementary material overall, focusing particularly on the details of the architecture and dataset. 
Relation To Broader Scientific Literature: While existing literature and concurrent work share a similar motivation, as the authors acknowledge in the related work section, I believe the proposed approach is meaningful, demonstrating the potential effectiveness of LoRA-based on-the-fly adaptation through extensive experiments. Essential References Not Discussed: I think the authors included the necessary references to support their claims. Other Strengths And Weaknesses: HyperLoRA requires an additional initial cost to generate LoRA parameters using a hypernetwork from task embeddings and merge them into the base model. I believe the advantage of HyperLoRA over in-context learning (ICL) in terms of computational efficiency becomes more significant as the model repeatedly performs a given task on more samples. This is because HyperLoRA requires less computation for each inference due to the absence of a need for few-shot samples. Regarding this point, it would be helpful if the authors included a cost analysis with specific numbers, detailing how much initial cost is required for instant adaptation and how many inferences are needed to offset this cost. Additionally, in terms of effectiveness, I am curious whether using K-shot ICL (e.g., K > 3, such as K = 32 or higher) could enable ICL to match or even outperform HyperLoRA. In that case, would the cost-effectiveness of HyperLoRA justify its use over K-shot ICL? Providing specific numerical comparisons in this analysis would be helpful. Other Comments Or Suggestions: N/A Questions For Authors: Please refer to my questions for authors in the Experimental Designs or Analyses section and the Other Strengths and Weaknesses section. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: > “I am curious whether there exist not identical but similar tasks in the training datasets compared to those in the evaluation benchmarks.” We confirm that some test and training tasks are similar in that they are mostly multiple-choice question-answering tasks. Also, there are similar and overlapping domains between the two splits. For example, the ARC benchmarks are similar to SNI task #47 (see https://instructions.apps.allenai.org/#explore). However, some benchmarks are very different from the training distribution, e.g., MBPP and HumanEval as the training tasks do not contain any code generation task. The closest tasks are SNI task #688 and #956. Therefore, we think that MBPP and HumanEval serve as a good representation of OOD tasks. We will update the text in the camera-ready version to clarify this point. We will also list all the training tasks in the appendices for better clarification. Although the benchmarks are arguably not fully OOD relative to the training tasks, we emphasize that one of the advantages of T2L is its ability to efficiently and cheaply adapt at test-time. --- > “HyperLoRA requires less computation for each inference due to the absence of a need for few-shot samples. Regarding this point, it would be helpful if the authors included a cost analysis” We fully agree with the reviewer that one of the main advantages of T2L is its efficiency. To emphasize T2L’s efficiency, we provide a FLOP analysis on a representative scenario. Let $S$ be the sequence length, $H$ be the hidden size, and $L$ be the number of layers of a Transformer-based LLM. 
We use the following equations for computing the matrix multiplication (GEMM) FLOPs [1] **FLOPs for Self-Attention (per layer):** $8 \times S \times H ^ 2 + 4 \times H \times S ^ 2$ **FLOPs for FFN (per layer):** $16 \times S \times H ^ 2$ **Per Transformer Block Total FLOPs:** $24 \times S \times H ^ 2 + 4 \times H \times S ^ 2$ **Setup for comparison** - 3-shot ICL examples are approximately 256 tokens long - Question instances are approximately 64 tokens long - Task descriptions are approximately 48 tokens long - We consider one question instance as the main input - We only consider input tokens for the FLOPs calculation - We use `Mistral-7B-Instruct-v0.2` as the base model (S = 256 + 64 (3-shot ICL + question instance), H = 4096, L = 32) - When the base model is used with T2L, we do not include 3-shot ICL (S = 64 (question instance), H = 4096, L = 32) - We use `gte-large-en-v1.5` as the task description encoder (S = 48 (task description), H = 1024, L = 24) - We use the M hypernetwork architecture detailed in Appendix F ## T2L per instance FLOPs **gte-large-en-v1.5:** FLOPs = 24 x (24 x 48 x 1024 ^ 2 + 4 x 1024 x 48 ^ 2) = 0.029 TFLOPs/instance **Hypernetwork (M):** FLOPs = 2 x 1024 x 64 + 4 x 4 x 128 x 512 + 128 x 4096 x 8 = 0.000005 TFLOPs/instance **Base LLM w/o ICL:** FLOPs = 32 x (24 x 64 x 4096 ^ 2 + 4 x 4096 x 64 ^ 2) = 0.827 TFLOPs/instance **Total FLOPs** = 0.029 + 0.000005 + 0.827 = **0.856005 TFLOPs/instance** ## Base LLM with 3-shot ICL **Total FLOPs** = 32 x (24 x (256 + 64) x 4096 ^ 2 + 4 x (4096) x (256 + 64) ^ 2) = **4.177 TFLOPs/instance** Based on this calculation, we can see that the adaptation cost of T2L is significantly cheaper than 3-shot ICL—more than 4x FLOPs reduction, saving compute within the first question instance. We will include this ad-hoc analysis in the appendices for the camera-ready version. 
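The per-instance numbers in this accounting can be reproduced mechanically. The sketch below applies the same per-layer GEMM formula to the three model configurations listed; the hypernetwork's roughly 5e6 FLOPs are negligible and omitted here.

```python
def transformer_gemm_flops(S, H, L):
    # Per-layer GEMM FLOPs, per the equations above:
    # self-attention (8*S*H^2 + 4*H*S^2) plus FFN (16*S*H^2).
    return L * (24 * S * H**2 + 4 * H * S**2)

base_with_t2l = transformer_gemm_flops(S=64, H=4096, L=32)     # no ICL
base_3shot = transformer_gemm_flops(S=256 + 64, H=4096, L=32)  # 3-shot ICL
embedder = transformer_gemm_flops(S=48, H=1024, L=24)          # gte encoder

print(round(base_with_t2l / 1e12, 3))  # 0.827 TFLOPs/instance
print(round(base_3shot / 1e12, 3))     # 4.177 TFLOPs/instance
print(round(embedder / 1e12, 3))       # 0.029 TFLOPs/instance
```

Note that the quadratic-in-S attention term is what makes the 3-shot prompt disproportionately expensive relative to the bare question instance.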
--- > “I am curious whether using K-shot ICL (e.g., K > 3, such as K = 32 or higher) could enable ICL to match or even outperform HyperLoRA.” It has been shown that many-shot ICL [2] consistently outperforms few-shot ICL, to the extent that it can be a substitute for full fine-tuning in some tasks. Thus, we believe that many-shot ICL could match T2L performance in some tasks. While many-shot ICL allows for scaling inference-time compute, at the same time, it is computationally expensive and depends on the context length and memory capacity of the device. These constraints limit the deployability of the LLM on consumer hardware. In contrast, T2L takes an alternate approach, which amortizes the compute budget (e.g., finetuning or ICL) into the training of T2L before generating task-specific LoRAs cheaply at inference time. Furthermore, generated LoRAs can be merged into the base model and quantized to further reduce the memory requirement and improve inference speed. We provide an ad-hoc analysis comparing few-shot ICL against T2L as the response to the previous question. T2L significantly reduces the FLOPs required for each question instance (4x FLOPs reduction). The reduction would be even more dramatic if we compared many-shot (e.g., 32-shot) ICL to T2L. --- [1] Korthikanti et al. "Reducing activation recomputation in large transformer models." MLSys 2023 [2] Agarwal et al. "Many-shot in-context learning." NeurIPS 2024 --- Rebuttal Comment 1.1: Comment: I appreciate the authors’ efforts in providing a response, and most of my concerns have been addressed. Accordingly, I will increase my score.
STP: Self-play LLM Theorem Provers with Iterative Conjecturing and Proving
Accept (poster)
Summary: This paper introduces STP, a self-play training framework for automated theorem proving. STP employs a conjecturer to generate new conjectures based on existing theorems and lemmas, while a prover attempts to prove previously unproven conjectures or statements in an iterative process. Experiments on the LeanWorkbook and miniF2F benchmarks show that STP is more sample-efficient than expert iteration and reinforcement learning, achieving state-of-the-art performance on both datasets across the Lean and Isabelle proof assistants. Claims And Evidence: Most claims in the submission are well-supported by evidence. Methods And Evaluation Criteria: I think the proposed methods and evaluation criteria are well-motivated and logically sound. Theoretical Claims: N/A Experimental Designs Or Analyses: I think most of the experimental designs and analyses make sense. Supplementary Material: Yes. Relation To Broader Scientific Literature: I think the key contribution of this paper is the conjecturer, which can generate new, potentially provable training data, helping to mitigate data scarcity and the sparse reward problem in formal theorem proving. This idea is very similar to some approaches in general reasoning tasks (e.g., [1,2]), which also leverage LLM-based rewriting or permutation methods to generate variant training data. [1] MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models [2] WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct Essential References Not Discussed: Besides the above references, I think the following work is not currently discussed in the paper: [1], which proposes generating theorems and proofs from seed concepts. Moreover, I believe the discussion of [2] is not entirely fair, as it focuses on a more challenging setting where unpretrained LLMs generate harder conjectures than those in the training set, thereby avoiding data contamination issues. 
In contrast, STP in this paper fully leverages an LLM to generate variants of existing theorems rather than entirely harder conjectures. [1] MUSTARD: Mastering Uniform Synthesis of Theorem and Proof Data [2] Learning Formal Mathematics From Intrinsic Motivation Other Strengths And Weaknesses: I think the paper is well-motivated and well-written, with comprehensive experiments. I also appreciate the demonstrated examples, which show that the conjecturer can indeed generate more generalized—and potentially harder—problems given input theorems and lemmas. Additionally, I find the design of reward assignments and filtering strategies well thought out, all of which make sense to me. One major concern I have is that the training dataset design for SFT-conjecture data does not seem to explicitly encourage the conjecturer to generate harder theorems. Instead, it appears to primarily promote the generation of variants of existing theorems, given that theorem X and theorem Y originate from the same proof file. There is no guarantee that the conjecturer will propose genuinely harder conjectures, as its design relies entirely on the diverse sampling capabilities of LLMs. Other Comments Or Suggestions: Minor: In the paper, the authors state that ''RL’s capability is fundamentally bounded by the difficulty level of the theorems in the training dataset—it is unlikely, in principle, for a model to learn college-level proof techniques solely by working on high school-level problems or to solve open math problems using RL on graduate-level problems''. However, I don't think the proposed STP can truly address such a distribution shift problem, as the conjecturer primarily generates variants of existing theorems and remains heavily reliant on the training dataset. Questions For Authors: Could the paper include more ablation studies? Specifically, what is the accuracy of STP after the SFT stage? Additionally, what is the accuracy of STP after self-play training without retraining? 
Furthermore, could you provide more examples of how STP generates variants of existing theorems? Does the conjecturer primarily produce slight variations of input theorems, or can it genuinely generalize to harder problems? It would be valuable to conduct a small analysis comparing the difficulty of the generated conjectures to the original theorems to better understand STP’s ability to propose more challenging problems. Code Of Conduct: Affirmed. Overall Recommendation: 3
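The iterative conjecture-and-prove loop this review summarizes can be sketched schematically. The functions below are toy stand-ins for the LLM conjecturer and the prover-plus-Lean-verifier; this is a schematic of the self-play structure only, not the authors' implementation.

```python
import random

random.seed(0)

def conjecture(seed_stmt):
    # Stand-in for the conjecturer: propose a variant of a seed statement.
    return f"variant_of({seed_stmt})"

def attempt_proof(stmt):
    # Stand-in for prover + formal verifier: succeed with some probability.
    return random.random() < 0.3

proved, unproven = set(), {f"thm_{i}" for i in range(10)}
for _ in range(3):  # self-play iterations
    pool = unproven | {conjecture(s) for s in unproven}
    proved |= {s for s in pool if attempt_proof(s)}
    unproven = pool - proved
    # In STP, the prover would now be retrained on the verified proofs, and
    # the conjecturer on previously generated conjectures with a low pass rate.
```

The comment in the loop body marks the retraining step that distinguishes STP from plain expert iteration: the conjecturer's training signal comes from the prover's pass rate, so conjecture difficulty tracks prover ability.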
Rebuttal 1: Rebuttal: We thank reviewer 76YT for their positive review, and for noting “the paper is well-motivated and well-written, with comprehensive experiments”. > I think the following work is not currently discussed in the paper: [1], Thank you for the comments. We will include a discussion about [1] upon revision. [1] MUSTARD: Mastering Uniform Synthesis of Theorem and Proof Data > the training dataset design for SFT-conjecture data does not seem to explicitly encourage the conjecturer to generate harder theorems. [...] The conjecture dataset at the SFT stage only serves as an initialization, and we encourage the conjecturer to generate harder conjectures in STP’s iterative training — at every iteration, the conjecturer is continuously trained on previously generated conjectures on which the prover has a *low pass rate*. Therefore, as the prover gets better, the training data of the conjecturer will encourage harder conjectures at the next iteration. > Minor: In the paper, the authors state that ''RL’s capability is fundamentally bounded by the difficulty level of the theorems in the training dataset [...]''. However, I don't think the proposed STP can truly address such a distribution shift problem, as the conjecturer primarily generates variants of existing theorems and remains heavily reliant on the training dataset. Thanks for the comments! We agree with the reviewer that STP in its current form is unlikely to discover fundamentally novel proof techniques, and we do not claim that STP truly addresses this particular issue. This paragraph serves as a high-level motivation, and STP is only a first step toward this direction (more promising than the standard RL baseline). Moreover, even though STP primarily generates variants of existing theorems, it does discover some new interesting ones, at least *harder* forms of existing theorems. 
We also only trained with limited compute and data, and the full potential of this line of ideas still requires more future work. > what is the accuracy of STP after the SFT stage? Additionally, what is the accuracy of STP after self-play training without retraining? We thank the reviewer for their comments. The SFT checkpoint is not very different from the base model (DeepSeek-Prover-V1.5-SFT): | Method | Sample budget | miniF2F-test | ProofNet-test | |----------------|---------------|--------------|---------------| | SFT checkpoint | 1 | 35.7% | 3.8% | | | 128 | 51.4% | 15.4% | | | 3200 | 55.7% | 19.4% | | STP | 1 | 41.1% | 5.9% | | | 128 | 57.2% | 18.0% | | | 3200 | 61.1% | 23.1% | And STP is better than STP w/o retraining. In general, STP w/o retraining has a higher pass@1 but lower pass@k for large k. | Method | Sample budget | miniF2F-test | ProofNet-test | |--------------------|---------------|--------------|---------------| | STP w/o retraining | 1 | 45.1% | 7.8% | | | 128 | 57.2% | 16.9% | | | 3200 | 60.7% | 21.5% | | STP | 1 | 41.1% | 5.9% | | | 128 | 57.2% | 18.0% | | | 3200 | 61.1% | 23.1% | We will include these results upon revision. > Furthermore, could you provide more examples of how STP generates variants of existing theorems? Does the conjecturer primarily produce slight variations of input theorems, or can it genuinely generalize to harder problems? It would be valuable to conduct a small analysis comparing the difficulty of the generated conjectures to the original theorems The conjecturer generates both variants and more challenging problems. On average, the proof length of the conjectures is 1.98 times that of the seed statements, implying they are generally more difficult to prove. 
As an example, the following conjecture involves the square root function, where the seed theorem is about polynomials: Seed theorem ``` theorem lean_workbook_plus_36664 (a b : ℝ) (ha : 0 < a) (hb : 0 < b) (hab : (1 + a^2) * (1 + b^2) = 4) : a + b + a * b ≤ 3 ``` Conjecture ``` theorem lean_workbook_plus_36684 (a b c : ℝ) (ha : 0 < a) (hb : 0 < b) (hc : 0 < c) (hab : a * b * c = 1) : Real.sqrt ((a ^ 2 + 1) / b) + Real.sqrt ((b ^ 2 + 1) / c) + Real.sqrt ((c ^ 2 + 1) / a) ≥ 3 / (a + b + c) ``` Or the generated conjectures can have very different proofs: Seed theorem ``` theorem lean_workbook_plus_12215 : sin (π / 2) = 1 ``` Conjecture ``` theorem lean_workbook_plus_60 : ¬(∀ f : ℝ → ℝ , (∀ x :ℝ, f (x + π / 2) = f x - 1) ↔ f = fun x ↦ cos x) ``` --- Rebuttal Comment 1.1: Comment: Thanks for the clarification—I still hold a positive view of the paper. From the examples, it appears that the conjecturer can produce quite diverse conjectures from the same seed theorems. I do have a follow-up question: do these generated conjectures tend to preserve similar properties as those in the training data? For instance, do they rely on some of the same lemmas in their proofs? --- Reply to Comment 1.1.1: Comment: We thank reviewer 76YT for acknowledging our response and providing additional comments. > Do these generated conjectures tend to preserve similar properties as those in the training data? For instance, do they rely on some of the same lemmas in their proofs? Yes, we can use the distribution of lemmas in the proofs as a proxy for similarity between the generated conjectures and the training data theorems. 
Below are the 20 most frequently used lemmas in the proofs of generated conjectures, most of which are about algebra and inequalities:

```
sq_nonneg, mul_self_nonneg, div_le_div_iff, Real.sq_sqrt, div_le_iff, Real.sin_sq_add_cos_sq, pow_two, Real.sin_add, Real.cos_add, Real.sqrt_pos, div_nonneg, Real.cos_sub, Real.sin_sub, pow_pos, Real.cos_two_mul, Real.cos_sq, div_le_one, Real.sin_two_mul, pow_add, pow_mul
```

Among these 20 lemmas, 17 also appear among the top 20 most frequent lemmas in the proofs of LeanWorkbook theorems, suggesting that the generated conjectures are generally aligned with the problems in LeanWorkbook.

We also conducted a granular analysis by examining the overlap of lemmas between each generated conjecture and its corresponding seed theorem. At a late checkpoint of our experiments, we found that 51.1% of correct proofs for generated conjectures share at least one lemma with the proof of the seed statement. Specifically, 33.44% share exactly one lemma, 12.45% share two, 4.22% share three, and about 1% share four or more. This indicates that a substantial portion of generated conjectures are meaningfully related to their seed theorems.

The shared lemmas also span diverse topics. Below are the 30 most frequently shared lemmas. While inequalities and algebra remain dominant (e.g., `sq_nonneg, Real.sin_sq_add_cos_sq`), we also see lemmas related to sequences of products (e.g., `Finset.prod_range_succ'`) and number theory (e.g., `Nat.pow_mod, nat_sub_dvd_pow_sub_pow`). (Note: this list differs from the one above because it only counts lemmas shared between a conjecture and its seed theorem, whereas the previous list counts all lemmas used in generated conjectures.)
```
sq_nonneg, mul_self_nonneg, div_le_div_iff, Real.sq_sqrt, Real.sin_add, Real.sin_sq_add_cos_sq, Real.cos_add, pow_two, div_le_iff, Real.cos_sub, Real.sin_sub, Real.cos_sq, Real.sin_two_mul, Real.cos_two_mul, pow_mul, div_le_one, Real.sqrt_pos, pow_add, Real.tan_eq_sin_div_cos, abs_le, Finset.prod_range_succ', pow_pos, Real.cos_sq_add_sin_sq, Nat.sum_range_choose, Nat.pow_mod, pow_three, abs_mul, abs_cases, Real.pi_pos, nat_sub_dvd_pow_sub_pow
```

We will include these discussions in the next revision.
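The per-conjecture overlap statistics above amount to a simple tally over lemma sets. A minimal sketch of that computation, assuming conjecture/seed proofs are already reduced to sets of lemma names (the function name and data layout are ours, not the actual pipeline):

```python
from collections import Counter

def shared_lemma_histogram(pairs):
    # pairs: iterable of (conjecture_proof_lemmas, seed_proof_lemmas),
    # each a set of lemma names. Returns, for each overlap size m, the
    # fraction of conjectures sharing exactly m lemmas with their seed.
    counts = Counter(len(conj & seed) for conj, seed in pairs)
    total = sum(counts.values())
    return {m: counts[m] / total for m in sorted(counts)}
```

Summing the fractions for m >= 1 gives the "share at least one lemma" figure reported above.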
Summary: The paper proposes a novel method called **Self-play Theorem Prover (STP)**. STP addresses the shortage of high-quality training data in automated formal theorem proving by simultaneously training two roles: a **conjecturer** that produces new theorems (or “conjectures”) and a **prover** that attempts to prove them. This approach aims to overcome the plateau seen in reinforcement learning (or “expert iteration”) methods. STP continually generates fresh conjectures that match the model’s current skill level—“barely provable” ones, with an empirical proof success rate in (0, 1/4]. The prover’s successful proofs of these conjectures supply training data to improve the model’s proof-writing capabilities. Through experiments using **Lean** and **Isabelle**, STP is shown to outperform standard baselines.

Claims And Evidence: The main claim of the paper is that it enables data generation via a self-play mechanism. The key novel element is a generation mechanism for well-calibrated conjectures. The benefits of this are shown by comparisons with two baselines: expert iteration and parallel sampling. The paper claims significant improvement with respect to these baselines. The paper also claims to outperform DeepSeek-Prover-V1.5-RL on the Mini-F2F and ProofNet benchmarks. The evidence is pass rates gathered in Table 1.

Methods And Evaluation Criteria: The method can be briefly summarised as follows. A single large language model (LLM) alternates between two tasks:
1. Conjecturer – generates new mathematical statements (conjectures).
2. Prover – attempts to prove the generated conjectures along with existing theorems.

Key Steps of STP:
1. Generating Conjectures:
• Given a known theorem and proof, the conjecturer synthesizes new conjectures that are barely provable.
2. Proving Conjectures and Theorems:
• The prover attempts to prove both newly generated conjectures and existing unproven theorems from a dataset (successful ones are added to the dataset).
3.
Training with Dense Signals:
• The prover is fine-tuned on successful proofs.
• The conjecturer is updated based on which conjectures led to useful proofs.
4. Goto 1.

The method is very intuitive and reasonable. Its main design is unsurprising; the value lies in the proposed implementation details. These details feel correct and valid; at the same time, they are somewhat arbitrary and not well justified or motivated. I'd love to see more ablations! (I totally understand that the cost is quite high, though.)

I find the evaluation criteria perhaps the weakest point of the paper. My criticism is as follows:
- Fig 2 - I am not sure how strong the baselines are (and how much effort has gone into tuning them)
- Fig 3 - I find comparisons with Deepseek-Prover not quite informative without careful information on the training budget of the two methods (and any detail to ensure an apple-2-apple comparison)
- Table 1 - the same as above. Moreover, I am not sure what the difference is between the Sample budget (#proofs) and sample budget (#steps).
- another subtle issue about the comparison of STP and deepseek is how much the presented method 'overfits' to benchmarks. Following that, I'd love to see a deep analysis of the generalization properties (although doing this properly might be another paper)

Having said that, I stress that I am not an expert in the field. It is hard for me to fully understand the results and the benchmarks used.

Theoretical Claims: The paper does not contain any significant theoretical claims.

Experimental Designs Or Analyses: Correct

Supplementary Material: I've briefly reviewed the supplementary material, which feels fine but not very exhaustive.

Relation To Broader Scientific Literature: I am not an expert in the field. The related work covers most of the essential topics.

Essential References Not Discussed: In the RL parlance, the presented work can be framed as an exploration problem and curriculum construction.
I find it strange not to include a short discussion about these topics. I do not insist on going deep.

Other Strengths And Weaknesses: see above

Other Comments Or Suggestions: see above

Questions For Authors: I'd love to see what the limit of the method is. Does it plateau? When, and why?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: We thank reviewer vtLT for their review. In the following, we address the reviewer’s questions/comments in detail.

> Fig 2 - I am not sure how strong the baselines are (and how much effort has been given into tuning them)

We spent equal effort tuning the baselines and STP (if anything, more effort tuning the baselines). We mostly use off-the-shelf configurations for finetuning LLMs, and only optimize the learning rate and sampling parameters. Our reproduction of DeepSeek-Prover-V1.5-RL on LeanWorkBook is actually better than the best previously reported accuracy. We believe that the amount of tuning is more than sufficient given the enormous gap between our method and the baselines — we do not expect the baseline hyperparameters to make any difference comparable to that gap.

> Fig 3 - I find comparisons with Deepseek-Prover not quite informative without careful information on the training budget of the two methods (and any detail to ensure an apple-2-apple comparison)

First, we’d like to point out that our model is trained on top of Deepseek-Prover-V1.5-SFT. Hence, we assume that the reviewer’s question is about the comparison between STP and Deepseek-Prover as if it were trained with the same additional compute (please kindly let us know otherwise). Since Deepseek-Prover-V1.5-SFT is trained with expert iteration, our expert iteration baseline simulates the performance of DeepSeek’s model as if it were trained for more iterations. Note that the training compute is approximately proportional to the number of generated proofs during training (for our method, we count the proofs of both the statements in the given dataset and the generated conjectures; for expert iteration, we count the proofs of the statements in the given dataset). Therefore, Figure 2 is an apples-to-apples comparison between our method and DeepSeek-Prover.

> Table 1 - the same as above.
> Moreover, I am not sure what the difference is between the Sample budget (#proofs) and sample budgets (#steps).

Sample budget (#proofs) and sample budget (#steps) are two different ways to measure inference-time compute. For whole-proof generation methods (generating an entire proof auto-regressively with an LLM, such as STP), sample budget (#proofs) is the most common metric for inference-time compute: it is the total number of proofs generated independently per problem. Sample budget (#steps) is mostly used for tree-search methods that use LLMs to generate single proof steps (e.g., a single line of the proof) and then search for a complete proof by best-first search or MCTS. We report the two measures for completeness.

> another subtle issue about the comparison of STP and deepseek is how much the presented method 'overfits' to benchmarks. Following that, I'd love to see a deep analysis of the generalization properties (although doing this properly might be another paper)

We assume that the reviewer’s question is about whether any method overfits to benchmarks like miniF2F, ProofNet, and PutnamBench. The STP checkpoint (Line 291) is only trained on LeanWorkbook, and we do not use any early-stopping methods that may leak information about the test datasets. In addition, miniF2F, ProofNet, and PutnamBench cover statements from different sources and of different difficulty. Therefore, we believe that it is unlikely for STP to overfit to all three test benchmarks at the same time. (If the reviewer’s question is about whether STP or DeepSeek-Prover overfits to formal proofs, the answer is yes, but that’s expected because they are only trained on formal proofs. None of the models can generalize to natural language tasks, and we believe that generalizing from formal proofs to natural language tasks is still an open question.)

> In the RL parlance, the presented work can be framed as an exploration problem and curriculum construction.
> I find it strange not to include a short discussion about these topics. I do not insist on going deep.

We thank the reviewer for the comments. Our method can indeed be framed as automatic curriculum learning (e.g., [1]), and we included some of the related work in Lines 177-183. Upon revision, we will include a broader discussion of these connections.

[1] Portelas, Rémy, et al. Automatic curriculum learning for deep RL: A short survey.

> I'd love to see what is the limit of the method. Does it plateau, when, why?

After the submission, we ran STP for longer, and it continues to improve. With about 50B generated tokens (requiring 86K TPU-v4 hours in total; 2.5x the compute used in the submitted experiment), STP reaches approximately a 28.5% pass rate on LeanWorkbook, significantly higher than the best reported in the paper. We do not have the compute to run even further. We also estimate that only about 50% of LeanWorkBook statements are correct, and thus 28.5% is already quite high.
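Concretely, the self-play curriculum discussed in this thread — keeping only "barely provable" conjectures — amounts to a simple filter on empirical pass rates. A minimal sketch, assuming the (0, 1/4] pass-rate window described in the review summary; the function name and data layout are ours, not the paper's actual implementation:

```python
def select_training_conjectures(attempts, n_samples):
    # attempts: list of (conjecture, n_successful_proofs) pairs, where
    # n_successful_proofs counts successes over n_samples independent
    # proof attempts by the current prover. Keep "barely provable"
    # conjectures: those whose empirical pass rate lies in (0, 1/4].
    return [
        conj for conj, n_success in attempts
        if 0 < n_success / n_samples <= 0.25
    ]
```

As the prover improves, previously hard conjectures leave the window and the conjecturer is rewarded for producing new ones inside it, which is what makes the difficulty self-adapting.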
Summary: The paper studies the problem of improving LLM theorem provers through training on self-generated conjectures. The approach, called Self-play Theorem Prover (STP), is a framework to train LLM theorem provers in a dual conjecturer-prover setup, evaluated with Lean and Isabelle as formal verifiers. STP is initialized with supervised fine-tuning on proof libraries like Mathlib, then iterates through a self-play loop: the conjecturer, built on DeepSeek-Prover-V1.5-SFT, generates conjectures from seed theorems and lemmas, guided by a reward function targeting low but positive empirical pass rates and an elegancy score. The prover, trained via expert iteration, attempts proofs with a fixed sampling budget per statement/conjecture, and the proofs are verified with Lean/Isabelle. With 19.8 billion tokens generated over 24 Lean iterations (120M proofs, 2M conjectures), STP achieves a 26.3% cumulative pass rate on LeanWorkbook and strong performance on miniF2F-test and ProofNet-test. The approach is also evaluated on Isabelle, using Llemma-7b as the base model, and shows strong performance gains on a translated version of LeanWorkbook and on PutnamBench.

### Update after rebuttal:
The authors clarified several of my concerns during the rebuttal and, considering the other reviews, I maintain my positive assessment of the work.

Claims And Evidence: C1: The paper proposes a self-play mechanism for training LLM theorem provers which enables continuous improvement without requiring additional data and achieves state-of-the-art results among whole-proof generation methods.

This is the central claim and contribution of the paper. The experiments in Section 4 provide much of the evidence for the claim. In particular, Figures 2, 4 and 5 provide evidence for continuous improvement over iterations of conjecture generation and proving.
Table 1 provides the results on miniF2F and ProofNet, and Table 3 the results on PutnamBench; STP achieves state-of-the-art performance, outperforming DeepSeek-Prover-v1.5 (I’d like to note that the DeepSeek-Prover baseline is missing in Table 3). The results also demonstrate strong performance of the method across two different base models (Llemma and DeepSeek-Prover-1.5-SFT) as well as two different proof verification systems (Lean and Isabelle). Overall there is strong evidence for the claim.

C2: The self-play mechanism provides denser training signals, making training easier, and retraining with generated conjectures improves performance.

The authors conduct two ablations, with results presented in Fig 6 and Table 2, which provide sufficient evidence for these claims.

Methods And Evaluation Criteria: The authors use the standard benchmark datasets miniF2F, ProofNet, and PutnamBench. The methods used for comparison are also established state-of-the-art to the best of my knowledge.

Theoretical Claims: N/A

Experimental Designs Or Analyses: The experiment design overall is quite sound, but there are a couple of things that seem lacking:
- The DeepSeek-Prover baseline appears to be missing in Table 3.
- There are some examples of generated conjectures in the paper, but I think a bit more analysis of the conjectures would be useful. For instance, some details about what the distribution of topics looks like among the generated conjectures or how similar they are to the problems in LeanWorkbook.

Supplementary Material: I checked some of the implementation details in Appendix A as well as the additional results in Appendix B.

Relation To Broader Scientific Literature: A key contribution of the paper is to demonstrate that self-play can be employed at scale for training strong LLM theorem provers. This contribution is important in the context of theorem proving, and also presents potential avenues for the broader topic of reasoning with LLMs.
Essential References Not Discussed: N/A

Other Strengths And Weaknesses:

Strengths:
* The paper is generally quite well written and easy to follow. There are also many details about the approach which make it easy to understand.
* The overall approach is also relatively straightforward, which in my opinion is a strength since it makes the approach easier for others to build upon and scale.

Weaknesses:
* The authors do not include code to reproduce the results. I appreciate the details in the paper, but releasing the code would be an important contribution (considering the strong results).
* As the authors discuss in the paper, the overall approach is fairly similar to Minimo (Poesia et al. 2024) but scaled up to Lean, with simpler conjecture generation. The novelty of the approach is thus somewhat limited. To be clear, this does not detract from the paper (as indicated by my positive score), but I believe it is worth mentioning.

Other Comments Or Suggestions: N/A

Questions For Authors: Please see the sections above.

Code Of Conduct: Affirmed.

Overall Recommendation: 5
Rebuttal 1:

Rebuttal: We thank reviewer dMfo for their positive review, and for noting “The experiment design overall is quite sound”. In the following, we address the reviewer’s questions/comments in detail.

> The authors do not include code to reproduce the results. I appreciate the details in the paper but releasing the code would be an important contribution (considering the strong results)

We have fully released our code, data, and model weights after the submission deadline. We also provide an anonymous version in the links here: https://anonymous.4open.science/r/STP_rebuttal-85F9.

> The DeepSeek-Prover baseline appears to be missing in Table 3.

The original DeepSeek-Prover paper does not report results on PutnamBench. Below, we report the results of the DeepSeek-Prover-V1.5-RL model as evaluated by ourselves, and we will include them in the paper upon revision.

| Method | Sample budget | Result |
|-------------------------|---------------|--------|
| DeepSeek-Prover-V1.5-RL | 64 | 6/644 |
| | 3200 | 7/644 |

> a bit more analysis of the conjectures would be useful. For instance, some details about what the distribution of topics looks like among the generated conjectures or how similar they are to the problems in LeanWorkbook.

Thank you for the comments. We can look at the distribution of lemmas used to prove the generated conjectures as a proxy for their topics. Here are the 20 most frequently used lemmas; the majority of the topics are algebra and inequalities:

```
sq_nonneg, mul_self_nonneg, div_le_div_iff, Real.sq_sqrt, div_le_iff, Real.sin_sq_add_cos_sq, pow_two, Real.sin_add, Real.cos_add, Real.sqrt_pos, div_nonneg, Real.cos_sub, Real.sin_sub, pow_pos, Real.cos_two_mul, Real.cos_sq, div_le_one, Real.sin_two_mul, pow_add, pow_mul
```

Among these 20 lemmas, 17 also rank among the top 20 most frequent lemmas in LeanWorkbook. This suggests that the generated conjectures are generally similar to the problems in LeanWorkbook.
Summary: The paper introduces the Self-play Theorem Prover (STP), a novel method for training large language models (LLMs) in formal theorem proving, addressing the scarcity of high-quality training data. STP employs an LLM in two roles: a conjecturer that generates new mathematical conjectures based on existing theorems and proofs, and a prover that attempts to prove these conjectures alongside statements from an existing dataset. The process is iterative, with the conjecturer trained on previously generated conjectures that are challenging yet provable by the current prover, enabling continuous improvement without additional external data. The method is evaluated on LeanWorkbook, miniF2F, ProofNet, and PutnamBench, achieving SoTA results across the board.

Claims And Evidence: The claims are well-supported by clear and convincing evidence. Table 1 compares STP with prior methods on miniF2F-test and ProofNet-test, showing higher pass rates and sampling efficiency compared to previous baselines and RL training. Figures 2 and 3 illustrate STP’s scaling advantage over expert iteration and Deepseek-Prover-V1.5 models.

One claim appears problematic: "STP proves 26.3% of the statements in the LeanWorkbook dataset, doubling the previous best result of 13.2% achieved through expert iteration." I cannot find a reference for this reported 13.2%, and according to Figure 2, expert iteration appears to perform better than this stated number.

Methods And Evaluation Criteria: The STP method is well-suited for formal theorem proving. Its iterative conjecturing and proving process, inspired by mathematical practice, effectively addresses data scarcity through a self-adaptive curriculum. The evaluation criteria—pass rates on standard benchmarks such as LeanWorkbook, miniF2F-test, ProofNet-test, and PutnamBench—are appropriate and widely accepted in the field. Testing on both Lean and Isabelle verifiers demonstrates generalizability across formal languages.
Theoretical Claims: I reviewed the derivation in A.5 for the re-weighting method and it looks correct.

Experimental Designs Or Analyses: The experimental designs are sound and the comparisons are fair. The paper could be further improved through additional ablation studies on key design choices, for example, the filtering and re-weighting process for conjecturer training and the effectiveness of the final re-training phase.

Supplementary Material: I reviewed the implementation details and pseudo-code for the conjecturing dataset in A.5 and additional results in Appendix B.

Relation To Broader Scientific Literature: The work is well situated within current trends in using LLMs for theorem proving. It also builds on previous studies on RL and self-play training.

Essential References Not Discussed: I can't notice any.

Other Strengths And Weaknesses:

**Strengths:**
- The proposed method is novel and well-motivated;
- Empirical results are strong and comprehensive;
- The paper is well-structured, with intuitive explanations and illustrative figures.

**Weaknesses:**
- Ablation studies on the key components of the proposed method are missing.

Other Comments Or Suggestions: Line 34 in abstract: versifiers → verifiers

Questions For Authors:
- What is the intuition behind the construction method of the conjecture dataset? Why can theorem Y be a good conjecture for theorem X?
- How does the final re-training affect evaluation performance? What if it is removed?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: We thank reviewer ubJp for their positive review, and for noting “The claims are well-supported by clear and convincing evidence”. In the following, we address the reviewer’s questions/comments in detail.

> One claim appears problematic: "STP proves 26.3% of the statements in the LeanWorkbook dataset, doubling the previous best result of 13.2% achieved through expert iteration." I cannot find a reference for this reported 13.2%, and according to Figure 2, expert iteration appears to perform better than this stated number.

The number 13.2% is reported in [1] — they prove 13.1% of the statements in LeanWorkbook and disprove 3.9% (we realize that we made a typo in the paper; the correct number should be 13.1%). We will add a reference in the paper upon revision. Figure 2 shows the performance of expert iteration with the base model DeepSeek-Prover-V1.5-SFT, which is indeed better than 13.1%. However, the original paper [2] does not report any number on LeanWorkbook; therefore, Figure 2 is based on our reproduction of [2].

[1] Wu, Zijian, et al. InternLM2.5-StepProver: Advancing automated theorem proving via expert iteration on large-scale Lean problems.

[2] Xin, Huajian, et al. DeepSeek-Prover-V1.5: Harnessing proof assistant feedback for reinforcement learning and Monte-Carlo tree search.

> What is the intuition behind the construction method of the conjecture dataset? Why can theorem Y be a good conjecture for theorem X?

We construct the conjecture SFT dataset so that the generated conjecture (Theorem Y) is related to the given theorem (Theorem X) in the sense that they use the same seed lemma in the proof. The conjecture dataset at the SFT stage only serves as an initialization of the model, and the conjecturer will gradually learn to generate diverse, challenging yet approachable, and relevant conjectures through STP’s iterative training via the datasets constructed later (Section 3.2).
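For intuition, the seed-lemma pairing described above can be sketched as grouping theorems by a lemma their proofs share. This is a hypothetical illustration only — the function, data layout, and pairing direction are our guesses, not the pipeline from the paper's Appendix A.5:

```python
def build_conjecturing_sft_pairs(theorems):
    # theorems: list of (statement, proof_lemmas), where proof_lemmas
    # is the set of lemmas used in the proof. Theorems whose proofs use
    # the same seed lemma are paired: the model is trained to produce
    # theorem Y given the seed lemma and a related theorem X.
    by_lemma = {}
    for stmt, lemmas in theorems:
        for lem in lemmas:
            by_lemma.setdefault(lem, []).append(stmt)
    pairs = []
    for lem, stmts in sorted(by_lemma.items()):
        for x in stmts:
            for y in stmts:
                if x != y:
                    pairs.append(((lem, x), y))
    return pairs
```

The point is only that the SFT targets are real theorems related to the input through a shared proof ingredient, so the conjecturer's initialization already associates "conjecture" with "relevant statement", which the self-play loop then refines.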
> How does the final re-training affect evaluation performance? What if it is removed?

The following table compares STP and STP w/o retraining. In general, STP w/o retraining has a higher pass@1 but lower pass@k for large k. We will add this result to the paper upon revision.

| Method | Sample budget | miniF2F-test | ProofNet-test |
|--------------------|---------------|--------------|---------------|
| STP w/o retraining | 1 | 45.1% | 7.8% |
| | 128 | 57.2% | 16.9% |
| | 3200 | 60.7% | 21.5% |
| STP | 1 | 41.1% | 5.9% |
| | 128 | 57.2% | 18.0% |
| | 3200 | 61.1% | 23.1% |

---

Rebuttal Comment 1.1:

Comment: Thank you for the clarification. I maintain my positive assessment of the paper. Great work!
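As context for the pass@1 vs. pass@k trade-off discussed in these threads, pass@k is conventionally computed with the standard unbiased estimator introduced with Codex (Chen et al., 2021); a minimal sketch (the function name is ours):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    # Unbiased estimator of pass@k: the probability that at least one
    # of k samples, drawn without replacement from n independent
    # attempts of which c are correct, is a correct proof.
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)
```

Under this metric, a model that produces many distinct near-misses can score higher at large k while scoring lower at k = 1, which is consistent with the STP vs. STP-w/o-retraining numbers reported in the tables.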
ENAHPool: The Edge-Node Attention-based Hierarchical Pooling for Graph Neural Networks
Accept (poster)
Summary: The paper proposes a methodology to perform hierarchical pooling in GNNs, along with a message-passing layer that aims at reducing oversquashing (actually oversmoothing).

Claims And Evidence: No. There is an ablation study but I don't feel it covers all the claims and components of the proposed methodology. Anyway, that is not the main issue.

Methods And Evaluation Criteria: Yes

Theoretical Claims: There are no theoretical contributions in this work.

Experimental Designs Or Analyses: The experimental evaluation is rather standard.

Supplementary Material: There is no supplementary.

Relation To Broader Scientific Literature: The components of the proposed methodology are not novel, as they appear in one or more papers from the GNN literature of the last 9 years.

Essential References Not Discussed: There are several references missing.

Other Strengths And Weaknesses: All the methods proposed in this paper are not new; they appear in one or more papers from the GNN literature of the last 9 years. These include:
- using the straight-through estimator to make S binary
- using attention in pooling (see e.g., Understanding attention and generalization in graph neural networks, from 2019)
- using an attention mechanism to weight the edges (see GAT)
- using heat kernels and random walks in message passing (e.g., Diffusion Improves Graph Learning, from 2019)

As is, the paper is more a collection of tricks and optimizations than a proposal of a new idea and methodology. That would be OK if this were an applied work to solve a specific problem, which is not the case. I do not see a contribution to basic ML research.

Other Comments Or Suggestions: N/A

Questions For Authors: N/A

Code Of Conduct: Affirmed.

Overall Recommendation: 1
Rebuttal 1:

Rebuttal: Q1: There is an ablation study but I don't feel it covers all the claims and components of the proposed methodology.

A1: Due to space limitations, please refer to our response Q2 to Reviewer foip.

Q2: The contribution to basic ML research.

A2: This paper proposes a new graph pooling method that overcomes the drawbacks of many existing pooling methods. Specifically, we would like to clarify that GNNs remain a crucial research direction for machine learning, and their development relies on continuously optimizing existing technologies. Graph pooling is an essential subfield of GNNs, and most existing methods still suffer from theoretical drawbacks. For instance, global pooling methods fail to capture hierarchical structural features of graphs. Hierarchical pooling methods based on Top-K selection discard a significant amount of node information, which impairs the construction of connectivity in the coarsened graph. Cluster-based hierarchical pooling methods can solve the above problems, but they typically aggregate node features and edge connectivity strengths using simple summation, ignoring the differences between nodes and edges. Thus, we propose applying attention mechanisms for weighted aggregation, resulting in more effective coarsened graph representations.

The attention mechanism has been widely applied across various fields. Its core idea is to dynamically assign weights to capture key information, enhancing the model's representation capability. Recently, attention mechanisms have gained traction in graph tasks. For instance, [1] and [2] employ multi-head attention to assess neighbor importance, enabling differentiated aggregation for effective node representations. In graph pooling, [3] uses soft attention to weight node importance, while [4] applies multi-head attention to identify task-relevant parts of the input data and learn each node’s global significance after pooling.
[5] integrates attention into second-order pooling, further boosting model expressiveness. Inspired by these works, we introduce a node attention mechanism akin to the self-attention mechanism in [6], whose success highlights its ability to focus on graph-relevant nodes, providing a solid theoretical foundation for our approach.

Among existing methods, edge attention mechanisms remain underexplored. A notable example is [7], which leverages attention to compute edge contraction scores. Another approach, [8], suggests assigning higher importance to edges connecting more dissimilar nodes. Inspired by these works, we introduce an edge attention mechanism akin to the edge information scoring function in [9]. The success of [9] demonstrates its ability to identify important edges in a graph, providing a strong theoretical foundation for our edge attention aggregation strategy.

However, it is important to note that while GAT can be seen as an exploration of edge attention mechanisms, it essentially computes node attention to selectively aggregate neighboring node information, without directly processing edges. This differs from our method, which is designed to aggregate edge connectivity strengths between clusters. The connectivity strengths in the coarsened graph reflect the influence of information propagation between clusters, determined by both edge importance and quantity. For instance, in a social network, the number of edges between the leadership teams of Companies A and B may be the same as that between the employees of Companies A and C, yet B’s and C’s influence on A could differ. Our edge attention mechanism aims to capture such differences.

In conclusion, we believe this study provides both practical improvements and a new perspective for theoretical research in machine learning.

[1] Velickovic, P. et al. Graph attention networks. In *ICLR*, 2018. URL https://openreview.net/forum?id=rJXMpikCZ.

[2] Zhang, J. et al.
Gaan: Gated attention networks for learning on large and spatiotemporal graphs. In *UAI*, pp. 339–349, 2018.

[3] Li, Y. et al. Gated graph sequence neural networks. In *ICLR*, 2016. URL http://arxiv.org/abs/1511.05493.

[4] Xu, Y. et al. Multistructure graph classification method with attention-based pooling. In *IEEE TCSS*, 10(2): 602-613, 2022.

[5] Wang, Z. et al. Second-Order Pooling for Graph Neural Networks. In *TPAMI*, vol. 45, no. 6, pp. 6870-6880, 2023.

[6] Lee, J. et al. Self-attention graph pooling. In *ICML*, volume 97 of *Proceedings of Machine Learning Research*, pp. 3734–3743, 2019.

[7] Diehl, F. et al. Towards graph pooling by edge contraction. In *ICML workshop*, 2019.

[8] Gao, Z. et al. Lookhops: light multi-order convolution and pooling for graph classification. *CoRR*, 2020. URL https://arxiv.org/abs/2012.15741.

[9] Yu, H. et al. Not all edges are peers: Accurate structure-aware graph pooling networks. In *NN*, 156: 58-66, 2022.

Q3: There are several references missing.

A3: Due to space limitations, please refer to our response Q1 to Reviewer PjzN.

---

Rebuttal Comment 1.1:

Comment: Thanks for the detailed answer to my review. However, I still believe this is an engineering contribution which is more suitable for, e.g., a Kaggle competition or an applied venue such as KDD rather than a conference like ICML. If the edge attention mechanism is the main focus here, the authors should have introduced only that new component and spent time demonstrating in detail its advantages compared to existing techniques, by providing some theoretical results and controlled experiments to study the different properties of such an operation. But that's not the case. The paper, instead, proposes a methodology with too many other components, such as:
- the straight-through estimator,
- the attention on the nodes,
- the message passing based on heat kernels.
All these components distract and take space from what should be the main focus of the paper (i.e., the edge-attention mechanism for graph pooling) and make it difficult to isolate the single contributions of each individual part. The takeaway message I get from this paper is that edge-attention alone in graph pooling **does not work** and many other tricks and engineering are needed to make the method work. On top of that, edge-oriented pooling methods already exist and are used by the community (https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.pool.EdgePooling.html), which questions the originality of the proposal. --- Reply to Comment 1.1.1: Comment: Thank you for your thorough review. We note that you have deleted the concern about GAT, since there was some misunderstanding between GAT and our method. However, it seems that there may still be some other misunderstandings about our work, and we would like to explain them again. First, our paper is not solely focused on the edge attention mechanism; it is just one component of our approach. One of our goals is to address the oversimplified aggregation of both nodes within clusters and edges between clusters in cluster-based hierarchical pooling methods. To this end, we propose the Edge-Node Attention Mechanism. Second, to better leverage the cluster representations and connectivity strengths between clusters obtained through attention-based aggregation, we also introduce the MD-MPNN architecture. These components, when combined, constitute our proposed method. Furthermore, in terms of EdgePool, as we have discussed in our previous response, this approach is theoretically different from ours. Specifically, EdgePool performs pooling by progressively **removing edges** based on computed edge contraction scores, whereas our edge attention mechanism adaptively **aggregates edge** information for pooling.
Since **removing edges** and **aggregating edges** are entirely different operations, we believe this distinction underscores the originality of our method. We sincerely hope the reviewer will recognize the theoretical differences outlined above.
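To make the contrast concrete, here is a minimal numerical sketch of the aggregation view. All names and the specific weighting form are illustrative assumptions, not ENAHPool's exact equations: the coarsened adjacency sums attention-weighted edge strengths between every pair of clusters, so no edge is discarded; each one contributes in proportion to its attention weight.

```python
import numpy as np

def aggregate_edges(A, alpha, S):
    """Coarsened adjacency: sum attention-weighted edge strengths between
    every pair of clusters.  A: (N, N) adjacency, alpha: (N, N) edge-attention
    weights, S: (N, K) one-hot hard cluster assignment.  Illustrative only."""
    return S.T @ (alpha * A) @ S

A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
alpha = np.full((4, 4), 0.5)            # uniform attention, for illustration
S = np.array([[1, 0],
              [1, 0],
              [0, 1],
              [0, 1]], dtype=float)     # two clusters: {0, 1} and {2, 3}

A_c = aggregate_edges(A, alpha, S)
print(A_c)
```

Under this view, a rare but highly weighted cross-cluster edge still shapes the coarsened graph, whereas contraction-style pooling would simply remove it.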
Summary: This paper introduces a novel graph pooling operation (ENAHPool) by combining hard node assignment and the attention mechanism in an interesting way. Different from other pooling operations, the new ENAHPool can compress the nodes and edges into hierarchical graphs associated with the node and edge attention rather than simply summing them up. The experiments on some standard datasets have demonstrated that the new ENAHPool can significantly improve the graph classification performance for the GNN models. Claims And Evidence: I think the claims are supported by clear and convincing evidence. Methods And Evaluation Criteria: The proposed ENAHPool makes sense. Theoretical Claims: I have checked. Experimental Designs Or Analyses: I have checked the experimental setups. Supplementary Material: This paper doesn’t provide any supplementary material. Relation To Broader Scientific Literature: The new ENAHPool can capture both node and edge attentions to re-weight the importance of the nodes belonging to the same cluster and the edges between two clusters. So, the new pooling operation can adaptively discriminate the most significant node and edge information. This can provide useful structural feature information for the GNNs. Essential References Not Discussed: The references should be completed, and provide good support for the new pooling operation. Other Strengths And Weaknesses: This paper has some strengths, such as 1. This paper is well organized and easy to follow. 2. Different from the current graph pooling operations, the newly proposed ENAHPool introduces a new edge-based attention to discriminate the importance of the edges between two clusters. 3. To further improve the performance of the proposed ENAHPool, this paper also proposes an MPNN module to directly propagate the node information based on different distances, so that the new pooling operation can avoid the over-squashing problem.
However, I still have some concerns for this paper; please refer to the following rebuttal questions. Other Comments Or Suggestions: See my rebuttal questions below. Questions For Authors: 1. Can you provide some detailed time complexity analysis for the proposed ENAHPool operation? 2. Excluding the GIN and GCN architectures, I wonder whether the proposed ENAHPool operation can be used for other GNN architectures, and improve the performance? 3. The authors stated that the MD-MPNN module can address the over-squashing problem; how about performing the module multiple times? Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: Q1: References for the new pooling operations. A1: Thank you for your valuable suggestion. We have further investigated more recently proposed pooling operations and will refine the related work section in the final version. Recent research on graph pooling has primarily focused on cluster-based hierarchical methods. MathNet [1] leverages Haar-like wavelet multiresolution analysis to construct hierarchical graph representations. Tsitsulin et al. [2] introduced Deep Modularity Networks (DMoN), an unsupervised approach that optimizes the clustering process using the modularity measure. Bacciu et al. [3] proposed a pooling method based on Maximal k-Independent Sets (k-MIS), ensuring that selected nodes maintain a minimum pairwise distance of *k*. Zhou et al. [4] proposed Cross-View Graph Pooling, which integrates pooled graph information from both node and edge perspectives. WGDPool [5] utilizes a differentiable *k*-means clustering mechanism with Softmin assignments based on node-centroid distances. [1] Zheng, X. et al. Mathnet: Haar-like wavelet multiresolution analysis for graph representation learning. *Knowl. Based Syst.*, 273: 110609, 2023. [2] Tsitsulin, A. et al. Graph clustering with graph neural networks. *J. Mach. Learn. Res.*, 24:127:1–127:21, 2023. [3] Bacciu, D. et al. Generalizing downsampling from regular data to graphs. In *AAAI*, pp. 6718–6727, 2023. [4] Zhou, X. et al. Edge but not least: Cross-view graph pooling. In *ECML PKDD*, volume 13714 of *Lecture Notes in Computer Science*, pp. 344–359, 2022. [5] Xiao, Z. et al. Wgdpool: A broad scope extraction for weighted graph data. *Expert Syst. Appl.*, 249:123678, 2024. Q2: Detailed time complexity analysis. A2: In our proposed method, each pooling layer primarily involves the following steps. First, the MD-MPNN model performs convolution operations to obtain node embeddings.
The computational complexity of MD-MPNN is $O(N^3)$ due to the need for matrix multiplications on the adjacency matrix to filter node information at different distances, helping to mitigate the over-squashing issue. Second, the computational complexity of both the node and edge attention mechanisms is $O(N)$, as they are computed based on node features. Finally, the pooling operation has a time complexity of $O(KN^2)$, where $K$ represents the number of nodes in the next pooling layer, generally set as $rN$, with $r$ denoting the pooling ratio. Overall, the proposed method maintains an overall computational complexity of $O(N^3)$, which is comparable to other classic cluster-based hierarchical pooling methods such as StructPool [1] and MinCutPool [2]. [1] Yuan, H. et al. Structpool: Structured graph pooling via conditional random fields. In *ICLR*, 2020. URL https://openreview.net/forum?id=BJxg_hVtwH. [2] Bianchi, F. M. et al. Spectral clustering with graph neural networks for graph pooling. In *ICML*, volume 119 of *Proceedings of Machine Learning Research*, pp. 874–883, 2020. Q3: Improve the performance of other GNNs associated with ENAHPool? A3: Of course! Our proposed ENAHPool operation is compatible with any GNN architecture, as it focuses on aggregating nodes and edges during pooling, without constraining the choice of GNN. However, it is preferable to use a GNN that aggregates neighborhood information based on the adjacency matrix. Otherwise, it may fail to leverage the edge connectivity strengths between clusters learned by the edge attention mechanism (as with GAT), potentially causing our method to degrade into a purely node attention-based hierarchical pooling operation. Q4: Address the over-squashing problem when performing the module multiple times? A4: To address the over-squashing issue, we analyzed the impact of the number of MD-MPNN layers on model performance (as shown in Figure 7 of the paper).
The results demonstrate that, unlike traditional MPNNs, increasing the number of MD-MPNN layers gradually improves model performance, mainly due to its novel message-passing mechanism. However, once the number of layers exceeds a certain threshold, performance begins to decline slightly. This is likely because information from excessively distant nodes introduces noise, which diminishes the effectiveness of feature representation. --- Rebuttal Comment 1.1: Comment: The authors addressed my concerns, and the additional experiments also make the statements more convincing, such as 1) the edge attentions, 2) the combination of edge and node attentions, and 3) the hard node assignment. All these indicate that the newly proposed graph pooling is novel and effective. Overall, I think the new ENAHPool is an important contribution to the Graph ML community, and I am willing to raise my score by one point. I trust this paper may enlighten some new works in the future.
Summary: This paper develops a novel graph pooling method, namely the ENAHPool, for graph classification associated with GNNs. Different from the previous graph pooling methods, the ENAHPool simultaneously integrates both node and edge attention for hierarchical structure learning. In addition, it also designs an associated MD-MPNN model to further mitigate the over-squashing problem through different distance relationships. Experimental evaluations demonstrate the effectiveness. Claims And Evidence: Yes, the claims of this paper are supported by clear and convincing evidence. Methods And Evaluation Criteria: Yes, the evaluation makes sense. Specifically, the proposed graph pooling method is very fundamental and important in the field of graph neural networks. Theoretical Claims: I have checked. And all proofs of theoretical claims are correct. Experimental Designs Or Analyses: Yes, the experimental setting is complete and the results/analyses are convincing. Supplementary Material: There is no supplementary material for this paper. Relation To Broader Scientific Literature: A novel framework to simultaneously capture both the node and edge attention for hierarchical structure learning, i.e., for the resulting coarsened graph obtained by compressing the nodes belonging to the same cluster. Essential References Not Discussed: It seems that no necessary reference is missed. The current references cited in this work provide sufficient content for the proposed ENAHPool method. Other Strengths And Weaknesses: Strengths: S1. A novel hierarchical pooling method associated with the node and edge attention, for learning hierarchical structures. S2. Efficient computational procedures for the node and edge attentions. S3. Reduces the over-squashing problem through the MD-MPNN associated with the ENAHPool. Weakness: W1. The abstract is a little long; the authors should briefly summarize the contribution and make it shorter. W2.
Although the experiments demonstrate the effectiveness, for the Ablation Study, the authors only evaluate the classification performance on two of the datasets. More datasets for this study are preferred, and the authors can put them in the supplementary material if the space is not enough. Other Comments Or Suggestions: See the weakness. Questions For Authors: This paper mentioned several times that the classical methods are not efficient; however, how about the computational efficiency of the proposed method? I didn’t see any detailed discussion about the issue. Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: Q1: The abstract is a little long. A1: We will update the abstract in the final version to make it more concise and easier to understand. Q2: More datasets are preferred for the Ablation Study. A2: Thank you for your constructive suggestion. We have conducted the ablation experiments on all datasets. However, due to time constraints, we only report the average results from 5 runs of 10-fold cross-validation here. |Assignment strategy|D&D|PROTEINS|NCI1|FRANK.|IMDB-B|IMDB-M|COLLAB|REDDIT-B| |-|-|-|-|-|-|-|-|-| |Soft assignment|77.33 $\pm$ 0.43|75.49 $\pm$ 0.03|77.08 $\pm$ 0.38|63.80 $\pm$ 0.04|72.73 $\pm$ 0.49|51.13 $\pm$ 0.35|76.67 $\pm$ 0.14|82.12 $\pm$ 1.06| |Hard assignment|**78.34 $\pm$ 0.62**|**76.54 $\pm$ 0.13**|**78.22 $\pm$ 0.91**|**65.24 $\pm$ 0.18**|**73.92 $\pm$ 0.05**|**51.18 $\pm$ 0.07**|**78.06 $\pm$ 0.25**|**83.31 $\pm$ 1.01**| |Node Att.|Edge Att.|D&D|PROTEINS|NCI1|FRANK.|IMDB-B|IMDB-M|COLLAB|REDDIT-B| |-|-|-|-|-|-|-|-|-|-| |$\times$|$\times$|79.20 $\pm$ 0.48|76.63 $\pm$ 0.21|78.68 $\pm$ 0.65|65.32 $\pm$ 0.62|74.21 $\pm$ 0.64|51.27 $\pm$ 0.09|78.29 $\pm$ 0.49|88.15 $\pm$ 0.15| |$\checkmark$|$\times$|79.50 $\pm$ 0.52|76.73 $\pm$ 0.33|78.85 $\pm$ 0.42|66.09 $\pm$ 0.58|74.31 $\pm$ 1.80|51.60 $\pm$ 1.00|79.38 $\pm$ 0.13|88.21 $\pm$ 0.51| |$\times$|$\checkmark$|79.88 $\pm$ 0.16|76.79 $\pm$ 0.09|79.22 $\pm$ 0.51|66.03 $\pm$ 0.16|74.29 $\pm$ 0.08|51.43 $\pm$ 0.09|78.80 $\pm$ 0.17|88.38 $\pm$ 0.41| |$\checkmark$|$\checkmark$|**79.91 $\pm$ 0.25**|**77.12 $\pm$ 0.15**|**79.34 $\pm$ 0.31**|**67.38 $\pm$ 0.13**|**74.54 $\pm$ 0.42**|**51.74 $\pm$ 0.16**|**81.09 $\pm$ 0.51**|**88.56 $\pm$ 0.25**| Q3: Detailed discussion of the computational efficiency. A3: In our proposed method, each pooling layer primarily involves the following steps. First, the MD-MPNN model performs convolution operations to obtain node embeddings.
The computational complexity of MD-MPNN is $O(N^3)$ due to the need for matrix multiplications on the adjacency matrix to filter node information at different distances, helping to mitigate the over-squashing issue. Second, the computational complexity of both the node and edge attention mechanisms is $O(N)$, as they are computed based on node features. Finally, the pooling operation has a time complexity of $O(KN^2)$, where $K$ represents the number of nodes in the next pooling layer, generally set as $rN$, with $r$ denoting the pooling ratio. Overall, the proposed method maintains an overall computational complexity of $O(N^3)$, which is comparable to other classic cluster-based hierarchical pooling methods such as StructPool [1] and MinCutPool [2]. It is worth noting that our paper repeatedly emphasizes the inefficiency of using attention mechanisms for node aggregation in existing cluster-based hierarchical pooling methods. For example, ABDPool [3] computes scaled dot-product self-attention for nodes within each cluster, while C2N-ABDP [4] extracts cluster representations via singular value decomposition and then computes attention between each cluster and its nodes for weighted aggregation. These methods require iterating over every cluster, leading to high computational complexity. In contrast, the attention mechanisms we propose are significantly more computationally efficient. [1] Yuan, H. et al. Structpool: Structured graph pooling via conditional random fields. In *ICLR*, 2020. URL https://openreview.net/forum?id=BJxg_hVtwH. [2] Bianchi, F. M. et al. Spectral clustering with graph neural networks for graph pooling. In *ICML*, volume 119 of *Proceedings of Machine Learning Research*, pp. 874–883, 2020. [3] Liu, Y. et al. Abdpool: Attention-based differentiable pooling. In *ICPR*, pp. 3021–3026, 2022. [4] Ye, R. et al. C2N-ABDP: cluster-to-node attention-based differentiable pooling. In *GbRPR*, volume 14121 of *Lecture Notes in Computer Science*, pp. 70–80, 2023.
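As a hedged sketch of the efficiency argument above (the function name and the exponential scoring form are our own assumptions for illustration, not the paper's equations), node attention over hard clusters can be computed with a few vectorized matrix products, with no explicit loop over clusters:

```python
import numpy as np

def attention_pool(X, S, w):
    """X: (N, d) node features, S: (N, K) one-hot hard assignment,
    w: (d,) attention scoring vector (a stand-in for a learned scorer)."""
    score = np.exp(X @ w)              # un-normalised node scores, one pass over nodes
    denom = S.T @ score                # per-cluster normaliser, shape (K,)
    attn = score / (S @ denom)         # softmax within each hard cluster
    return S.T @ (attn[:, None] * X)   # attention-weighted cluster features, (K, d)

X = np.array([[1.0, 0.0],
              [3.0, 0.0],
              [0.0, 2.0]])
S = np.array([[1, 0],
              [1, 0],
              [0, 1]], dtype=float)    # nodes 0, 1 -> cluster 0; node 2 -> cluster 1
w = np.array([1.0, 0.0])

P = attention_pool(X, S, w)
print(P)
```

Because the scores and normalisers are shared matrix operations, the per-node attention cost stays linear in the number of nodes, in contrast to methods that iterate over every cluster separately.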
Summary: The paper proposes a cluster-based pooling method for graph neural networks (GNNs). The main feature of the proposal is that it performs a hard assignment of the input nodes, i.e., each node belongs to one cluster. Also, attention mechanisms are employed to build the node features and adjacency matrix of the coarsened graph. Additionally, the authors propose a new GNN to mitigate the over-squashing problem of GNNs. The effectiveness of the combination between the proposed pooling and GNN is demonstrated with a set of experiments. ### Update after rebuttal I would like to keep my score as it is since the authors' response does not completely address my main concerns. I suggest the authors add more details about the experiments to make sure that the comparison is fair. For example, the proposed architecture employs more parameters (given the same hidden size and number of layers) than the baseline methods. This was also my concern about the hard vs. soft comparison: it is not clear if only the argmax of the assignment matrix $S$ has been removed, leaving everything else the same. Finally, I suggest always including the anonymized code repository in the paper. Claims And Evidence: - The paper claims that soft assignment worsens the performance of cluster-based pooling methods. Nevertheless, I did not find convincing evidence in the paper's results since Table 4 shows the results on only 2 datasets. I believe that this is a key point that justifies the proposal; thus, the difference between hard and soft assignments should have been computed on all the datasets. - The paper claims that coarsened graphs are usually built without considering the importance of edges. To this end, it introduces an attention mechanism to build both the node features and the structure of the coarsened graph. Nevertheless, the ablation study in Table 5 does not show that employing such a mechanism is always beneficial.
For example, by looking at the statistical significance of the differences among the results, it seems that node attention is unnecessary. In general, I did not understand the idea behind the proposal, and the results were not convincing enough to justify the proposed methods on their own. Methods And Evaluation Criteria: The evaluation criteria are reasonable, but the proposed methodology is not justified enough. Theoretical Claims: There are no theoretical claims. Experimental Designs Or Analyses: The experimental design is not clear since: - There is no mention of how the baseline methods have been trained, and how their hyper-parameters were selected; - It is not clear how the data have been split into training, validation, and test. There is only a mention of 10-fold cross-validation, but it is not clear if it is for model assessment or model selection; - There is a mention of auxiliary classifiers during training, but it is not clear how they are used, and if they have been applied also to the baseline methods; - The results of the baseline methods are different from the ones obtained in other papers. For example, MinCut (Bianchi et al., 2020) reaches an accuracy of 80.8 +/- 2.3 on the D&D dataset, which is higher than the one reported in the paper. Supplementary Material: There is no supplementary material, and the codebase is not public. Thus, it was not possible to read the code. Relation To Broader Scientific Literature: I did not carefully check if the key contributions have been discussed adequately with respect to the existing literature. Essential References Not Discussed: I do not have other papers to suggest. Other Strengths And Weaknesses: The paper is clear and well-written. To the best of my knowledge, the idea is novel. However, the methodology proposed has no substantial basis. Other Comments Or Suggestions: Nothing. Questions For Authors: Nothing. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Q1: Convincing evidence is needed to claim that soft assignment worsens the performance. A1: Thanks for the suggestion. We conducted comparative experiments on all datasets to verify the positive impact of the hard assignment operation on model performance. However, due to time constraints, we only report the average results from 5 runs of 10-fold cross-validation here. |Assignment strategy|D&D|PROTEINS|NCI1|FRANK.|IMDB-B|IMDB-M|COLLAB|REDDIT-B| |-|-|-|-|-|-|-|-|-| |Soft assignment|77.33 $\pm$ 0.43|75.49 $\pm$ 0.03|77.08 $\pm$ 0.38|63.80 $\pm$ 0.04|72.73 $\pm$ 0.49|51.13 $\pm$ 0.35|76.67 $\pm$ 0.14|82.12 $\pm$ 1.06| |Hard assignment|**78.34 $\pm$ 0.62**|**76.54 $\pm$ 0.13**|**78.22 $\pm$ 0.91**|**65.24 $\pm$ 0.18**|**73.92 $\pm$ 0.05**|**51.18 $\pm$ 0.07**|**78.06 $\pm$ 0.25**|**83.31 $\pm$ 1.01**| Q2: Convincing experiments and discussions are needed to verify the significance of different attention mechanisms. A2: Theoretically, the aim of using edge-node attentions is to obtain a more meaningful coarsened graph, capturing more accurate hierarchical representations. These two attention mechanisms are indispensable. First, without the node attention, hierarchical pooling would flatten the aggregation of nodes within a cluster, disregarding their varying importance. Second, the edge connectivity strengths between clusters naturally serve as attention during graph convolution. To better leverage this property, we propose an edge attention mechanism, which enhances the aggregation weights of important edges and prevents the neglect of rare but highly valuable connections between each pair of clusters. The misunderstanding may have arisen because we only presented the ablation results for two datasets in the paper. To clarify this, we conducted comparative experiments on all datasets. However, due to time constraints, we only report the average results from 5 runs of 10-fold cross-validation here.
The results show that both the node and edge attention mechanisms have a positive impact on model performance, though their degree of influence varies across datasets. Moreover, the best results are consistently achieved when both node and edge attention mechanisms are used together, which demonstrates the effectiveness of our approach. |Node Att.|Edge Att.|D&D|PROTEINS|NCI1|FRANK.|IMDB-B|IMDB-M|COLLAB|REDDIT-B| |-|-|-|-|-|-|-|-|-|-| |$\times$|$\times$|79.20 $\pm$ 0.48|76.63 $\pm$ 0.21|78.68 $\pm$ 0.65|65.32 $\pm$ 0.62|74.21 $\pm$ 0.64|51.27 $\pm$ 0.09|78.29 $\pm$ 0.49|88.15 $\pm$ 0.15| |$\checkmark$|$\times$|79.50 $\pm$ 0.52|76.73 $\pm$ 0.33|78.85 $\pm$ 0.42|66.09 $\pm$ 0.58|74.31 $\pm$ 1.80|51.60 $\pm$ 1.00|79.38 $\pm$ 0.13|88.21 $\pm$ 0.51| |$\times$|$\checkmark$|79.88 $\pm$ 0.16|76.79 $\pm$ 0.09|79.22 $\pm$ 0.51|66.03 $\pm$ 0.16|74.29 $\pm$ 0.08|51.43 $\pm$ 0.09|78.80 $\pm$ 0.17|88.38 $\pm$ 0.41| |$\checkmark$|$\checkmark$|**79.91 $\pm$ 0.25**|**77.12 $\pm$ 0.15**|**79.34 $\pm$ 0.31**|**67.38 $\pm$ 0.13**|**74.54 $\pm$ 0.42**|**51.74 $\pm$ 0.16**|**81.09 $\pm$ 0.51**|**88.56 $\pm$ 0.25**| Q3: The evaluation criteria are reasonable, but the proposed methodology is not justified enough. Due to space limitations, please refer to our response Q2 to Reviewer FuAf. Q4: The experimental design is not clear. To minimize the impact of data partitioning on model evaluation, we followed the testing methodology from [1]. Specifically, we performed 10 random splits of each dataset and conducted 10-fold cross-validation on each split. The model was evaluated on every validation set, resulting in a total of 100 evaluation results for each method on each dataset. The experimental setup of MinCutPool differs from ours. Although it also performs 10 random splits of the dataset, it does not conduct cross-validation. This operation may make the results more sensitive to the partition, which could explain the differences in reported accuracy.
In addition, for the baselines and benchmarks that were tested in [1], we directly cite the reported results, as we follow the same experimental setup. For those not covered in [1], we adhere to the hyperparameter settings specified in the paper for evaluation. Additionally, regarding the auxiliary classifiers, we sincerely apologize for this oversight. In the initial version of our experiments, we attempted to include this module. However, after further discussion, we decided to remove it to ensure fair comparisons with the baselines. Unfortunately, due to insufficient proofreading of the final manuscript, this change was not properly reflected in the text. We will correct this in the final version and ensure that all experimental descriptions align with our code. If necessary, we can also submit the code via an anonymous link for verification. [1] Wu, J. et al. Structural entropy guided graph hierarchical pooling. In *ICML*, volume 162 of *Proceedings of Machine Learning Research*, pp. 24017–24030, 2022. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. However, I still have some concerns: 1. It is not clear what architecture was used for the soft assignments. 2. The table reported here to assess the effectiveness of node/edge attention does not show a clear advantage. We should always consider the standard deviation when we compare results. 3. The paper is almost empirical and there are some tricks in the experiment that can hide the effectiveness of the proposal (e.g., the ENAHPool employs two networks rather than one, and the node embeddings of each layer are concatenated). --- Reply to Comment 1.1.1: Comment: Thanks for the feedback. In response to the comments, we would like to clarify the following points. Q1: It is not clear what architecture was used for the soft assignments.
A1: The architecture of the soft assignment is formulated as $S^{(l)}=\text{GNN}(A^{(l)}, X^{(l)})$, which has been widely employed in many hierarchical graph pooling operations. Q2: The table reported here to assess the effectiveness of node/edge attention does not show a clear advantage. We should always consider the standard deviation when we compare results. A2: Even when comparing standard deviations, the method incorporating both node and edge attention still performs the best on many benchmarks, such as NCI1 and FRANK. Q3: The paper is almost empirical and there are some tricks in the experiment that can hide the effectiveness of the proposal (e.g., the ENAHPool employs two networks rather than one, and the node embeddings of each layer are concatenated). A3: It is important to clarify that we **did not use any tricks** in our experiments. First, we did not use two separate networks; instead, we simultaneously compute both the node and edge attention at each pooling step. As a result, **each pooling layer** produces only a **single coarsened graph**, where the cluster representations are obtained as the weighted sum of the original node features, and the connectivity strengths between clusters are computed as the weighted sum of the original edge connectivity strengths. Second, we are not sure what you mean by "the node embeddings of each layer are concatenated". If you are referring to the message passing layers in MD-MPNN, then yes, we do concatenate the node embeddings from different layers. This is because our goal is to capture neighborhood information at varying distances. In MD-MPNN, the node embedding at layer $i$ can only capture information from neighbors that are exactly $i$ hops away, so concatenating embeddings from different layers allows us to integrate multi-hop neighborhood information effectively. If necessary, we can submit the code via an anonymous link, showing that we did not use any so-called tricks in our experiments.
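The exactly-$i$-hops point can be illustrated with a small sketch. This is an assumed simplification of MD-MPNN with illustrative names: it uses unweighted shortest-path masks and mean aggregation in place of the paper's actual message functions.

```python
import numpy as np

def exact_hop_masks(A, L):
    """Boolean masks where mask_i[u, v] == 1 iff shortest-path dist(u, v) == i."""
    N = A.shape[0]
    reach = np.eye(N, dtype=bool)      # nodes already reachable within < i hops
    power = np.eye(N)
    masks = []
    for _ in range(L):
        power = power @ A              # counts walks of length i
        mask = (power > 0) & ~reach    # first reached at hop i => exact distance i
        masks.append(mask.astype(float))
        reach |= mask
    return masks

def md_mpnn_layer(A, X, L):
    """Concatenate mean-aggregated neighbour features for each exact hop distance."""
    outs = []
    for M in exact_hop_masks(A, L):
        deg = M.sum(axis=1, keepdims=True)
        agg = np.divide(M @ X, deg, out=np.zeros((M.shape[0], X.shape[1])), where=deg > 0)
        outs.append(agg)
    return np.concatenate(outs, axis=1)

# path graph 0-1-2-3, one-hot node features
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.eye(4)
H = md_mpnn_layer(A, X, L=2)
print(H.shape)  # (4, 8): 1-hop and 2-hop views concatenated per node
```

Because each mask isolates a single hop distance, the concatenated output keeps the 1-hop and 2-hop neighbourhood information in separate feature slots instead of mixing them.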
GCAL: Adapting Graph Models to Evolving Domain Shifts
Accept (poster)
Summary: This paper introduces GCAL, a novel framework designed to address the challenge of continual domain adaptation in graph models, particularly in scenarios involving evolving, out-of-distribution graphs. GCAL employs a bilevel optimization strategy: the "adapt" phase fine-tunes the model on new graph domains while mitigating catastrophic forgetting through memory replay, and the "generate memory" phase condenses original graphs into smaller, informative memory graphs using a variational memory generator guided by information bottleneck theory. Extensive experiments on regional and temporal graph datasets demonstrate that GCAL outperforms state-of-the-art methods in adaptability and knowledge retention. Claims And Evidence: The claims made in the submission are generally well-supported by the theoretical grounding and experimental results. Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria in the paper are well-suited for the problem. Theoretical Claims: The proof of Theorem 3.1 is generally sound and applies information bottleneck theory and variational inference techniques to derive the lower bound. Experimental Designs Or Analyses: The experimental designs and analyses in the paper are generally sound and valid. However, clarification of why the AF results of some baselines are N/A in Table 2 should be provided. Supplementary Material: There is no supplementary material. Relation To Broader Scientific Literature: This paper is related to graph machine learning, domain adaptation, and continual learning. The paper’s approach makes contributions by addressing the gaps in handling evolving graph domains and mitigating catastrophic forgetting. Essential References Not Discussed: None Other Strengths And Weaknesses: S1. The paper is well-written, with clear problem formulation and illustrations. The motivation is well-justified and compelling. S2.
The paper is grounded in a solid theoretical framework, leveraging information bottleneck theory to derive a lower bound for memory graph generation. S3: The experiments are extensive, showing the advanced performance of the proposed method compared to baselines. W1. It appears that the label is not given in the adaptation process, however, the label Y is explicitly referenced in the theoretical analysis. More explanation about how the labels are eliminated in this process should be added. W2. It is not clear how the variational memory graph generator is superior to the traditional graph generation method. W3. A deeper analysis of how the weights of losses $L_{Reg}$ and $L_{Gen}$ influence the continual adaptation process would be beneficial. Other Comments Or Suggestions: None. Questions For Authors: Q1. Why are the AF results of some baseline N/A in Table 2? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: > **W1. It appears that the label is not given in the adaptation process, however, the label Y is explicitly referenced in the theoretical analysis. More explanation about how the labels are eliminated in this process should be added.** We appreciate the reviewer’s observation. Indeed, in our setting, true labels are not available during the adaptation process, as we focus on unsupervised continual adaptation to out-of-distribution graph domains. Thus, the label variable $\hat{Y}_t$ that appears in the theoretical formulation in Equations 3–5 is addressed via soft pseudo-labels derived from the model's predictions. Specifically, in Section 3.2.1 (Eq. 3–5), we derive a variational lower bound based on the graph information bottleneck: $L(\Phi) = \max_{\Phi} \left[I(\widehat{G}_t;\widehat{Y}_t) - \beta I(\widehat{G}_t;G_t,Z_t) + \beta I(\widehat{G}_t; Z_t|G_t)\right].$ Here, in the first term, $\hat{Y}_t$ refers to the training signal associated with the memory graph $\widehat{G}_t$ and is used to theoretically quantify the task-relevant information retained during memory graph generation. Since true labels are not accessible during adaptation, we instantiate $\hat{Y}_t$ with soft pseudo-labels generated via self-supervised information maximization in Equation 1. We have clarified this in Section 3.2.3, where the condensation loss is minimized using pseudo-label-based adaptation objectives. This practice of using pseudo-labels has strong **precedents** in the unsupervised domain adaptation and test-time training literature, such as Tent [1] and CoTTA [2]. We follow the same line of reasoning, adapting it to the graph domain. [1] Wang, Dequan, et al. "Tent: Fully Test-Time Adaptation by Entropy Minimization." ICLR 2021. [2] Wang, Qin, et al. "Continual test-time domain adaptation." CVPR 2022. > **W2. It is not clear how the variational memory graph generator is superior to the traditional graph generation method.** Thank you for raising this important point.
We would like to clarify that the variational memory graph generator in GCAL is specifically designed to address the **unique requirements of continual domain adaptation in graphs**, which traditional graph generation methods do not fully support. Conventional generators typically focus on reconstructing entire input graphs or producing realistic samples based on learned distributions. However, these methods are not optimized for the goals of continual learning—particularly the need to retain and condense task-relevant knowledge for replay across evolving domains. In contrast, our variational memory graph generator is grounded in the information bottleneck principle, which explicitly balances compression and relevance. It learns a variational latent representation of the input graph and selectively generates a small, condensed graph that captures only the most informative features and structural signals necessary for downstream prediction tasks. It can also generate diversified representations of memory graphs via variational reparameterization for generalized training. The empirical results in Table 2 and the ablation study in Table 3 demonstrate that memory graphs generated by our variational method enable superior adaptation and retention performance compared to baseline approaches, even with a significantly reduced memory graph. > **W3. A deeper analysis of how the weights of losses $L_{Reg}$ and $L_{Gen}$ influence the continual adaptation process would be beneficial.** Response: We conduct a hyperparameter study on the loss weights $\lambda_1$ and $\lambda_2$ (initially set as 1,1), which control the two auxiliary losses---$L_{Reg}$ and $L_{Gen}$. 
The results are summarized in the table below: | L_Reg | L_Gen | Twitch | FB100 | Elliptic | OGB-Arxiv | | --- | --- | --- | --- | --- | --- | | 1 | 2 | 55.62±0.23 | 52.33±0.87 | 56.21±0.41 | 45.18±0.24 | | 1 | 3 | 55.56±0.15 | 52.48±0.11 | 56.17±0.38 | 45.12±0.23 | | 2 | 1 | 55.59±0.19 | 52.43±0.64 | 56.23±0.30 | 45.14±0.22 | | 3 | 1 | 55.55±0.18 | 52.69±0.38 | 56.27±0.23 | 45.15±0.26 | Across all datasets and configurations, the performance remains relatively stable, with only minor variations (generally within ±0.1%–0.3%). This indicates that GCAL is robust to moderate fluctuations in the weighting of these auxiliary objectives. > **Q1. Why are the AF results of some baseline N/A in Table 2?** Thank you for your inquiry. These results are marked as N/A because certain methods like EERM and GTrans operate by training a new set of parameters for each graph independently, without updating a model across multiple domains. Consequently, there is no shared memory or parameter set across tasks, thus eliminating the concept of "forgetting" as typically measured in continual learning scenarios. Thus, there is no way to measure their average forgetting.
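As an aside for readers, the soft pseudo-labelling described in this rebuttal (using the model's own softmax predictions in place of the unavailable labels $\hat{Y}_t$, with prediction entropy as the confidence signal that information-maximisation objectives such as Tent minimise) can be sketched in plain Python; the helper names here are illustrative and not from the paper's code.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of class logits.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def entropy(probs):
    # Shannon entropy (nats); low entropy = a confident pseudo-label.
    return -sum(p * math.log(p) for p in probs if p > 0)

# Soft pseudo-labels are just the predicted class distributions.
confident = softmax([8.0, 0.5, 0.5])   # peaked -> low entropy
uncertain = softmax([1.0, 1.0, 1.0])   # uniform -> maximal entropy
```

Entropy-minimisation objectives push predictions toward the `confident` shape, which is what makes the softmax outputs usable as a stand-in training signal when no labels are available.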
Summary: This paper introduces **Graph Continual Adaptive Learning (GCAL)**, a novel framework for continual domain adaptation in graph models, specifically addressing challenges in adapting to multiple out-of-distribution (OOD) graph shifts. The method employs a bilevel optimization strategy with two phases: (1) **Adaptation**, using information maximization for self-supervised adaptation while mitigating catastrophic forgetting via memory replay, and (2) **Memory Generation**, utilizing a variational memory graph generation module based on an information bottleneck lower bound. The paper demonstrates through extensive experiments that GCAL outperforms existing state-of-the-art methods in continual graph adaptation.

## update after rebuttal

Thanks to the authors for their rebuttal; I keep my score unchanged.

Claims And Evidence: Overall, the submission provides substantial empirical and theoretical support for its claims. However, there are a few areas where the claims could be better substantiated or require additional clarification. Some weakly supported claims: 1. **GCAL is efficient for continual adaptation in large-scale graphs.** - The paper does **not** provide computational complexity analysis or runtime benchmarks comparing GCAL to existing approaches. - Since bilevel optimization and variational memory graph generation introduce **additional computational overhead**, the authors should provide evidence on **training/inference time**, particularly for large-scale graphs. - Suggested improvement: Include runtime comparisons against CoTTA, EERM, or GTrans to demonstrate computational feasibility. 2. **Variational memory graph generation leads to significantly better knowledge retention.** - While the **ablation study** confirms that removing memory generation reduces performance, it is unclear **how much variational memory generation improves over simpler alternatives** (e.g., naive replay of stored subgraphs). 
- Suggested improvement: Compare GCAL’s memory generation against a **simpler heuristic-based memory selection** to isolate the exact benefits of the variational approach. 3. **GCAL can generalize well across different types of OOD shifts.** - The paper only evaluates **two types of shifts** (regional and temporal), which, while useful, do not fully represent all real-world graph distribution shifts (e.g., feature shifts, structural perturbations). - Suggested improvement: Add experiments on **synthetically perturbed graphs** to test robustness against **node feature corruption, edge rewiring, or adversarial attacks**. Most claims in the paper are well-supported with empirical results and theoretical justification. Addressing the above gaps would make the claims more robust and convincing. Methods And Evaluation Criteria: The Methods and Evaluation Criteria Largely Make Sense. Some Areas Need More Justification or Alternative Evaluations 1. **Lack of Large-Scale Graph Evaluation** - The datasets used (Twitch, Facebook-100, OGB-Arxiv, Elliptic) have **relatively moderate-scale graphs** (up to hundreds of thousands of edges). - **Real-world continual graph adaptation problems (e.g., social media networks, citation networks, e-commerce graphs) often involve millions of nodes and edges.** - **Suggestion:** Evaluate GCAL on **larger-scale dynamic graphs** such as: - **Reddit (social interactions, time-evolving)** - **MAG-Scholar (large citation network)** - **Amazon/Alibaba (e-commerce graphs, evolving product-user interactions)** 2. **Computational Efficiency Not Evaluated** - **Bilevel optimization and variational memory graph generation** add complexity. - The paper does **not** provide a **runtime analysis** or **memory usage comparison** against baselines. - **Suggestion:** Report **training time, inference time, and memory footprint** compared to simpler adaptation methods (e.g., CoTTA, GTrans). 3. 
**No Evaluation on Feature or Structural Distribution Shifts** - The datasets primarily evaluate **temporal and regional shifts**, but in real-world applications, **feature shifts** (e.g., node attribute changes) and **structural shifts** (e.g., edge rewiring, node insertion/deletion) are common. - **Suggestion:** Test GCAL on **synthetic or adversarial perturbations** to evaluate robustness under feature and structure shifts. - **Perturb node features (e.g., Gaussian noise, dropout).** - **Rewire graph structures (e.g., edge deletion/addition, graph sparsification).** Theoretical Claims: I did not check the correctness of their theoretical proofs and I assume they are all correct. Experimental Designs Or Analyses: I carefully read their experimental analysis and I think their analysis is reasonable. Supplementary Material: all good Relation To Broader Scientific Literature: Related prior work on graph prompt learning [1-4] explores prompting techniques for cross-task generalization in GNNs. These methods suggest that prompt-based approaches can enable few-shot adaptation to new graph distributions. While GCAL does not explicitly use graph prompting, its memory replay approach serves a similar function of storing distilled graph representations for future adaptation. It would be more solid if the authors included such a discussion in their related work section [2] and gave a summary of future work, e.g., exploring the combination of GCAL's memory generation with graph prompting techniques to improve adaptability [1,3,4]. - [1] All in One: Multi-task Prompting for Graph Neural Networks. KDD 2023. - [2] All in One and One for All: A Simple yet Effective Method towards Cross-domain Graph Pretraining. KDD 2024. - [3] Graph Prompt Learning: A Comprehensive Survey and Beyond. https://arxiv.org/abs/2410.01635 - [4] Does Graph Prompt Work? A Data Operation Perspective with Theoretical Analysis. 
https://arxiv.org/abs/2410.01635 Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > **Computational Efficiency** Thank you for your valuable feedback. Our approach leverages a variational-based generation strategy, which is inherently designed to be efficient and scalable. This strategy allows for effective memory graph generation without significantly increasing the computational complexity. Thus, as the **time complexity analysis** in the Response to Reviewer `izUY` shows, the computation cost of the memory graph generator and subsequent losses is significantly less than the propagation of the GNN backbone, indicating the efficiency of our method. Following your suggestion, we have conducted extensive experiments to compare the running times of GCAL with CoTTA and GTrans training across four datasets on four 4090 GPUs. The results of these experiments are summarized in the following table. | Time / Seconds | CoTTA | GTrans | GCAL | | --- | --- | --- | --- | | Twitch | 41.6537 | 34.2958 | 26.842674 | | FB100 | 156.3651 | OOM | 41.446378 | | Elliptic | 26.5828 | 32.1564 | 22.967516 | | OGB-Arxiv | 44.4698 | 38.3925 | 40.146278 | These results demonstrate that GCAL not only operates within a competitive time frame but also significantly outperforms the baseline methods on multiple datasets. This evidence supports GCAL's effectiveness in this task. Regarding the scale of the datasets, the choice of these four datasets is aligned with the **established benchmarks** in studies related to out-of-distribution graph generalization [1,2]. We would like to clarify that the numbers of nodes and edges, as indicated in Table 1, represent the range within each dataset, where each dataset comprises multiple graphs. The overall number of nodes and edges across these datasets is indeed **substantial**. For example, Elliptic comprises a total of $189,033$ nodes and $217,223$ edges, and Facebook-100 consists of $157,921$ nodes and $13,197,698$ edges. Thus, these datasets meet the large-scale criteria to some extent. 
We hope this response adequately addresses your concerns. [1] Wu Q, et al. Handling Distribution Shifts on Graphs: An Invariance Perspective, ICLR 2022 [2] Jin W, et al. Empowering Graph Representation Learning with Test-Time Graph Transformation, ICLR 2023 > **Comparison to heuristic-based memory selection** Our approach in the unsupervised and out-of-distribution setting **diverges** from traditional continual learning frameworks, which typically assume that labels are provided and focus on adapting models to incremental classes or tasks. Consequently, many memory selection methods that rely on label information in standard continual learning settings are not applicable to our context. Following your suggestion, we have incorporated a widely recognized heuristic-based memory selection method, K-Center [3], into our framework for comparative evaluation. This method selects K representative data points as centers without the need for labels. The comparative results are in the table below: | Method | Twitch | FB100 | Elliptic | OGB-Arxiv | |--------|--------|-------|----------|-----------| | K-Center | 54.74±0.25 | 51.88±0.28 | 54.37±0.26 | 43.18±0.38 | | GCAL | 55.65±0.09 | 52.72±0.36 | 56.57±0.14 | 45.22±0.17 | The results show that our variational memory graph generation method overall outperforms K-Center in our framework, demonstrating its effectiveness in learning meaningful graph memory. [3] Nguyen, Cuong V., et al. "Variational Continual Learning." ICLR 2018. > **Evaluation of Synthetic Feature or Structural Distribution Shifts** Thank you for your thoughtful feedback. The datasets used in our experiments inherently **involve both feature and structural shifts** across domains. For instance, the social networks from different universities in Facebook-100 exhibit variations in node attributes (e.g., user demographics) and structural patterns (e.g., friendship density). 
The citation networks in OGB-Arxiv evolve over time, with node features (e.g., paper topics) and citation structures changing as research trends progress. These **real-world shifts** align with the challenges GCAL aims to address, validating its ability to adapt to combined feature and structural distribution shifts. Synthetic perturbations (e.g., Gaussian noise, edge rewiring) are usually more valuable for testing the robustness of graph neural networks, e.g., against adversarial attacks. Our focus is on addressing practical, real-world OOD challenges within the continual adaptation framework using established benchmarks. Thank you again for your constructive feedback. Due to the time limitation, we will explore this direction in future research. > **Additional References** Thank you for your valuable feedback. As suggested, we will include a discussion of graph prompting techniques in the Related Works section, incorporating your highlighted references to provide a more comprehensive literature review.
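For readers unfamiliar with the K-Center heuristic used as a comparison baseline in this rebuttal: it greedily selects the point farthest from the already-chosen centers, requiring no labels. A minimal sketch in plain Python (the 2-D points and Euclidean metric here are illustrative, not the paper's setup):

```python
def k_center_greedy(points, k, dist):
    """Greedy K-Center selection: start from the first point, then
    repeatedly add the point farthest from its nearest chosen center."""
    centers = [0]  # index of the first selected point
    while len(centers) < k:
        # For each candidate, the distance to its nearest chosen center.
        gaps = [min(dist(p, points[c]) for c in centers) for p in points]
        centers.append(max(range(len(points)), key=lambda i: gaps[i]))
    return centers

euclid = lambda a, b: ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
pts = [(0.0, 0.0), (0.2, 0.0), (10.0, 0.0), (10.2, 0.0)]  # two far-apart clusters
chosen = k_center_greedy(pts, 2, euclid)  # one point from each cluster
```

Because the selection depends only on a distance in representation space, it fits the unsupervised setting described above, which is why it is a natural label-free baseline.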
Summary: This paper proposes GCAL, a continual graph domain adaptation framework that mitigates catastrophic forgetting through bilevel optimization, integrating information maximization for adaptation and variational memory graph generation for knowledge replay. The approach is theoretically grounded in information bottleneck theory. Extensive experiments on multiple datasets demonstrate that GCAL outperforms baselines. Claims And Evidence: Overall, the paper presents well-supported claims with theoretical support and superior experimental performance. Methods And Evaluation Criteria: The proposed GCAL framework and evaluation criteria are well-aligned with the problem of continual graph domain adaptation. Theoretical Claims: The paper presents a theoretical lower bound based on the information bottleneck theory to support memory graph generation. Experimental Designs Or Analyses: The experimental designs appear sound. But a time complexity analysis would be helpful. Supplementary Material: I reviewed the supplementary material in the Appendix. The theoretical proofs and experimental details are provided. Relation To Broader Scientific Literature: GCAL contributes a novel framework for continual graph domain adaptation, related to the research topics of continual graph learning, domain adaptation, and graph condensation. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths 1. The studied problem is new and practical. 2. The proposed model is supported by a sound theoretical foundation based on information bottleneck theory. 3. The paper introduces a novel variational memory graph generation method for graph continual domain adaptation. Weaknesses 1. The memory replay framework is commonly used in continual learning research. This paper does not introduce a fundamentally new framework in this regard. 2. This paper uses an adaptation learning objective to condense the graphs into memory graphs. 
The adaptation loss may not adequately capture the necessary structural or semantic information required for high-quality condensation. 3. The multiple learning objectives add to the complexity of this method. The time complexity of this method should be provided. 4. Some of the latest literature for graph domain adaptation is not included. Other Comments Or Suggestions: N/A Questions For Authors: See the weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: > **W1: The memory replay framework is commonly used in continual learning research. This paper does not introduce a fundamentally new framework in this regard.** We acknowledge that memory replay is indeed a well-known approach to continual learning. We would like to clarify that **our novelty specifically lies in how the replay mechanism is integrated within the context of evolving out-of-distribution graphs**. Unlike existing methods that rely on stored raw data or labeled samples for replay, GCAL introduces a variational information bottleneck-based graph generator that creates synthetic memory graphs. This component is theoretically grounded (Eq. 3–5 in Sec. 3.2.1) and uniquely capable of generating compact, informative, and generalizable memory graphs in an unsupervised manner—a critical advancement in settings where labeled data is not available. Furthermore, existing continual learning replay techniques primarily target Euclidean data formats. Our design includes graph condensation, Gumbel-softmax-based differentiable edge sampling, and gradient-matching-based memory optimization, which are specifically tailored for graph topologies, accounting for both structural and feature-level preservation (Sec. 3.2.2–3.2.5). > **W2: This paper uses an adaptation learning objective to condense the graphs into memory graphs. The adaptation loss may not adequately capture the necessary structural or semantic information required for high-quality condensation.** We appreciate this valuable observation regarding the sufficiency of the adaptation loss for graph condensation. We would like to clarify that the adaptation loss is only one component of a multi-objective memory graph learning strategy in GCAL. In particular, **the quality and informativeness of the memory graphs are ensured by a combination of three specialized losses with theoretical grounding**. 
We leverage the information bottleneck principle to derive three loss functions, each designed to explicitly preserve structural and semantic characteristics of the original graphs. As detailed in Sections 3.2.3–3.2.5 of the paper, the generation of memory graphs is not solely guided by the adaptation loss. The condensation loss $L_{MGL}$ leverages gradient matching to ensure the generated memory graphs induce similar optimization trajectories (gradients) as the original graphs. The regularization loss, rooted in variational inference (Eqs. 12–13), is a KL divergence-based term that controls the latent distribution and promotes stability and informativeness in node and edge generation, avoiding overfitting to spurious patterns. The generation loss (Eq. 14) minimizes the distributional discrepancy between the memory graph and the original graph in the model’s latent space. > **W3: The multiple learning objectives add to the complexity of this method. The time complexity of this method should be provided.** For time complexity, we use GCNs as the backbone. The propagation cost is $O(L N_t d h + L N_t h^2)$, where $L$ is the number of layers, $N_t$ is the number of nodes, $d$ is the average degree, and $h$ is the hidden dimension. For the memory graph generation part, the Top-K selector, which involves sorting, costs $O(N_t \log K + N_t h)$; the construction and reparameterization in Eqs. 7–8 cost $O(K h)$ and $O(K^2 h)$, respectively. The loss computations in Eqs. 10, 13, and 14 cost $O(L h^2)$, $O(Kh + K^2)$, and $O(N_t h + K h)$, respectively. Because $K \ll N_t$, the computation cost of the memory graph generator and subsequent losses is **significantly less** than that of the propagation of the GNN backbone, indicating the efficiency of our method. > **W4: Some of the latest literature for graph domain adaptation is not included.** Thank you for the valuable observation. We will update our related work section to incorporate the latest literature, ensuring a more comprehensive overview of graph domain adaptation methods.
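As an illustration of the Gumbel-softmax-based differentiable edge sampling mentioned in the response to W1 above, here is a minimal single-edge sketch in plain Python. The logits, temperature, and function names are illustrative, not the paper's actual values or code; in practice this would operate on tensors with autograd.

```python
import math
import random

def gumbel_softmax_edge(logit_on, logit_off, tau, rng):
    """Relaxed sample of a binary edge indicator.
    Returns a soft 'edge probability' in (0, 1); as tau -> 0 the sample
    approaches a hard 0/1 draw, yet stays differentiable in the logits."""
    # Gumbel(0, 1) noise for each of the two states (on / off).
    u_on = rng.uniform(1e-12, 1.0 - 1e-12)
    u_off = rng.uniform(1e-12, 1.0 - 1e-12)
    g_on = -math.log(-math.log(u_on))
    g_off = -math.log(-math.log(u_off))
    a = (logit_on + g_on) / tau
    b = (logit_off + g_off) / tau
    m = max(a, b)  # stabilise the two-way softmax
    e_a, e_b = math.exp(a - m), math.exp(b - m)
    return e_a / (e_a + e_b)

rng = random.Random(0)
# A strongly favoured edge (logit 2.0 vs 0.0) sampled many times.
soft_edges = [gumbel_softmax_edge(2.0, 0.0, tau=0.5, rng=rng) for _ in range(1000)]
```

The point of the relaxation is that gradients can flow through the sampled edge values back to the generator's logits, which is what makes edge structure learnable end to end.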
Summary: This paper proposes Graph Adaptive Continual Learning (GCAL), extending the graph domain adaptation from single-step adaptation to continuous adaptation over a sequence of multiple domains. The proposed GCAL adopts a bi-level optimization strategy and consists of two phases. The adapt phase fine-tunes the given graph model on new graph domains based on information maximization, and the generate memory phase condenses the original graphs into memories, which will be used in future adapt phases to avoid forgetting. The proposed method is evaluated on 4 public datasets. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes Experimental Designs Or Analyses: For task construction, it is unclear how the adopted datasets are constructed into different tasks with different distributions. In Table 2, most methods actually have performance similar to Test, which is the lower bound; this is weird. Supplementary Material: I checked the Appendix Relation To Broader Scientific Literature: Continual graph domain adaptation learning is broadly related to different application scenarios involving evolving graph data. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: The proposed method resolves a weakness of the existing domain adaptation works, which is the limitation to single-step adaptation. The targeted continual multi-domain adaptation is more practical in real-world applications. The proposed generate memory phase is supported by theoretical analysis. The proposed method outperforms the baselines on all datasets. Weaknesses: For task construction, it is unclear how the adopted datasets are constructed into different tasks with different distributions. In Table 2, most methods have a similar performance with Test, which is the lower bound; this is weird. Other Comments Or Suggestions: See above. 
Questions For Authors: How are the datasets constructed into different domains, and how is it ensured that the different domains have different distributions? What do #Nodes and #Edges mean in Table 1? Why is it a range? Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > **W1: For task construction, it is unclear how the adopted datasets are constructed into different tasks with different distributions.** **Q1: How are the datasets constructed into different domains, and how to ensure that the different domains have different distribution?** We appreciate the reviewer’s valuable feedback on our task construction. In our study, we selected datasets from established benchmarks widely used in research on graph out-of-distribution generalization [1,2]. **Each dataset comprises multiple real-world graphs, with each graph considered an independent domain**. We organize these graphs sequentially, based on regional differences and temporal shifts, to construct a continual adaptation setting. Appendices B.1 and B.3 provide a comprehensive explanation of dataset construction and partitioning. Specifically, the Facebook-100 dataset consists of 100 separate Facebook friendship networks, each representing a distinct American university. The Twitch-Explicit dataset includes seven networks on Twitch, sourced from different regions, such as France, Germany, and Russia. In these two datasets, nodes represent users, and edges denote friendships. The OGB-Arxiv dataset encompasses 169,343 Arxiv CS papers across 40 subject areas, from which citation networks are constructed, with the graphs divided by publication years. The Elliptic dataset features 49 sequential graph snapshots of Bitcoin transaction networks, evenly spaced at intervals of about two weeks, where nodes represent individual transactions and edges indicate the flow of payments. For each dataset, we selected multiple earlier graphs as source domains for pre-training and the remaining graphs as sequential target domains to facilitate continual adaptation under varying distributions. These datasets are widely recognized in the literature for exhibiting distribution discrepancies between their graphs. 
From Figure 1, we also have **empirical evidence** supporting that the graphs indeed have different distributions. The links for downloading the datasets and the code for dataset processing are provided in the anonymous link in the Abstract to ensure the reproducibility of our study. [1] Wu Q, et al. Handling Distribution Shifts on Graphs: An Invariance Perspective, ICLR 2022 [2] Jin W, et al. Empowering Graph Representation Learning with Test-Time Graph Transformation, ICLR 2023 > **W2: In Table 2, most methods have a similar performance with Test, which is the lower bound; this is weird.** We appreciate the reviewer’s insightful comment. To clarify this point, the metrics we used, "Average Performance" (AP) and "Average Forgetting" (AF), assess model performance across all previously encountered domains. The phenomenon that many baseline methods exhibit similar performance to "Test" occurs because common adaptation methods do not fully address the continual adaptation problem. They tend to experience substantial forgetting when adapting to new tasks. As these models continually update their parameters, they easily forget previously learned tasks, significantly reducing overall performance. Conversely, the "Test" method does not update or fine-tune its model parameters. This lack of adaptation yields stable, though still low, performance across all datasets. Our proposed method, GCAL, distinguishes itself by specifically addressing these challenges through memory replay and variational memory graph generation, which effectively retain and reuse previously learned information, enabling improved performance. This phenomenon is **not unique** to our study and indeed exists in the broader field of continual learning. For example, in a different continual learning setting on graphs (class-incremental graph learning [3]), certain baselines perform similarly or even worse than the lower bound. 
This is because baseline methods often collapse quickly due to catastrophic forgetting when continuously learning new tasks. [3] Zhang, Xikun, Dongjin Song, and Dacheng Tao. "Cglb: Benchmark tasks for continual graph learning." Advances in Neural Information Processing Systems 35 (2022): 13006-13021. > **Q2: What do #Nodes and #Edges mean in Table 1? Why is it a range?** Thanks for the valuable questions. \#Nodes represents the number of nodes, and \#Edges represents the number of edges in each graph. Since each dataset contains multiple graphs, with each graph exhibiting a different distribution, the number of nodes and edges varies across these graphs. Therefore, we report the range (minimum and maximum values) of nodes and edges present within the graphs of each dataset. We will make this clear by stating it explicitly in the caption of Table 1 in the final version.
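For reference, the Average Performance (AP) and Average Forgetting (AF) metrics discussed in this rebuttal can be computed from a lower-triangular accuracy matrix, where `acc[t][i]` is the accuracy on domain `i` after adapting through domain `t`. The sketch below follows the common continual-learning convention and may differ in detail from the paper's exact definitions:

```python
def average_performance(acc):
    # Mean accuracy over all domains after the final adaptation step.
    return sum(acc[-1]) / len(acc[-1])

def average_forgetting(acc):
    # For each earlier domain, how far its final accuracy dropped
    # from the best accuracy it ever reached along the sequence.
    drops = []
    for i in range(len(acc) - 1):
        best = max(acc[t][i] for t in range(i, len(acc)))
        drops.append(best - acc[-1][i])
    return sum(drops) / len(drops)

# Toy accuracy matrix over three sequential domains: row t holds the
# accuracies on domains 0..t after adapting to domain t.
acc = [
    [0.90],
    [0.80, 0.85],
    [0.70, 0.80, 0.90],
]
```

Under these definitions, a method that never updates its parameters ("Test") has AF of zero by construction, which matches the rebuttal's point that stable-but-low baselines can look similar to adaptive ones on AP alone.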
Hybrid Batch Normalisation: Resolving the Dilemma of Batch Normalisation in Federated Learning
Accept (poster)
Summary: The paper introduces Hybrid Batch Normalization (HBN) as a new normalization method designed to overcome the limitations of Batch Normalization in FL. In FL, client data is Non-IID, leading to a discrepancy between local and global statistics, which degrades BN’s performance. HBN addresses this issue by adaptively combining global and local batch statistics. The experimental results demonstrate that HBN outperforms existing normalization methods on CIFAR-10, CIFAR-100, and Tiny-ImageNet datasets. Claims And Evidence: The proposed method is sound, and the theoretical formulation aligns well with the motivation and objectives of the paper. Methods And Evaluation Criteria: There is a foundational and impactful paper, FedBN [1], that initially addressed the issue of BN in FL. The paper should introduce FedBN more explicitly and clearly highlight the differences between HBN and FedBN throughout the presentation. Additionally, further comparative experiments with FedBN would strengthen the paper’s contribution and clarify its novelty. [1] FedBN: Federated Learning on Non-IID Features via Local Batch Normalization, ICLR 2021. Theoretical Claims: The paper does not provide any formal theoretical analysis or proofs. As a result, there are no theoretical claims to verify. Experimental Designs Or Analyses: ### Recommendations for Table 1 In **Table 1**, which presents experimental comparisons of different **normalization solutions in FL**, I recommend making the following modifications and additions: 1. **Include FedBN [1] as an additional baseline** - FedBN is a foundational work that directly addresses **batch normalization in FL**. Adding its results would provide a more comprehensive evaluation of existing normalization methods. 2. **Cite FedFN [2] and replace FN with FedFN** - The paper currently references **FN**, but a more direct and relevant work that first introduced **feature normalization in FL** is **FedFN**. 
I recommend citing **FedFN** explicitly and renaming **FN to FedFN** for accuracy. 3. **Expand the hyperparameter search space for local training epochs and learning rates** - **Section 4** states that the grid search for the **learning rate** was conducted over **{0.01, 0.005, 0.002, 0.001}**, with **local training epochs fixed at 1**. - I recommend **extending the grid search** as follows: - **Local training epochs**: Include **{3, 5, 10, 15}** in the search space. - **Learning rate**: Add **{1.0, 0.5, 0.1, 0.05}** to the search range. **Reasoning:** - As stated in **FedFN [2]**, > *"FedFN scales the gradient of **θ_cls** by dividing it by the feature vector norm. This scaling significantly impacts the gradient of **θ_cls** and, consequently, the applied learning rate."* - Similar to **FedFN**, the proposed method applies **feature normalization**, meaning it could exhibit **different optimal learning rates compared to baselines**. - Other FL studies that applied **scaled learning rates for feature normalization** include: - **SphereFed** [3] - **Neural Collapse Inspired FL** [4] - **FedDr+** [5] - Since the experimental setup appears to follow benchmarks similar to **[5]**, it is possible that the current **learning rate and local epoch settings are relatively small**. Expanding the **grid search range** would ensure a more robust evaluation. 4. **Provide detailed hyperparameter settings in the appendix** - Each **dataset and algorithm** should have a **detailed breakdown of the applied learning rate, local epoch, and other key hyperparameters** in the appendix to improve reproducibility. --- ### **References** [1] **FedBN** : Federated Learning on Non-IID Features via Local Batch Normalization, ICLR 2021. [2] **FedFN**: Feature Normalization for Alleviating Data Heterogeneity Problem in Federated Learning. [3] **SphereFed**: Hyperspherical Federated Learning, ECCV 2022. 
[4] **No Fear of Classifier Biases**: Neural Collapse Inspired Federated Learning with Synthetic and Fixed Classifier, ICCV 2023. [5] **FedDr+**: Stabilizing Dot-regression with Global Feature Distillation for Federated Learning, TMLR 2025. Supplementary Material: I have reviewed the supplementary material in its entirety. However, I noticed that the paper lacks details on the hyperparameter settings applied to both the baselines and the proposed algorithm. For reproducibility, it is essential to provide a detailed explanation of the grid search process and the selected hyperparameters. Relation To Broader Scientific Literature: Apart from the points mentioned earlier, I have no additional comments. Essential References Not Discussed: 1. **Include FedBN [1] as an additional baseline** - FedBN is a foundational work that directly addresses **batch normalization in FL**. Adding its results would provide a more comprehensive evaluation of existing normalization methods. 2. **Cite FedFN [2] and replace FN with FedFN** - The paper currently references **FN**, but a more direct and relevant work that first introduced **feature normalization in FL** is **FedFN**. I recommend citing **FedFN** explicitly and renaming **FN to FedFN** for accuracy. ### **References** [1] **FedBN** : Federated Learning on Non-IID Features via Local Batch Normalization, ICLR 2021. [2] **FedFN**: Feature Normalization for Alleviating Data Heterogeneity Problem in Federated Learning. Other Strengths And Weaknesses: Apart from the points mentioned earlier, I have no additional comments. Other Comments Or Suggestions: There are existing approaches **[1,2,3]** that address **data heterogeneity in FL** by **freezing the classifier and modifying the loss function**. In contrast, this study focuses on modifying the **forward pass** of the model. Given this distinction, it seems that the proposed method could be effectively combined with these prior approaches. 
I am particularly interested in whether **HBN** maintains its advantage over **BN and FedBN** when integrated with the **classifier-freezing methods** from **[1,2,3]**. It would be valuable to examine whether **Simple BN vs. FedBN vs. HBN** still demonstrates performance improvements when applied alongside these approaches. ### **References** [1] **SphereFed**: Hyperspherical Federated Learning, ECCV 2022. [2] **No Fear of Classifier Biases**: Neural Collapse Inspired Federated Learning with Synthetic and Fixed Classifier, ICCV 2023. [3] **FedDr+**: Stabilizing Dot-regression with Global Feature Distillation for Federated Learning, TMLR 2025. Questions For Authors: Apart from the points mentioned earlier, I have no additional comments. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your valuable contributions to improving this paper. In response to your suggestions, please find our detailed replies below. **1. FedBN Baseline** FedBN is designed for personalised FL and does not produce a unified global model, as it keeps BN parameters client-specific, whereas our work focuses on the general FL scenario of training a single global model for all local clients. Therefore, we did not compare with FedBN in our experiments. However, we will include a more detailed discussion of the techniques used in FedBN in our final version. **2. FedFN Terminology** We thank the reviewer for this valuable suggestion. We confirm that FedFN is implemented consistently with FN as described in our paper. To improve descriptive accuracy, we will explicitly replace FN with FedFN in the revised version to avoid any potential ambiguity. **3. Hyperparameter Search** The use of inconsistent local update epochs would compromise the fairness of comparisons, as it primarily impacts the number of local updates. To mitigate this, we fixed the epoch value to 1 in our work. Similarly, since the local batch size affects the communication frequency, we did not adjust it either. As you rightly observed, different algorithms exhibit distinct optimal learning rates across datasets (please refer to the anonymous link: https://anonymous.4open.science/r/ICML_2025-F29D/grid_search.png). We ensured that all methods converge stably within the searched range. The empirical results indicate that larger learning rates prevent trainability under our small batch size constraints. **4. Combining with Classifier-Freezing Methods** In Table C, we compare the performance of the three suggested classifier-freezing approaches, integrated with our HBN, following the original experimental setup of MobileNet on CIFAR-100 (sharding partition strategies with 10 shards per client) [1]. 
In our implementation, we solely replace the standard BN layers in MobileNet with HBN. HBN achieves encouraging performance gains. HBN enhances activation normalisation during training by adaptively blending local and global statistics, thereby mitigating bias introduced by relying exclusively on local statistics. This hybrid approach ensures more stable and representative feature distributions across heterogeneous data partitions, which is particularly critical in classifier-freezing methods. *Table C: A comparison of classifier-freezing methods with BN and HBN.* | Methods | +BN | +HBN | |-----------|-------|----------| | FedAvg | 34.92 | **38.86**| | sphereFed | 42.80 | **49.57**| | FedETF | 32.30 | **45.38**| | FedDr+ | 47.58 | **50.22**| [1] S. Kim et al., "FedDr+: Stabilizing Dot-regression with Global Feature Distillation for Federated Learning," TMLR, 2025.
Summary: This paper introduces Hybrid Batch Normalisation (HBN), a normalization technique designed to address the limitations of standard Batch Normalisation (BN) in federated learning (FL) with non-IID data. HBN separates the update of statistical parameters (means and variances) from learnable parameters, enabling unbiased global statistical estimates. It incorporates a learnable hybrid distribution factor to adaptively blend local batch statistics with global statistics. HBN outperforms BN and Group Normalisation (GN) in classification tasks across CIFAR-10, CIFAR-100, and Tiny-ImageNet, particularly under data heterogeneity and small batch sizes. Claims And Evidence: - The proposed two-stage update for separating statistical and learnable parameters is simple yet effective, with solid theoretical support in the Appendix. - However, the paper does not fully explain why combining global and local statistics in local training is beneficial, and the evidence provided is limited. - Similarly, the paper presents two conflicting claims: "Using global statistics for batch normalization in local training is helpful" versus "FixBN and FBN overlook the diversity of local client statistics, a key challenge in non-IID settings." Simply stating that prior works ignore local diversity feels unclear, as their main contribution is using shared global statistics to tackle data heterogeneity across clients. The ablation study (Table 4) shows the hybrid component drives performance gains, but there’s little analysis or theoretical backing for why adding local batch statistics improves local training. Methods And Evaluation Criteria: The proposed HBN method, which separates statistical and learnable parameter updates and introduces a hybrid factor, is logically sound for addressing BN’s mismatch in FL’s non-IID settings. Evaluation on standard datasets (CIFAR-10, CIFAR-100, Tiny-ImageNet) with Dirichlet-distributed heterogeneity is appropriate and aligns with FL research norms. 
The use of Simple-CNN and ResNet-18 as benchmarks is reasonable, though testing on a broader range of architectures could strengthen generalizability. Metrics like top-1 accuracy are standard and suitable for the classification focus. Theoretical Claims: The paper provides derivations for unbiased global statistics (Appendix A.2, Equations 15-16), claiming they mitigate statistical bias in FL. I checked these proofs, and they appear mathematically correct, leveraging distributed statistical analysis to aggregate local statistics accurately. The formulation of the hybrid normalization (Equation 9) is conceptually clear, though its theoretical optimality lacks deeper justification beyond empirical success. Experimental Designs Or Analyses: - I think FedTAN should be included as a baseline for comparison. - The main results for FixBN, FBN, and FN in Table 1 show they often perform no better than BN, which contradicts the claims and results in their original papers. If these methods are implemented correctly, the paper should discuss the reasons for this discrepancy in the main paper. Supplementary Material: I reviewed the derivation of the unbiased estimator. Relation To Broader Scientific Literature: While existing methods focus on obtaining accurate global statistics, this paper concentrates on deriving an unbiased estimator and effectively using it with local statistics. It shows that combining both is better than using global statistics alone, but it doesn’t thoroughly analyze the issues with relying solely on global statistics. Essential References Not Discussed: Another line of research [1, 2] uses weight standardization instead of normalization to handle statistical differences between clients. I believe these studies are relevant to this work and should be included as baselines for comparison. [1] Siomos, Vasilis, et al. "Addressing Data Heterogeneity in Federated Learning with Adaptive Normalization-Free Feature Recalibration." 
arXiv preprint arXiv:2410.02006 (2024). [2] Zhuang, Weiming, and Lingjuan Lyu. "Fedwon: Triumphing multi-domain federated learning without normalization." ICLR 2024. Other Strengths And Weaknesses: If authors provide sufficient analysis or theoretical evidence on how the hybrid normalization works, its strength would lie in effectively combining local and global statistics, offering a compelling methodology. Other Comments Or Suggestions: - The T-SNE results in Figure 2 look very similar across normalization techniques. It’s hard to tell which one is better by eye, and the differences seem extremely marginal. Questions For Authors: - Is the statistics EMA technique used in the implementation also applied in other global statistics-based approaches? Code Of Conduct: Affirmed. Overall Recommendation: 2
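The review above checks the derivation of unbiased global statistics aggregated from per-client local statistics (Appendix A.2). As a generic illustration of such distributed moment pooling (not the paper's exact Equations 15-16; the function and variable names here are ours), pooling per-client counts, sums, and sums of squares recovers the exact global mean and a Bessel-corrected global variance without sharing raw data:

```python
import numpy as np

def aggregate_global_stats(client_data):
    """Pool per-client sufficient statistics (count, sum, sum of squares)
    to recover the exact global mean and an unbiased (Bessel-corrected)
    global variance, without centralising the raw samples."""
    n = sum(len(x) for x in client_data)
    s = sum(float(np.sum(x)) for x in client_data)        # total sum
    ss = sum(float(np.sum(x ** 2)) for x in client_data)  # total sum of squares
    mean = s / n
    var_unbiased = (ss - n * mean ** 2) / (n - 1)         # ddof=1 correction
    return mean, var_unbiased

# Three simulated clients with shifted local distributions (non-IID means).
rng = np.random.default_rng(0)
clients = [rng.normal(loc=i, scale=1.0, size=50) for i in range(3)]
mu, var = aggregate_global_stats(clients)

# The pooled estimates match centralised computation exactly.
pooled = np.concatenate(clients)
assert np.isclose(mu, pooled.mean())
assert np.isclose(var, pooled.var(ddof=1))
```

Note that naively averaging the clients' local variances would be biased here, since the between-client spread of local means would be lost; pooling sums and squared sums avoids that.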
Rebuttal 1: Rebuttal: We appreciate your thoughtful suggestions. Please find our response below. **1. FedTAN Baseline** FedTAN employs real-time communication to synchronise the use of shared global statistics. However, to obtain these global statistics, FedTAN requires three rounds of communication per BN layer during the forward propagation. This requirement fails to accommodate a fundamental FL constraint: restrictions on communication rounds. While the theoretical discussion of the impact of BN in FL provides valuable insights, we have intentionally excluded FedTAN from our comparative baselines because of these limitations. We acknowledge its conceptual contributions while noting its impractical communication overhead in real-world FL applications. Our method adds only one round of overhead compared to the original BN, making it more suitable for practical scenarios. **2. Analysis of Results** FixBN and FBN employ shared global statistics for normalisation at different stages. When these global statistics are reliable, they benefit local training. Conversely, when the global statistics are unreliable, they exhibit detrimental effects. Our experimental scenario in Table 1 is deliberately challenging, featuring strong heterogeneity, numerous clients, and restricted batch size. In such conditions, FixBN and FBN fail to judge whether the local statistics, which are derived from diverse local models, can be directly aggregated, resulting in unreliable global statistics. In addition, FN is a local normalisation method that normalises features using the statistics of a single sample. In non-IID scenarios, FN, like GN and LN, lacks awareness of the global structure, leading to performance degradation in our setup. **3. Weight Standardisation Methods** We further analysed the weight standardisation solution, Fedwon, which achieves promising results. Fedwon modifies the convolution layers directly to adjust the distribution, without using the standard normalisation layer. 
Therefore, the starting point of Fedwon is different from ours. What is exciting is that when we combine HBN with Fedwon, as shown in Table B, Fedwon+HBN yields impressive performance improvements, demonstrating that our design complements Fedwon. In our work, we want to emphasise the supportive role of BN's global statistical information in FL. However, inappropriate implementations of BN in FL fail to fully activate its potential. Our key contribution is resolving BN's dilemma in federated learning. Through analysis and empirical validation, HBN proves both simple and effective. Due to the lack of available open-source code for [1], we are unable to replicate it within the limited time. We will provide a comprehensive analysis of weight standardisation methods in the final version. *Table B: Compatibility with weight standardisation methods.* | Settings | CIFAR-10 (β=0.3) | CIFAR-100 (β=0.3) | Tiny (β=0.1) | Tiny (β=0.05) | |----------------|---------------|----------------|---------------------|----------------------| | HBN | 76.53 | 48.93 | 25.59 | 24.69 | | Fedwon | 77.87 | 49.66 | 26.97 | 24.82 | | Fedwon+HBN | **78.19** | **50.15** | **27.90** | **26.69** | [1] Siomos, Vasilis, et al. "Addressing Data Heterogeneity in Federated Learning with Adaptive Normalization-Free Feature Recalibration." arXiv, 2024. **4. Why do we need hybrid batch normalisation?** If we rely solely on local statistics, this reduces to the standard BN. The standard BN degrades performance in non-IID scenarios, as the local statistics are diverse among different clients. Consider a toy example: two clusters following Gaussian distributions, simulating data from two clients in FL. Normalising each cluster separately using local statistics would cause their distributions to overlap (please refer to the link: https://anonymous.4open.science/r/ICML_2025-F29D/adaptive_normalisation.png). In contrast, global normalisation preserves the global structure of the two clusters. 
However, real-time global normalisation is impractical, as discussed above regarding FedTAN. Using only historical global statistics, whose timeliness is constrained by intermittent communication, yields suboptimal results. Nevertheless, historical global statistics still retain valuable global structural information. Inspired by this, our proposed HBN adaptively combines historical global statistics with current local statistics, achieving more effective normalisation. **5. Improvement of Figure 2** We replace Figure 2 with an intuitive toy example (please refer to the link: https://anonymous.4open.science/r/ICML_2025-F29D/adaptive_normalisation.png), where hybrid normalisation can standardise the size of two clusters while maintaining the global structure. **6. Technical Details** EMA is also used in FBN, but not in the other methods.
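The adaptive blending described in this rebuttal (a learnable hybrid distribution factor mixing current local batch statistics with historical global statistics, plus a learnable affine transform) can be sketched as follows. This is our simplified illustration, not the paper's exact Equation 9: in particular, squashing the raw factor through a sigmoid to keep the blend convex is an assumption on our part, and all names are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def hybrid_batch_norm(x, global_mean, global_var, alpha_raw=0.0,
                      gamma=1.0, beta=0.0, eps=1e-5):
    """Illustrative hybrid normalisation: blend the current local batch
    statistics with historical global statistics via a factor alpha.
    alpha_raw is squashed to [0, 1] here (an assumption; the paper may
    parameterise the blend differently). gamma/beta are the learnable
    affine parameters applied after normalisation."""
    local_mean = x.mean(axis=0)
    local_var = x.var(axis=0)
    a = sigmoid(alpha_raw)                       # hybrid distribution factor
    mean = a * local_mean + (1.0 - a) * global_mean
    var = a * local_var + (1.0 - a) * global_var
    x_hat = (x - mean) / np.sqrt(var + eps)      # normalise with blended stats
    return gamma * x_hat + beta                  # learnable affine transform

# Sanity check: when alpha saturates toward the local end, this reduces
# to standard BN, so the output batch is (approximately) zero-mean.
rng = np.random.default_rng(1)
x = rng.normal(3.0, 2.0, size=(64, 8))
out = hybrid_batch_norm(x, global_mean=0.0, global_var=1.0, alpha_raw=20.0)
assert np.allclose(out.mean(axis=0), 0.0, atol=1e-6)
```

With `alpha_raw` near the global end instead, the batch is standardised against the shared historical statistics, which is what preserves the cross-client structure in the two-cluster toy example above.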
Summary: Due to the lack of a coherent methodology for updating BN statistical parameters, standard BN degrades federated learning performance. This paper proposes Hybrid Batch Normalization (HBN), which separates the update of statistical parameters from learnable parameters and adaptively combines local batch statistics with global statistics. The solution aims to obtain unbiased global statistical parameters while maintaining awareness of local data distributions during training. Claims And Evidence: Strengths: 1. Important question: Since federated learning uses distributed data, how to aggregate local statistics for batch normalization in federated learning is an important question. 2. Adaptive hybrid approach: The learnable hybrid distribution factor that balances global and local statistics is an elegant solution that adapts to varying degrees of data heterogeneity across clients. 3. Comprehensive empirical validation: The experiments cover multiple datasets (CIFAR-10/100, Tiny-ImageNet), network architectures (Simple-CNN, ResNet-18), and varying degrees of data heterogeneity and batch sizes. 4. Compatibility with existing FL methods: The authors demonstrate that HBN can be effectively combined with various federated learning approaches (FedAvg, FedProx, FedAdam, etc.), consistently boosting performance across different methods. Weaknesses: 1. Scalability with the number of clients is not considered: the paper does not present experiments varying the number of clients for a given dataset. 2. Hyperparameter sensitivity: While there is some discussion of hyperparameter selection, a more comprehensive analysis would be beneficial for practitioners, such as the selection of $\alpha$, $\beta$, and $\gamma$ in Eq. (10). Methods And Evaluation Criteria: See Claims And Evidence part. Theoretical Claims: Since sampling is out of my research area, I did not check the correctness of the proofs. Experimental Designs Or Analyses: See Claims And Evidence part. 
Supplementary Material: No. Relation To Broader Scientific Literature: Unknown. Essential References Not Discussed: Unknown. Other Strengths And Weaknesses: None. Other Comments Or Suggestions: None. Questions For Authors: 1. In the algorithm, "// forward with gradient". Is this a typo that should read "// backward with gradient"? 2. Could you explain the structural details of the Simple-CNN network? 3. What is $\phi$ in the experiments? 4. What is $B$ in Table 3? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your constructive comments and suggestions. Please find our response below. **1. Client Number Experiment** We conducted comparative experiments on CIFAR-10 ($\beta$ = 0.6) across varying client numbers (10 clients sampled per round). As shown in Table A, HBN consistently outperforms baselines at all scales (from K=100 to K=1000). *Table A: Experiments on CIFAR-10 (β = 0.6) across varying client numbers.* | Method | K=100 | K=200 | K=500 | K=1000 | |---------|-------|-------|-------|--------| | BN | 75.82 | 73.88 | 68.94 | 61.25 | | GN | 75.74 | 67.93 | 60.31 | 52.24 | | LN | 74.08 | 69.08 | 61.22 | 51.55 | | FixBN | 75.65 | 71.67 | 69.07 | 63.08 | | FBN | 73.91 | 68.21 | 59.22 | 50.73 | | FN | 75.51 | 70.71 | 62.02 | 53.75 | | **HBN(Ours)**| **78.22** | **75.76** | **72.49** | **64.95** | **2. Hyperparameter Sensitivity** We apologise for any confusion caused. We clarify that $\alpha$, $\beta$, and $\gamma$ are learnable parameters (not hyperparameters). $\alpha$ is an adaptive hybrid distribution factor, while $\beta$ and $\gamma$ are learnable affine transformations. In this work, they are initialised as $\alpha=0$, $\beta=0$, and $\gamma=1$ respectively. **3. Algorithm Typo** Thank you for this observation. What we want to express is that updating statistical parameters does not require backpropagation to calculate gradients, which can save computational costs. For clarity, we will modify the algorithm annotation to 'without backpropagation' and 'using backpropagation' respectively in the final version. **4. Details of Model Architecture and Symbol Explanations** Regarding the model architecture of Simple-CNN, we provide its details in Appendix B.1. $\phi$ is the Dirichlet distribution factor that controls the degree of label heterogeneity [1] (visualisation is provided in Appendix B.2), while $B$ is the batch size. [1] Hsu, Tzu-Ming Harry, Hang Qi, and Matthew Brown. 
"Measuring the effects of non-identical data distribution for federated visual classification." arXiv preprint arXiv:1909.06335 (2019).
QuRe: Query-Relevant Retrieval through Hard Negative Sampling in Composed Image Retrieval
Accept (poster)
Summary: This paper proposes QuRe, a method to retrieve the target image and mitigate false negatives in the task of Composed Image Retrieval (CIR). The authors introduce a hard negative sampling strategy that selects images positioned between two sharp relevance score drops after the target, filtering out false negatives. The paper also introduces the human-annotated HP-FashionIQ dataset, which explicitly captures user preferences beyond target retrieval. Experiments on the FashionIQ, CIRR and CIRCO datasets validate the effectiveness of the proposed method. Claims And Evidence: The article is well written, there are no typos or spelling errors, and the arguments are well stated. The state-of-the-art and related topics are comprehensive. However, it is suggested that the authors add experiments on false negative improvements to facilitate readers' understanding. Methods And Evaluation Criteria: The idea of QuRe is novel and the HP-FashionIQ dataset provides a new evaluation metric for CIR. This contribution appears to alleviate the problem of false negatives in CIR research. Theoretical Claims: I am not sure whether the hard-negative sampling strategy can filter out false negatives for all queries. Might there be cases where no hard-negative samples exist? Experimental Designs Or Analyses: (1) Tables 2 and 3 compare different methods to the paper's result (QuRe). As described in Section 5, the authors use BLIP-2 for the CIR task, but not all of the methods shown in the tables use BLIP-2. This makes it questionable whether much of the improvement in model performance is due to the introduction of BLIP-2. (2) The authors show in Figure 6 the QuRe method's selection of hard-negative samples, but there is no report on the improvement in false negative results. Supplementary Material: The authors provide the relevant code for the QuRe method in the supplementary material. 
Relation To Broader Scientific Literature: The paper alleviates the problem of false negative results in the CIR dataset (FashionIQ) with a hard negative sampling strategy, which is one of the research directions in CIR. Essential References Not Discussed: The paper is well-written and fully cited. Other Strengths And Weaknesses: Strengths: 1. The idea of a hard negative sampling strategy is good. 2. The creation of the HP-FashionIQ dataset is a valuable contribution to the CIR field. It addresses the core limitations of current CIR evaluation metrics by focusing on user preferences rather than just retrieving target images. 3. The paper shows SoTA performance on the main CIR datasets. Weaknesses: 1. The paper does not sufficiently report the proposed approach's improvement on false negatives. Other Comments Or Suggestions: The example provided in Figure 1 is helpful, but it would be beneficial to highlight the target image and the novelty of the method for clarity. This is important for readers who may not be familiar with CIR. Questions For Authors: Using candidate images between two sharp relevance drops as hard negative samples is novel, but I am unsure how to determine whether a drop is sharp. In addition, might queries with multiple sharp drops affect the selection of hard negative samples? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank Reviewer c7Tq for the positive feedback and for recognizing the novelty of our work in both the proposed method and the dataset. **Q1 : There may be cases where there are no hard-negative samples?** It is true that the number of hard negatives can vary significantly depending on the query (e.g., if the user asks for a "blue short-sleeve t-shirt," then "blue long-sleeve shirts" might be considered hard negatives) and the available candidate images in the corpus (e.g., if there are no blue shirts, no hard negatives might exist). However, identifying true hard negatives for all queries is impractical in both real-world applications and existing CIR datasets. Instead, we empirically demonstrate that QuRe improves model performance by selecting **relatively** hard negatives from the corpus. **Q2 : Ablation studies for the BLIP-2 model architecture** We conducted additional experiments using both BLIP and BLIP-2 architectures to ensure a fair comparison between QuRe and other baselines. Specifically, we trained QuRe on the BLIP backbone to compare with Bi-BLIP4CIR. For CoVR, we identified its latest version, CoVR-2[r1], which adopts BLIP-2 as its backbone, and compared it with our original QuRe model. Notably, the current state-of-the-art method SPRC originally used BLIP-2, and QuRe already outperformed it in our original submission. The results are presented across the following datasets: - **CIRR**: QuRe outperforms Bi-BLIP4CIR and CoVR-2 with average Recall improvements of **11.53** and **6.04**, respectively. - **FashionIQ**: QuRe outperforms Bi-BLIP4CIR and CoVR-2 with average Recall improvements of **4.50** and **3.41**, respectively. - **CIRCO**: QuRe outperforms Bi-BLIP4CIR and CoVR-2 with average mAP improvements of **17.12** and **0.53**, respectively. - **HP-FashionIQ**: QuRe outperforms Bi-BLIP4CIR and CoVR-2 with higher human preference rates of **7.95** and **2.56**, respectively. 
Please refer to our response in $\text{\color{blue}\textbf{Q2 under Reviewer 6GLy}}$ for detailed results. **Q3 : Report on the improvement of false negative results** We agree that providing additional qualitative results, similar to $\text{\color{red}\textbf{Figure 6}}$, helps illustrate the improvements in handling false negatives. Using the given query, we extracted the top 4 retrieved results from Bi-BLIP4CIR, CoVR-BLIP, SPRC, and our method, QuRe. The results are shown in the following figure: ### **<https://anonymous.4open.science/r/QuRe-ICML-Rebuttal/QuRe_FalseNegatives.png>** As shown, QuRe successfully retrieves relevant t-shirts that match the query attributes, such as navy blue color (close to black, like the query image) and the presence of a chicken or bird in the center. In contrast, other baselines retrieve less relevant images, such as those in light blue or without the chicken/bird. Such irrelevant results can lead to user dissatisfaction, as user satisfaction is generally proportional to the number of relevant items in the retrieved set. These results highlight the effectiveness of QuRe in assigning higher scores to relevant images by explicitly addressing false and easy negatives during training. **Q4 : Add target image and novelty of the method in Figure 1** Thank you for pointing this out. We agree that highlighting the target image for each query in $\text{\color{red}\textbf{Figure 1}}$ would improve clarity and overall understanding. Additionally, we will include the $\text{\color{blue}\textbf{figure provided in Q3}}$ ([**QuRe_FalseNegatives.png**](https://anonymous.4open.science/r/QuRe-ICML-Rebuttal/QuRe_FalseNegatives.png)) in the appendix to illustrate the effectiveness of our method, QuRe, compared with existing baselines. **Q5 : How is sharpness defined, and what if multiple sharp drops exist?** To determine whether the drop is sharp, we compute the relevance scores of all candidate images for each query. 
We then sort the scores and identify the top-2 largest drops occurring after the target. The corresponding images are selected as the hard negative set. An example of these steep degradation patterns is shown in $\text{\color{blue}\textbf{Q2 under Reviewer 13Df}}$ ([**QuRe_steep_one.png**](https://anonymous.4open.science/r/QuRe-ICML-Rebuttal/QuRe_steep_one.png)). While there may be multiple sharp drops, we believe that the top-2 degradations after the target likely represent the boundaries between false negatives, hard negatives, and easy negatives. We agree that more advanced approaches to identifying such transitions could be explored in the future. We will include this as a discussion point in the final version of the paper. We will update the manuscript to incorporate the feedback discussed in this rebuttal. We sincerely hope that our responses, along with the originality of our contributions, have addressed your concerns. If any questions remain, we would be glad to offer further clarification. --- Rebuttal Comment 1.1: Comment: I appreciate the rebuttal from the authors, but it does not address my fourth question very well, which is about the novelty of the paper. I'm changing my final rating to "weak accept". --- Reply to Comment 1.1.1: Comment: Thank you for your clarification. We first apologize; there might have been a misunderstanding regarding your fourth question on the novelty of our work. To better illustrate both the CIR task and the novelty of our method, QuRe, we have re-drawn Figure 1: ### **<https://anonymous.4open.science/r/QuRe-ICML-Rebuttal/QuRe_Figure_1.png>** In this revised figure, we retain the original CIR example, **while more clearly emphasizing QuRe’s novelty over existing CIR methods**. Specifically, QuRe sorts the candidate images by their relevance scores to the query and defines the hard negative set as the images between the two largest drops after the target. 
This approach excludes both false and easy negatives, unlike existing methods that rely on fixed candidate pools. We also annotate the **target image** and highlight that while existing methods may retrieve the target, they often include irrelevant images. In contrast, QuRe retrieves both the target and other relevant images. We appreciate your feedback, which has helped us improve the clarity of our contributions. We will revise the manuscript accordingly and sincerely hope this addresses your concern. We would be grateful if you would consider this clarification in your re-evaluation.
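The selection rule discussed in this thread (sort the candidates by relevance score, then take the images between the two largest consecutive score drops occurring after the target) can be sketched as follows. This is our simplified illustration of the described idea, not the authors' implementation, and all names are ours:

```python
import numpy as np

def hard_negative_set(scores, target_idx):
    """Given relevance scores for all candidates and the index of the
    annotated target, return candidate indices lying between the two
    largest consecutive score drops that occur after the target.
    Images ranked before the first drop are treated as potential false
    negatives; images after the second drop as easy negatives."""
    order = np.argsort(scores)[::-1]                  # rank candidates, best first
    target_rank = int(np.where(order == target_idx)[0][0])
    ranked = scores[order]
    drops = ranked[:-1] - ranked[1:]                  # drop after each rank position
    after = np.arange(len(drops)) >= target_rank      # only drops at/after the target
    cand = np.where(after, drops, -np.inf)
    d1, d2 = np.sort(np.argpartition(cand, -2)[-2:])  # positions of the top-2 drops
    return order[d1 + 1 : d2 + 1]                     # indices between the two drops

# Toy scores: target at index 0, two near-duplicates (potential false
# negatives), a mid-relevance cluster, then clearly irrelevant images.
scores = np.array([0.95, 0.90, 0.88, 0.60, 0.58, 0.55, 0.20, 0.15])
hard = hard_negative_set(scores, target_idx=0)  # -> the mid-relevance cluster
```

In this toy run the two largest drops after the target are 0.88 → 0.60 and 0.55 → 0.20, so the mid-relevance cluster (indices 3, 4, 5) becomes the hard negative set, while the near-duplicates above the first drop are excluded as potential false negatives.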
Summary: The paper introduces the QuRe algorithm, leveraging the BLIP-2 framework and a Hard Negative Sampling strategy to address the challenges in Composed Image Retrieval (CIR). The novel approach is demonstrated using a custom dataset, HP-FashionIQ. While the approach is innovative and effectively addresses key pain points, the paper does not provide ablation studies for the BLIP-2 model, making it difficult to discern the individual contributions of the model enhancements from the sampling strategy. Claims And Evidence: The effectiveness of the Hard Negative Sampling strategy is well-supported by empirical evidence. However, the paper lacks a crucial ablation study of the BLIP-2 framework, which is necessary to isolate and understand its specific contributions relative to the overall performance improvements claimed. Methods And Evaluation Criteria: The QuRe algorithm and Hard Negative Sampling approach are clearly articulated and address the identified issues in CIR tasks effectively. The validation on the HP-FashionIQ dataset appropriately benchmarks the model's performance. Nevertheless, the absence of BLIP-2-specific ablation studies undermines the ability to fully evaluate the method's effectiveness. Theoretical Claims: While practical in nature, the paper does not adequately justify the theoretical basis for the Hard Negative Sampling strategy's relevance score methodology. A rigorous mathematical or logical explanation is required to substantiate the claims made about the selection of negatives based on their relevance scores. Experimental Designs Or Analyses: The experimental design demonstrates a thorough understanding of the practical applications but lacks detailed ablation studies on the BLIP-2 model. This omission is critical as it hinders the clear differentiation of the effects of the model's capabilities from those of the sampling strategy. 
Supplementary Material: The supplementary materials provide comprehensive details on the algorithmic processes and data construction. These are well-prepared and contribute positively to the transparency of the research. Relation To Broader Scientific Literature: The paper positions itself well within the existing literature and effectively addresses significant challenges in the field. It brings innovative solutions to the forefront, though it could benefit from a more detailed discussion on theoretical frameworks related to negative sampling. Essential References Not Discussed: The current literature review is sufficient but would be enhanced by including discussions on theoretical frameworks that specifically address the role and impact of relevance scores in negative sampling within similar contexts. Other Strengths And Weaknesses: Strengths: 1. The paper is well-written and easy to understand. 2. It effectively tackles key pain points in Composed Image Retrieval tasks, notably through its novel Hard Negative Sampling strategy, which improves model robustness. 3. The detailed analysis and clear visualizations of the Hard Negative Sampling strategy aid in understanding and highlight its practical impact. Weaknesses: 1. The lack of ablation studies for the BLIP-2 framework. 2. Insufficient theoretical justification for the relevance score methodology in Hard Negative Sampling. Both are significant drawbacks. Other Comments Or Suggestions: It is recommended that the authors include ablation studies for the BLIP-2 model to clarify its contributions and provide a detailed mathematical justification for the relevance-based hard negative selection process. Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank Reviewer 6GLy for recognizing the contributions of our work and for providing valuable suggestions. **Q1 : Mathematical justification for the Hard Negative Sampling strategy** We re-emphasize an important limitation in CIR datasets. Since only one or a few target images are annotated per query, **it is not feasible to accurately determine which images are false, hard, or easy negatives**. Hence, we first provide a theoretical justification for our approach, explaining why defining the hard negative set and sampling from it in each epoch is effective in our scenario. The justification shows that sampling negatives from the hard negative set yields a higher expected loss than sampling from the full corpus, thereby guiding the model to focus more on the defined hard negatives during training. Please refer to the following link: ### **<https://anonymous.4open.science/r/QuRe-ICML-Rebuttal/QuRe_Justification.pdf>** However, if the defined hard negative set is improperly constructed and includes false negatives, it can misguide training. In $\text{\color{red}Figure 4}$, we observe a clear performance improvement immediately after introducing the hard negative set, compared to sampling from the full corpus. Conversely, sampling from the Top-K set, which contains more false negatives than our defined set, leads to performance degradation. **Q2 : Ablation studies for the BLIP-2 model architecture** We conducted additional experiments using both BLIP and BLIP-2 model structures. Specifically, we trained QuRe with the BLIP backbone to compare with Bi-BLIP4CIR. We also identified the latest version of CoVR-BLIP, CoVR-2 [r1], which uses BLIP-2, and compared it with our original QuRe model. We report results on the CIRR, FashionIQ, HP-FashionIQ, and CIRCO datasets. Notably, the current state-of-the-art method SPRC originally used BLIP-2, and QuRe already outperformed it in our original submission. 
| CIRR | backbone | Recall@1 | Recall@5 | Recall@10 | Recall@50 | Recall s@1 | Recall s@2 | Recall s@3 | Mean 5 + 1 |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| Bi-BLIP4CIR | BLIP | 32.55 | 64.36 | 76.53 | 91.61 | 63.54 | 82.46 | 92.48 | 63.95 |
| **QuRe** | **BLIP** | **51.52** | **80.29** | **88.89** | **97.74** | **78.02** | **91.23** | **96.55** | **79.16** |
| CoVR-2 | BLIP-2 | 42.80 | 74.60 | 83.90 | 96.22 | 69.49 | 86.22 | 93.98 | 72.05 |
| **QuRe** | **BLIP-2** | **52.22** | **82.53** | **90.31** | **98.17** | **78.51** | **91.28** | **96.48** | **80.52** |

| **FashionIQ** | **backbone** | **Dress - Recall@10** | **Dress - Recall@50** | **Shirt - Recall@10** | **Shirt - Recall@50** | **TopTee - Recall@10** | **TopTee - Recall@50** | **Recall@10** | **Recall@50** | **Mean** |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| Bi-BLIP4CIR | BLIP | 39.12 | 62.92 | 39.21 | 62.81 | 44.37 | 67.06 | 40.90 | 64.26 | 52.58 |
| **QuRe** | **BLIP** | **40.80** | **64.90** | **45.93** | **65.90** | **52.07** | **72.87** | **46.27** | **67.89** | **57.08** |
| CoVR-2 | BLIP-2 | 46.41 | 69.51 | 49.75 | 67.76 | 51.86 | 72.46 | 49.34 | 69.91 | 59.63 |
| **QuRe** | **BLIP-2** | **46.80** | **69.81** | **53.53** | **72.87** | **57.47** | **77.77** | **52.60** | **73.48** | **63.04** |

| **CIRCO** | **backbone** | **mAP@5** | **mAP@10** | **mAP@25** | **mAP@50** |
|:---:|:---:|:---:|:---:|:---:|:---:|
| Bi-BLIP4CIR | BLIP | 4.74 | 4.97 | 5.69 | 6.1 |
| **QuRe** | **BLIP** | **20.85** | **21.48** | **23.35** | **24.31** |
| CoVR-2 | BLIP-2 | 23.18 | 23.59 | 25.57 | 26.49 |
| **QuRe** | **BLIP-2** | **23.22** | **24.23** | **26.26** | **27.24** |
| **HP-FashionIQ** | **backbone** | **Preference Rate (%)** |
|:---:|:---:|:---:|
| Bi-BLIP4CIR | BLIP | 67.33 |
| **QuRe** | **BLIP** | **75.28** |
| CoVR-2 | BLIP-2 | 71.99 |
| **QuRe** | **BLIP-2** | **74.55** |

The results show that QuRe with a BLIP backbone consistently outperforms Bi-BLIP4CIR. CoVR-2, which uses a BLIP-2 backbone, still underperforms our original QuRe with BLIP-2. [r1] Ventura, Lucas, et al. "CoVR-2: Automatic Data Construction for Composed Video Retrieval." IEEE Transactions on Pattern Analysis and Machine Intelligence (2024). We will revise the manuscript to incorporate the points addressed in this rebuttal. We hope our responses and the novelty of our contributions have sufficiently addressed your concerns. If there are any remaining issues or points that need further clarification, we would be more than happy to provide additional details.
Summary: This work introduces a new method, QuRe, to tackle the problem of composed image retrieval. The proposed method adopts and tailors hard negative mining to emphasize not only the ranking of the target image but also other relevant images in the dataset, aiming to improve the overall recall. Experiments on benchmark datasets demonstrate improved performance of the proposed method. Claims And Evidence: The claims could be problematic. See weaknesses. Methods And Evaluation Criteria: The method makes sense. Theoretical Claims: No proofs needed. Experimental Designs Or Analyses: The experimental designs could be biased. See weaknesses. Supplementary Material: N/A Relation To Broader Scientific Literature: Could be beneficial to multimodal learning. Essential References Not Discussed: The related works are essential. Other Strengths And Weaknesses: The motivation of the proposed method could be problematic. The proposed method challenges the data labeling of existing benchmark datasets – existing benchmark datasets can mislabel relevant images as false negatives and thus could be highly biased. In this case, detecting and prioritizing hard negatives could be beneficial. Yet, the paper lacks justification and a thorough study of this motivation. There seem to be no visualizations or discussions of the steep drops during optimization or across different datasets. Other Comments Or Suggestions: Another strategy to validate the effectiveness of the proposed motivation is to apply the proposed method to a wide range of datasets beyond HP-FashionIQ and CIRR. Is the motivation to take advantage of the inherent bias of the above datasets? Questions For Authors: The proposed method remains unclear to me. The proposed method monitors the steep drops to distinguish between different levels of negatives. Yet, how can the quality of the ranking and the steep drops be ensured during optimization, especially at the initial stage? More illustrations or empirical evidence are expected.
Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank Reviewer 13Df for the insightful comments and the time dedicated to reviewing our manuscript. Below, we provide detailed responses to each of your points. **Q1 : Justification and thorough study of the motivation of the proposed method** The primary motivation behind our method stems from a key limitation of existing CIR datasets, where only one or a few target images are annotated per query. As a result, false negatives are often included as negatives during training. Existing baselines typically treat all non-target images as negatives, which delays convergence and degrades performance [r1]. Our method addresses this issue by modifying the contrastive learning objective to compare the true positive against a single negative of increasing difficulty. This is achieved by selecting a small set of hard negatives for each query. We validate the effectiveness of our approach by achieving state-of-the-art performance on both CIRR and FashionIQ, and demonstrating the highest correlation with human preferences on HP-FashionIQ. To clarify the motivation of our study, we added visualizations of the steep drops in relevance scores (see $\text{\color{blue}\textbf{Q2}}$) and provided a mathematical justification in response to $\text{\color{blue}\textbf{Q1 in Reviewer 6GLy}}$. [r1] Huynh, Tri, et al. "Boosting contrastive self-supervised learning with false negative cancellation." Proceedings of the IEEE/CVF winter conference on applications of computer vision. 2022. **Q2 : Visualizations and discussions of the steep drops** Thank you for pointing this out. We agree that visualizing the steep drops during optimization is beneficial. To illustrate this, we plotted the relevance scores for one sample from the FashionIQ dataset at two stages: before training and after the warm-up training.
Please refer to the following link: ### **<https://anonymous.4open.science/r/QuRe-ICML-Rebuttal/QuRe_steep_one.png>** We defined the hard negatives as the images between the red and green lines. By selecting the top-2 largest drops in relevance score following the target, we observed steep degradations immediately before the red line and after the green line. These steep drops indicate substantial decreases in relevance [r2], suggesting that the selected boundaries effectively separate false negatives, hard negatives, and easy negatives. As the above figure shows only a single query, we also aggregated the results across all queries, which can be found here: ### **<https://anonymous.4open.science/r/QuRe-ICML-Rebuttal/QuRe_steep_agg.png>** This aggregated visualization demonstrates that after the warm-up phase, the hard negatives shift toward higher ranks, likely capturing more true hard negatives. This supports our design choice of including a warm-up stage before defining hard negatives. Without a warm-up, the hard negative set would contain many easy negatives, as shown in the left figure. [r2] Xia, Peng, et al. "Mmed-rag: Versatile multimodal rag system for medical vision language models." ICLR 2025. **Q3 : Applying QuRe to a wide range of datasets beyond HP-FashionIQ and CIRR** We evaluated our method on four datasets: CIRR, FashionIQ, HP-FashionIQ, and CIRCO. CIRR and FashionIQ are two representative datasets in CIR, where CIRR covers general domains (e.g., people, animals, food) and FashionIQ focuses on fashion-related queries (e.g., shirts, dresses). We agree that evaluating on a broader range of datasets would further validate our approach. To this end, we created the HP-FashionIQ dataset to better capture human preferences. We additionally evaluated on CIRCO, a dataset derived from COCO, which enables relevance-based retrieval evaluation using mAP in a zero-shot setting.
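To make the top-2-drop boundary rule from Q2 concrete, here is a minimal sketch of how such a rule could be computed. This is only our reading of the rebuttal's description, with hypothetical function and variable names; it assumes relevance scores already sorted in descending order, and the actual QuRe implementation may differ.

```python
import numpy as np

def hard_negative_bounds(scores, target_rank):
    """Pick the two largest consecutive drops in the descending relevance
    scores at or after the target's rank; images between the two drops are
    treated as hard negatives (returns start inclusive, end exclusive)."""
    drops = scores[:-1] - scores[1:]   # drop between ranks i and i+1
    drops[:target_rank] = -np.inf      # only consider drops following target
    first, second = sorted(np.argsort(drops)[-2:])
    return first + 1, second + 1

# Toy ranked list: target at rank 0, then near-duplicates (potential false
# negatives), a steep drop, three hard negatives, another steep drop.
scores = np.array([0.95, 0.94, 0.93, 0.60, 0.58, 0.57, 0.20, 0.19])
start, end = hard_negative_bounds(scores, target_rank=0)
print(start, end)  # ranks 3..5 (scores 0.60, 0.58, 0.57) are hard negatives
```

In this toy example the first steep drop separates near-duplicates of the target (likely false negatives) from the hard negatives, and the second separates hard from easy negatives, mirroring the red/green boundary lines described above.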
**Q4 : How can the quality of the ranking and the steep drops be ensured, especially at the initial stage?** If the model is not sufficiently trained and the relevance scores are not reliable, our proposed method, which defines the hard negative set based on relevance scores, might produce inaccurate results. To address this, we include a warm-up phase where the model is trained by sampling negatives from the entire corpus without defining a hard negative set. As shown in $\text{\color{red}\textbf{Figure 4}}$, the model achieves comparable performance even when negatives are sampled from the full corpus ("All corpus"). Additionally, the **QuRe_steep_agg.png** visualization in $\text{\color{blue}\textbf{Q2}}$ demonstrates that defining the hard negative set without warm-up training tends to select easy negatives, which undermines the effectiveness of hard negative mining. We will revise the manuscript to reflect the points raised in this rebuttal. We hope our responses and the novelty of our contributions have adequately addressed your concerns. Should any issues remain, we are happy to provide further clarification. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' effort in the rebuttal. Some of my concerns are addressed, including the visualizations and illustrations on datasets. Yet I'm not sure about the generalizability of the proposed method to a broader spectrum of datasets. I'm willing to raise my score.
On the Tension between Byzantine Robustness and No-Attack Accuracy in Distributed Learning
Accept (spotlight poster)
Summary: This paper explores the trade-off between Byzantine robustness and standard accuracy in distributed learning. It provides a theoretical analysis of the error of robust aggregation methods when there are no Byzantine workers. In doing so, it establishes lower bounds on the deviation from the average of the datapoints as well as on the convergence rate of Byzantine-robust gradient descent (ByzGD) in the absence of Byzantine workers. The authors present theoretical results demonstrating that the worst-case aggregation error (with respect to the average) increases as the number of expected Byzantine workers increases, leading to a potential degradation in accuracy. Empirical experiments are provided to support the theoretical findings. Claims And Evidence: The paper claims that making an aggregator more robust to Byzantine workers leads to an inevitable degradation in standard accuracy when there are no Byzantine failures. To support this claim, the paper studies the notion of **worst-case accuracy** of a robust aggregator $\textbf{Agg}$, defined as $$ \epsilon := \sup_{x_1, \dots, x_n \in \mathbb{R}^d} \frac{\Vert\textbf{Agg}(x_1, \dots, x_n) - \bar{x}_n \Vert^2}{\frac{1}{n} \sum_{i=1}^n \Vert x_i - \bar{x}_n \Vert^2} . $$ $\textbf{Note:}$ I am rewriting the definition here (of course excluding the trivial case of zero empirical variance). Theoretical results support the claim by proving lower bounds on a well-known class of robust aggregation methods known as $(f,\kappa)$-robust averaging introduced in [1]. This bound reads $\epsilon \in \Omega(\frac{f}{n-f})$, hence it linearly depends on the ratio $\frac{f}{n-f}$, where $n$ is the total number of workers in the system and $f$ is the maximal number of Byzantine workers the aggregation can theoretically tolerate. Based on this idea, the paper also provides a lower bound on the worst-case training error of ByzGD. These lower bounds are tight, as proven by the matching upper bounds the paper presents.
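As a numerical sanity check of the stated $\Omega(\frac{f}{n-f})$ bound, the extreme skewed construction with $n-f$ points at 0 and $f$ points at 1 realizes the ratio exactly. This is a 1-D sketch of ours assuming a trimmed mean that discards the $f$ smallest and $f$ largest values; it is not code from the paper.

```python
import numpy as np

def trimmed_mean(x, f):
    """Drop the f smallest and f largest of the n values, average the rest."""
    s = np.sort(x)
    return s[f:len(x) - f].mean()

def accuracy_ratio(x, agg_value):
    """||Agg(x) - mean(x)||^2 / ((1/n) * sum_i ||x_i - mean(x)||^2)."""
    xbar = x.mean()
    return (agg_value - xbar) ** 2 / np.mean((x - xbar) ** 2)

n = 10
for f in [1, 2, 3, 4]:
    # Skewed dataset realizing the worst case: n - f points at 0, f points at 1.
    x = np.array([0.0] * (n - f) + [1.0] * f)
    ratio = accuracy_ratio(x, trimmed_mean(x, f))
    assert np.isclose(ratio, f / (n - f))  # worst-case error grows as f/(n-f)
```

Here the trimmed mean discards the $f$ points at 1 and outputs 0, incurring squared error $(f/n)^2$ against the mean $f/n$, while the empirical variance is $f(n-f)/n^2$; their ratio is exactly $f/(n-f)$.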
Empirical experiments show accuracy degradation for ByzGD in the absence of Byzantine workers in several contexts to further support the point made by the theoretical claims (this point is qualified below). Methods And Evaluation Criteria: The authors define worst-case accuracy using the norm difference from the true average of the datapoints (see above). The worst-case scenario (i.e., the lower bound) is constructed by assuming datasets with extreme skewness, which significantly affects the distance between the computed aggregation and the true average of the distribution. However, in such cases, targeting the average as the aggregation goal may no longer be statistically meaningful (as the average is no longer a meaningful summary of the dataset). Accordingly, I am not certain I understand the semantics of being distant from the average in such a scenario. In essence, I am unsure that the worst-case analysis is the best way to capture the accuracy-vs-robustness trade-off of Byzantine-robust methods. A more informative approach would be to analyze statistical bounds under assumed data distributions to better model realistic data heterogeneity among honest workers. I also feel that there is a lack of deep analysis of state-of-the-art robust aggregation techniques. In fact, the paper does not truly discuss the impact of state-of-the-art pre-aggregation methods like NNM or Bucketing on the trade-off being studied. These methods have been shown to improve Byzantine robustness while mitigating accuracy degradation, making their omission a significant gap in my opinion. I have similar concerns about the experimental part. While the experiments confirm theoretical findings on simple aggregation rules like trimmed mean or median, they do not seem to explain very clearly the behavior of state-of-the-art methods like NNM or Bucketing.
**Side note:** I would suggest including loss curves in the empirical evaluation to study whether the theoretical findings hold throughout the training process (as the theory is on training, not testing). Theoretical Claims: I checked the correctness of some of the proofs (proofs of Theorems 3.1, 3.2, and 3.4) and only had a quick look over the rest of the proofs. Overall the proofs seem correct to me, even though they also seemed quite limited in terms of technical novelty. In fact, the derivation of the lower and upper bounds on aggregation error appears to closely follow existing proofs in the literature, see especially Section 8 in [1]. The Byzantine convergence analysis also resembles prior work (see, e.g., [2] for the lower bound and [1] for the upper bound), and the paper does not sufficiently explain what modifications or adaptations were required. The connection of the convergence result with prior work is also not well explained in my opinion. Could the authors explain what key insight we can get from the theorem besides that we have a lower bound in $\frac{f}{n} G^2$, which is already what one has when considering the presence of Byzantine workers? [1] Fixing by Mixing: A Recipe for Optimal Byzantine ML under Heterogeneity, Youssef Allouah, Sadegh Farhadkhani, Rachid Guerraoui, Nirupam Gupta, Rafael Pinot, John Stephan (AISTATS 2023) [2] Byzantine-Robust Learning on Heterogeneous Datasets via Bucketing, Sai Praneeth Karimireddy, Lie He, Martin Jaggi (ICLR 2022) Experimental Designs Or Analyses: The experimental setup seemed sound to me. Supplementary Material: I read the proofs from the supplementary material. Refer to "Theoretical Claims" above for more details. Relation To Broader Scientific Literature: As I was mentioning above, the paper's technical content seems to be quite derivative compared to [1,2]. Nevertheless, investigating the tension between accuracy and robustness theoretically seems like a novel and interesting idea.
Essential References Not Discussed: There are some recent works that are very related to the Byzantine literature, like [3,4], that could be cited but are not essential to the understanding of the paper. [3] Robust Distributed Learning: Tight Error Bounds and Breakdown Point under Data Heterogeneity, Youssef Allouah, Rachid Guerraoui, Nirupam Gupta, Rafaël Pinot, Geovani Rizk (NeurIPS 2023) [4] Variance Reduction is an Antidote to Byzantine Workers, Eduard Gorbunov, Samuel Horvath, Peter Richtarik, Gauthier Gidel (ICLR 2023) Other Strengths And Weaknesses: NA Other Comments Or Suggestions: This paper raises an interesting and important question about the trade-off between Byzantine robustness and accuracy in non-attack scenarios. However, I am unsure if it currently provides a deep enough analysis of this trade-off (I will wait for discussion with the authors and the other reviewers to make my final decision, I guess). Here are some suggestions that might help improve the paper. - Consider statistical bounds under realistic data distributions rather than focusing solely on worst-case skewed data. - Expand the analysis of state-of-the-art robust aggregation techniques like NNM and Bucketing. - Clarify the novelty of the proof techniques and how they differ from prior work. - Strengthen the empirical evaluation by including loss curves and a more in-depth analysis of heterogeneity effects. Questions For Authors: Please answer my above concerns, especially regarding proof novelty and the limitations of using a worst-case analysis. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their valuable time, insightful comments, and support of our work. We would like to answer the raised questions point by point as follows: **Q1. I am unsure that the worst-case analysis is the best way to capture the accuracy vs robustness trade-off of Byzantine-robust methods. A more informative approach would be to analyze statistical bounds under assumed data distributions.** We agree with the reviewer that a statistical bound would be informative and provide a simple way to obtain a statistical bound based on the definition of $\epsilon$-accuracy below. When $x_1,\ldots,x_n$ are sampled from a probability distribution, we can take expectation on both sides of the inequality in Definition 2.2 ($\epsilon$-accuracy) and obtain that $$\mathbb{E}||Agg(x_1,\ldots,x_n)-\bar{x}||^2 \leq \epsilon \cdot \mathbb{E}[\frac{1}{n}\sum_{i=1}^n ||x_i-\bar{x}||^2], $$ where $\mathbb{E}[\frac{1}{n}\sum_{i=1}^n ||x_i-\bar{x}||^2]$ can be viewed as a measurement of the diversity. Specifically, when $x_1,\ldots,x_n$ are independently sampled from the same probability distribution with variance $\sigma^2$, we have $$\mathbb{E}||Agg(x_1,\ldots,x_n)-\bar{x}||^2 \leq \epsilon\cdot \frac{n-1}{n}\sigma^2. $$ Meanwhile, we would like to point out that there are some other measurements of diversity such as $\max_{i\neq j}\mathbb{E}||x_i-x_j||^2$ in existing works [2]. It still requires much more effort to study what statistics can be used to better analyze the Byzantine robustness, which we will leave for future work. **Q2. Clarify the novelty of the proof techniques and how they differ from prior work.** **Q2.(a). the derivation of the lower and upper bounds on aggregation error appears to closely follow existing proofs in the literature, especially Section 8 in [1].** We would like to point out politely that we only follow the notations (e.g., the notations of $f$ and $n$) in [1]. 
Although the format of our results may be similar to those in [1], the proof of the bounds for the aggregation error in this work is substantially different, since we consider a different scenario without Byzantine workers. **Q2.(b). Could the author explain what key insight we can get from the theorem besides that we have a lower bound in $\frac{f}{n} G^2$, which is already what one has when considering the presence of Byzantine workers?** We would like to politely point out that when there are $f$ Byzantine workers, the lower bound should be $\frac{f}{n-2f}G^2$ (please refer to Table 1 in [1]), which can be infinitely large when $f\rightarrow(\frac{n}{2})_{-}$. Previous work [2] considers a special case where $\delta=\frac{f}{n}\leq\delta_{\max}<\frac{1}{2}$ and obtains a lower bound of the order $O(\delta)=O(\frac{f}{n})$. However, it cannot be extended to general cases of $\delta<\frac{1}{2}$ because $$\frac{f}{n-2f}=\frac{f}{n}\times\frac{n}{n-2f}=\frac{f}{n}\times\frac{1}{1-2\delta}\leq\frac{1}{1-2\delta_{\max}}\frac{f}{n}.$$ When $\delta_{\max}\rightarrow\frac{1}{2}$, the term $\frac{1}{1-2\delta_{\max}}$ will diverge to $+\infty$ and can no longer be viewed as a constant. In summary, the lower bound $\frac{f}{n}G^2$ in this work will approach $\frac{1}{2}G^2$ when $f\rightarrow\frac{n}{2}$ (or equivalently, $\delta\rightarrow\frac{1}{2}$). On the contrary, the lower bounds in existing works considering the presence of Byzantine workers diverge to $+\infty$ when $f\rightarrow\frac{n}{2}$. **Q3. I would suggest including loss curves in the empirical evaluation.** We sincerely thank the reviewer for the constructive suggestion and will add the loss curves in the final version. Since we are not allowed to attach figures here, we present in the following table the test accuracy during the training process when using Multi-Krum with the Dirichlet-distribution hyper-parameter $\alpha=0.1$. The added empirical results further support the conclusion of our work.
|Epoch|19|39|59|79|99|119|139|159 (final)|
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|$f=0$|77.27%|84.70%|84.46%|86.90%|87.81%|88.82%|89.17%|89.42%|
|$f=1$|55.04%|78.78%|83.64%|84.55%|86.36%|87.27%|87.77%|88.05%|
|$f=3$|45.24%|72.90%|78.00%|81.11%|82.60%|82.19%|82.98%|83.50%|
|$f=5$|30.41%|37.60%|52.83%|63.48%|69.01%|69.55%|69.68%|69.86%|
|$f=7$|18.68%|25.55%|27.64%|30.88%|33.98%|38.72%|40.05%|40.31%|

**Q4. They do not seem to explain very clearly the behavior of state-of-the-art methods like NNM or Bucketing.** To answer the question, we provide another perspective on NNM and bucketing below. An aggregator combined with NNM (or bucketing) can be considered as a new aggregator that can resist fewer Byzantine workers but has less aggregation error. Therefore, when using NNM or bucketing, we actually use the prior knowledge of the Byzantine worker number to make a better trade-off between robustness and accuracy. ---- We hope that our response can address the reviewer's concerns, and we greatly thank the reviewer again for the support of our work.
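As a numerical sanity check of the statistical bound sketched in Q1 above (our own illustration, not from the paper): for i.i.d. samples, the diversity term $\frac{1}{n}\sum_{i=1}^n ||x_i-\bar{x}||^2$ has expectation $\frac{n-1}{n}\sigma^2$, which a quick Monte-Carlo run confirms.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma, trials = 5, 2.0, 200_000

# `trials` datasets of n i.i.d. scalars with variance sigma^2; for each,
# compute the diversity term (1/n) * sum_i (x_i - xbar)^2, then average.
x = rng.normal(0.0, sigma, size=(trials, n))
diversity = ((x - x.mean(axis=1, keepdims=True)) ** 2).mean(axis=1)

# E[(1/n) sum_i ||x_i - xbar||^2] = (n-1)/n * sigma^2 for i.i.d. samples,
# i.e. the right-hand-side factor in the expected bound from Q1.
assert abs(diversity.mean() - (n - 1) / n * sigma**2) < 0.05
```

The factor $\frac{n-1}{n}$ rather than 1 arises because the empirical mean $\bar{x}$ is itself estimated from the same samples.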
Summary: The paper analyzes the learning error in distributed learning induced by robust aggregation schemes in the case when the actual number of Byzantine workers is 0, while the system is designed to handle a non-zero number of Byzantine workers $f$. The paper makes important contributions to the field of robustness in distributed learning by re-analyzing the errors of SOTA robust aggregation in the above-mentioned setting (i.e., no actual Byzantine workers but non-zero $f$). The obtained bounds are better than those that assume the presence of Byzantine workers, especially when $f$ approaches the limit $\frac{n}{2}$. The presented analysis is useful for studying the impact of Byzantine-robustness in ideal scenarios where the actual number of Byzantine workers (in most learning rounds) is much smaller than the maximum possible number of Byzantine workers (in any given learning round). The paper can have long-lasting implications. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. I (quickly) checked the proofs of Theorems 3.1, 3.2 and 3.6. I did not find any issues. Experimental Designs Or Analyses: While I did not reproduce the experimental results, they appear sound/valid. Supplementary Material: Yes, I read some proof details in Appendix A. Relation To Broader Scientific Literature: The paper makes important contributions to an important field of Byzantine-robustness in distributed learning. Essential References Not Discussed: Not that I could think of. Other Strengths And Weaknesses: Strengths: 1. Good, comprehensible proofs with proper explanation and motivation between steps. 2. The proof techniques can be applied to future research in this field. Weaknesses: 1. Impact on learning error under the more general heterogeneous setting of (G, B)-dissimilarity is missing. Other Comments Or Suggestions: Some suggestions for future extensions: 1.
Utility of the obtained results for studying the algorithmic stability (and thereby the generalization power) of ByzGD. 2. Interpolation between the two extreme cases: i) the actual number of Byzantine workers is 0, and ii) the actual number of Byzantine workers is $f$. 3. Did you mean to present Theorem 4.6 for the $\epsilon$-accurate aggregation rule? Lines 328 - 329 after the theorem mention the $\epsilon G^2$ and $\frac{\epsilon G^2}{2 \mu}$ terms. A few typos: 1. Line 110: For 'an' $(f, \kappa)$-robust ... Questions For Authors: See my suggestions and weaknesses above. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their valuable time, constructive suggestions, and support of our work. We would like to respond point by point below. **Comment 1. Impact on learning error under the more general heterogeneous setting of (G, B)-dissimilarity is missing.** We agree with the reviewer that analyzing the tension under the more general setting of $(G, B)$-dissimilarity can help improve the theoretical contribution. Meanwhile, we politely think that adding the theory under $(G, B)$-dissimilarity requires substantial additional work, and the current version already makes an important contribution, as the reviewer mentioned. We will study the learning error under more general settings in future work and sincerely thank the reviewer again for the constructive suggestion. **Comment 2. Suggestions: (a) Utility of the obtained results for studying the algorithmic stability (thereby, generalization power) of ByzGD; (b) Interpolation between the two extreme cases: i) the actual number of Byzantine workers is 0 and ii) the actual number of Byzantine workers is $f$.** We greatly thank the reviewer for the constructive suggestions and will study the two problems in future work. **Comment 3. Did you mean to present Theorem 4.6 for the $\epsilon$-accurate aggregation rule?** We sincerely thank the reviewer for pointing out the typo. Theorem 4.6 is for the $\epsilon$-accurate (instead of $(f,\kappa)$-robust) aggregation rule. Please refer to Appendix A.6 for the correct result. We will fix this typo in the main text in the final version. Meanwhile, we would clarify that the novelty and the contribution of this paper are almost unaffected by the revision. Specifically, as mentioned in our response to Concern 1, the main contributions of this paper are the lower bounds. Please note that $\epsilon\geq\frac{f}{n-f}$ (Theorem 3.1) and that $TM_{f/n}$ is both $(f,\kappa)$-robust and $\epsilon$-accurate with $\epsilon=\frac{f}{n-f}$.
It shows the tightness of the lower bound in Theorem 4.5. **Comment 4. A few typos ...** We greatly thank the reviewer for pointing out the typos. We promise to proofread the manuscript carefully and fix the typos in the final version. We sincerely thank the reviewer again for the great support of our work, and we are always willing to answer any further questions. --- Rebuttal Comment 1.1: Comment: I thank the authors for responding to my comments. I will keep my score. --- Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer for the quick acknowledgement and the support of our work.
Summary: This paper examines distributed learning in a setting where the server implements a robust aggregation rule. Motivated by the Byzantine-robust learning framework, it evaluates the performance of distributed gradient descent (GD) methods designed to cope with Byzantine workers, even when none are present. The authors extend the definition of $(\delta,\kappa)$-robustness to scenarios without Byzantine workers by introducing the notion of $\epsilon$-accuracy, derive accuracy coefficients for several robust aggregation rules, establish a convergence result and a lower bound, and support their findings with numerical experiments on CIFAR-10 that illustrate the impact of varying the "assumed" number of Byzantine workers. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: I checked the correctness of the proofs for the upper and lower bounds (Theorems 4.5 and 4.6) . Experimental Designs Or Analyses: N/A Supplementary Material: Sections A.5, A.6 Relation To Broader Scientific Literature: The proof of Theorem 4.6 closely follows the proof of Theorem 1 in [1], with Definition 2.2 replacing Definition 2.1 (which is expected, given that the definitions coincide when $f=0$, as the authors mention). Therefore, the analysis is of limited novelty as it can be directly deduced from [1]. [1] Allouah, Farhadkhani, Guerraoui, Gupta, Pinot, Stephan. "Fixing by Mixing: A Recipe for Optimal Byzantine ML under Heterogeneity", AISTATS, 2023. Essential References Not Discussed: N/A Other Strengths And Weaknesses: - The experimental setup is sound and detailed. - The analysis is done for ByzGD, where local gradients are computed exactly. 
While the authors mention ByzSGD as a variant, it is in fact much more common and brings up additional challenges: even in homogeneous scenarios (when $G=0$), simply applying a robust aggregation rule is not sufficient, even when Byzantine workers are absent, as seen in cases with skewed noise distributions (e.g., Counterexample 3 and others in [2]). They suggest that incorporating momentum or a similar strategy that leverages historical gradient data is necessary for true Byzantine-robustness, yet no analysis for momentum is provided (considering just SGD without momentum won't work in my opinion). Other Comments Or Suggestions: In the proof of Theorem 4.6, the term in the upper bound related to heterogeneity (for both non-convex and PL functions) is proportional to $\frac{f}{n-f}$, whereas according to the proof (in Appendix A.6) it should instead be proportional to $\epsilon$ (or $\kappa$) -- the accuracy of the actual aggregator used, not the optimal accuracy. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the valuable time and the detailed review. We will respond to the raised concerns point by point as follows: **Concern 1. The proof of Theorem 4.6 is of limited novelty as it can be directly deduced from [1].** We thank the reviewer for sharing their concern and would like to clarify the meaning of Theorem 4.6 below. + We would like to first politely point out that the notation $f$ in [1] is the **known** Byzantine worker number, while $f$ in this work is the **assumed** Byzantine worker number. To the best of our knowledge, this is the first convergence result for ByzGD with any $(f,\kappa)$-robust aggregator when there are actually no Byzantine workers. + Meanwhile, since this work mainly focuses on the tension between Byzantine robustness and no-attack accuracy, the main contribution of Section 4 lies in the lower bound (i.e., Theorem 4.5). The main purpose of presenting Theorem 4.6 here is to show the tightness of Theorem 4.5. **Concern 2. The analysis is done for ByzGD and mentions ByzSGD as a variant. However, considering just SGD without momentum won't work in my opinion.** We thank the reviewer for the insightful comment. As discussed in Section 4 (please refer to the right column of line 326), using momentum can reduce the variance (which the aggregation error is proportional to) in ByzSGD. Thus, ByzSGD with momentum has a better convergence rate and more Byzantine robustness than ByzSGD. However, compared to ByzGD, ByzSGD with momentum does not improve the theoretical convergence rate w.r.t. the iteration number $T$. For example, we consider ByzGD with momentum as a special case of ByzSGD with momentum (with variance $\sigma^2=0$).
For the same case as in line 314 to line 320 where $$F_1(w)=\ldots=F_f(w)=\frac{nG}{2\sqrt{f(n-f)}}w^2$$ and $$F_{f+1}(w)=\ldots=F_n(w)=\frac{nG}{2\sqrt{f(n-f)}}(w^2-2w),$$ the aggregated result will be totally controlled by workers $\{f+1,\ldots,n\}$, and from the server's perspective, it is equivalent to optimizing the objective $\tilde{F}(w)=\frac{nG}{2\sqrt{f(n-f)}}(w^2-2w)$, whichever optimizer (e.g., ByzGD with momentum) is used. Thus, ByzSGD with momentum cannot theoretically improve over ByzGD in this case. Additionally, we failed to find reference [2] in the review although the reviewer mentioned it. We politely guess that the reference might be [B], which is presented below. Please correct us if the guess is wrong. [B] Karimireddy, S. P., He, L., and Jaggi, M. Learning from history for Byzantine robust optimization. In Proceedings of the International Conference on Machine Learning, pp. 5311–5319, 2021. **Concern 3. In the proof of Theorem 4.6, the term in the upper bound related to heterogeneity (for both non-convex and PL functions) is proportional to $\frac{f}{n-f}$, whereas according to the proof (in Appendix A.6) it should be instead proportional to $\epsilon$ (or $\kappa$) -- the accuracy of the actual aggregator used and not the optimal accuracy.** We sincerely thank the reviewer for pointing out the typo, and we will fix this typo in the final version. Meanwhile, we would like to clarify that the novelty and the contribution of this paper are largely unaffected by the revision. Specifically, as mentioned in our response to Concern 1, the main contributions of this paper are the lower bounds. Please note that $\epsilon\geq\frac{f}{n-f}$ (Theorem 3.1) and that $TM_{f/n}$ is both $(f,\kappa)$-robust and $\epsilon$-accurate with $\epsilon=\frac{f}{n-f}$. It shows the tightness of the lower bound in Theorem 4.5. **Additional Clarification** Finally, we would like to restate the main theoretical contributions of this work briefly.
In this paper, we provide lower bounds for the aggregation error (Section 3) and convergence rate of ByzGD (Section 4) for $(f,\kappa)$-robust aggregators. Moreover, we show the tightness of the lower bounds. We sincerely thank the reviewer again for the insightful review and hope that our response can address the reviewer's concerns. Meanwhile, we would greatly appreciate it if the reviewer could re-evaluate our work in light of our response. --- Rebuttal Comment 1.1: Comment: I thank the authors for their clarification and for addressing my concerns. I understand that the main contribution is the lower bound in Theorem 4.5, and I agree with the authors that this result also applies to the stochastic case. While deriving an upper bound for ByzSGD with momentum (providing the **rate** of convergence to an $\epsilon G^2$-order neighborhood) would be interesting given its practical relevance, I no longer view its absence as a significant drawback of this work. I have updated my rating accordingly. I have one small question regarding the distinction between $(f,\kappa)$-robustness and $\epsilon$-accuracy. From Tables 1 and 2, it appears that although TM has a strictly suboptimal $\kappa$, it achieves optimal $\epsilon$-accuracy. Can the authors please elaborate on this discrepancy? I notice that the analysis differs from that of Allouah et al. (2023). --- Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer for the support of our work and the insightful follow-up comment. Our response to the follow-up question regarding the distinction between $(f,\kappa)$-robustness and $\epsilon$-accuracy is presented below. Firstly, the $(f,\kappa)$-robustness property measures the worst-case aggregation error in the presence of Byzantine workers, while $\epsilon$-accuracy measures the worst-case aggregation error without Byzantine workers. 
We politely think that for a specific aggregator such as TM, an optimal $\epsilon$ does not necessarily lead to an optimal $\kappa$. Secondly, in Section 8.2 of the Appendix of Allouah et al. (2023), it is stated that the $\kappa$ values are tight in order of magnitude. However, it is uncertain whether the numerical constants are tight. The numerical constants may be further optimized. Meanwhile, we think that further optimization of $\kappa$ values is a challenging but interesting direction for future work. We thank the reviewer again for their valuable time, support of our work, and insightful comments. We promise to take all the reviews into consideration and revise accordingly in the final version.
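As an aside on the $TM_{f/n}$ tightness claim discussed in this thread, the equality $\epsilon=\frac{f}{n-f}$ for the trimmed mean is easy to verify numerically on a maximally skewed two-value input (our own construction; the helper below is a minimal sketch, not the paper's code):

```python
import numpy as np

def trimmed_mean(x, f):
    """f/n coordinate-wise trimmed mean in 1-D: drop the f smallest
    and the f largest inputs, then average the remaining n - 2f."""
    x = np.sort(np.asarray(x, dtype=float))
    return x[f:len(x) - f].mean()

n, f = 10, 3
# Maximally skewed inputs, no Byzantine workers: f workers at 0, n-f at 1.
x = np.array([0.0] * f + [1.0] * (n - f))

bar_x = x.mean()                              # true mean of all n inputs
err_sq = (trimmed_mean(x, f) - bar_x) ** 2    # squared no-attack error
variance = np.mean((x - bar_x) ** 2)          # empirical variance

# The error ratio matches the claimed accuracy epsilon = f / (n - f).
assert np.isclose(err_sq / variance, f / (n - f))
```

On this input the trimmed mean discards all $f$ outliers plus $f$ inliers and outputs exactly the majority value, so the squared error is $(f/n)^2$ against a variance of $f(n-f)/n^2$, giving the ratio $f/(n-f)$.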
Summary: This work studies robust aggregation methods in the Byzantine setting. Specifically, let $x_i \in \mathbb{R}^d$ be information held by worker $i$, and suppose that the goal is to compute the mean $\frac{1}{n}\sum_{i=1}^n x_i$. In the Byzantine setting, an unknown subset of $f$ workers are adversarially corrupted, and thus a robust aggregator $Agg(x_1, \dots, x_n)$ is used to approximate the mean of the uncorrupted workers' vectors, i.e., $\frac{1}{n-f}\sum_{i \in S} x_i$, where $S$ denotes the unknown set of uncorrupted workers. Such aggregators aim to approximate this quantity reasonably accurately without knowing $S$ in advance. While prior work has developed methods for robust aggregation, this work considers the scenario where previously proposed methods are applied even though there are actually no Byzantine workers present. This serves as a sanity check measuring the "price of robustness." To quantify robustness, they define an aggregator to be $(f,\kappa)$-robust if it approximates the desired output within a factor of $\kappa$ times the variance of the inputs, for every subset of uncorrupted workers of size at least $n-f$. To measure accuracy, they say an aggregator is $\epsilon$-accurate if its estimate is within $\epsilon$ times the variance of the inputs from the true mean of all $n$ workers. In their first set of results, they derive upper and lower bounds relating the accuracy $\epsilon$ of an estimator to its robustness parameter $\kappa$. They show general bounds satisfying $\frac{f}{n - f} \leq \epsilon \leq \kappa$, along with tighter, method-specific bounds for particular estimators (namely the geometric median, coordinate-wise trimmed mean, and coordinate-wise median). In all cases, they demonstrate that their bounds are relatively tight by explicitly constructing examples where these bounds are attained. Then, they turn their attention to one of the most common use cases of Byzantine aggregation, namely the aggregation of gradients. 
In this setting, each worker has its own local dataset and computes a gradient $\nabla F_i(w)$, where $w$ denotes the current shared model parameters and $F_i$ represents the local loss function evaluated on worker $i$'s dataset. In this setting, under several (relatively standard) assumptions on the loss functions, they provide lower and upper bounds for how closely $T$ steps of Byzantine gradient descent (where each step updates $w$ based on a robust aggregation of gradients computed by the workers) approximate the optimal loss. Specifically, their lower bounds hold under the assumption that the Polyak–Łojasiewicz condition is satisfied, indicating that achieving near-optimal loss through standard gradient descent would, in principle, be feasible. In their upper bound, they explicitly show that the accuracy term $\frac{f}{n - f}$, previously derived for general Byzantine aggregators, directly affects how closely one can approach the optimal loss. Empirically, the authors validate their theoretical results by training a ResNet-20 model on CIFAR-10 using Byzantine gradient descent with robust aggregators, specifically multi-Krum and coordinate-wise trimmed mean. They vary the maximum number of Byzantine workers ($f$) the aggregators can tolerate and the degree of heterogeneity in data distributions across workers (controlled by the Dirichlet parameter $\alpha$). Their results demonstrate that even in the absence of actual Byzantine attacks, increasing the robustness parameter $f$ significantly reduces accuracy on the clean test set, with this effect being particularly pronounced when data distributions across workers are highly heterogeneous (small $\alpha$). These empirical findings confirm their theoretical claim of an inherent tradeoff between robustness to Byzantine workers and accuracy under no attack. Claims And Evidence: Yes. They prove their claims and their experiments make sense. 
Methods And Evaluation Criteria: Yes, especially given that their contributions are primarily theoretical. I view the experiments as a basic sanity check to illustrate the effect of their theorems in practice. Theoretical Claims: I scanned through the proofs and they appear correct. They also all make very intuitive sense and so I do not doubt their validity. Experimental Designs Or Analyses: The experiment design made complete sense to me. It didn't require significant checking due to its very simple design (i.e. simply measuring the effectiveness of Byzantine algorithms with varying levels of tolerance and heterogeneity). Supplementary Material: Just for the proofs. Relation To Broader Scientific Literature: They seem well related. This does appear to be the first work that sanity checks existing methods on uncorrupted data, and this paper does well in positioning itself as such. Essential References Not Discussed: Not to my knowledge. Other Strengths And Weaknesses: Strengths: I think the theory is simple, easy to follow, and also quite illustrative. I generally like the theme of understanding tension between accuracy and robustness (which appears in many different fields of machine learning!) Weaknesses: Some of the results on specific aggregation methods seem like they could be deferred to the appendix (given that they are quite simple). I find the general bounds more interesting and would be interested in further theoretical analysis of these bounds in contexts of explicit heterogeneity (i.e. assume $x_i$ are sampled from some distribution, etc. etc.). Other Comments Or Suggestions: None. Questions For Authors: What are some future directions for investigating this problem under more concrete assumptions on the heterogeneity of the data? It feels like in those situations, one might hope for a "best of both worlds" by coming up with some sort of scheme to detect if one is in the Byzantine setting or not.
It also feels like there might be some more interesting theoretical bounds one could come up with under more structural assumptions on the data. Code Of Conduct: Affirmed. Overall Recommendation: 4
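For context on the experimental setup summarized in this review, heterogeneity controlled by a Dirichlet parameter $\alpha$ is commonly induced by splitting each class's samples across workers with Dirichlet-distributed proportions. A minimal sketch under our own naming (the paper's actual partitioning code may differ):

```python
import numpy as np

def dirichlet_partition(labels, n_workers, alpha, rng):
    """Split sample indices across workers; smaller alpha -> more skew."""
    parts = [[] for _ in range(n_workers)]
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        # Proportions of class c assigned to each worker.
        p = rng.dirichlet(alpha * np.ones(n_workers))
        cuts = (np.cumsum(p)[:-1] * len(idx)).astype(int)
        for w, chunk in enumerate(np.split(idx, cuts)):
            parts[w].extend(chunk.tolist())
    return parts

rng = np.random.default_rng(0)
labels = rng.integers(0, 10, size=5000)   # CIFAR-10-like label vector
parts = dirichlet_partition(labels, n_workers=8, alpha=0.1, rng=rng)
assert sum(len(p) for p in parts) == len(labels)
```

With a small $\alpha$ (e.g., 0.1) most workers end up holding only a few classes, while a large $\alpha$ recovers a near-uniform split, which matches the heterogeneity knob described in the summary.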
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the insightful comments, the constructive suggestions, and the support of our work. We would like to respond to the raised questions point by point below. **Comment 1. Some of the results on specific aggregation methods could be deferred to the appendix.** In the current version, we present the definitions and results of several common robust aggregation methods (GM, TM, and CM) in order to be more friendly to the readers who are not very familiar with Byzantine-robust distributed learning. Meanwhile, we agree with the reviewer that part of these texts could be deferred to the appendix for better readability. We will revise it in the final version and thank the reviewer for the constructive suggestion. **Comment 2. I find the general bounds more interesting and would be interested in further theoretical analysis of these bounds in contexts of explicit heterogeneity (i.e. assume $x_i$ are sampled from some distribution, etc. etc.).** For the case where $x_i$ are independently sampled from the same distribution with a variance of $\sigma^2$, we can take the expectation of the inequality in Definition 2.2 (line 126). According to (15) on page 13 in the Appendix and using the fact that $x_i$ and $x_j$ are independent $(i\neq j)$, we can obtain that $$\mathbb{E}||Agg(x_1,\ldots,x_n)-\bar{x}||^2\leq \epsilon \cdot \mathbb{E}[\frac{1}{n}\sum_{i=1}^n ||x_i-\bar{x}||^2]=\epsilon\cdot \frac{n-1}{n}\sigma^2. $$ Meanwhile, we will also investigate more cases in future work and sincerely thank the reviewer for the suggestion. **Comment 3. What are some future directions for investigating this problem under more concrete assumptions on the heterogeneity of the data?** Some potential future directions are presented below: + As pointed out by reviewer RMMi, investigating the tension between Byzantine robustness and no-attack accuracy under the more general $(G, B)$-dissimilarity assumption is a direction of future extension.
+ Additionally, reviewer RMMi also provides a potential future direction of studying the tension when the actual number of Byzantine workers is larger than $0$ but smaller than $f$. + The analysis of this work is for general cases, and the bounds are related to the case with an extremely large skewness. In some real-world applications, the distribution of training instances and gradients is usually not that extreme. Thus, we see it as a potential future direction to investigate the data distribution in real-world applications and analyze the tension for some specific distributions. **Comment 4. It feels like in those situations, one might hope for a "best of both worlds" by coming up with some sort of scheme to detect if one is in the Byzantine setting or not. It also feels like there might be some more interesting theoretical bounds one could come up with under more structural assumptions on the data.** We agree with the reviewer's insightful comments, which inspire us to think more about future directions. In this paper, we mainly investigate the tension and prove the tightness of our results for general cases. In real-world applications, as the reviewer pointed out, we could utilize observations of some true data. Under the assumption that the non-Byzantine data is close to the observed data, we may obtain a better result about the tension between Byzantine robustness and no-attack accuracy. Since we mainly focus on the general cases, this is beyond the scope of this paper. We will explore this in future work and sincerely thank the reviewer for the constructive comments. --- Rebuttal Comment 1.1: Comment: Thank you for your rebuttal. I will maintain my (positive) score and view of this paper. --- Reply to Comment 1.1.1: Comment: Thank you once again for your acknowledgement, constructive suggestions and support of our work. We will take all the reviews into consideration and make revisions accordingly in the final version.
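The i.i.d. bound derived in the response to Comment 2, $\mathbb{E}||Agg(x_1,\ldots,x_n)-\bar{x}||^2\leq \epsilon\cdot\frac{n-1}{n}\sigma^2$, can be sanity-checked by a quick Monte Carlo simulation, e.g., with the $f/n$-trimmed mean, whose no-attack accuracy is $\epsilon=\frac{f}{n-f}$ (our own sketch, not the authors' code):

```python
import numpy as np

def trimmed_mean(x, f):
    """f/n trimmed mean in 1-D: drop the f smallest and f largest inputs."""
    x = np.sort(x)
    return x[f:len(x) - f].mean()

rng = np.random.default_rng(0)
n, f, trials = 10, 3, 5000
eps = f / (n - f)            # accuracy epsilon of the f/n-trimmed mean

errs = []
for _ in range(trials):
    x = rng.normal(0.0, 1.0, size=n)          # i.i.d. inputs, sigma^2 = 1
    errs.append((trimmed_mean(x, f) - x.mean()) ** 2)

bound = eps * (n - 1) / n    # epsilon * (n-1)/n * sigma^2
assert np.mean(errs) <= bound
```

As expected, on benign i.i.d. Gaussian inputs the empirical error sits well below the bound, since the bound is driven by the worst-case skewed configuration rather than the typical one.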
MIRROR: Make Your Object-Level Multi-View Generation More Consistent with Training-Free Rectification
Accept (poster)
Summary: This paper introduces MIRROR, a training-free rectification to improve the consistency of multi-view generation. The main contributions can be divided into (1) a Trajectory Tracking Module (TTM) for pixel-wise trajectory tracking that labels identical points across views and (2) a Feature Rectification Module (FRM) for explicit adjustment of each pixel embedding on noisy synthesized images by minimizing the feature distance. The overall idea is interesting, but the presentation of this paper is very unclear, and some details should be further clarified. Claims And Evidence: No. After reading the paper, I still could not understand why the Trajectory Tracking Module (TTM) works with monocular depth estimation (Depth-Anything-V2) without any metric alignment. Monocular depth is only defined up to scale. Unlike metric depth, it cannot be directly used as the condition for geometric warping. Methods And Evaluation Criteria: Yes. This paper presents both qualitative and quantitative results based on various base methods, showing their effectiveness. Theoretical Claims: This paper includes some theoretical claims. However, the claim of Eq.5 is questionable, i.e., that $t\rightarrow 0$ leads to the convergence of the trajectory tracking operator, because the depth is not aligned as metric depth. Even if the depth is extracted from $x_0$, it still cannot be used as the warping condition. Experimental Designs Or Analyses: Yes. Most results are based on qualitative comparisons. Supplementary Material: Yes. Experiments about the depth estimation. Relation To Broader Scientific Literature: The proposed method is a general approach to improve the consistency of multi-view diffusion models. But the discussion of this paper is limited to the object level with simple backgrounds and does not extend to the scene level. Essential References Not Discussed: No Other Strengths And Weaknesses: Except for the issue of TTM mentioned above.
The presentation of this paper is also unclear; in particular, Sec. 4.3 is very hard to follow. Many symbols are defined, but their usage is not clearly discussed. For example, Line 231 (right) defines the 3x3 block as $M(u)$, which is never used or mentioned in the subsequent paragraphs at all. Moreover, what are the meanings of $Z_{\alpha}$ and $v\in B_u$? How should the feature distance of FRM be understood (which symbol indicates the block feature)? Other Comments Or Suggestions: N/A Questions For Authors: The authors should clarify the usage of monocular depth in TTM, and provide a clearer and more detailed presentation of Sec. 4.3. It would be beneficial if the authors could present a clearer overview pipeline, with most symbols labeled in this pipeline. Besides, there is another question: why should the invalid background information be explicitly excluded during feature fusion, even if the TTM is correct? Is this problem caused by incorrect depth warping, resulting in mistaken associations with background regions? Would this limit the extension to scene-level multi-view generation? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your thorough analysis and constructive feedback on our paper. We will address the concerns you raised and hope our responses will clarify your doubts. ***Q1. Scale of Depth*** A1. Based on the camera parameters of the base model, we approximate relative-to-metric depth conversion and achieve depth alignment across multiple views. As the object-camera distance is fixed during training, the model learns a consistent depth scale under circular camera poses. This scale factor allows transforming relative depth into absolute depth for cross-view alignment. We further apply grid search over scale factors and use dual-anchor fusion of block features to reduce potential scale shifts during inference. The transformed depth enables more accurate trajectory tracking in TTM via geometric warping. Details on Eq. (5) can be found in **Reviewer y23b, A3**. And alignment details of depth estimation will be included in the appendix of the revised version. ***Q2. More Quantitative Results*** A2. Based on your suggestion, we have provided additional quantitative evaluation results. Please refer to the table in **Reviewer BJCB, A1**, for details. ***Q3. Extension to Scene-level Tasks*** A3. Importantly, the nature and manifestations of inconsistency differ between object-level and scene-level multi-view generation tasks. Object-level inconsistency mainly arises from the lack of 3D structural modeling and consistency supervision, often manifesting as the Janus problem and content drifting. In contrast, scene-level tasks typically suffer from layout disarray and semantic drift due to the absence of structural representations and layout supervision. Thus, the underlying challenges are fundamentally different. Our task track focuses on object-level multi-view generation, a key branch in 3D generation, with the baseline models representing mainstream, state-of-the-art methods. 
Our motivation is to correct the inconsistencies in object-level base models via explicit consistency supervision in a lightweight, plug-and-play, training-free manner, thereby improving 3D reconstruction quality. While scene-level generation is another important branch with fundamentally different inconsistency issues, we believe our method can offer insights for advancing this direction. To adapt MIRROR to this task, future work could incorporate layout-aware priors or scene maps to handle the broader spatial context. We consider MIRROR a core foundational step toward such extensions, and potential directions for scene-level adaptation will be discussed in the appendix. ***Q4. Symbols in FRM*** A4. We corrected the typo by redefining the 3×3 block as $B_{u_\alpha}$ instead of $M(u_\alpha)$ to align with the notation used in subsequent sections. Following your suggestion, we clarify key symbols in FRM, along with the updated pipeline (see Fig. 1 in https://anonymous.4open.science/r/mirror-A9B9/figs.pdf). $u, u_\alpha$ denote the coordinates of a point in the current view and its corresponding tracked point in the neighboring view, respectively. $Z(u)$ and $Z_\alpha(u_\alpha)$ represent the feature values indexed by points $u$ and $u_\alpha$. By traversing all features $\{Z_\alpha(v), v \in B_{u_\alpha}\}$ within block $B_{u_\alpha}$ and applying the dual-anchor weights $W$, we obtain the aggregated block feature $\mathcal{M}(Z_\alpha(u_\alpha))$. The L2 feature distance between $Z(u)$ and $\mathcal{M}(Z_\alpha(u_\alpha))$ is then computed by Eq. (10) to form the consistency correction loss. ***Q5. Background Exclusion*** A5. As mentioned in our response A1, irrelevant background information is not caused by depth warping.
The design of negative anchors aims to exclude both the extra information introduced when downsampling the depth map into the latent space and the redundant or irrelevant signals that arise when expanding point features into block-level form to preserve spatial continuity. This design does not hinder extension to scene-level tasks. The dual-anchor feature fusion mechanism effectively suppresses irrelevant information while enhancing the contribution of relevant features. Moreover, dual anchors and their weights can be adapted to different application scenarios for better generalization. We will incorporate the revisions into the paper and hope our responses can address your concerns. We sincerely believe that our work is deserving of acceptance, and we would be grateful if you could recognize the contributions we've made. We kindly hope that you might consider raising the score accordingly. Thank you again for your thoughtful feedback and consideration! --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal. I appreciate your response addressing my concerns regarding the presentation, background exclusion, and the extension to scene-level tasks. Given the authors' assertion that object-level and scene-level multi-view generation tasks are fundamentally different, I recommend that this distinction be clearly articulated in the title and abstract of the paper. This clarity will help convey that the focus of this work is solely on object-level multi-view generation, as the current phrasing may lead to misunderstandings. Additionally, the authors noted that the monocular depth is aligned using camera poses. Could you please provide more details on how this process is implemented? I noticed that this critical aspect was not discussed in the main paper. Specifically, is it achieved through a grid search?
If so, this approach could come across as overly idealized, as it would imply strong prior knowledge, i.e., all camera distances are the same and share the same metric scale. --- Reply to Comment 1.1.1: Comment: Thank you again for your kind response and insightful suggestions. Following your advice, we will clarify the term “object-level” in both the title and abstract, and explicitly emphasize in the introduction that our task targets object-level multi-view generation to avoid potential misunderstanding. Additionally, we would like to further clarify the depth alignment process. The first step involves estimating the depth for each view using a depth estimator. Second, the estimated depth is normalized to the range [0,1] as relative depth. Third, the relative depth is then multiplied by the scale factor, defined as the ratio of the baseline model’s camera distance to the average relative depth, to obtain the absolute depth for the generated images. Moreover, it is important to note that the prior knowledge we rely on is entirely derived from the baseline models. Since different baseline models provide different camera distances, the resulting depth scales also vary. Accordingly, the grid search is conducted independently for each baseline rather than using a shared scale. Consequently, our method is not restricted to a unified depth scale. Thank you again for taking the time to read and comment on our work! We hope that this explanation helps to further address your concerns.
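The three-step alignment described in this reply (normalize to relative depth, then rescale by the ratio of the baseline's camera distance to the mean relative depth) can be sketched as follows (our own illustration; `align_depth` and its arguments are hypothetical names, not the authors' code):

```python
import numpy as np

def align_depth(rel_depth, camera_distance):
    """Normalize the estimated depth map to [0, 1] (relative depth),
    then rescale so its mean matches the baseline's camera distance."""
    d = (rel_depth - rel_depth.min()) / (rel_depth.max() - rel_depth.min())
    scale = camera_distance / d.mean()   # relative-to-metric scale factor
    return scale * d

rng = np.random.default_rng(0)
depth = rng.random((64, 64)) * 5.0       # stand-in monocular depth map
metric = align_depth(depth, camera_distance=1.5)
assert np.isclose(metric.mean(), 1.5)    # mean depth matches camera distance
```

By construction, the rescaled map's mean equals the baseline's fixed camera distance, which is what lets the (otherwise scale-ambiguous) monocular depth serve as a warping condition across views.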
Summary: The paper introduces MIRROR, a training-free, plug-and-play method that improves consistency in multi-view image generation using diffusion models. At its core, MIRROR uses two novel modules: the Trajectory Tracking Module (TTM), which pinpoints corresponding 3D points across views using depth maps, and the Feature Rectification Module (FRM), which aligns features during sampling to fix inconsistencies. Unlike methods that require fine-tuning, MIRROR works directly during inference, making it compatible with popular pre-trained models like SyncDreamer and VideoMV. Experiments show it effectively tackles the Janus problem and content drift while preserving photorealism, offering a lightweight solution for high-quality 3D generation. Claims And Evidence: - MIRROR improves multi-view consistency in diffusion-generated images. Qualitative: Visual comparisons (Fig. 1, 5) show MIRROR resolves artifacts like the Janus problem (e.g., multiple faces) and content drift (e.g., misaligned geometry) in baselines (SyncDreamer, VideoMV). Quantitative: Metrics (Table 1) confirm gains in PSNR (up to +3.15), SSIM (up to +0.084), and LPIPS (up to -0.091), indicating improved alignment and reduced perceptual inconsistency. - MIRROR is a training-free, plug-and-play solution compatible with existing models. Uses DDIM inversion and rectification during inference (Fig. 3), requiring no fine-tuning or architectural changes to baselines. Applied successfully to diverse models (SyncDreamer, MVD-Fusion, VideoMV) for both image- and text-based tasks (Table 1, Fig. 5). - Depth-guided trajectory tracking enables precise geometric alignment. Removing depth guidance (Fig. 7) leads to erroneous correspondences (e.g., mismatched limbs on animals, distorted shapes). Proposition 4.2 and Appendix D show tracking errors diminish as denoising progresses, ensuring stable rectification. - Feature rectification achieves efficiency without sacrificing quality. 
Omitting UNet Jacobian terms (Theorem 4.4) reduces inference time by ~50% (Table 2) with negligible performance loss. Dual-anchor fusion (Fig. 8) filters background noise while retaining critical features, validated by improved SSIM/LPIPS (Table 5). Methods And Evaluation Criteria: - Methods The proposed method employs a two-stage rectification pipeline: 1. Utilizes off-the-shelf models (e.g., SyncDreamer, VideoMV) to synthesize initial multi-view images. 2. Rectification via TTM and FRM: Trajectory Tracking Module (TTM): Uses monocular depth estimation (Depth-Anything-V2) to establish 3D correspondences across views. Feature Rectification Module (FRM): Aligns pixel embeddings via dual-anchor fusion and gradient guidance, applied during DDIM sampling. **Limitations:** TTM assumes fixed elevation angles (Eq. 4), limiting applicability to rigid objects and predefined camera paths (e.g., azimuth-only rotations). Evaluated only on object-centric models (image/text-to-multi-view). Applicability to multi-view conditioned (e.g., EscherNet, CAT3D) or scene-level methods (e.g., CameraCtrl, MotionCtrl) remains unverified. - Evaluation Datasets: GSO (image-based), T3Bench (text-based). Metrics: PSNR, SSIM, LPIPS (multi-view consistency), CLIP score (text alignment). Quantitative and qualitative results clearly demonstrate the improvement over initial baseline models. Theoretical Claims: The paper presents several theoretical justifications, particularly in: - Trajectory tracking convergence analysis (Proposition 4.2). The proof tries to show that tracking error is bounded by diffusion model errors. - Gradient-based rectification (Theorem 4.4). The derivation in Appendix B.3 shows that neglecting the UNet Jacobian term introduces bounded errors, allowing efficient rectification by skipping the diffusion model backpropagation. **Concerns** While the theoretical framework is conceptually sound, gaps in notation, unverified assumptions, and incomplete derivations weaken rigor. 
Addressing these would strengthen the theoretical foundation. Equation (5) uses, which is not clearly defined in the main paper. Equation (15) in Appendix B.1 suggests that depth error is always smaller than pixel error, but this is not rigorously proven. Equation (19) contains typos, missing the conditional term . Also, the transition from Eq. (17) and (18) to (19) is not obvious, particularly in obtaining the coefficients. A more detailed derivation would improve clarity. Experimental Designs Or Analyses: The comparison with baselines is thorough, covering multiple models and metrics. The ablation studies (Fig. 6, 7, 8) effectively isolate contributions from TTM, FRM, and dual-anchor fusion. The timing analysis (Table 2) provides strong evidence that MIRROR is computationally efficient. **Concerns** The proposed method only works with camera poses at the same elevation angle (Eq. 4, Section 4.2), which is quite limited. How about using more generic point tracking methods, such as optical flow or dedicated point trackers? Will the current setting work with models that support arbitrary 6DoF poses and flexible numbers of views, such as EscherNet and CAT3D? Depth estimation is done using a monocular method. Could recent multi-view depth estimation approaches (e.g., Dust3R, Mast3R) improve scale and 3D consistency? Supplementary Material: Includes proofs (Appendix B), implementation details (C), convergence analysis (D), and correspondence visualizations (F). **Concerns** Parameter selection for gradient scale s(t) (Appendix C.2) is empirical; no systematic tuning strategy is provided. Failure case analysis and discussion of potential limitations on degrading the base models would provide more insights. Relation To Broader Scientific Literature: MIRROR builds on multi-view diffusion models (e.g., SyncDreamer, VideoMV) and depth-guided correspondence tracking. It differentiates from epipolar geometry-based methods (Ye et al., 2024; Zhou & Tulsiani, 2023). 
It also relates to inverse diffusion problems and test-time optimization methods. Essential References Not Discussed: **Limitations** The paper does not discuss concurrent training-free rectification techniques and inverse diffusion papers, such as "Denoising Diffusion Restoration Models" and "Solving Video Inverse Problems Using Image Diffusion Models". The description of DreamFusion is not entirely accurate. While DreamFusion does require optimizing the 3D representation (e.g., NeRF or Gaussian Splatting), it does not require backpropagation through the diffusion network. This should be clarified. Recent works like Dust3R (2024) or Mast3R (2024) could enhance correspondence tracking and depth estimation but are not discussed. Methods like EscherNet (2024) or CAT3D (2024), which handle arbitrary 6DoF poses and multiview conditioning, are omitted. Other Strengths And Weaknesses: **Concerns** Reference image selection in the text-based method for tracking loss is unclear. In Fig. 5, applying MIRROR causes significant changes in text-based outputs compared to single-image-based results. Why? Table 1 shows that text-based methods gain the most improvement. Why is this the case? Other Comments Or Suggestions: How does MIRROR perform on real-world multi-view data? The evaluated data is either synthetic or under perfect lighting/imaging conditions. How sensitive is MIRROR to depth estimation errors, and could improving depth estimation further enhance performance? Why is the trajectory tracking method restricted to the same elevation? Could more flexible tracking methods be incorporated? How does MIRROR handle dynamic scenes or deformable objects where depth varies non-rigidly across views, say, with a pretrained model that can do 4D NVS? Questions For Authors: Please see my questions above and all the **limitations** and **concerns** in each part. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We greatly appreciate your thorough and detailed review, offering valuable insights on methodology, theory, experiments, and scalability, and thank you for recognizing our work! ***Q1. Limitation of TTM*** A1. (1) TTM is designed to ensure uniform geometry coverage and is theoretically extendable to any pose. In fact, sampling multiple views around a fixed elevation outperforms arbitrary 6DoF poses for reconstruction by reducing occlusions and preserving overall geometry. Specifying a set of elevations and sampling around each can further enhance results. (2) The number of views is determined by baselines, providing flexibility and decoupling from MIRROR. (3) Optical flow is unsuitable for multi-view generation due to high computational overhead, while dedicated point trackers suffer from drift during viewpoint rotation, causing geometric and texture distortions. In contrast, TTM enables fast tracking with block-level spatial fusion, reducing errors. 3D metrics in **Reviewer BJCB, A1** also confirm our effectiveness. ***Q2. Broader Applicability*** A2. Our task focuses on object-level multi-view generation and reconstruction, a key branch of 3D generation, with baselines being powerful mainstream methods. Generation of real-world or dynamic scenes and deformable objects is another important branch, and we believe MIRROR could inspire advancements in these areas. For more details, refer to **Reviewer ir3S, A3**. ***Q3. Definition of Eq.(5)*** A3. Eq.(5) defines the tracking error upper bound as the base model's sampling error, with $x_0$ and $\hat{x}_0(x_t, t)$ representing the true image and predicted image at state t, respectively. ***Q4. Proof of Eq.(15)*** A4. Using the first-order Taylor expansion, we obtain: $$ H(x_0)=H(\hat{x}_0(x_t,t))+\nabla H(x_0) (x_0-\hat{x}_0(x_t,t))+o(||x_0-\hat{x}_0(x_t,t)||). 
$$ Here, $H$ is a pretrained ViT network with a continuous, bounded gradient that outputs scale-consistent absolute depth, showing that the depth-level error is of the same order as the pixel-level error: $$ ||H(\hat{x}_0(x_t,t))-H(x_0)||\approx||\nabla H(x_0)(x_0-\hat{x}_0(x_t, t))||\simeq O(||x_0-\hat{x}_0(x_t,t)||). $$ ***Q5. Derivation of Eq.(19)*** A5. There was a typo in Eq.(18), now corrected as: $$ \nabla_{z_t}\log p_\theta(z_t)=-\frac{1}{\sqrt{1-\overline{\alpha}_t}}\varepsilon _ \theta(z_t,t).\tag{18} $$ Using Eqs.(17) and (18), Eq.(19) is easily derived: $$ \varepsilon_\theta(z_t,t,c)=\varepsilon_\theta(z_t,t)-\sqrt{1-\overline{\alpha}_{t}}\nabla _ {z_t} \log p _ \theta(c|z_t).\tag{19} $$ ***Q6. Depth Estimation*** A6. Fig.12 and Tab.3 show that improving depth estimation accuracy benefits MIRROR, but this does not necessarily mean that multi-view depth estimation methods are superior. Our goal is a lightweight plugin to enhance consistency. While methods like Dust3R and Mast3R improve robustness, they require constructing multi-view cost volumes or 3D reconstruction, leading to high memory usage (2GB vs. 100MB for DA2) and slow inference, which compromises our advantages. Besides, in monocular tasks, DA2 significantly outperforms Dust3R, indicating that multi-view methods, despite improving depth consistency, may exacerbate Janus Problem due to the accumulation of estimation errors during denoising. For depth scale consistency, refer to **Reviewer ir3S, A1**. ***Q7. Selection of s(t)*** A7. s(t) aims to match the consistency gradient magnitude with $\varepsilon_\theta$. With negligible differences across models, we use a general parameter, though adjustments per model are feasible. ***Q8. Failure Cases and Limitation of Baselines*** A8. 
Failure cases (see Fig.2 in https://anonymous.4open.science/r/mirror-A9B9/figs.pdf) show that when the baseline produces unreasonable, severely flawed geometry (an inherent limitation), we struggle to correct these fundamental issues. ***Q9. Essential References*** A9. Without training, ConsiStory enhances subject consistency in text-to-image generation by modifying the network with Subject-Driven Shared Attention and Feature Injection. However, it lacks plug-and-play compatibility with other models, limiting generalizability. Multi-view diffusion models generate consistent images from noise, while inverse diffusion infers the initial state from known outcomes, with distinct objectives. ***Q10. DreamFusion*** A10. You're right. We'll clarify it. ***Q11. Text-based Results*** A11. $x_0$ is the theoretical true image in tracking loss (5). As the text-based method lacks a reference image, VideoMV uses half of the views for reconstruction and rendering as pseudo-ground truth. Moreover, text-based tasks are more challenging and diverse than image-based ones, so small corrections in the denoising process have a stronger impact, highlighting MIRROR's power through both qualitative and quantitative improvements. Hope our responses address your concerns. A detailed revision will be in the appendix. Thanks again for your comprehensive feedback. --- Rebuttal Comment 1.1: Comment: Thanks for the author's reply. There are still several concerns that remain. Q1. (1) The author claims their method is "theoretically extendable to any pose". A theoretical possibility without experimental support is not rigorous. Also, the author claimed, "In fact, sampling multiple views around a fixed elevation outperforms arbitrary 6DoF poses for reconstruction by reducing occlusions and preserving overall geometry. " Why? It is known that fixed elevation views cannot cover the complete views of objects, especially for complex objects that have self-occlusions. 
(2) Although the method is designed to be flexible across baseline models, all baselines in the paper are constrained to fixed viewpoints and fixed view counts. Thus, it remains unclear whether TTM truly generalizes to multi-view settings with arbitrary or sparse camera poses. (3) The argument dismissing point trackers and optical flow lacks ablation or comparative experiments. In practice, advanced point tracking and optical flow methods can offer accurate pixel-level correspondences. The claim that TTM outperforms these approaches needs stronger empirical backing. Q2. The authors claim potential applicability to broader domains such as dynamic or scene-level generation. However, as discussed in Q1, there is no evidence showing MIRROR's ability to work beyond fixed-object scenarios. I echo Reviewer ir3S’s suggestion that the scope of the method should be explicitly limited to object-level multi-view generation with fixed viewpoints unless further validation is provided. Q3-Q5 Please clearly define all symbols and terms used in Eq. (5) in the main paper. Some notations are introduced without explanation, which affects readability and reproducibility. Q6. While the authors highlight the lightweight design of their depth estimation module, the argument that multi-view methods like Dust3R "may exacerbate the Janus problem" is somewhat speculative. Modern multi-view depth estimators can be efficient and may help enforce scale-consistent depth across views, a desirable property for multi-view consistency. A clearer comparative analysis—especially in terms of trade-offs between accuracy and resource usage—would strengthen this point. Q9. Several recent works on multi-view diffusion that directly address consistency—both spatial and temporal—are highly relevant and should be acknowledged. It would also be beneficial to demonstrate how MIRROR could complement such models. 
Evaluating MIRROR in conjunction with multi-view diffusion baselines would make a stronger case for its general applicability. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate your feedback once again! Based on it, the experiments are in https://anonymous.4open.science/r/mirror-2C15/v2.pdf with figures and tables. ***Q1. TTM*** (1) We define a general TTM using azimuth $\alpha$ and elevation $\phi$: $$ x'=(x-\frac{W}{2})\cos{\alpha}+\frac{W}{2}-((y-\frac{H}{2})\sin{\phi}+\frac{H}{2}+z\cos{\phi})\sin{\alpha}, y'=(y-\frac{H}{2})\cos{\phi}+\frac{H}{2}-z \sin{\phi}. $$ We found that large viewpoint gaps in arbitrary 6DoF views reduce cross-view correlation, leading to higher errors and limited improvement. In contrast, the fixed elevation ($\phi = 0$) provides stronger inter-view correspondence and better performance, so we adopt this setting. Circular camera poses provide stable, uniform coverage with stronger view correlations, enhancing 3D reconstruction. Sampling at multiple elevations further recovers occluded regions. More results for multi-object and complex examples are shown in Fig.3. (2) We clarify that the adopted baselines allow adjustment of both the number and configuration of static camera views. For instance, SyncDreamer and VideoMV generate up to 16 and 24 views, respectively, with elevation ranges selected from [−10°, 40°] and [5°, 30°]. As noted in (1), although the baselines use static cameras, they are sufficiently effective for multi-view generation and reconstruction tasks. SOTA models like SyncDreamer, VideoMV, MVDiffusion++, and SV3D all use static cameras for dense multi-view generation. We will explicitly state in the paper that our task focuses on dense multi-view generation from static cameras. (3) We replace TTM with the recent point tracking method CoTracker3 for comparison. As shown in Tab.1, CoTracker3 brings limited improvement over the baseline, while TTM achieves greater gains. 
Fig.1 further shows that CoTracker3 fails to resolve Janus Problem, which TTM effectively mitigates, demonstrating its superiority in handling multi-view inconsistency. ***Q2. Task Clarification*** A. We will clarify "object-level" in the title and abstract and emphasize that our task focuses on object-level multi-view generation with fixed elevation. Additionally, we aim to extend our core training-free rectification pipeline to scene-level domains in future work. ***Q3-Q5. Eq.(5)*** A. We provide a detailed definition of Eq.(5), where $\mathcal{T}_\alpha$ is the trajectory tracking operator (Definition 4.1), $x_t$ is the intermediate noisy state at time t, and $\hat{x}_0(x_t, t)$ is the predicted image at t: $$ \hat{x}_0(x_t, t)= \frac{x_t-\sqrt{1-\overline{\alpha} _ {t}} \varepsilon _ \theta(x_t,t)}{\sqrt{\overline{\alpha}_t}}, $$ with $\varepsilon_\theta$ representing the noise prediction network, $x_0$ as the ground-truth image, and $O$ as an infinitesimal of the same order, $\Vert\cdot\Vert$ denotes the L2 norm. The clarified definition will be in the main paper. ***Q6. Comparison with DUSt3R*** A. We replaced DA2 with the multi-view depth estimator DUSt3R. As shown in Tab.2, DUSt3R does not significantly increase inference time but requires more memory, whereas DA2 incurs minimal overhead. In low-memory environments (e.g., a single NVIDIA 3090 GPU), DUSt3R fails to run. While DUSt3R slightly improves PSNR and LPIPS, SSIM and CLIP Score remain comparable to DA2. Visual results in Fig.2 show marginal improvement, but in some cases, DUSt3R underperforms DA2. In summary, multi-view depth provides modest gains but with memory trade-offs. As depth estimation is independent of our core contribution, these results—along with those in Appendix E—show that MIRROR can benefit from ongoing advances in depth estimation. ***Q9. Discussion of Multi-view Models*** A. 
Multi-view diffusion models, first introduced by MVDream, jointly train on 2D and 3D data for multi-view generation. To address inconsistency, several models employ strategies for both spatial and temporal alignment. SyncDreamer uses a 3D-aware attention mechanism for spatial consistency, MVD-Fusion employs noise-level depth estimations for reprojection, and VideoMV enhances spatiotemporal consistency with strong frame-to-frame coherence from video diffusion models. While these methods improve consistency, they lack explicit geometric constraints, entirely relying on learned networks. With limited 3D data, they are prone to local optima, and issues like Janus Problem and content drifting remain at inference. Methods like Consistent-1-to-3 and Era3D impose geometric constraints via epipolar geometry in multi-view attention, but Fig.4 in the main paper shows that epipolar correspondence leads to noisy supervision, causing over-smoothing and multi-face artifacts. Building on these efforts, MIRROR provides a training-free, efficient consistency enhancement, demonstrating significant effectiveness and generality across four SOTA diffusion models in the paper. Due to time constraints, we will explore additional models in future work. Your comments are extremely helpful to us! Thank you.
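As an aside on the general TTM mapping quoted in A1(1) of the reply above, the stated formula can be transcribed literally as code. This is only an illustrative sketch of the equation as written (with `W`, `H` the image width and height, `z` the pixel's depth, and angles in radians), not the authors' actual implementation:

```python
import math

def ttm_project(x, y, z, azimuth, elevation, W, H):
    """Literal transcription of the general TTM mapping from the reply:
    maps pixel (x, y) with depth z to its position (x', y') after a
    camera rotation by the given azimuth and elevation."""
    xc, yc = x - W / 2, y - H / 2  # center the pixel coordinates
    x_new = (xc * math.cos(azimuth) + W / 2
             - (yc * math.sin(elevation) + H / 2 + z * math.cos(elevation))
             * math.sin(azimuth))
    y_new = yc * math.cos(elevation) + H / 2 - z * math.sin(elevation)
    return x_new, y_new

# Zero azimuth and zero elevation reduce the mapping to the identity,
# consistent with the fixed-elevation setting adopted in the reply.
print(ttm_project(100.0, 60.0, 5.0, 0.0, 0.0, 256, 256))  # -> (100.0, 60.0)
```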
Summary: The authors present MIRROR, an efficient, training-free, plug-and-play method to enhance multi-view consistency in 3D asset generation. The proposed approach directly rectifies latent noisy images across views during the sampling process. To be specific, a Trajectory Tracking Module based on depth information is proposed to ascertain corresponding positions of points across distinct views. It uses a Feature Rectification Module to eliminate ambiguity by enforcing consistency in the representation of the same physical point across different viewpoints. Qualitative and quantitative experiments demonstrate that MIRROR consistently enhances the performance of various generators. Claims And Evidence: The major claim that the proposed approach is efficient, training-free, plug-and-play, and enhances multi-view consistency is well supported by experimental results. Methods And Evaluation Criteria: Utilizing point-to-point corrections to ensure point features' consistency makes sense. Using the similarity computed between the feature of each pixel to provide a gradient map is an elegant way to guide the denoising procedure. Theoretical Claims: I checked the proofs for the two major modules and they look correct to me. Experimental Designs Or Analyses: 1. The authors use multiple current multi-view generation approaches as baselines and compare the generation results with and without enhancement using the proposed approach. The experimental results show that incorporating MIRROR into these baselines resolves the prominent artifacts and multi-face issues encountered in baselines. 2. Comprehensive experimental analysis and ablation studies are given to demonstrate the proposed approach's effectiveness and help understand how it works. 3. The authors use PSNR, SSIM, LPIPS, and Clip Score as metrics for consistency. I have a concern about whether they can serve as effective metrics. 
Supplementary Material: I review the supplementary material to check video results and appendix for proofs and additional experiments. Relation To Broader Scientific Literature: This approach can serve as a plug-and-play module to the literature of multi-view generation, which may make impacts. Essential References Not Discussed: Essential references are well-discussed. Other Strengths And Weaknesses: The implementation and experiments details are well-described. Other Comments Or Suggestions: n/a Questions For Authors: 1. What version of Depth-Anything-V2 is used? Does it output metric depth or relative depth? Any discussions on the depth scale consistency among multiple views? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate your valuable suggestions and the recognition of our work in methodology, theoretical proof, and experimental design. Below are our responses to your concerns, and we hope they help clarify any doubts you may have. ***Q1. More Metrics*** A1. On the one hand, for fairness, we adopt the same quantitative metrics as baselines (including PSNR, SSIM, LPIPS, and CLIP Score), which are also widely used in current practice of multi-view generation. The quantitative results align well with the qualitative observations, further supporting the reliability of these metrics. In addition, **Reviewer y23b** acknowledges the quantitative metrics we used in the fourth line of their review in the **"Claims and Evidence"**. On the other hand, we additionally employ Chamfer Distance and Volume IoU to evaluate the 3D consistency of the reconstructed geometry from the generated multi-view images as follows: | Models | Chamfer Distance↓ | Volume IoU↑ | | -------------------- | ----------------- | ----------- | | SyncDreamer | 0.0415 | 0.5137 | | **+MIRROR (ours)** | **0.0387** | **0.5296** | | VideoMV (text-based) | 0.0459 | 0.5381 | | **+MIRROR (ours)** | **0.0276** | **0.6264** | For text-based methods, following VideoMV, we sample half of the views at regular intervals to reconstruct a pseudo-ground truth mesh for evaluation. Other evaluation settings follow those in Appendix C.4. The 3D reconstruction metrics in the table further validate the effectiveness of our method in improving the quality and consistency of both multi-view images and 3D reconstruction. ***Q2. Depth Estimation*** A2. All versions of Depth-Anything-V2 (DV2) achieve over 95% accuracy, with no significant performance differences observed in our task. Considering both accuracy and model size, we adopt DV2-Small as the depth estimation module. 
Moreover, based on the fixed camera distance used during training of each baseline, we convert the relative depth to metric depth using a consistent scale factor. Specifically, the baseline model, trained and inferred under circular camera poses, inherently provides a fixed depth scale, which ensures cross-view consistency. This allows the relative depth predicted by DV2 to be reliably transformed into absolute depth, ensuring consistent metric depth across multiple views. We hope our response could address your concerns. Based on your suggestions, we will include additional metrics in the appendix. Thank you once again for your recognition of our work and providing such insightful recommendations! --- Rebuttal Comment 1.1: Comment: After reviewing the other reviewers' comments and the authors' rebuttal, I have decided to downgrade my rating to 'weak accept.' While most of my original concerns have been addressed in the authors' responses, I align with the valid concerns raised by other reviewers. Specifically: 1. While I believe focusing on object-level settings is reasonable, I concur with Reviewer ir3S’s suggestion that the scope of the method should be explicitly limited to object-level multi-view generation with fixed viewpoints. 2. I would like to see additional comparisons with methods based on multi-view depth estimation or point tracking. 3. I am interested in a discussion of recent works addressing the consistency of multi-view diffusion and would like to see comparisons between the proposed approach and these recent methods, or results from their conjunctions. --- Reply to Comment 1.1.1: Comment: Thank you for your attention to these aspects. We hope our responses would address your concerns. ***Q1. Task Clarification*** A. 
We will clarify the term “object-level” in both the title and abstract of the paper, and explicitly emphasize in the introduction that our task focuses on object-level multi-view generation with fixed elevation to avoid potential misunderstanding. ***Q2. Comparisons with Point Tracking or Multi-view Depth Estimation*** A. The experiment results are presented in https://anonymous.4open.science/r/mirror-2C15/v2.pdf, along with figures and tables. **(1) Comparisons with Point Tracking Method:** We incorporate the recent point tracking SOTA method, CoTracker3, as a substitute for TTM for comparison. Tab.1 shows that while CoTracker3 improves the baseline’s generation quality to some extent, the gains are less significant than those achieved with TTM. Moreover, Fig.1 shows CoTracker3 fails to resolve multi-face artifacts, exhibiting noticeable inconsistencies such as multiple legs and misaligned heads, which TTM effectively mitigates. Both quantitative and qualitative results demonstrate that TTM outperforms point tracking methods like CoTracker3 in addressing multi-view inconsistency. **(2) Comparisons with Multi-view Depth Estimation Method:** We replaced the original monocular depth model Depth-Anything V2 (DA2) with the multi-view model DUSt3R, which leverages neighboring views for reconstruction, thus providing multi-view depth. As shown in Tab.2, DUSt3R does not significantly increase inference time. But it requires substantially more memory, while DA2 incurs minimal overhead. In low-memory environments (e.g., a single NVIDIA 3090 GPU), DUSt3R fails to run. Besides, DUSt3R slightly improves PSNR and LPIPS, while SSIM and CLIP Score remain comparable to DA2. Visual results in Fig.2 show a slight improvement of DUSt3R, but in some cases, it underperforms DA2. In summary, multi-view depth can offer minor gains but comes with trade-offs in memory cost. 
Since depth estimation is a modular component independent of our core contribution in the pipeline, these results—along with those in Appendix E—demonstrate that MIRROR can continue to benefit from advances in depth estimation. ***Q3. Discussion with Consistent Multi-view Diffusion Models*** A. Multi-view diffusion models were first introduced by MVDream, which jointly trains on 2D and 3D data for multi-view generation. To address inconsistency issues, several recent works incorporate strategies from both spatial and temporal perspectives. Specifically, SyncDreamer employs a 3D-aware attention mechanism to associate corresponding features across different viewpoints, enforcing spatial consistency. MVD-Fusion utilizes intermediate noise-level depth estimations for reprojection, also targeting spatial alignment for better 3D consistency. VideoMV further enhances spatiotemporal consistency by leveraging strong frame-to-frame coherence from video diffusion models. While these methods partially alleviate inconsistency, they lack explicit geometric constraints and rely entirely on learned networks. With limited 3D data, they are prone to local optima. As a result, issues such as Janus Problem and content drifting still persist at inference. Additionally, methods such as Consistent-1-to-3 and Era3D attempt to impose geometric constraints through epipolar geometry within multi-view attention to enhance consistency. However, we have demonstrated that epipolar correspondence provides noisy supervision, leading to over-smoothed results and multi-face artifacts, as shown in Fig. 4 of the main paper. Building on these efforts, MIRROR offers a training-free and efficient way to enhance generation consistency. Notably, we have demonstrated significant improvements in consistency across four SOTA diffusion models, validating its effectiveness and generality. 
Due to limited time, we will continue to explore additional multi-view diffusion models and include more experimental results in the appendix. We greatly appreciate your feedback, which has been extremely helpful in improving our work. We sincerely believe our work merits acceptance, and we kindly hope that you might consider raising the score accordingly.
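To make the Chamfer Distance used in the 3D-consistency comparison of this thread concrete: in its common symmetric form it averages nearest-neighbor distances in both directions between two point clouds. A minimal numpy sketch (conventions vary between squared and unsquared distances, and the paper's exact evaluation protocol in Appendix C.4 may differ):

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer Distance between point clouds p (N, 3) and q (M, 3):
    mean nearest-neighbor L2 distance from p to q, plus from q to p."""
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)  # (N, M) pairwise
    return d.min(axis=1).mean() + d.min(axis=0).mean()

points = np.random.default_rng(0).random((128, 3))
print(chamfer_distance(points, points))        # identical clouds -> 0.0
print(chamfer_distance(points, points + 0.1))  # misaligned clouds -> positive
```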
Summary: This work introduces MIRROR, a training-free plug-and-play module designed to enhance the multi-view consistency of existing text-to-3D and image-to-3D diffusion models. In particular, MIRROR consists of two stages: the first stage leverages an off-the-shelf diffusion model to generate multi-view images, while the second stage obtains the corresponding noise via DDIM and applies feature rectification based on a trajectory tracking module to enhance multi-view consistency during the denoising process. The problem is formally defined and proved, and experiments conducted on several diffusion models verify the effectiveness of the proposed module. Claims And Evidence: Kindly refer to **Other Strengths And Weaknesses** Methods And Evaluation Criteria: Kindly refer to **Other Strengths And Weaknesses** Theoretical Claims: Kindly refer to **Other Strengths And Weaknesses** Experimental Designs Or Analyses: Kindly refer to **Other Strengths And Weaknesses** Supplementary Material: Kindly refer to **Other Strengths And Weaknesses** Relation To Broader Scientific Literature: Kindly refer to **Other Strengths And Weaknesses** Essential References Not Discussed: Kindly refer to **Other Strengths And Weaknesses** Other Strengths And Weaknesses: ## Strengths: * The idea of enforcing consistency among adjacent frames to improve multi-view consistency is interesting and well-motivated. * The problem is formally defined and well-proven. * Experiments are conducted on several baseline models to demonstrate the versatility of the proposed module. ## Weaknesses: * Unclear justification for multi-view diffusion limitations. It is unclear why multi-view diffusion alone cannot guarantee multi-view consistency, while the proposed regularization among adjacent frames can. Multi-view diffusion applies cross-view attention to enforce correspondence among generated views, which at a high level seems similar to the motivation behind the MIRROR module. 
A more detailed discussion and analysis of how FRM is more effective than cross-view attention in addressing multi-view consistency would be beneficial. * Handling of non-Lambertian surfaces. Non-Lambertian (shiny) surfaces exhibit view-dependent effects, which contradict the underlying assumptions of MIRROR. It would be interesting to see how MIRROR will perform on assets with shiny surfaces. * Impact of monocular depth inconsistency on TTM. Since the depth information is extracted from a monocular model, it cannot guarantee consistent scale and shift across different views. It would be interesting to know whether this issue affects overall performance. If so, would metric-depth or normalized scale-invariant depth be helpful in mitigating this problem? Other Comments Or Suggestions: N.A. Questions For Authors: N.A. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your thoughtful feedback. We truly appreciate your acknowledgment of our motivation, methods, theoretical proof, and experimental results. Your encouraging comments are highly valued, and we are grateful for your insights! ***Q1. Comparison with Cross-view Attention*** A1. Cross-view attention layers inherently struggle to enforce 3D consistency, as they lack explicit geometric constraints and rely solely on implicit learning through network weights. Given the limited availability of 3D data, such approaches are prone to local optima and often fail to generalize. Existing methods (e.g., MVDream, Consistent-1-to-3, Era3D) still suffer from issues like multi-face artifacts and content drifting at inference stage. Moreover, integrating our method into cross-view attention is impractical, as depth supervision is difficult to extract during training, and the attention layers demand substantial training costs in both computation and time. In contrast, FRM offers a more direct and effective solution. Leveraging the progressive denoising process of diffusion models, we introduce explicit multi-view consistency constraints by injecting expert priors from baseline models. It operates without requiring additional 3D supervision and is fully decoupled from the diffusion framework, making it lightweight, plug-and-play, and easily adaptable, which is a more novel approach at a high level. This explicit, geometry-aware mechanism enables FRM to correct inconsistencies more efficiently and reliably than cross-view attention, leading to more stable and coherent multi-view generation results. ***Q2. Handling of Non-Lambertian Surfaces*** A2. This is a promising direction, and it could be addressed by decomposing the generation of non-Lambertian surfaces into two tasks. 
First, our pipeline enhances multi-view consistency to reconstruct high-quality 3D models, improving the geometric details of objects, which forms a solid foundation for subsequent lighting and rendering. Building on this, lighting models and physics-based rendering techniques can be applied to illuminate and render the 3D model, accurately capturing the reflective properties of shiny surfaces from various viewpoints. In other words, while MIRROR enhances the performance of the first task, it also supports the rendering of shiny surfaces. Together, the two processes improve the generation of non-Lambertian surfaces. ***Q3. Scale Consistency of Depth*** A3. Although the depth information comes from the monocular estimation model, the depth ratio across different views remains consistent because the baseline model fixes the camera distance during training. Thus, the monocular model’s estimation capability is sufficient. Given that the depth scale perceived by the baseline model is fixed, we can use it as a scale factor to convert normalized relative depth into absolute depth, ensuring depth alignment across views. Furthermore, we perform a grid search for scale factors across multiple viewpoints and apply dual-anchor-based fusion of block features in FRM to mitigate potential scale shifts in TTM. We believe this approach is in line with your suggestion, and we will add these discussions in the appendix for further clarification. We hope our response has effectively addressed your concerns. Your suggestions have offered valuable insights that will greatly inform our future work, and we deeply appreciate them. Thank you once again for your constructive comments.
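The fixed-scale depth alignment described in A3 above can be sketched in a few lines. This is a simplified illustration under the stated assumption (one scale factor, derived from the fixed camera distance, shared by all views); the actual conversion in the paper may involve additional calibration:

```python
import numpy as np

def to_metric_depth(relative_depth, scale):
    """Convert a normalized relative depth map (monocular estimate) to
    absolute depth with a single scale factor. Because the baseline fixes
    the camera distance, the same `scale` applies to every viewpoint,
    keeping metric depth consistent across views."""
    return scale * np.asarray(relative_depth, dtype=float)

rng = np.random.default_rng(0)
views = [rng.random((8, 8)) for _ in range(4)]        # per-view relative depth
metric = [to_metric_depth(v, scale=2.5) for v in views]
# A shared scale preserves cross-view depth ratios exactly.
print(np.allclose(metric[0] * views[1], metric[1] * views[0]))  # -> True
```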
Accurate and Efficient World Modeling with Masked Latent Transformers
Accept (poster)
Summary: This paper proposes EMERALD, a world model that can produce highly accurate rollouts. The architecture is similar to prior works on using transformers as world models, with the exception that it uses MaskGIT to do prediction rather than a naive raster-scan next-token prediction scheme. The authors argue that MaskGIT allows the model to learn more accurate predictions as well as improve computational efficiency for rollouts. EMERALD is evaluated on Crafter, which is a well-known benchmark. The results show that EMERALD can outperform the current state of the art as well as allowing for faster computation. Claims And Evidence: The main claim is that using MaskGIT decoding helps with generating more accurate rollouts in less time. These claims are supported by experimental evidence. On Crafter, EMERALD surpasses state-of-the-art results with less training time compared to the baselines. Methods And Evaluation Criteria: The proposed method is sensible. Broadly speaking, the method builds on the Dreamer framework with a transformer world model and MaskGIT decoding scheme. All of these components are well established in the literature and combining these is a reasonable idea. Theoretical Claims: N/A Experimental Designs Or Analyses: The main experimental results are the success rates of the method across different Crafter tasks. In all but one task, EMERALD outperforms the baselines and human experts. Overall I find this set of results convincing. The FPS of the world models are also reported and, as the authors claimed, EMERALD can do prediction faster. Supplementary Material: Yes I have read through the Appendix. I have one suggestion w.r.t. the Atari dataset. The results seem to suggest that EMERALD is only on par with the baseline models. This somewhat muddies the authors' claim that their method is better at prediction. I can believe that because of the simplicity of the Atari environments, current state-of-the-art methods already 'saturate' the performance. 
But this result should be more explicitly discussed in the main text to paint a fuller picture of the efficacy of the method. Relation To Broader Scientific Literature: This work situates within the literature of improving world models and RL for more complex tasks, and pushes the state-of-the-art in this domain. The main innovation here is the application of a more recent image generation method (namely MaskGIT) to existing architectures. Essential References Not Discussed: None that I am aware of. Other Strengths And Weaknesses: Strength: - The idea is relatively simple and the result is promising both in terms of performance and computational efficiency. - The writing and presentation is in general clear. Weakness: - I have slight concerns about the novelty of the work. It seems that this paper has taken an architecture developed for image generation and applied it to standard transformer world models. While this does improve the results, I am not entirely sure whether the amount of innovation here would be of much interest to the broader community. Other Comments Or Suggestions: None. Questions For Authors: None. ## Post rebuttal The rebuttal clarified my questions. I believe the paper can be improved by a longer discussion on the results on the Atari games as well as new experiments during the rebuttal period. However, my concern on the novelty of the method (that the method is a simple combination of MaskGIT and standard world models) remains. Overall I believe the results presented support the authors' claims. My recommendation for the acceptance of the paper remains. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1:
Rebuttal: Thank you for your positive feedback and valuable comments. Please find below our response to the concerns that you raised in the review.

> The results seem to suggest that EMERALD is only on par with the baseline models. This somewhat muddies the authors' claim that their method is better at prediction. I can believe that because of the simplicity of the Atari environments, current state-of-the-art already 'saturate' the performance. But this result should be more explicitly discussed in the main text to paint a fuller picture of the efficacy of the method.

We would like to emphasize that the motivation behind our work is the development of a novel world model that is both accurate and efficient. While EMERALD achieves results comparable with $\Delta$-IRIS and DIAMOND on Atari 100k, the training time required is significantly reduced. Moreover, contrary to the Crafter benchmark, the Atari 100k benchmark does not require long-term memory to solve the different games. Only a few history frames are sufficient to achieve strong performance, and many games do not require the use of spatial latents to achieve near-perfect reconstruction. We nevertheless chose to evaluate our method on the benchmark for comparison.

> I have slight concerns about the novelty of the work. It seems that this paper has taken an architecture developed for image generation and applied it on standard transformer world models. While this does improve the results, I am not entirely sure whether the amount of innovation here would be of much interest to the broader community.

While the architecture of EMERALD may appear similar, previous approaches for image and video generation differ on several major points:
- First, image/video generation approaches use pre-trained discrete representations (VQ-GAN) to generate tokens in latent space. EMERALD does not require pre-trained representations and learns discrete representations during training.
- Second, TECO learns a second inner VQ-VAE on top of the pre-trained VQ-GAN representations. This is done by encoding pre-trained representations into a further compressed latent space for the world model. Details can be found in the official implementation (https://github.com/wilson1yan/teco/blob/1e92c088965586835005bb1891d4616c2b7bfd5c/teco/models/teco.py#L133). In contrast, EMERALD uses latent encoder/decoder networks to project spatial features to temporal feature vectors.
- Third, image/video generation approaches use vector-quantized representations while EMERALD uses categorical representations with softmax sampling of the tokens. TECO uses a draft-and-revise decoding approach while we select the tokens with the highest confidence at each decoding step.
- Finally, we demonstrate that (1) model-based agents can successfully be learned in a spatial latent space using MaskGIT predictions, and (2) EMERALD improves performance on the visually challenging Crafter benchmark and the commonly used Atari 100k benchmark.

We think that our work provides notable contributions to model-based reinforcement learning and that it presents promising results for the use of spatial-temporal world models with MaskGIT predictions.

---
Rebuttal Comment 1.1:
Comment: I thank the authors for the response. I believe that including something similar to the above discussion on the results on Atari would be beneficial to the paper. My recommendation for the acceptance of the paper remains.

---
Reply to Comment 1.1.1:
Comment: Thank you for your swift response. We agree that a related discussion on the benchmark would be valuable to the paper. We are committed to incorporating discussions, tables, and illustrations that complement and clarify the paper.
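As an aside on the rebuttal above, the distinction it draws between vector-quantized token selection (VQ-GAN style) and categorical representations with softmax sampling can be made concrete with a minimal, illustrative numpy sketch; the shapes and function names below are assumptions for illustration, not code from EMERALD or TECO:

```python
import numpy as np

def vq_tokens(features, codebook):
    """VQ-style tokens: index of the nearest codebook entry per feature.
    features: (n, d), codebook: (k, d) -> (n,) token ids."""
    d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d2.argmin(-1)

def categorical_tokens(logits, rng):
    """Categorical tokens: sample token ids from a softmax over classes
    instead of a nearest-neighbor lookup. logits: (n, k) -> (n,) ids."""
    probs = np.exp(logits - logits.max(-1, keepdims=True))
    probs /= probs.sum(-1, keepdims=True)
    # Inverse-CDF sampling: count how many cumulative bins fall below u.
    u = rng.random(logits.shape[0])[:, None]
    return (u > probs.cumsum(-1)).sum(-1)

# Toy usage: with an identity codebook, each one-hot feature maps to
# its own code; with strongly peaked logits, sampling is deterministic.
tokens = vq_tokens(np.eye(3), np.eye(3))
sampled = categorical_tokens(np.array([[100.0, 0.0, 0.0]]),
                             np.random.default_rng(0))
```

The practical difference is that the categorical version defines an explicit distribution over classes, which is what allows softmax sampling (and, in DreamerV2-style models, straight-through gradients) rather than a hard nearest-neighbor assignment.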
Summary: The paper mainly focuses on an approach to world modeling where the prediction of dynamics is done by a spatial MaskGIT. This results in significant improvements on the Crafter benchmark when compared to other models, and performs well on Atari. It also results in improved efficiency over an existing approach aimed at learning strong dynamics by doing reconstruction in the raw pixel space.

## Update after Rebuttal
The authors successfully responded to my questions and the questions of the other reviewers. I feel confident in my score of a 4 and in the broader merit of the paper. The issues with the FLOP reporting seem slightly concerning, especially since $\Delta$-IRIS has much lower FLOPs per env step than DreamerV3/EMERALD. However, the FLOPs regarding world model rollout now make sense and I trust that the authors will fix this issue for the final manuscript.

Claims And Evidence: The claims made in the submission are decently supported by clear and convincing evidence. Although the results on the Crafter benchmark were great, outperforming all other existing world models, they did not generalize very well to other environments, particularly within the Atari 100k benchmark (Table 10). Additionally, compared to the DreamerV3 medium model, the proposed EMERALD is not more efficient. This is further exacerbated by the reporting of frames rather than FLOPs, which makes the efficiency results hardware dependent. Overall, the results do support the idea that EMERALD helps in environments with very difficult-to-predict dynamics and high-dimensional states through great performance on Crafter, but it would have been better to see more results in similar environments and more results using more interpretable efficiency metrics (FLOPs).

Methods And Evaluation Criteria: The Crafter and Atari evaluation criteria do make sense for the problem application, although the results would be more believable if they were obtained on more high-dimensional state space environments.

Theoretical Claims: N/A

Experimental Designs Or Analyses: I verified the validity of all experimental designs as well as the ablations, and I did not find any issues.

Supplementary Material: I reviewed all of the supplementary material and included my thoughts throughout the review.

Relation To Broader Scientific Literature: The key contributions are directly related to model-based reinforcement learning/world models and their scalability, efficiency, and architectures. The contributions are also related to self-supervised learning in vision, particularly regarding the debate of direct pixel prediction vs. latent prediction.

Essential References Not Discussed: N/A

Other Strengths And Weaknesses:
### Weaknesses
- The proposed approach is more expensive than DreamerV3 during inference because of MaskGIT.
- A Transformer is also more expensive than an RNN during inference; it is unclear from the experiments how much of a performance boost this causes (it seems the results in Table 3 don't isolate this factor).
- It's likely that a significant amount of the progress comes from just using a Transformer instead of an RSSM.
- It's not clear to me that it's not better than DreamerV3 solely because of a more complex head on top of the DreamerV3 latent predictor. Some ablation experiments trying different latent predictors (i.e. MLP, Transformer, etc.) would have confirmed that MaskGIT is a good choice.
- Seems to do worse than or similar to other MBRL approaches like STORM on the Atari 26 games.
- The authors need to report the number of RTX hours, and more importantly FLOPs, that EMERALD takes to train compared to other models.

Other Comments Or Suggestions:
- Line 142: "Meanwhile" should be lowercase.
- Line 204: "transformer world" -> "transformer world model".
- Figure 3 is not clear as it differs from equation (1) in the world model overview in terms of inputs.
- Shouldn't the MaskGIT equation in (1) have z_{t+1} as the prediction? It doesn't make sense to predict z_t from z_t.

Questions For Authors: It's still not clear how the MaskGIT portion works: according to equation (1) in the world model overview it takes as input h and z, but Figure 3 shows it taking in z and o.
- Is Latent Dec the same as the Decoder?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:
Rebuttal: Thank you for reviewing our paper and providing valuable suggestions. Please find below our response to the concerns and questions that you raised in the review.

> Weakness 1

In Table 3, lines 2 and 4 compare the use of an RSSM or a TSSM for world modeling when using a spatial latent space. Using EMERALD's TSSM increases final performance from 15.8 to 16.8. The number of training FPS also increases from 20 to 27. This is because the RSSM is recurrent, which slows down training, while the TSSM can process the temporal elements in parallel during the world model learning phase. The use of attention in the TSSM also increases final performance by providing stronger memory representations for the agent. When comparing the accuracy of world model predictions in percent of correctly predicted future tokens, EMERALD reaches an accuracy of 81.51% correctly predicted tokens against 66.27% when using an RSSM. This can be explained by the fact that some future predictions require access to past information that the RSSM has difficulty accumulating in the limited GRU recurrent state.

> Weakness 2

We performed an additional ablation comparing the number of decoding steps used during imagination and the use of an MLP head for prediction. Please refer to Rebuttal Table 1 and Table 2 in reviewer mGS9's response. Additional experiments on the Craftax benchmark reveal that using a simple MLP head instead of MaskGIT leads to the severe accumulation of hallucinations after a certain number of imagination steps.

> Weakness 3

EMERALD achieves better performance on games where tiny details are crucial. This is helpful in games like Pong or Breakout, where our method reaches final scores above 200 for some of the seeds. In contrast, STORM suffers from higher reconstruction error, impacting results for these games.
Concerning FLOPs (the number of multiply-and-add operations generated by matrix multiplies), they can be a relevant metric for the efficiency of world models that is indeed hardware agnostic. However, FLOPs do not take into account the ability of the Transformer architecture to process temporal information in parallel, while recurrent units require processing each element sequentially during the world model learning phase. We nevertheless computed the number of FLOPs for each world model using the standard batch size of 16 and sequence length of 64:
- \#FLOPs World Model Forward (number of FLOPs to process the observations and predict next states, rewards, and episode continuations)
- \#FLOPs World Model Imagination (number of FLOPs to imagine H=15 steps into the future)
- \#FLOPs Env Step (number of FLOPs to process the observations and sample actions for N=16 parallel environments during exploration)

| Method | \#FLOPs World Model Forward (Billion) | \#FLOPs World Model Imagination (Billion) | \#FLOPs Env Step (Billion) |
|----------|:------------:|:------:|:------:|
| DreamerV3 (M) | 189 | 360 | 1.2 |
| DreamerV3 (XL) | 812 | 2033 | 5.4 |
| $\Delta$-IRIS | 699 | 2273 | 0.7 |
| EMERALD | 275 | 392 | 3.1 |

As expected, we find that a correlation can be made between the FPS recorded during training and the number of FLOPs. $\Delta$-IRIS uses image tokens, which requires encoding observed frames to predict the next latent states sequentially. This requires decoding predicted latent states to observations at each time step. The policy also predicts the next actions given reconstructed observations. This greatly increases the number of FLOPs required for imagination compared to DreamerV3 and EMERALD.

The number of RTX 3090 hours on Atari 100k is provided in Appendix A.6 (lines 899-900). On the Crafter benchmark, EMERALD takes 100 hours to reach 10M env steps while DreamerV3 (M) and DreamerV3 (XL) take 75 hours and 120 hours, respectively.
On the other hand, training one seed of $\Delta$-IRIS takes 230 hours on one 3090 GPU.

> Figure 3 is not clear as it differs from equation (1) in the world model overview in terms of inputs. Shouldn't the MaskGIT equation in (1) have z_{t+1} as the prediction? It doesn't make sense to predict z_t from z_t.

No, the equation in (1) is actually correct. The MaskGIT predictor takes the masked $z_{t}$ as input for predicting the unmasked $z_{t}$ target. For better clarity, we can update the formula to specify that the input is masked as $z_{t}^{mask}$.

> in the world model overview it takes as input h and z but figure 3 shows as taking in z and o?

The MaskGIT predictor takes $h_{t}$ and $z_{t}^{mask}$ as input. The dotted arrow in Figure 3 actually refers to the use of $h_{t}$ as input for decoder reconstruction, not the input of observations $o_{t}$ to the MaskGIT predictor.

> Is Latent Dec the same as the Decoder?

No, the decoder maps $h_{t}$ and $z_{t}$ to pixel observation predictions. On the other hand, the latent decoder network projects $h_{t}$ to spatial features for the MaskGIT predictor. You can find the detailed architectures of the two networks in Appendix A.3 (Table 5 and Table 7).

---
Rebuttal Comment 1.1:
Comment: [Accidentally posted as official comment]: Are the authors confident in the FLOP reporting in the table? It seems very strange that EMERALD, using a TSSM, has lower FLOPs in general than RSSMs for imagination (this behavior is expected during training but not during imagination, where the prediction from the transformer has to be fed back in several times sequentially). Aside from this confirmation, the authors addressed many of my concerns. The results for the Craftax benchmark further confirm that EMERALD performs well in environments that are more difficult to predict and involve long-term memory, although results for other models on Craftax would be very important to make this claim stronger.
The authors also provide strong points regarding the benefits of latent reconstruction over raw pixel reconstruction, which is strongly supported by recent literature [1, 2] and further solidifies the motivation for EMERALD. I feel confident keeping my score as a 4.

[1] https://arxiv.org/pdf/2407.03475
[2] https://arxiv.org/pdf/2402.11337

---
Reply to Comment 1.1.1:
Comment: Thank you for reposting your initial reply. We are pleased that our response was able to effectively address your concerns. We measured the number of FLOPs for the world models again and indeed found that some of the operations were not correctly recorded. Here is the corrected table showing FLOPs in billions, obtained using the standard batch size of 16 and sequence length of 64:

| Method | #FLOPs World Model Forward (Billion) | #FLOPs World Model Imagination (Billion) | #FLOPs Env Step (Billion) |
|:-----------|-----------|-----------|-----------|
| DreamerV3 (M) | 151.8 | 360.9 | 1.2 |
| DreamerV3 (XL) | 648.7 | 2147.4 | 5.4 |
| $\Delta$-IRIS | 740.4 | 9153.4 | 0.7 |
| EMERALD | 309.9 | 1629.0 | 3.1 |

As stated previously, we can find a correlation between the number of FLOPs and the FPS measured during training. EMERALD makes predictions in latent space, which significantly reduces the number of FLOPs necessary for imagination compared to $\Delta$-IRIS. EMERALD's world model also processes temporal and spatial information independently, which limits the increase in FLOPs due to the processing of spatial information. We also note that some differences related to architecture and decoding mechanisms can affect the relation between FLOPs and FPS. For instance, EMERALD uses a Transformer architecture, which can process temporal information in parallel during the world model training phase. In contrast, DreamerV3's RSSM world model processes temporal information sequentially.
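For readers unfamiliar with the counting convention used in the FLOP tables above (FLOPs as multiply-and-add operations generated by matrix multiplies), a back-of-the-envelope sketch is given below; the layer dimensions are hypothetical and this is not the measurement code behind the tables:

```python
def matmul_macs(m, k, n):
    """Multiply-accumulate count for an (m, k) x (k, n) matrix multiply:
    each of the m*n outputs needs k multiply-and-add operations."""
    return m * k * n

def linear_macs(batch, seq, d_in, d_out):
    """A dense layer applied to every element of a sequence."""
    return matmul_macs(batch * seq, d_in, d_out)

def attention_macs(batch, seq, d_model):
    """The two sequence-length-dependent matmuls of single-head
    attention: Q K^T and (attention weights) V; the Q/K/V projections
    are counted separately as dense layers."""
    return 2 * matmul_macs(batch * seq, seq, d_model)

# Hypothetical dimensions: batch 16, sequence 64, model width 512.
total_macs = linear_macs(16, 64, 512, 512) + attention_macs(16, 64, 512)
```

The quadratic `seq` term in `attention_macs` also illustrates the reviewer's point: at imagination time a Transformer predictor is called once per generated step, so its cost accumulates even though training-time processing is parallel.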
Summary: This paper proposes a world model architecture in which spatial latent states are predicted using a MaskGIT predictor. Experiments are conducted on the Crafter benchmark, achieving superhuman performance.

Claims And Evidence: Partially. See Q1 & Q2.

Methods And Evaluation Criteria: Partially. See Weakness 1.

Theoretical Claims: N/A. No theoretical results are provided.

Experimental Designs Or Analyses: Yes. See Q2 & Q3.

Supplementary Material: Partially. Appendix A.6.

Relation To Broader Scientific Literature: N/A.

Essential References Not Discussed: Yes. A key contribution of this paper is using MaskGIT for latent prior prediction, but this technique was previously introduced in *GITSTORM* (https://arxiv.org/abs/2410.07836), which is not cited or discussed.

Other Strengths And Weaknesses:
Strengths:
1. Achieving superhuman performance on the Crafter benchmark.

Weaknesses:
1. Limited benchmarking: The primary experiments focus on Crafter, with only supplementary Atari results in the appendix.
2. Novelty: As the use of MaskGIT for latent prior prediction was already (or concurrently) explored in GITSTORM, it should be discussed to highlight the differences from EMERALD. Furthermore, the proposed spatial latent space structure appears relatively straightforward. TSSM has already been validated as effective for modeling long-range dependencies in prior work.
3. Unclear motivation and missing ablation study of the MaskGIT design: See Q3.

Other Comments Or Suggestions: See below.

Questions For Authors:
1. The paper claims that "training agents directly from pixels" prevents the agent from benefiting from inner representations. Could the authors provide clearer evidence for this? In contrast, DIAMOND (a diffusion-based world model for training agents from pixels) currently sets the state of the art on the widely used Atari 100k benchmark.
2. In Figure 2, why is DreamerV3 M used for comparison instead of DreamerV3 XL? Given that DV3 M has a relatively small recurrent state dimension (1024) compared to XL (4096), this could directly impact reconstruction quality, especially in visually demanding environments like Crafter.
3. EMERALD can better perceive crucial environment details due to its more expressive spatial latent states. However, what is the necessity of the MaskGIT predictor? No ablation study is provided in Section 4.3 to justify its role. Additionally, in Table 3, DreamerV3 with RSSM (line 1) outperforms TSSM (line 3). What could be the reason for this?
4. Writing suggestions:
- A figure (or a paragraph) is needed to explicitly highlight the difference between the proposed spatial latent spaces ($H\times W \times G \times (D/G)$) and DreamerV2/V3 ($G \times (D/G)$).
- Table 3 should have clearer notation indicating which experiments use MaskGIT prediction.
5. Equation (3) originates from the KL balancing loss in DreamerV2 and should be properly cited.

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1:
Rebuttal: Thank you for your constructive feedback on our paper. Please find below our response to the concerns and questions that you raised in the review.

> Weakness 1:

The Crafter benchmark evaluates a wide range of general abilities (survival, memory) and was used by $\Delta$-IRIS to evaluate its method. It provides an adequate challenge for developing accurate world models with long-term memory capacity. For further benchmarking, we performed experiments on the Craftax benchmark (https://openreview.net/forum?id=hg4wXlrQCV). Craftax was proposed at the ICML 2024 conference to provide a more challenging alternative to Crafter. We crop and pad the images to obtain 128x128 pixel inputs. Given the larger resolution, we also add a strided convolution layer to all world models considered. The following table summarizes the results after 10M environment steps over 3 seeds:

| Method | Score (\%) | Return | \#Params | FPS |
|----------|:------------:|:------:|:------:|:------:|
| DreamerV3 (M) | 2.4 | 13.5 $\pm$ 1.3 | 37M | 27 |
| DreamerV3 (XL) | 2.6 | 15.7 $\pm$ 0.3 | 200M | 18 |
| EMERALD (Ours) | 3.0 | 16.6 $\pm$ 0.3 | 30M | 20 |

As on Crafter, we find that EMERALD achieves faster convergence and better performance compared to DreamerV3. $\Delta$-IRIS experiments are underway but may not be completed in time given the longer associated training time.

> Weakness 2:

We observe that the use of MaskGIT as a prior for model-based RL was indeed explored concurrently with our work in the GITSTORM paper. GITSTORM was recently peer reviewed at the ICLR 2025 conference, but reviewers suggested that the work needed further development before publication. The paper proposed to apply MaskGIT decoding using a draft-and-revise strategy to the recently proposed STORM model.
However, we note that the motivation behind GITSTORM is different from ours: similarly to image and video generation works, EMERALD uses MaskGIT as an alternative to sequential token decoding in order to improve decoding efficiency. In contrast, the GITSTORM paper applies MaskGIT to the vector latent space of STORM to improve the quality of sampling. Key differences also lie in the architecture of the MaskGIT network. GITSTORM performs attention on G=32 token positions while EMERALD first concatenates the 32 group tokens along the feature dimension and performs attention on HxW=16 spatial positions. Given the relation of GITSTORM to our work, and despite its recent rejection at the ICLR 2025 conference, we are ready to discuss it and highlight the key differences in motivation and architecture with EMERALD in the related works section!

> the proposed spatial latent space structure appears relatively straightforward.

We propose a novel TSSM network that processes both spatial and temporal information in a carefully designed manner to increase accuracy while maintaining efficiency. EMERALD's TSSM combines a classical TSSM to process temporal-only relationships with a spatial MaskGIT predictor to process spatial-only relationships. We see our proposed spatial and temporal MaskGIT TSSM as a serious contribution to world modeling that balances prediction accuracy and efficiency.

> Q1:

Yes, we explain that training agents directly from pixels prevents the agent from benefiting from the inner representations learned by the world model. It requires learning additional image encoders to produce separate representations that may not be as effective for the agent. The self-supervised objectives of the world model learn both compressed representations of observations, through the reconstruction loss, and memory representations of past observations, by predicting future latent states.
> Q2:

We chose DreamerV3 M to compare with a world model having a similar number of training parameters. DreamerV3 XL achieves a lower reconstruction error, with an L2 error of 0.000359. However, despite the increased number of parameters, DreamerV3 XL still does not achieve better reconstruction than EMERALD and makes similar mistakes, such as predicting different blocks or not perceiving mobs.

> Q3:

The motivation for our paper is the development of an accurate and efficient alternative to the recently proposed $\Delta$-IRIS and DIAMOND, which require generating trajectories in pixel space. As illustrated in Figure 4, we propose to use a spatial latent space and to replace sequential decoding with MaskGIT decoding to further improve training efficiency. We provide a response to your question and new studies in reviewer mGS9's dedicated response.

> in Table 3, DreamerV3 with RSSM (Line 1) outperforms TSSM (Line 3).

The parameter choice of EMERALD is aligned with DreamerV3 Small. For a more accurate comparison, we also performed a 5-seed experiment for DreamerV3 (S). The agent achieves a return of 11.6 $\pm$ 0.7 and an achievement score of 22.7, which is lower than line 3.

> Q5:

Thanks for pointing this out! The reference to DreamerV2 will be cited accordingly.

---
Rebuttal Comment 1.1:
Comment: I appreciate the response. Some concerns are well addressed, but some responses are still not convincing:
- W1: Craftax is quite similar to Crafter, which cannot prove the general effectiveness of EMERALD in other environments.
- W2: I still hold the opinion that the difference between the proposed spatial latent spaces ($H \times W \times G \times(D / G)$) and DreamerV2/V3 ($G \times(D / G)$) does not have sufficient novelty and does not provide new insights to the community. Increasing spatial dimensions for groups may have equivalent effects to increasing the number of groups ($G$ => $HWG$).
- Q1: The authors did not respond to the fact that DIAMOND currently sets the state of the art (at least outperforming EMERALD) on Atari 100k.

I currently keep my rating.

---
Reply to Comment 1.1.1:
Comment:
> W1

We note that the Crafter and Atari 100k benchmarks are commonly acknowledged as sufficiently general to verify the effectiveness of algorithms. This is in line with the $\Delta$-IRIS paper, which demonstrated its method's effectiveness on the Crafter benchmark while providing supplementary Atari results.

> W2

As stated in the abstract (lines 17-22) and introduction (lines 80-95), our work introduces a solution to the problem of the reduced training efficiency of recently proposed accurate world models. EMERALD constitutes an alternative to $\Delta$-IRIS and DIAMOND that is both accurate and efficient, generating trajectories in latent space instead of pixel space. We propose to apply MaskGIT to model-based RL and demonstrate that agents can successfully be learned with spatial latents using MaskGIT predictions, resulting in improved performance. An increasing number of works are proposing world models that make accurate predictions in pixel space, notably using diffusion. EMERALD proposes an alternative line of research for model-based RL, training world models in spatial latent spaces using MaskGIT. We think that both research directions are promising for applying model-based RL to more complex environments and deserve to be explored.

> Increasing spatial dimensions for groups may have equivalent effects to increasing the number of groups

We initially experimented with increasing the number of groups of DreamerV3's latent space to improve world model accuracy without scaling up to larger model sizes.
However, we found that increasing the capacity of the vector latent space introduces several major limitations, making it unsuitable in practice:
- Since the tokens are organized along the feature dimension, increasing the vector latent space capacity results in a significant increase in parameter count. Increasing the vector latent space capacity by a factor N also increases the number of parameters of the projection layers by the same factor. In contrast, EMERALD benefits from weight sharing along the spatial dimensions, making it straightforward to apply to larger latent spaces.
- The resulting increase in FLOPs and memory when increasing the latent space capacity by a factor $N=H \times W$ makes it impossible to train agents without memory overflow, even with lower batch sizes.
- Increasing the number of groups or using more categories does not result in a noticeable decrease in reconstruction error. We suspect this is due to the loss of the positional bias, encouraging the learning of tokens with global representations rather than local ones that are more adapted to image data.

> A figure or a paragraph is needed to explicitly highlight the difference between proposed spatial latent spaces

We are in favor of adding a figure in the appendix to illustrate the difference between the two latent spaces. On the left of the figure: a standard RSSM/TSSM with the DreamerV3 latent space using an MLP head for prediction. On the right: our proposed spatial and temporal TSSM using the spatial MaskGIT network for prediction.

> Table 3 should have clearer notation indicating which experiments use MaskGIT

MaskGIT is an alternative decoding method that was originally proposed for spatial latent spaces. We did not judge it necessary to indicate that MaskGIT was not used for vector latents. However, as GITSTORM pointed out, the technique can be used when using groups of tokens in a vector latent space.
We hence propose adding a column (Decoding) to indicate whether MaskGIT decoding is performed or a simple MLP is used.

> Q1

We thank the reviewer for following up on points that we could not address in the limited rebuttal. We evaluate our method on the commonly used Atari benchmark to assess EMERALD's performance on simpler environments that do not necessarily require a complex world model to achieve strong performance. We also demonstrate improved training efficiency compared to $\Delta$-IRIS and DIAMOND. DIAMOND uses a diffusion-based world model to generate accurate trajectories on the benchmark. It is remarkably data efficient at learning a world model for Atari games. This is in contrast to VAE-based approaches like DreamerV3, $\Delta$-IRIS and EMERALD, which first require learning compressed representations. This makes it highly effective at learning a policy from pixels on Atari. On the other hand, DIAMOND fails to learn an effective policy on the Crafter benchmark, where successive frames are less correlated. We find that DIAMOND has difficulty predicting future frames, generating hallucinations even when model size is increased. The algorithm also suffers from increased training time due to the diffusion-based nature of its world model. When using an RTX 3090 GPU for training, EMERALD requires only around 17 hours per game on Atari 100k while DIAMOND requires 75 hours.

---
Thank you for your response. We take your comments very seriously and hope that our rebuttal helped to address your remaining concerns.
Summary: This paper introduces EMERALD, a world modeling approach that balances accuracy and efficiency. EMERALD leverages spatial latent states and MaskGIT-based prediction to generate precise trajectories in the latent space. By improving the perception of critical environmental details, EMERALD enhances the quality of imagined rollouts, ultimately boosting agent performance. Empirical evaluations on the Crafter benchmark demonstrate that EMERALD outperforms existing methods by generating high-fidelity latent trajectories.

Claims And Evidence: Yes.

Methods And Evaluation Criteria: Yes.

Theoretical Claims: N/A

Experimental Designs Or Analyses: Yes.

Supplementary Material: No.

Relation To Broader Scientific Literature: The contributions may have potential for real-world applications that require high-precision reconstruction and prediction (e.g., robotics, automated exploration).

Essential References Not Discussed: No.

Other Strengths And Weaknesses: This paper constructs an accurate and efficient world model using a masked latent Transformer, advancing the state of the art in Transformer-based world models. Improving prediction accuracy is crucial for generating more realistic imagined rollouts, which in turn facilitates more effective training in imagination. The approach of jointly predicting the next spatial token based on the current spatial latent state and temporal latent state is well motivated, as it explicitly enhances the world model's awareness of the current state, leading to more precise predictions. Unlike most Transformer-based world models, EMERALD incorporates temporal hidden states during decoding, rather than relying solely on spatial tokens for reconstruction. This design improves reconstruction accuracy compared to purely spatial-token-based methods. The proposed algorithm significantly outperforms DreamerV3 (RSSM-based) and the IRIS series (Transformer-based) on the Crafter benchmark. Furthermore, the method introduces parallel prediction with scheduled refinements, substantially reducing decoding time while preserving the coherence of predicted tokens. The experimental results on Crafter are compelling.

The core contribution of this paper lies in generating more precise and efficient latent trajectories, rather than merely improving reconstruction accuracy. However, in the ablation study, while EMERALD's advantage over RSSM-based frameworks does not stem from higher reconstruction fidelity, the paper does not discuss whether it results from reduced dynamics error or representation error. For example, does EMERALD exhibit fewer hallucinations over longer imagination rollouts? Additionally, in the Atari benchmark experiments (Appendix), we observe that EMERALD's performance is inconsistent in environments where high-precision prediction and reconstruction are critical. For instance, in Breakout, the IRIS series, which achieves higher reconstruction accuracy, significantly outperforms other methods, whereas EMERALD does not exhibit a similar advantage. Providing more detailed experiments on latent trajectory prediction would further strengthen the contribution of this work.

Other Comments Or Suggestions: No

Questions For Authors: No

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for reviewing our paper. Please find below our response to the concerns and questions that you raised in the review.

> in the ablation study, while EMERALD's advantage over RSSM-based frameworks does not stem from higher reconstruction fidelity, the paper does not discuss whether it results from reduced dynamics or representation error. For example, does EMERALD exhibit fewer hallucinations over longer imagination rollouts?

The ablation study in Table 3 shows that the initial performance of world models can be improved by using a spatial latent space and a Transformer-based world model. On the increase in performance due to the architecture change (line 2 vs. line 4): our spatial and temporal TSSM uses self-attention, which allows the model to easily perceive past information, while the RSSM uses a recurrent state with limited capacity. To further understand the reason behind the observed results, we performed further studies on the latent state predictions of both world model alternatives. We compared the token prediction accuracy over 5 seeds for both world models when predicting future states. We find that EMERALD achieves an average accuracy of 81.51% correctly predicted tokens for next-state prediction, against 66.27% when using an RSSM (line 2). We also analyzed attention maps of EMERALD's TSSM and found that the world model learns to attend to all positions seen during training, up to the context of 64 time steps. We conclude that our proposed spatial and temporal TSSM has a positive impact on representation capability and prediction accuracy, which leads to better final performance. Concerning hallucinations, we did not observe an increase in hallucinations when using an RSSM. However, given the lower token prediction accuracy, we notice that the model can sometimes have difficulty predicting futures that are coherent with past context.
> For instance, in Breakout, the IRIS series, which achieves higher reconstruction accuracy, significantly outperforms other methods, whereas EMERALD does not exhibit a similar advantage.

Yes, we are aware that $\Delta$-IRIS achieves strong results on Breakout; the method uses a max pixel reconstruction loss which focuses on reducing the error on the pixel with maximum error. This is very helpful for games like Breakout or Pong where the ball object is crucial for achieving strong results. We observed that performance in Breakout is strongly linked to the correct reconstruction of the ball. EMERALD facilitates reconstruction and reaches final mean scores superior to 200 for some of the seeds, but we did not use the max pixel loss.

> Providing more detailed experiments on latent trajectory prediction would further strengthen the contribution of this work.

As explained earlier, we performed further studies on the predictions of the world model in latent space. We also computed the average accuracy (%) of predictions for different numbers of decoding steps at imagination time (No MaskGIT designates predictions made by the MLP head learned by the dynamics loss $L_{dyn}$):

**Rebuttal Table 1:**

| Num pred steps in future: | 1 | 5 | 10 | 15 | Rollout duration (seconds) |
|----------|:------------:|:------:|:------:|:------:|:------:|
| No MaskGIT | 78.57% | 72.22% | 64.80% | 58.59% | 0.10 |
| S=1 step | 78.52% | 72.19% | 65.75% | 59.92% | 0.14 |
| S=3 steps | 81.51% | 74.32% | 68.09% | 62.61% | 0.22 |
| S=8 steps | 82.69% | 75.34% | 68.47% | 62.82% | 0.42 |
| S=16 steps | 82.80% | 75.56% | 68.51% | 62.64% | 0.73 |

The accuracy is averaged over the 5 EMERALD seeds and computed by comparing the target future states with predicted states during rollout, conditioned on the correct sequence of future actions. We find that using fewer than 3 decoding steps during imagination results in a drop in accuracy.
We also compare the rollout time in seconds required to imagine 15 time steps into the future. Using a larger number of decoding steps can lead to a small increase in accuracy but also results in a longer rollout duration, which impacts training efficiency. We performed a corresponding ablation to study the impact of the number of imagination decoding steps on final performance over 5 seeds:

**Rebuttal Table 2:**

| \#Decoding Steps | Score (\%) | Return | FPS |
|----------|:------------:|:------:|:------:|
| No MaskGIT | 51.6 | 16.1 $\pm$ 0.7 | 33 |
| S = 1 step | 53.8 | 16.1 $\pm$ 0.5 | 33 |
| S = 3 steps (EMERALD) | 58.1 | 16.8 $\pm$ 0.6 | 27 |
| S = 8 steps | 55.1 | 16.5 $\pm$ 0.6 | 23 |

We find that the decrease in prediction accuracy has a noticeable impact on final performance. The decrease in average accuracy of world model predictions leads to the generation of less accurate trajectories for the actor and critic networks. Our experiments using 3 and 8 decoding steps achieve higher returns and achievement scores compared to using a single decoding step or a simple MLP head for prediction.

---

Rebuttal Comment 1.1: Comment: Thanks for the reply. The authors state that EMERALD improves the accuracy of future trajectory generation in the TSSM by enhancing the precision of predicted tokens, rather than by reducing hallucination effects in long-sequence predictions. I agree with this perspective, as the fidelity of imagined trajectories directly impacts agent training performance, and more precise token prediction enables better modeling of the imagination trajectories. Furthermore, the authors' analysis of the results in Breakout demonstrates that EMERALD avoids using the max-pixel loss employed in the IRIS series. This design choice effectively mitigates potential reconstruction artifacts, thereby improving the model's robustness. This explanation alleviates my concerns about generalization capability.
However, we maintain that EMERALD still has limitations that warrant addressing upon re-evaluation of the paper:

1. Limited Evaluation Benchmark: The experiments are confined to discrete action spaces (e.g., Atari 100k, Crafter), lacking validation on more complex, high-dimensional control tasks such as Meta-World or DMControl. Notably, the absence of high-dimensional vision-based benchmarks raises concerns about computational scalability and generalization—high-dimensional observations (e.g., DMC vision) may impose prohibitive memory or training costs. Without broader empirical validation, the claim of cross-domain robustness remains unsubstantiated.

2. Questionable Practical Utility: While the integration of MaskGIT improves trajectory prediction accuracy in simplified game environments, its real-world applicability is unclear. For instance, [1] demonstrates that even crude imagined trajectories (with high reconstruction error) suffice for successful drone control under the Dreamer framework. This suggests that excessive focus on precision in imagination may not translate to downstream task performance but could instead introduce unnecessary computational overhead.

[1] Romero, Angel, et al. "Dream to Fly: Model-Based Reinforcement Learning for Vision-Based Drone Flight." arXiv preprint arXiv:2501.14377 (2025).

Based on these unresolved issues, we uphold our initial rating of 2 (Weak Reject).

---

Reply to Comment 1.1.1: Comment: Thank you for your positive and thoughtful feedback. We sincerely appreciate the time you took to read our rebuttal and engage with our responses.

---

[Edit following the reviewer comment update]

> Limited Evaluation Benchmark: The experiments are confined to discrete action spaces (e.g., Atari 100k, Crafter)

> the claim of cross-domain robustness remains unsubstantiated.
We note that the Crafter and Atari 100k benchmarks were used by previous works (SimPLe, TWM, IRIS, STORM, $\Delta$-IRIS, DIAMOND) as evaluation benchmarks and are commonly acknowledged to be sufficiently diverse and general to study algorithms.

> lacking validation on more complex, high-dimensional control tasks such as Meta-World or DMControl.

> While the integration of MaskGIT improves trajectory prediction accuracy in simplified game environments

We are very familiar with the DMC benchmark; it is well suited to evaluating algorithms on continuous action tasks but features tasks that are visually simplistic compared to Crafter, Craftax and some Atari games. While appearing high-dimensional, the DMC tasks are in reality simpler to model, with fewer crucial details, and it is not difficult for world models to achieve very low reconstruction error on them. Furthermore, the application of Transformer world models to continuous control tasks is still very limited ([TransDreamer](https://arxiv.org/abs/2202.09481)) and linked to performance decline ([GIT-STORM](https://openreview.net/forum?id=2gTEW29qsM)).
Although the DMC benchmark does not inherently require spatial latents for accurate modeling and strong performance, we nonetheless conducted a single-seed experiment on the commonly used 20 visual tasks, $\textbf{without hyper-parameter changes}$, and found that our method successfully competes with DreamerV3 on most tasks:

| Task | DreamerV3 | EMERALD (ours) |
|:-----------|-----------:|-----------:|
| Acrobot Swingup | 210.0 | 42.9 |
| Ball In Cup Catch | 957.1 | 963.2 |
| Cartpole Balance | 996.4 | 997.6 |
| Cartpole Balance Sparse | 1000.0 | 1000.0 |
| Cartpole Swingup | 819.1 | 855.6 |
| Cartpole Swingup Sparse | 792.9 | 735.5 |
| Cheetah Run | 728.7 | 670.8 |
| Finger Spin | 818.5 | 945.0 |
| Finger Turn Easy | 787.7 | 988.0 |
| Finger Turn Hard | 810.8 | 879.8 |
| Hopper Hop | 369.6 | 306.0 |
| Hopper Stand | 900.6 | 855.6 |
| Pendulum Swingup | 806.3 | 843.8 |
| Quadruped Run | 352.3 | 170.2 |
| Quadruped Walk | 352.6 | 421.2 |
| Reacher Easy | 898.9 | 964.9 |
| Reacher Hard | 499.2 | 430.4 |
| Walker Run | 757.8 | 421.6 |
| Walker Stand | 976.7 | 976.9 |
| Walker Walk | 955.8 | 957.2 |
| Mean | 739.6 | 721.3 |
| Median | 808.5 | 855.6 |

> high-dimensional observations (e.g., DMC vision) may impose prohibitive memory or training costs

> could instead introduce unnecessary computational overhead.

Given the low complexity of DMC tasks, input resolution is usually scaled to 64x64 pixels, which corresponds to the input resolution used for Crafter. Running on the DMC benchmark therefore adds no computational overhead. Moreover, when scaling to higher resolutions such as Craftax, the computational overhead of EMERALD and DreamerV3 is only due to the addition of strided convolutions in the encoder and decoder networks. This is not the case for $\Delta$-IRIS and DIAMOND, where imagination and policy learning are performed in higher-resolution pixel space rather than a fixed-size latent space.
> [1] demonstrates that even crude imagined trajectories (with high reconstruction error) suffice for successful drone control under the Dreamer framework

Yes, our paper does not deny that DreamerV3 can learn policies in complex environments. In fact, the contribution of our paper is an efficient and accurate solution designed to improve the potential of model-based RL in complex environments. As stated in our paper abstract, when it comes to crucial details, the compressed nature of DreamerV3's latent space can result in the loss of crucial information and negatively impact the agent's performance. EMERALD contributes to the research direction of accurate model-based RL, proposing an efficient alternative to $\Delta$-IRIS and DIAMOND.

> This suggests that excessive focus on precision in imagination may not translate to downstream task performance

Previously published works ($\Delta$-IRIS, DIAMOND) and EMERALD show that the increase in precision effectively leads to higher performance in environments where details are crucial and where DreamerV3's compressed latent space fails to capture them.

---

We appreciate the time taken by the reviewer to further evaluate our paper and provide additional responses. We note that the reviewer had additional concerns related to benchmarking, computational overhead and the utility of accurate model-based RL. We did our best to address the reviewer's additional concerns and hope that this discussion has effectively clarified the contribution of our work.
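The scheduled-refinement decoding discussed in the rebuttal above (parallel MaskGIT-style prediction over S steps) can be illustrated with a toy classical sketch. Everything below is hypothetical: a random softmax head stands in for the transformer predictor, and a cosine masking schedule is assumed; it only shows the commit-the-most-confident-then-remask loop, not EMERALD's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_predictor(n_tokens, vocab=8):
    # Hypothetical stand-in for the transformer head: returns a
    # softmax distribution over the vocabulary for every position.
    logits = rng.normal(size=(n_tokens, vocab))
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def maskgit_decode(n_tokens, n_steps):
    tokens = np.zeros(n_tokens, dtype=int)
    mask = np.ones(n_tokens, dtype=bool)  # every position starts masked
    for step in range(n_steps):
        probs = toy_predictor(n_tokens)
        pred = probs.argmax(axis=1)       # parallel prediction
        conf = probs.max(axis=1)
        # Assumed cosine schedule: fraction still masked after this step.
        ratio = np.cos(np.pi / 2 * (step + 1) / n_steps)
        n_remask = int(np.floor(ratio * n_tokens))
        # Keep the least confident masked positions masked; commit the rest.
        conf_for_sort = np.where(mask, conf, np.inf)
        still_masked = np.argsort(conf_for_sort)[:n_remask]
        new_mask = np.zeros(n_tokens, dtype=bool)
        new_mask[still_masked] = True
        commit = mask & ~new_mask
        tokens[commit] = pred[commit]
        mask = new_mask
    return tokens

out = maskgit_decode(n_tokens=16, n_steps=3)
assert out.shape == (16,)
```

With `n_steps=1` the loop degenerates to a single parallel pass, which mirrors why Rebuttal Table 1 shows a drop in accuracy for small S: later steps get to condition on the confident tokens committed earlier.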
Quantum Speedup for Hypergraph Sparsification
Accept (poster)
Summary: Graph sparsification has been extensively studied [SS11, BSS12, LS17] and has numerous applications in graph algorithms and machine learning. As a natural generalization of graphs, hypergraphs have gained increasing attention. Similarly, hypergraph sparsification has attracted significant interest following the pioneering work of [SY'19]. Motivated by the successful application of quantum computing to graph sparsification [AD'20], this paper presents the first quantum algorithm for hypergraph sparsification. Specifically, for a hypergraph $H$ with $n$ vertices and $m$ edges, the proposed algorithm constructs an $\epsilon$-spectral sparsifier of size $O(n \log n \log r / \epsilon^2)$ in time $\widetilde{O}(r \sqrt{mn} / \epsilon)$, where $r$ denotes the rank. This result significantly outperforms the best known sequential algorithm, which runs in $\widetilde{O}(mr)$ time [JLS'23]. Moreover, the proposed quantum algorithm extends naturally to quantum hypergraph cut sparsification, mincut solving, and $s-t$ mincut solving, broadening its applicability to fundamental problems in hypergraph optimization. $\textbf{Reviewer vTAd update after rebuttal:}$ I thank the authors for their clear clarification. I will retain my score and am inclined to recommend acceptance of this paper. Claims And Evidence: The main theorem 4.3 and three corollaries 5.1, 5.2, and 5.3 are clearly stated and proved. Methods And Evaluation Criteria: This paper is purely theoretical and has no experiments. Theoretical Claims: I reviewed the proofs in the supplementary material but didn't read them carefully. Experimental Designs Or Analyses: No experiments. Supplementary Material: I reviewed all the supplementary material: Appendix A introduces some properties of effective resistance; Appendix B and Appendix C present the proofs for Theorem 3.4 and Theorem 4.1, respectively. Relation To Broader Scientific Literature: This paper outlines several potential directions for future work. 
Given the wide range of applications of graph and hypergraph sparsification, the proposed quantum graph/hypergraph sparsifier algorithm naturally opens the door to developing quantum algorithms for other graph-related problems. Additionally, considering existing research on directed and online hypergraph sparsification, it would be interesting to explore quantum algorithms for these settings as well. Essential References Not Discussed: (1) $\textbf{Cut Sparsification and Succinct Representation of Submodular Hypergraphs}$, ICALP 2024. This paper explored the cut sparsifier of submodular hypergraphs. (2) $\textbf{Near-optimal Linear Sketches and Fully-Dynamic Algorithms for Hypergraph Spectral Sparsification}$, STOC 2025. This paper proposed algorithms for hypergraph spectral sparsifier under the fully-dynamic settings, which allow hyperedge insertions/deletions. Other Strengths And Weaknesses: $\textbf{Strengths:}$ (1) Hypergraph sparsification has various applications and has been extensively studied in the past few years. This paper proposed the first quantum algorithm for hypergraph sparsification. (2) The proposed quantum hypergraph sparsifier has nearly linear size $\widetilde{O}(n / \epsilon^2)$ and takes time $\widetilde{O}(r \sqrt{mn} / \epsilon)$, which outperforms the running time $\widetilde{O}(m r)$ of [JLS'23, Lee'23], under the settings $\epsilon \ge \sqrt{n/m}$ and $m \ge n r$. When $r$ is a constant, this time complexity almost matches the lower bound $\Omega(m)$. (3) The motivation and idea of this paper are straightforward and natural. Additionally, this paper is well-written and easy to understand. $\textbf{Weaknesses:}$ (1) The running time of the proposed quantum algorithm improves the classical complexity $\widetilde{O}(m r)$ under the two assumptions $m \ge n r$ and $\epsilon \ge \sqrt{n / m}$, which weakens this paper's contribution. 
(2) This paper follows the sampling-based framework of [JLS'23] and primarily builds on existing techniques from [AD'20, Hamoudi'22], which somewhat limits its novelty. Other Comments Or Suggestions: (1) In the paper "Hypergraph Diffusions and Resolvents for Norm-Based Hypergraph Laplacians", Ameranis et al. proposed the first nearly-linear-time algorithm for approximately computing resolvents of the hypergraph Laplacian operator. An intriguing direction for future research could be exploring quantum speedup techniques to further improve its running time. (2) Typo: Line 100, adopts -> adopt Questions For Authors: See Weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their thorough evaluation and constructive feedback. Below, we address the key concerns raised: 1. Essential References Not Discussed: Thanks for pointing out these two references; we will add them in the revised version of our paper. 2. Running Time Assumptions: Regarding the assumptions ($\varepsilon>\sqrt{n / m}$ and $m>n r$), we would like to point out that they are reasonable and arise naturally in the context of the sparsification task: (a) $\varepsilon>\sqrt{n/m}$ is a natural assumption in the sparsification task, as it is equivalent to requiring that the resulting sparsified graph (with $O(n/\varepsilon^2)$ edges) contains fewer edges than the original graph ($O(m)$ edges). This assumption also appears in previous work on quantum sparsification algorithms (e.g., [AdW'20]). (b) $m>n r$ naturally holds whenever hypergraphs are not highly sparse. In dense hypergraphs, the number of hyperedges scales as $m=\Theta\left(n^r\right) \gg n r$. In practice, $r$ is typically treated as a constant greater than 2, which means that the number of hyperedges only needs to be larger than linear in the number of vertices. 3. Novelty and Technical Contributions: We acknowledge that our algorithmic analysis builds on the results of [JLS'23]. However, our core subroutine, QHLSO, is inspired by another critical work [Jambulapati et al.'2023]. The non-trivial contribution lies in identifying, adapting, and synthesizing existing classical frameworks to the quantum setting---a task requiring meticulous integration of recent classical and quantum algorithmic tools, including [AdW'20, Hamoudi'22]. The classical literature on hypergraph sparsification encompasses numerous advanced approaches, and selecting the right framework for quantum acceleration demanded substantial domain-specific insight.
Furthermore, we intentionally prioritized readability to provide the quantum algorithms community with a clear foundation for exploring broader applications in this domain, while demonstrating how classical and quantum techniques can be cohesively combined to achieve novel efficiencies. 4. Future Work Suggestion: We thank the reviewer for directing us to the resolvent computation work by Ameranis et al. Exploring quantum speedups for hypergraph diffusion is a compelling direction, and we will mention this in our revised future work section. 5. Typos and Grammar: We will meticulously proofread the manuscript to address grammatical errors, including the noted typo (Line 100: "adopts" → "adopt").
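The equivalence invoked in point 2(a) of the rebuttal above is a one-line calculation: $n/\varepsilon^2 < m \iff \varepsilon > \sqrt{n/m}$, up to the constants and log factors in the sparsifier size. A minimal numeric check, with illustrative sizes that are not from the paper:

```python
import math

def sparsifier_is_smaller(n, m, eps):
    # Sparsifier size ~ n / eps^2 (constants and log factors dropped).
    return n / eps**2 < m

n, m = 1_000, 1_000_000
threshold = math.sqrt(n / m)  # eps must exceed sqrt(n/m) ~ 0.0316
assert sparsifier_is_smaller(n, m, eps=2 * threshold)
assert not sparsifier_is_smaller(n, m, eps=0.5 * threshold)
```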
Summary: The authors claim to give the first quantum algorithm for hypergraph sparsification. Their main theorem claims that they can find a sparsifier of size $O(n/\epsilon^2)$ in time $O(r \sqrt{mnr} + r\sqrt{mn}/ \epsilon)$ with high probability. Besides the introduction, the paper is concerned with proving this result. Claims And Evidence: All theorems and claims are supported with proofs. I am unable to verify that the proofs are correct. Methods And Evaluation Criteria: The paper does not contain any experiments, or other evaluation methods. Theoretical Claims: I have read the proofs and skimmed the appendix, and the claims look plausible, but since I lack expertise in quantum algorithms I can't judge the correctness very well. Experimental Designs Or Analyses: The paper does not contain any experiments. Supplementary Material: There is no supplementary material. Relation To Broader Scientific Literature: I am not an expert in quantum algorithms, however, the fact that this algorithm can provide sublinear running times in dense hypergraphs is of interest, as in the classical setting this seems like an unlikely result. Essential References Not Discussed: I am unfamiliar with the literature on quantum algorithms, so I am not sure if any references were missed. Other Strengths And Weaknesses: Unsure Other Comments Or Suggestions: None Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your comments and review. Feel free to reach out if additional clarifications are needed.
Summary: This work introduces the first quantum algorithm for hypergraph sparsification, producing an $\varepsilon$-spectral sparsifier of size $\widetilde{O}(n / \varepsilon^2)$ in time $\widetilde{O}(r \sqrt{m n} / \varepsilon)$ for a weighted hypergraph with $n$ vertices, $m$ hyperedges, and rank $r$. This result demonstrates a quantum speedup over the classical state-of-the-art $\widetilde{O}(m r)$-time algorithm (Jambulapati et al., 2023; Lee, 2023) and matches the quantum lower bound of $\widetilde{\Omega}(\sqrt{m n} / \varepsilon)$ for $r = 2$ (Apers & de Wolf, 2020). The method combines a classical sampling-based framework with quantum techniques, including quantum graph sparsification (Apers & de Wolf, 2020), state preparation (Hamoudi, 2022), and sum estimation. Applications include sublinear-time quantum algorithms for computing hypergraph cut sparsifiers and approximating hypergraph mincuts and $s$-$t$ mincuts. The three primary contributions align with Sections 3, 4, and 5, summarized as follows. **Quantum Algorithm for Leverage Score Overestimate (Section 3)** The authors introduce a quantum algorithm (Algorithm 1) to estimate hyperedge leverage score overestimates, based on classical approaches from Cohen et al. (2019) and Jambulapati et al. (2023). They adapt the previous concept of group leverage score overestimate and give a slightly different algorithm. The algorithm iteratively updates the weights $c^{(t)}$ of the underlying graph (line 2 of Algorithm 1), sparsifying the underlying graph (GraphSparsify) and then assigning $c^{(t+1)}$ each hyperedge's weight in proportion to its energy (WeightCompute). It then outputs the average of the energies $\ell^{(t)}$ with appropriate scaling (QOverestimate). Of course, the algorithm operates in the quantum setting; in particular, the authors describe their subroutines and output as quantum data structures with initialization and query capabilities.
The overestimate property (Proposition B.7) is proven using a telescoping argument akin to Cohen et al. (2019). The time complexity of this algorithm hinges on the graph sparsification step, and the quantum speedup here is mainly achieved through the result of Apers & de Wolf (2020). **Quantum Hypergraph Sparsification (Section 4)** Algorithm 2 presents a quantum sampling approach for hypergraph sparsification. It leverages the MultiSample subroutine to access the leverage score overestimate vector and samples a sequence of hyperedges with probabilities proportional to their overestimates. By combining the information of each sampled hyperedge with the normalization factor obtained via SumEstimate, the algorithm then reweights the hypergraph. Correctness is established via a chaining argument, adapted from Lee (2023) and Jambulapati et al. (2023). The time complexity is dominated by the sampling phase, which benefits from the precomputed overestimate data structure from Algorithm 1 and the quantum sampling subroutine from Corollary 2.8. **Applications (Section 5)** As a direct application, a cut sparsifier for hypergraphs is obtained. Since the hypergraph cut sparsifier preserves the cut energy, the quantum speedup for hypergraph sparsification extends to mincut and $s$-$t$ mincut problems. Claims And Evidence: The primary claim—that the algorithm constructs an $\varepsilon$-spectral sparsifier in $\widetilde{O}(r \sqrt{m n} / \varepsilon)$ time—is substantiated by Theorem 1.1 (formalized as Theorem 4.1). A detailed proof, provided in the appendices, employs leverage score overestimates and adapts a chaining argument from Lee (2023) to establish correctness. The speedup over the classical $\widetilde{O}(m r)$ bound is evident under the reasonable assumption $\varepsilon \ge \sqrt{n/m}$.
To achieve the speedup, the authors design many quantum subroutines (e.g., GraphSparsify, MultiSample), which are derived using known techniques such as quantum graph sparsification (Apers & de Wolf, 2020) and basic operations such as addition and multiplication. Applications outlined in Section 5 are straightforward to see, though details are not fully provided. No other unsupported claims stand out; all the evidence is theoretical analysis. Methods And Evaluation Criteria: The methods presented make sense for the challenges of hypergraph sparsification. The algorithm is built on a sampling-based framework, utilizing hyperedge leverage score overestimates derived by quantum graph sparsification (Apers & de Wolf, 2020) and some calculations. Other quantum techniques are employed, including preparing multiple state copies (Hamoudi, 2022) for sampling and sum estimation (Li et al., 2019) for reweighting. The evaluation centers on theoretical time complexity, the number of quantum gates, queries and QRAM operations. This is an increasingly adopted way to evaluate quantum algorithms (e.g., Apers & de Wolf, 2020). No benchmark datasets are used, as expected for a theoretical contribution. Theoretical Claims: I reviewed almost all of the theoretical claims in this paper, including the proofs in the appendices. I think the following needs to be addressed. 1. **Unitary Operations:** It would be helpful to include brief explanations of the unitary properties of some basic operations (e.g., $U_{\mathrm{div}}, U_{\mathrm{square}}, U_{\mathrm{star}}$). 2. **Initialization of EffectiveResistance in Proposition B.3:** This part is quite confusing. - The authors attribute their approach to Claim 7.9 from Apers & de Wolf (2020) (abbreviated as the AW paper). However, Claim 7.9 in the AW paper does not mention or use the $\tilde{O}(m/\epsilon^2)$ time complexity stated in Proposition B.3.
- Moreover, the AW paper actually provides a quantum method for obtaining $Z_G$ with a better time complexity, specifically $\widetilde{O}(\sqrt{mn}/\varepsilon + n/\varepsilon^4)$. Despite this, the authors opt for the slower classical algorithm with $\tilde{O}(m/\epsilon^2)$ time complexity instead of leveraging the more efficient quantum approach. - Thus, the authors should either justify their preference for the classical method or consider adopting the faster quantum alternative from the AW paper. 3. **Appendix C:** In line 940, you want to prove the claim that: $E_{H_{\mu}} [ | Q_H(x) - Q_{H_{\mu}}(x) | ] \leq \epsilon \cdot Q_H(x), \quad \forall x \in \mathbb{R}^n$. Later, you proved in Line 1047 that $E_{H_{\mu}} \max_{x: \|x\| \leq 1} | Q_H(x) - Q_{H_\mu}(x) | \leq \epsilon$. How do you use this result to complete the proof of the original claim? Experimental Designs Or Analyses: NA Supplementary Material: The supplementary material in the appendices offers detailed proofs of the results. I reviewed almost all of the appendices and included my comments or concerns in the *Theoretical Claims* part as a whole. Relation To Broader Scientific Literature: The paper builds on and extends several areas of prior literature: - Graph Sparsification: It utilizes the result of quantum graph sparsification (Apers & de Wolf, 2020) and generalizes the graph case to hypergraphs, addressing future research directions outlined in Apers & de Wolf (2020). - Hypergraph Sparsification: By incorporating quantum speedups, it advances classical algorithms (Lee, 2023; Jambulapati et al., 2023), achieving sublinear time complexity while preserving near-linear size. - Quantum Algorithms: It leverages and extends quantum tools like state preparation (Hamoudi, 2022) and sum estimation (Li et al., 2019) to achieve the speedup. Essential References Not Discussed: All essential references appear to be appropriately discussed.
Other Strengths And Weaknesses: Strengths: - The paper presents the first quantum algorithm for the fundamental problem of hypergraph sparsification. Weakness: - The techniques employed in this paper are largely adapted from previously established methods (e.g., Lee, 2023; Jambulapati et al., 2023; Apers & de Wolf, 2020), and the technical contribution appears to be somewhat incremental. - About the write-up: The main content of the submission does not provide much detail on the formal techniques, instead devoting too much space to preliminaries. I suggest that the authors expand the later sections with more substantive information and include some proofs to help readers better understand and verify the claims. Other Comments Or Suggestions: Some typos and minor comments are listed below: 1. In Section 2, line 190 (left), notation $D$ is redundant; line 210 (right), 'an weighted' -> 'a weighted'; 2. In Section 3, line 308 (left), 'a underlying' -> 'an underlying'; line 329 (left), 'a underlying' -> 'the underlying'; line 304 (right), $c_{e,f}$; 3. In Section 4, line 347 (right), 'denote by' -> 'denoted by'; line 348 (right), $|\tilde{E}| = O(n \log n \log r / \varepsilon^2)$; 4. In Section 5, line 390 (left), 'directly corollary' -> 'direct corollary'; line 410 (left), 'sparsity the' -> 'sparsify the'; 5. In Section 6, line 392 (right), $O$ -> $\widetilde{O}$; line 429 (right), 'whether we' is redundant; line 437 (right), 'none which' -> 'none of which'; 6. In Appendix, line 617, lack an 'is'; line 793, 'corresponds' -> 'correspond'; line 816, lack an 'is'; line 887, $\xi_1, \ldots, \xi_M$ -> $\xi_1, \ldots, \xi_m$; 7. In Proposition B.5, the time seems to be $\widetilde{O}(r/\varepsilon^2)$, since each query to $\mathcal{R}$ requires $\widetilde{O}(1/\varepsilon^2)$ time from Proposition B.3; 8. In Proposition B.6, the first step seems to use $U_{\mathrm{star}}$ instead of $U_{\mathrm{clique}}$, as in Proposition B.5; 9. 
In line 1047, I suggest additional justification for the first inequality. Questions For Authors: - I have provided some comments in the *Theoretical Claims* section above. It would be great if the authors could address them. - Furthermore, could you briefly explain why the time complexity has a linear dependency on $r$? What are the main barriers to improving this dependency in your current approach? Code Of Conduct: Affirmed. Overall Recommendation: 3
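The sampling-based framework the reviews describe (sample edges with probability proportional to leverage-score overestimates, then reweight by the inverse sampling probability) can be sketched classically for ordinary graphs. This is a dense-linear-algebra toy, not the paper's quantum algorithm: effective resistances come from a Laplacian pseudoinverse, the input is a complete graph with unit weights, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def laplacian(n, edges, w):
    L = np.zeros((n, n))
    for (u, v), we in zip(edges, w):
        L[u, u] += we; L[v, v] += we
        L[u, v] -= we; L[v, u] -= we
    return L

def effective_resistances(n, edges, w):
    # Dense pseudoinverse: fine for a toy, not the scalable route.
    Lp = np.linalg.pinv(laplacian(n, edges, w))
    return np.array([Lp[u, u] + Lp[v, v] - 2 * Lp[u, v] for u, v in edges])

def sparsify(n, edges, w, k):
    # Sample k edges i.i.d. proportional to leverage scores w_e * R_e,
    # reweighting each sample by 1 / (k * p_e) so quadratic forms are
    # preserved in expectation.
    lev = np.asarray(w) * effective_resistances(n, edges, w)
    p = lev / lev.sum()
    idx = rng.choice(len(edges), size=k, p=p)
    new_w = {}
    for i in idx:
        new_w[i] = new_w.get(i, 0.0) + w[i] / (k * p[i])
    return [(edges[i], wi) for i, wi in new_w.items()]

# Toy input: complete graph K6 with unit weights.
n = 6
edges = [(u, v) for u in range(n) for v in range(u + 1, n)]
w = [1.0] * len(edges)
H = sparsify(n, edges, w, k=40)
assert all(wi > 0 for _, wi in H)
```

On $K_6$ every edge has effective resistance $2/n = 1/3$, so the sampling distribution is uniform; on irregular graphs the leverage scores concentrate samples on spectrally important edges, which is the intuition behind the overestimates the paper computes.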
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer’s thorough evaluation and constructive feedback. Below, we address the key concerns raised: 1. Unitary Operations: The unitary operators $U_{\mathsf{mult}},U_{\mathsf{sum}},U_{\mathsf{div}},U_{\mathsf{square}},U_{\mathsf{minus}}$ are quantum gate implementations of the basic arithmetic operations. Specifically, they satisfy: $U_{\mathsf{mult}}\ket{a}\ket{b}\ket{0} = \ket{a}\ket{b}\ket{ab}$, $U_{\mathsf{sum}}\ket{a}\ket{b}\ket{0} = \ket{a}\ket{b}\ket{a+b}$, $U_{\mathsf{div}}\ket{a}\ket{b}\ket{0} = \ket{a}\ket{b}\ket{a/b}$, $U_{\mathsf{square}}\ket{a}\ket{0} = \ket{a}\ket{a^2}$, $U_{\mathsf{minus}}\ket{a}\ket{b}\ket{0} = \ket{a}\ket{b}\ket{a-b}$. And $U_{\textup{star}}$ satisfies $U_{\textup{star}}(\otimes_{i \in e}\ket{i})\ket{0}=\otimes_{g \in S_e}\ket{g}\ket{0}$, as defined in line 746. These unitary transformations offer quantum counterparts to classical operations while preserving computational efficiency. 2. Initialization of EffectiveResistance (Proposition B.3): We sincerely appreciate your careful reading and would like to clarify the technical details here. In Claim 7.9 of the AW paper, the stated runtime $\widetilde{O}\left(\sqrt{m n} / \varepsilon+n / \varepsilon^4\right)$ comprises two components: (a) The first term $\widetilde{O}(\sqrt{m n} / \varepsilon)$: This corresponds to the time for quantum sparsification, which produces a sparse graph with $m^{\prime}=\widetilde{O}\left(n / \varepsilon^2\right)$ edges. (b) The second term $\widetilde{O}\left(n / \varepsilon^4\right)$: This arises from applying the classical algorithm (Theorem B.2 and Theorem B.3 in our paper) to compute effective resistances on the sparsifier. Specifically, this step incurs a cost of $\widetilde{O}\left(m^{\prime} / \varepsilon^2\right)=\widetilde{O}\left(n / \varepsilon^4\right)$.
Our processing is consistent with that in the AW paper, but for the sake of algorithm clarity, we carefully write out the implementation of each step. 3. Appendix C: We acknowledge the need for greater clarity and will revise the text to explicitly outline the equivalence between the original claim (line 940) $$ E_{H_\mu}\left[\left|Q_H(x)-Q_{H_\mu}(x)\right|\right] \leq \varepsilon \cdot Q_H(x), \quad \forall x \in \mathbb{R}^n, $$ and the proved result (line 1043) $$ E_{H_\mu} \max_{x: Q_H(x) \leq 1} \left|Q_H(x)-Q_{H_\mu}(x)\right| \leq \varepsilon. $$ More specifically, for any $x \perp 1$, we have $Q_H(x)>0$. We then define $z=x / \sqrt{Q_H(x)}$. By degree-2 homogeneity of $Q_H$, this ensures $Q_H(z)=1$, placing $z$ in the set $T=\lbrace x: Q_H(x) \leq 1\rbrace$. The result in line 1043 implies that for all $z\in T$, $$ E_{H_\mu}\left[\left|Q_H(z)-Q_{H_\mu}(z)\right|\right] \leq \varepsilon. $$ Substituting $z=x / \sqrt{Q_H(x)}$ gives: $$ E_{H_\mu}\left[\left|\frac{Q_H(x)-Q_{H_\mu}(x)}{Q_H(x)}\right|\right] \leq \varepsilon. $$ Multiplying through by $Q_H(x)$ yields the original claim. 4. Technical Contributions: We agree that our algorithms build on previous results (e.g., Lee, 2023; Jambulapati et al., 2023; Apers and de Wolf, 2020). The non-trivial contribution lies in identifying, adapting, and synthesizing existing classical frameworks to the quantum setting---a task requiring meticulous integration of recent classical and quantum algorithmic tools. The classical literature on hypergraph sparsification encompasses numerous advanced approaches, and selecting the right framework for quantum acceleration demanded substantial domain-specific insight. 5. About the write-up: We appreciate the reviewer’s feedback on the balance between the introduction and formal techniques.
We will carefully revise the manuscript to streamline the preliminaries and move some important parts from the appendix to the main text, thereby enhancing clarity and facilitating the verification of our claims. 6. Typos and Grammar: We sincerely appreciate the reviewer’s careful reading and thank them for pointing out these typos. We will meticulously proofread the manuscript to correct grammatical errors, including the noted typos. 7. Dependency on rank: The linear dependency on $r$ arises because our quantum algorithm QHLSO converts each hyperedge into a star graph (with $O(r)$ edges) to construct the underlying Laplacian system. This step inherently scales linearly with $r$, as each hyperedge of size $r$ requires explicit interactions among its vertices. Notably, our algorithm's linear dependence on $r$ is already an improvement over the quadratic dependency $O(r^2)$; this improvement is achieved by utilizing a sparse underlying graph (see Definition 3.3) instead of a general underlying graph (see Definition 3.1). Further reducing this dependency is challenging: classical hypergraph sparsification methods face fundamental limits, and quantum representations inherently require enumerating all $r$ vertices in a hyperedge for unitary operations. Thus, both classical and quantum approaches encounter a barrier for hyperedge processing.
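To make the linear-in-$r$ cost concrete, the star-graph replacement described in point 7 can be sketched as follows (our own toy illustration, not the paper's QHLSO construction; the vertex numbering of the auxiliary centers is an assumption):

```python
def star_expansion(hyperedges, n):
    """Toy sketch (not the paper's construction): replace each hyperedge of
    size r by a star, i.e. a fresh auxiliary center vertex joined to the r
    member vertices. Each hyperedge thus contributes exactly r ordinary
    edges, which is the source of the linear dependence on the rank r."""
    edges = []
    center = n  # auxiliary centers are numbered after the n original vertices
    for e in hyperedges:
        edges.extend((center, v) for v in e)
        center += 1
    return edges
```

For a hypergraph with m hyperedges of rank at most r, the resulting underlying graph has at most m·r edges on n + m vertices, matching the O(r)-edges-per-hyperedge accounting above.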
Summary: Hypergraph sparsification is the process of reducing the number of hyperedges of a graph while preserving (as much as possible) the energy of the graph. The paper introduces an algorithm for hypergraph sparsification, addressing an open problem proposed in a previous paper by Apers and de Wolf. More specifically, the authors show that, given a hypergraph with n vertices, m hyperedges, rank r, and error parameter e, an e-sparsifier with Õ(n/e^2) hyperedges can be computed in time Õ(r·sqrt(mn)/e). When the rank r is constant, the proposed algorithm matches the quantum lower bound. Additionally, it provides a quantum speedup with respect to state-of-the-art classical algorithms, which run in time Õ(mr). To obtain the results, the authors obtain a faster quantum algorithm to compute the hyperedge leverage score overestimate, providing a quantum speedup over a classical algorithm proposed by Lee and Jambulapati (2023). They also use a technique introduced by Hamoudi to make copies of a quantum state specified by an oracle. Claims And Evidence: Yes, the proofs of the claims are provided in the appendix, although I did not check carefully all proofs. Methods And Evaluation Criteria: This is a purely theoretical paper. Theoretical Claims: I read carefully the first part of the paper, and skimmed through the appendix. I did not check carefully the correctness of the proofs in the appendix. Experimental Designs Or Analyses: Not applicable. The paper is purely theoretical. Supplementary Material: I read the appendix, but did not check carefully the correctness of all proofs. Relation To Broader Scientific Literature: The paper refers to relevant literature in an appropriate way. Most of the main cited papers are recent, and are actually used as a basis for the development of the new quantum algorithms.
Notably: the paper by Apers and de Wolf, where the hypergraph sparsification problem is posed; the paper by Lee and Jambulapati, which is used as a basis for the new quantum algorithm for the hyperedge leverage score overestimate; and the paper by Hamoudi, which describes a technique to prepare multiple copies of a quantum state specified by an oracle. Essential References Not Discussed: Not that I’m aware of. Other Strengths And Weaknesses: The main strength of the paper is that it addresses an open problem proposed in a previous paper (Apers and de Wolf). Another interesting aspect is that the proposed solution comes from adapting and extending results obtained quite recently in neighboring areas (e.g., the papers by Lee and Jambulapati, and the paper by Hamoudi, 2022). Other Comments Or Suggestions: I caught some very minor grammar typos when reading the paper, so I would recommend that the authors make a revision in this respect. (E.g., “we adopts” in the last paragraph of page 2.) Questions For Authors: No question Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your helpful comments and suggestions. We will carefully revise our paper to correct all typos; in particular, we will change the word "adopts" in the last paragraph of page 2 to "adopt".
Consistent Multigroup Low-rank Approximation
Reject
Summary: The paper introduces the concept of "consistent multigroup low-rank approximation," which extends the principles of singular value decomposition (SVD) to handle data partitioned into multiple groups. The goal is to find a set of basis vectors that minimize the maximum reconstruction error across all groups while maintaining the consistency property of SVD. Claims And Evidence: - The Claims in this paper are mostly well formulated and supported by an analytical framework - The empirical evaluations on various datasets demonstrate the practical applicability and effectiveness of the proposed methods Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the problem at hand. The use of real-world datasets and synthetic data ensures a comprehensive evaluation of the algorithms' performance. Theoretical Claims: I have not checked the proofs in detail, but the theoretical claims look sound. Comment: The convexity analysis indicates that the primal problem is non-convex, which might raise concerns about convergence guarantees for more than two groups. Can the authors provide comment and a numerical test to mitigate this concern? Experimental Designs Or Analyses: The experimental designs seem sound and valid. The paper evaluates the algorithms on both real-world and synthetic datasets, providing a diverse set of scenarios to test the algorithms' performance. The experiments cover the case of more than 2 groups, where some theoretical guarantees (optimality) are not valid. The results look reasonable. Question: How does the method perform for a high number of groups, i.e. more than 20 groups? Supplementary Material: I looked at the additional numerical experiments. they look reasonable. Relation To Broader Scientific Literature: The paper builds upon existing techniques in low-rank approximation, fair PCA, and algorithmic fairness. 
It references foundational works and recent advancements in the field, positioning its contributions within the broader literature. I am not an expert in tensor completion, thus I defer a more detailed judgement to fellow reviewers. Essential References Not Discussed: see above. Other Strengths And Weaknesses: Pro: The paper is well-structured, with clear explanations of the algorithms and their underlying principles. Contra: A more detailed discussion about the limitations for a high number of groups would be beneficial for the paper. Non-convexity of the optimization problem raises concerns of convergence issues in practice. Other Comments Or Suggestions: - Please fix the formatting of Figure 3 in the appendix, it seems to be out of bounds. Questions For Authors: see above. Additional questions: - How does the performance of the method degrade as the number of groups increases? - Could the framework be extended to nonlinear low-rank approximation methods (e.g., kernel PCA)? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your comprehensive review and for sharing new interesting ideas. > "The convexity analysis indicates that the primal problem is non-convex, which might raise concerns about convergence guarantees for more than two groups. Can the authors provide comment and a numerical test to mitigate this concern? A more detailed discussion about the limitations for a high number of groups would be beneficial for the paper. Non-convexity of the optimization problem raises concerns of convergence issues in practice." Thank you for raising this excellent point. We will add to the current manuscript a discussion on the regime of a high number of groups in the data, focusing on the convergence of our algorithms and offering numerical tests. **A note on the convergence of our method.** We clarify that, even though the primal problem is indeed non-convex, the dual problem is always convex. In our work, we introduce algorithms that either solve the dual problem or its dual (the bidual). Thus, from an optimization perspective, our algorithms solve convex problems so that they are always expected to converge, which is why we have not included an extensive discussion of convergence issues in the manuscript. **The effect of the number of groups on convergence.** As we discuss in the manuscript, for more than two groups strong duality does not hold in general, meaning that our algorithms are not guaranteed to retrieve the optimal solution of the primal problem (but only the optimal solution of the dual problem). However, as we show in the experimental evaluation, the duality gap is consistently narrow. In practice, our algorithms quickly converge, regardless of the number of groups. If the reviewer is interested, we have added a simple numerical experiment to our repository showing that our method quickly converges also when the number of groups becomes large (https://anonymous.4open.science/r/multigroupSVs-F716/notebooks/ManyGroupsExample.ipynb). 
> "The experiments cover the case of more than 2 groups, where some theoretical guarantees (optimality) are not valid. The results look reasonable. Question: How does the method perform for a high number of groups, i.e. more than 20 groups?" This is certainly an interesting question. **A note on the choice of datasets in our experiments.** Consistent with related work, we primarily rely on benchmark datasets that can be naturally partitioned in two, three or four groups, which is a setting particularly widespread in the real world. At the same time, for the specific purposes of the experiments, considering too large datasets (with many large groups), provided that they can be found, would preclude the comparison with the main baseline (FAIR-PCA), which hinges upon an expensive SDP solver and thus quickly incurs scalability issues as the data size grows. **The setting with a large number of groups.** We note that the Frank-Wolfe algorithm and the SDP we introduce can handle any number of groups, and their performance is not intrinsically connected to the number of groups. This is also suggested by the experimental results on the *COMPAS* and *COMMUNITIES* dataset (although the number of groups increases only up to three and four). Nevertheless, as thoughtfully observed by the reviewer, it is interesting to study the behavior of our method as the number of groups becomes very large. Should the reviewer already wish to explore this question further, we have added to our repository a simple numerical experiment to investigate the duality gap (and hence the performance) of our method as the number of groups increases from 3 to 25 (https://anonymous.4open.science/r/multigroupSVs-F716/notebooks/ManyGroupsExample.ipynb). > "Please fix the formatting of Figure 3 in the appendix, it seems to be out of bounds." Thank you for noticing. We will fix the figure to constrain it within the page bounds. 
> "Could the framework be extended to nonlinear low-rank approximation methods (e.g., kernel PCA)?" We thank the reviewer for this stimulating suggestion. Developing a *nonlinear* method for consistent multigroup low-rank approximation surely represents an exciting avenue for future research. We believe that both our problem formulation and method are amenable to extension toward nonlinear settings. Specifically, since our method is inspired from the SVD, future work could introduce a method inspired from Kernel SVD (or Kernel PCA). To accomplish this, we would compute a kernel matrix and then extract and study the multigroup singular vectors associated with the kernel matrix.
Summary: This manuscript addresses the problem of consistent low-rank approximation for multigroup data. It aims to find a sequence of k basis vectors that treats all groups equally by minimizing the maximum error among them and satisfies the consistency property. The paper proposes an iterative algorithm that adds the vector with the best rank-1 projection according to the min-max criterion and projects the data onto its orthogonal complement. It uses primal-dual approaches or semidefinite programming to find the best rank-1 projection. The theoretical analysis shows that for two-group data, the rank-1 problem can be solved optimally and the algorithm has polynomial-time complexity. Experimental results on real-world datasets in the FAIR-PCA task demonstrate that the proposed method outperforms existing methods in terms of fairness and efficiency, with a more balanced low-dimensional data representation and shorter running times. Claims And Evidence: The claims in the paper are generally supported by clear and convincing evidence. Methods And Evaluation Criteria: The method proposed by the authors may bring new inspiration for solving large-scale problems. Theoretical Claims: - Experimental Designs Or Analyses: The experimental part is relatively sufficient Supplementary Material: no supplementary material Relation To Broader Scientific Literature: - Essential References Not Discussed: The author should give a more detailed description of the relevant background. Other Strengths And Weaknesses: Although the paper is innovative as a whole, there is insufficient emphasis on the comparison and differentiation between the paper and the existing research when explaining the innovation points. The essential differences in principle, algorithm, and performance from existing multi-group data dimensionality reduction methods should be pointed out more clearly.
Other Comments Or Suggestions: It is suggested that the author improve the content of related work and introduce the research content in detail to facilitate readers' understanding. Questions For Authors: I have a question about whether the method studied by the author can be effectively transferred to other fields such as clustering and matrix completion. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank you for your review and valuable comments, which will also be very useful in future work. > "The method proposed by the author may bring new inspiration in solving large-scale problems?" **Our method guarantees high scalability.** We appreciate the suggestion of the reviewer. An important advantage of our method over existing approaches for closely related problems is the scalability it ensures. By imposing the consistency property on the output low-rank representation, we effectively decompose a larger problem into multiple smaller problems that can be efficiently solved. The Frank-Wolfe algorithm we introduce as well as the algorithm we design tailored to the two-group case (but not the SDP solver) scale to large problem instances. **Possibilities for future work addressing large-scale problems.** For future work, it would be interesting to study even more scalable (stochastic) hardware-accelerated gradient-based solutions to extract multigroup singular vectors. It would also be interesting to study different applications of our method beyond fairness, since our method can provide high-quality data compression whenever a partitioning of the data in multiple groups is relevant. > "Although the paper is innovative as a whole, there is insufficient emphasis on the comparison and differentiation between the paper and the existing research when explaining the innovation points. The essential differences in principle, algorithm and performance from existing multi-group data dimensionality reduction methods should be pointed out more clearly. It is suggested that the author improve the content of related work and introduce the research content in detail to facilitate readers' understanding." As pointed out when addressing the comments of Reviewer 2Q4n, we fully share this concern. 
We will give more details on related work, with particular emphasis on the work of Samadi et al. (2018), which represents the closest existing work to ours. We will highlight more clearly the fundamental aspects that set our problem and method apart from related work, and we will separate more clearly our innovation points from the state of the art in multigroup-data dimensionality reduction. > "I have a question about whether the method studied by the author can be effectively transferred to other fields such as clustering and matrix completion" We again thank the reviewer for their useful suggestions. **Clustering.** Unlike clustering methods, our method considers a partitioning of the data into groups that is given as input. However, our method, being a low-rank approximation method, could be used to address clustering tasks. For instance, one can take advantage of our method in preprocessing, to extract informative features that can be used by existing clustering algorithms. Similarly, clustering algorithms can be used to address multigroup low-rank approximation. In particular, we can leverage clustering algorithms prior to our method to find a meaningful partitioning of the data into clusters (i.e., a clustering). Then, we can give the clustering as input to our method and obtain a balanced low-rank approximation that takes the clustering into consideration. For future work, it would be valuable to design an iterative procedure that combines clustering and multigroup low-rank approximation, by iteratively optimizing the clustering structure and the corresponding multigroup low-rank approximation. **Matrix completion.** Our method can also be regarded as an extension of the singular value decomposition (SVD) that incorporates a partitioning of the data into multiple groups. The SVD has been successfully leveraged to tackle matrix-completion tasks.
Given a partially observed matrix, our method could be extended to address matrix-completion tasks by accounting for the missing values. In this direction, a straightforward approach would be to impute the missing values (e.g., by the mean), and then compute the multigroup singular vectors and the resulting multigroup low-rank approximation as usual. In future work, we will study more refined approaches for multigroup matrix-completion tasks.
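The straightforward approach mentioned above can be sketched concretely (our own illustrative sketch: column-mean imputation followed by a truncated SVD; in the actual method, the multigroup singular vectors would replace the plain SVD step):

```python
import numpy as np

def impute_then_lowrank(X, mask, rank):
    """Sketch of the baseline described in the rebuttal: fill missing entries
    (mask == False) with the column mean of the observed entries, then return
    a rank-`rank` reconstruction via truncated SVD."""
    X = X.astype(float).copy()
    counts = np.maximum(mask.sum(axis=0), 1)          # observed entries per column
    col_means = np.where(mask, X, 0.0).sum(axis=0) / counts
    rows, cols = np.nonzero(~mask)
    X[rows, cols] = col_means[cols]                   # mean imputation
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]       # truncated reconstruction
```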
Summary: The paper studies the problem of estimating the multigroup singular vectors for the multigroup FAIR PCA method. The Frank-Wolfe method and the SDP relaxation method are both employed to solve the min-max-type nonconvex objective function. ## update after rebuttal (Sorry for the late update.) The paper studies the problem of estimating the multigroup singular vectors for the multigroup FAIR PCA method, which shares a similar formulation and objective to Samadi et al. (2018). The strategy of iteratively updating the rank-1 approximation, and the proposed Frank-Wolfe method and SDP relaxation method described in the main text, are different from previous works. However, only brief discussions rather than theoretical justifications are provided for these algorithms. In theory, under the same two-group setting as Samadi et al. (2018), this work studies the theory of a further, different algorithm and shows its optimality. But the details of this third algorithm are only briefly mentioned in Lemma E.1 in the appendix. In general, it could be a novel and interesting work, but the quality of the overall organization makes it hard to recognize the contribution of the whole work, which needs detailed refinement. For the above reasons, after the rebuttal, I decide to give a final score of 2. Claims And Evidence: On lines 18–19, the authors assert that prior methods, such as those in Samadi et al. (2018), do not guarantee the consistency property of the SVD. This is confusing because the min-max loss considered in this paper is identical to that in Samadi et al. (2018), and the same SDP relaxation is applied. How can the authors claim that their method is consistent while others are not? Methods And Evaluation Criteria: The min-max PCA loss is commonly used for the multi-source aggregation problem and is suitable for the Fair-PCA problem. The numeric evaluations make sense.
But the authors claim that they show their SVD estimations are consistent through the numeric study. Metrics like the marginal, incremental, and reconstruction loss do not directly imply consistency. One may also be interested in the estimation error of the estimated singular vectors. Theoretical Claims: The theoretical results in Theorem 7.1, Lemma 7.2, and Lemma 7.3 are incomplete. Which algorithm achieves the polynomial time? What does the tightness property guarantee? etc. Some detailed discussions and remarks are needed. Experimental Designs Or Analyses: In the numeric study, many datasets are considered, which is solid. Supplementary Material: I have reviewed the proofs in the supplementary material, though not line-by-line. The theoretical justifications appear to be correct and make sense. Relation To Broader Scientific Literature: The Fair-PCA problem, the solution (the min-max loss approach), the convergence, and the polynomial-time property have all been studied in Samadi et al. (2018). This paper does not provide novel findings. Essential References Not Discussed: The paper provides a thorough review of related works. Other Strengths And Weaknesses: The main strengths and weaknesses have been discussed in the preceding sections. The primary issue lies in the novelty of both the task and the method examined in this work. The paper employs the exact same formulation and loss function for the Fair PCA problem as those found in the existing literature. Other Comments Or Suggestions: I have no further comments. Questions For Authors: I have no further questions. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your thorough review. We understand your concerns, and we believe that your feedback gives excellent input for improving the manuscript. We begin by carefully addressing the reviewer’s main concern, i.e., the *novelty of our work in the context of previous work*, notably that of Samadi et al. (2018). We acknowledge that the distinction between our work and that of Samadi et al. has not been sufficiently clarified. **Different problem.** Our problem and the problem studied by Samadi et al. are fundamentally different. Intuitively, given an integer $d$, Samadi et al. seek a balanced rank-$d$ approximation of the data. The obtained solution is not related to the solutions of lower rank by any straightforward transformation. On the other hand, our problem asks for a rank-$d$ approximation where balance is also achieved in all dimensions lower than $d$. We call this the consistency property, inspired by the SVD: given a rank-$d$ SVD, for all $d' < d$ the rank-$d'$ approximation is also optimal. **Different loss function.** We note that our min-max loss is only equal to the Samadi et al. *marginal* loss in the rank-$1$ sub-problem. Our final solution of rank $d$ (by Algorithm 1) optimizes the *incremental* loss, not the marginal loss. There is a crucial difference: the incremental loss for a rank-$d$ solution is the sum of $d$ rank-$1$ min-max losses, whereas the marginal loss directly applies the min-max criterion to a rank-$d$ solution and cannot be decomposed into separate rank-$1$ losses. **Different methodology.** Our approach for obtaining a rank-$d$ solution is based on an orthonormalization procedure similar to that of the SVD (Algorithm 1), which iteratively solves a rank-$1$ problem instead of directly solving a rank-$d$ SDP. We highlight important differences: * The rank-$1$ sub-problem is solved by a novel primal-dual framework, unlike Samadi et al.
For this problem, we use a variety of methods such as Frank-Wolfe, root-finders, or SDP. * Regarding the SDP: our SDP is tailored to the rank-$1$ dual-problem only, and is based on the bidual of the rank-$1$ problem, with different constraints. * Our approach always returns a rank-$1$ solution, unlike Samadi et al., which requires one extra dimension. Finally, we visually demonstrate that the method of Samadi et al., unlike our method, does not guarantee the consistency property through a simple example, which can be found in our repository (https://anonymous.4open.science/r/multigroupSVs-F716/notebooks/ConsistencyExample.ipynb). **Relevance of consistency in view of previous work.** Consistency yields several practical advantages. Just like SVD or PCA, the user can run our method once, and then retain the desirable number of basis vectors. Instead, the method of Samadi et al. needs to be run separately for all possible values of $d$, possibly obtaining drastically different approximations, which can be cumbersome in many applications (e.g., in extracting and selecting features for a machine-learning model). The consistency property furthermore means that the basis vectors output by our method are meaningful, and they can be interpreted as the principal components, that is, the orthogonal directions of maximum variance when considering all groups. In addition, as remarked in Section 1, the consistency property breaks down a large problem into several smaller problems, offering significant benefits in terms of computational efficiency and scalability. **Empirical evaluation of consistency.** As rightly noted by the reviewer, the metrics do not imply the consistency on their own. The consistency is achieved by the sequential orthonormalization process described in Algorithm 1. In the experimental evaluation, to ensure an *unbiased* comparison, we monitor the different loss functions that are optimized by our method and the baselines. 
The results of the experiments in Figures 2, 3, and 4 empirically demonstrate the consistency property: the errors incurred by the method of Samadi et al. can deviate drastically across groups for $d' < d$, which is not the case for our method. **More discussion in Section 7.** We agree with the reviewer; more details will be added to Section 7. * Theorem 7.1 states that Problem 1 is computationally tractable, and can be solved efficiently by any of the algorithms we propose, as will be clarified. * As for Lemma 7.2, a tight semidefinite-program relaxation is one where the optimal solution of the relaxed problem corresponds to the optimal solution of the original problem. We will expand the statement of the lemma. * Finally, in Lemma 7.3, we will explain the exact meaning of equal error in this context. **Conclusion.** Our work bridges the gap between the existing literature in multigroup (or fair) low-rank approximation and the standard SVD and PCA, and, as shown by our thorough experimental evaluation, provides an unprecedented trade-off between result quality and efficiency. --- Rebuttal Comment 1.1: Comment: The authors' responses during the rebuttal period address my concerns. I agree that incrementally finding the leading vector like PCA and SVD is a reasonable proposal. However, I will maintain my point that though Samadi et al. solved rank-d approximation without the orthogonal constraints, it is natural to perform the rank-1 approximation incrementally like PCA and SVD since their results covered the rank-1 case. In general, I decide to raise the score to 2. --- Reply to Comment 1.1.1: Comment: Thank you very much for the constructive discussion. We are pleased that we were able to address your concern. While it is indeed possible to solve the rank-$1$ problem using the approach of Samadi et al., their method is not directly applicable to our setting. This key distinction motivated the development of the novel approach we introduce.
To this end, we highlight several differences between our method and that of Samadi et al., which also illustrate why their approach is not well-suited to our context. * __Guaranteed rank-$1$ solutions__: Our method guarantees a rank-$1$ solution by design, whereas the approach of Samadi et al. requires an additional embedding dimension and can return solutions of rank $2$. In our setting, obtaining a true rank-$1$ solution is essential. * __Novel analysis__: Our method is built on a new analysis of the rank-$1$ problem. Unlike the rank-$d$ case, which requires optimization over the space of rank-$d$ positive semidefinite matrices, the rank-$1$ problem admits a tractable dual formulation. This enables us to recast the problem as a parametric eigenvalue problem, offering new insights into the min-max objective. Notably, our analysis reveals a connection between the min-max criterion and the leading eigenvector of the optimal convex combination of the group-specific matrices. * __Efficiency and scalability__: This analysis allows us to design efficient and scalable algorithms—such as Frank-Wolfe and root-finding methods—that are particularly well-suited for the rank-$1$ case. In contrast, the SDP-based approach of Samadi et al. is significantly more computationally intensive. * __Support for multiple groups__: Our method seamlessly accommodates more than two groups, while the approach of Samadi et al. was primarily developed for the two-group case. Finally, even if one were to apply the Samadi et al. SDP directly to the rank-$1$ problem, extending it to obtain a valid rank-$d$ solution remains nontrivial. Despite the simplicity of our main algorithm (Algorithm 1), it incorporates a novel and non-obvious idea: the use of sequential orthonormalization to construct rank-$d$ solutions with desirable properties, akin to those of PCA or SVD.
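The sequential scheme discussed in this thread can be sketched as follows. This is our own illustrative variant, not the authors' exact Algorithm 1: it casts each rank-1 subproblem as maximizing the minimum variance captured across groups (an assumption), solved heuristically via the leading eigenvector of a convex combination of group covariances with a simplified Frank-Wolfe weight update, followed by deflation onto the orthogonal complement:

```python
import numpy as np

def multigroup_basis(groups, d, fw_iters=200):
    """Illustrative sketch of sequential orthonormalization: extract d
    orthonormal directions one at a time. Each rank-1 subproblem is solved
    approximately as the leading eigenvector of a convex combination of the
    (deflated) group covariance matrices, with Frank-Wolfe-style updates
    shifting weight toward the currently worst-served group."""
    n = groups[0].shape[1]
    P = np.eye(n)                      # projector onto current complement
    basis = []
    for _ in range(d):
        covs = [P @ (G.T @ G) @ P for G in groups]
        k = len(covs)
        mu = np.full(k, 1.0 / k)       # convex weights over groups
        for t in range(1, fw_iters + 1):
            M = sum(w * C for w, C in zip(mu, covs))
            v = np.linalg.eigh(M)[1][:, -1]            # leading eigenvector
            worst = int(np.argmin([v @ C @ v for C in covs]))
            step = 2.0 / (t + 2)                       # standard FW step size
            e = np.zeros(k)
            e[worst] = 1.0
            mu = (1 - step) * mu + step * e
        basis.append(v)
        P = P - np.outer(v, v)         # deflate: restrict to orthogonal complement
    return np.column_stack(basis)
```

By construction the returned columns are orthonormal, which is the consistency-enabling property the rebuttal emphasizes; the Frank-Wolfe update rule here is a simplification of the primal-dual machinery described above.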
Alberta Wells Dataset: Pinpointing Oil and Gas Wells from Satellite Imagery
Accept (poster)
Summary: The paper presents Alberta Wells, a large-scale benchmark dataset for pinpointing oil and gas wells, comprising over 210,000 wells across three classes (abandoned, suspended, and active), and framing well identification as an object detection and binary segmentation challenge. To create a well-distributed dataset, the paper introduces a clustering-based splitting algorithm, which maintains an equal distribution of well and non-well images and makes the evaluation and test splits more diverse. In experiments, the paper selects well-known baseline models for binary segmentation and object detection and evaluates their performance on the Alberta Wells Dataset, establishing a new benchmark and validating the value of NIR imagery and multiple well types. Claims And Evidence: The authors present a comprehensive procedure of data collection, dataset splitting, and label creation. Besides, the detailed description of the experiments and the quantitative results for both segmentation and detection tasks make the claims evidential. Methods And Evaluation Criteria: The authors select a series of well-known baseline models for both segmentation and detection tasks, and they provide a thorough evaluation using standard metrics such as IoU, mAP, precision, recall, and F1-score. The dataset splitting algorithm, which is explained in detail in the form of pseudo-code, ensures a diverse and representative distribution of wells across different geographical regions. The value provided by the inclusion of NIR imagery and two new types of wells is demonstrated through comparative experiments. Theoretical Claims: The paper does not make too many theoretical claims but rather focuses on the practical application of deep learning models to a real-world problem and a new dataset. The theoretical foundations of the models used are well established in the literature, and the paper does not attempt to extend these theories.
Instead, it focuses on the evaluation of these models on a novel dataset. Experimental Designs Or Analyses: This paper tests a series of well-known models for semantic segmentation and object detection on the Alberta Wells Dataset and analyzes their performance differences in terms of architectural features to set a benchmark. The evaluation of models is comprehensive with respect to IoU, precision, recall, and F1-score. The authors also consider the value provided by the inclusion of NIR imagery and all three well types. However, the paper does not provide a detailed discussion of the potential impact of label noise on the results. Supplementary Material: The supplementary material includes additional experiments and visualizations, which further introduce the dataset and illustrate the diverse distribution of wells. The analysis of the impact of NIR imagery is also valuable. Relation To Broader Scientific Literature: The paper sits well within the broader literature on remote sensing and machine learning for environmental monitoring. The authors reference previous work on oil and gas infrastructure detection and highlight the novelty of their dataset in terms of its scale and focus on abandoned wells, contributing to the work on using remote sensing and machine learning for climate change mitigation. Essential References Not Discussed: The references in this article are adequate and specific. The paper fully describes the shortcomings of existing datasets for oil and gas infrastructure detection and introduces the role that existing models can play on remote sensing imagery. Other Strengths And Weaknesses: Strengths: • The introduction of a large-scale, high-quality dataset for an impactful problem related to climate change and greenhouse gas emissions. • Thorough evaluation of well-known models, with clear results and insights, to set a benchmark. 
• Inclusion of NIR imagery and comparison of models trained on different well types to validate the value of the dataset. Weaknesses: • Limited discussion of the potential impact of label noise on the results. • The dataset is limited to Alberta, which may limit its generalizability to other regions. Other Comments Or Suggestions: The paper is well written and provides a valuable contribution to the field. However, adding transfer learning experiments from this dataset to other regions would further validate its generalizability. Questions For Authors: 1. I wonder if it is possible to address the influence of false negatives in the dataset caused by missing well locations in the Alberta Energy Regulator's data. 2. In Table 7, why does U-Net show a remarkable drop in precision from 0.998 to 0.913 when trained on all three well types in the dataset? Code Of Conduct: Affirmed. Overall Recommendation: 3
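The segmentation metrics discussed in this review (IoU, precision, recall, F1) have standard definitions for binary masks. As background, a minimal sketch of how they are typically computed on {0, 1} masks (illustrative only; this is not the paper's evaluation code):

```python
import numpy as np

def binary_seg_metrics(pred: np.ndarray, target: np.ndarray, eps: float = 1e-9):
    """Compute IoU, precision, recall, and F1 for binary {0,1} masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    tp = np.logical_and(pred, target).sum()   # predicted well, truly well
    fp = np.logical_and(pred, ~target).sum()  # predicted well, actually background
    fn = np.logical_and(~pred, target).sum()  # missed well pixels
    iou = tp / (tp + fp + fn + eps)           # intersection over union
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    f1 = 2 * precision * recall / (precision + recall + eps)
    return iou, precision, recall, f1

# Toy example: a 4-pixel predicted blob partially overlapping a 4-pixel ground-truth blob
pred = np.zeros((8, 8), dtype=int); pred[2:4, 2:4] = 1
gt = np.zeros((8, 8), dtype=int);   gt[3:5, 2:4] = 1
iou, p, r, f1 = binary_seg_metrics(pred, gt)  # 2 of 6 distinct pixels overlap
```

With this toy overlap, tp = 2, fp = 2, fn = 2, giving IoU = 1/3 and precision = recall = F1 = 0.5, which illustrates why IoU is the stricter of the two headline metrics.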
Rebuttal 1: Rebuttal: Thank you very much for the thoughtful and constructive review. We appreciate your recognition of the dataset’s scale, quality, and importance for climate-relevant applications, as well as your positive assessment of our methodological rigor and experimental design. Below, we address your questions in further detail. Please let us know if there are other points you would like us to clarify. ## Generalization beyond Alberta Thank you for raising this point. While we do not claim that models trained on Alberta data will generalize zero-shot to other geographies, we believe Alberta provides a valuable and representative testbed for developing and benchmarking detection models. It is also an especially impactful region in its own right, given its position as one of the largest oil-producing regions globally. That said, we fully agree on the importance of broader applicability, and future iterations of the benchmark will include other regions to test transfer learning, as you suggest. ## Impact of AER Record Incompleteness / False Negatives Thank you for raising this important point. Like many regulatory sources, the Alberta Energy Regulator (AER) dataset is known to have occasional omissions—particularly for older, undocumented, or improperly decommissioned wells. While our dataset reflects the most complete and authoritative ground truth publicly available (from the AER ST37 database), we acknowledge the presence of potential false negatives stemming from unrecorded sites. That said, one of our core motivations is to support exactly this gap: enabling ML models trained on known wells to help identify undocumented or potentially missing ones. In that sense, false negatives in the label set may represent real-world deployment opportunities for models rather than just noise in the training data. 
We are currently exploring whether it is possible to examine by hand a subset of “false” well detections from our models to see whether these in fact represent omissions in the dataset. We can endeavor to have this ready for a camera-ready version. ## Precision Drop in U-Net (Table 7) Thank you for noticing this. The drop in precision from 0.998 to 0.913 when training on all well types (compared to only active wells) is due to a tradeoff between generality and specificity. Including all three types (active, suspended, abandoned) broadens the model’s learning target, introducing more subtle and ambiguous visual patterns—especially for older, overgrown, or decommissioned sites. This leads to slightly more false positives in some regions, but substantially improves recall, F1-score, and class-wise generalization, as demonstrated in our class-specific breakdown.
Summary: This paper introduces the Alberta Wells Dataset, the first large-scale benchmark dataset for detecting oil and gas wells from satellite imagery. The dataset contains over 213,000 wells (abandoned, suspended, and active) across Alberta, Canada, represented in high-resolution (3m/pixel) multi-spectral satellite imagery from Planet Labs. The authors frame well detection as both binary segmentation and object detection tasks, providing appropriate annotations for each approach. They evaluate several deep learning architectures as baselines, including U-Net, UperNet, FCOS, and SSD Lite. Their experiments demonstrate that including near-infrared spectral bands significantly improves detection performance compared to RGB-only data, and that training on all well types outperforms training on active wells alone. The paper introduces a novel dataset splitting algorithm that ensures geographical diversity across training, validation, and test sets. Quality control was performed with domain experts to refine the regulatory data from the Alberta Energy Regulator. The dataset addresses a significant environmental challenge - abandoned wells that leak methane into the atmosphere and toxic compounds into groundwater - by providing a resource for developing algorithms to detect wells that may not appear in official records. ## Update after rebuttal Thank you for your thoughtful rebuttal. I maintain my original recommendation of a weak accept. The Alberta Wells Dataset itself represents a valuable contribution to the field, particularly in addressing an important environmental challenge through the collection and annotation of high-resolution multi-spectral imagery. The dataset's creation and quality control represent significant work that will benefit the research community, which justifies acceptance despite the experimental limitations. 
The additional DETR results showing strong localization performance are encouraging, and I appreciate the authors' commitment to include failure case analysis in the camera-ready version. While I still believe that multi-class modeling and transfer learning to publicly available imagery would greatly strengthen the practical impact, these could be addressed in future work as the authors suggest. The core contribution of the dataset itself remains valuable and worthy of publication. Claims And Evidence: The key claims in the paper are adequately supported by evidence: The dataset's scale (213,000+ wells) and composition are well-documented with detailed statistics. The performance benefits of multi-spectral imagery over RGB-only data are clearly demonstrated in Table 6, showing improved IoU and F1 scores. Similarly, Table 7 provides convincing evidence that training on all well types outperforms training on active wells alone. However, several claims lack sufficient supporting evidence: - The claim about Alberta's geographical diversity being sufficient for generalization is not validated with any cross-region testing. - The comparative performance of different architectures is inconclusive since hyperparameter tuning is not covered. - The authors don't provide evidence about the detection performance specifically for abandoned wells. Methods And Evaluation Criteria: Appropriate aspects: - Using both segmentation and object detection approaches makes sense given the nature of the task. - Standard evaluation metrics (IoU, F1-score, mAP) are appropriate for these computer vision tasks - The dataset splitting algorithm ensuring geographical diversity is well-designed - Comparing RGB vs. RGB+NIR performance directly addresses the value of multi-spectral data Limitations: - The lack of hyperparameter optimization makes the architecture comparisons inconclusive. 
- Limited data augmentation techniques (only resizing and basic flipping) don't address the full range of appearance variations. - No evaluation of performance specifically on abandoned wells, despite their environmental significance. - Absence of error analysis or visualization of failure cases to understand model limitations. Theoretical Claims: The paper does not make formal theoretical claims requiring mathematical proofs. This is primarily an empirical contribution focused on dataset creation and baseline benchmarking rather than theoretical advancement. Experimental Designs Or Analyses: The experiments are generally sound. I list the following issues: - Multi-class analysis absence: Despite creating multi-class annotations (active/suspended/abandoned), the authors only performed binary detection, limiting insights about performance on environmentally critical abandoned wells specifically. - Limited data augmentation: Only basic resizing and flipping were used, neglecting more sophisticated augmentations. - No transfer learning evaluation: The experiments don't assess if models trained on high-resolution commercial imagery can generalize to publicly available imagery, limiting practical applicability. - Missing error analysis: No breakdown of performance by well type or geographical context is provided, offering limited understanding of where models succeed or fail. Supplementary Material: No Relation To Broader Scientific Literature: Oil and gas infrastructure detection datasets: - The paper advances prior work by creating a dataset orders of magnitude larger than existing ones. Previous datasets like NEPU (Wang et al., 2021) contained just 1,192 wells, and even larger collections like the Well Pad Dataset (Ramachandran et al., 2024) only included 12,490 wells, compared to this paper's 213,447 wells. 
- Benchmark datasets in remote sensing: The paper follows the model of other domain-specific remote sensing benchmarks like BigEarthNet for land use classification (Sumbul et al., 2019) and CropHarvest for agriculture (Tseng et al., 2021). - Methane emissions detection: This work complements recent efforts to identify methane sources, such as the METER-ML dataset (Zhu et al., 2022), by focusing on the infrastructure that may be emitting methane. - Computer vision for climate action: The work extends the growing body of research applying computer vision to climate challenges, particularly those focused on monitoring fossil fuel infrastructure. Essential References Not Discussed: None Other Strengths And Weaknesses: The biggest contribution of the paper is the acquisition & processing of (commercial) Planet Labs imagery and pairing it with quality-controlled labels. Details: - Access to premium data: Planet Labs' high-resolution (3m/pixel) multi-spectral imagery is commercial and not freely available like Landsat or Sentinel. - Weak supervision at scale: The authors essentially created a weak supervision pipeline by combining the Alberta Energy Regulator's records with the satellite imagery, then having domain experts refine and validate the dataset. - Processing and standardization: They've done the heavy lifting of processing commercial imagery (standardizing well site diameter annotations at 90 meters, managing cloud cover, ensuring temporal alignment, etc.) which saves other researchers substantial effort. The limitations of the paper: - Problem framing: avoiding multi-class modeling conflicts with the different purposes of such data. The value of abandoned well detection lies in immediate regulatory action, environmental emergency management, and public health protection, while the value of active well detection lies in transparency and public awareness, energy production monitoring, and regulatory compliance verification. 
- Modeling: notable misses are: class balancing, consistency regularization, experimenting with more data augmentation techniques (like crop-zoom, noising, color jitter, rotation, etc), and hyper-parameter tuning for each architecture. - Evaluation: lack of breakdown by well type & error analysis (failure modes). - Impact: Environmental agencies and researchers working across multiple regions can't realistically purchase Planet imagery for large-scale monitoring; creating models that only work on premium data means the approach can't be maintained long-term without significant ongoing funding. Without demonstrating performance on public imagery (Landsat, Sentinel, etc.), there's no clear path from their research to practical deployment. Other Comments Or Suggestions: Suggestions: 1. Use high-resolution Planet data to train teacher multi-class segmentation models. 2. Generate pseudo-labels on the full dataset. 3. Train student models that work with freely available data (e.g., Sentinel-1/2). 4. Evaluate the performance trade-offs and scale predictions. Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your extremely thorough and thoughtful review. We very much appreciate the helpful and constructive feedback. We respond to specific comments and questions below: ## Architecture comparisons and hyperparameter tuning: Thank you for raising this point. We agree that fully tuning hyperparameters in all experiments could be valuable, but believe our use of standard performant settings during training nonetheless provides meaningful benchmarks, especially given that we did not observe significant sensitivity in our models. We have obtained additional experimental results on the object detection task for the transformer-based model DETR, using a ResNet50 backbone. As shown below, DETR achieves strong localization performance, particularly at higher IoU thresholds and mAP. Despite these performance gains, our more lightweight models (e.g. SSD Lite and FCOS) may be more usable in practice, especially for remote sensing practitioners outside of ML. Table R1 : Object detection results on the test set. We report Intersection over Union (IoU) at thresholds 0.1, 0.3, and 0.5, as well as mean Average Precision (mAP) at IoU = 0.5 and IoU ∈ [0.5, 0.95]. | Architecture | Backbone | IoU_0.1 | IoU_0.3 | IoU_0.5 | mAP_50 | mAP_50:95 | |--------------|-------------|------------------|------------------|------------------|------------------|------------------| | FCOS | ResNet50 | 34.79 ± 0.99 | 48.51 ± 0.59 | 62.66 ± 0.43 | 9.67 ± 1.47 | 30.46 ± 3.11 | | DETR | ResNet50 | **41.78 ± 0.11** | **51.15 ± 0.14** | **63.17 ± 0.11** | **15.22 ± 0.28** | **38.45 ± 0.31** | ## Multiclass vs binary problem Thank you for bringing up this point. We agree that there are multiple different use cases (and relevant stakeholders) associated with detection of abandoned vs active wells. 
However, binary detection is nonetheless very useful, as (i) determination of well types can often be done by hand, as detection is the more time-intensive step, and (ii) where centralized records for wells exist, active wells are more likely to be documented than abandoned wells, meaning that in many cases newly discovered wells are likely to be abandoned. There is also considerable noise in well status labels, and the visual distinctions between the different classes are blurry (e.g. a recently abandoned well may be difficult to distinguish from a suspended or active well). ## Data augmentation We agree that further data augmentation experiments could be helpful. We will aim to evaluate several other simple techniques in the camera-ready version, in addition to resizing and flipping. ## Accessibility of Planet data Thank you for raising this excellent point. We agree that for certain users and application settings, Planet data will be less accessible than Landsat / Sentinel imagery. However, a large number of research institutions already have subscriptions to Planet imagery. Furthermore, for many use cases, a user may wish to target a relatively narrow area (e.g. abandoned wells within a regional jurisdiction that can be targeted for plugging). For some stakeholders, it also seems likely that applying algorithms to very large areas will be limited by computational constraints (given the relatively high resolution of the images), independent of data access. We very much appreciate your suggestion to use Planet data to train teacher models, then derive student models for lower resolution publicly available data. This sounds like a fruitful follow-up paper, which we would be happy to mention in the conclusion. ## Failure analysis: Thank you for raising this point. In the camera-ready version, we will include representative failure cases from visually complex scenarios. 
Most patches contain only 1–5 wells, and we observe performance degradation in rare, high-density regions. Abandoned and suspended wells—due to their subtle visual signatures and lack of surface infrastructure—can be especially challenging to detect. Enhancing model performance in these edge cases is a priority for future work.
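The detection results in this rebuttal report IoU at several thresholds; as background, a predicted box is typically counted as a hit when its overlap with a ground-truth box meets the threshold. A minimal sketch of the standard axis-aligned box-IoU computation (illustrative; not the benchmark's actual evaluation code — box sizes below are only loosely inspired by the 90m well-pad annotations):

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)  # intersection area (0 if disjoint)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def matches_at(pred, gt, threshold):
    """True if the prediction counts as a hit at the given IoU threshold."""
    return box_iou(pred, gt) >= threshold

# Two 90x90 boxes offset horizontally by a third of their width
gt = (0.0, 0.0, 90.0, 90.0)
pred = (30.0, 0.0, 120.0, 90.0)
iou = box_iou(pred, gt)  # overlap 60x90 over union 120x90 -> 0.5
```

This also illustrates why results tighten quickly as the threshold rises: the boxes above match at IoU 0.5 but fail at 0.6 despite a visually substantial overlap.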
Summary: This work proposes a large-scale multispectral remote sensing dataset for pinpointing oil and gas wells. The data comes from real scenes, and the authors carefully designed a reasonable data filtering method and data split scheme to ensure data quality. The work frames oil and gas well pinpointing as binary segmentation and object detection tasks, and trains a variety of mainstream deep learning models to establish a benchmark. This large-scale benchmark makes a significant contribution to the field of climate change mitigation. ## update after rebuttal Thanks for the rebuttal, which has addressed most of my concerns. I would like to maintain my original rating. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. The proposed benchmark is much larger than previous work and covers a wider range of geographical and well characteristics, which means the data distribution is richer. Theoretical Claims: The 2-step clustering algorithm for the dataset split. This clustering method is based on the consistency between the distribution of oil and gas wells and geographical features. I hope the authors can provide further research to prove the effectiveness of this clustering algorithm, or provide a basis for the method. Experimental Designs Or Analyses: Binary detection: Why didn't the authors add a Transformer-based model to the binary detection task for comparison, as they did in the segmentation experiment? Without it, the statement "performance in the object detection task is overall lower than for segmentation" in the analysis of the detection results on Page 7, Lines 361-363 is not completely rigorous and reliable. Supplementary Material: The additional experiments and the qualitative results of the sample distribution. 
Relation To Broader Scientific Literature: Previous works have used Google Earth and Sentinel-2 satellite images to construct remote sensing image datasets for oil and gas well detection. However, these datasets are small in scale, cover a small geographical area, and lack extensive data distribution. This work makes efforts to bridge the gap. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: - The paper is well organized and easy to read. The proposed benchmark is targeted at the needs of the field of climate change and makes important contributions to society. - The dataset fully considers the diversity of data distribution in real-world scenarios, and the proposed dataset has a rich distribution in both geographical features and well status. Weaknesses: - The labeling of the dataset is too coarse. The oil and gas wells are directly labeled with a fixed size, which lacks quality control and is not conducive to the training of deep learning models. - The images in the paper need to be optimized. For example, the layout of Figure 3 is messy and difficult to read. Other Comments Or Suggestions: If the license allows, I suggest further utilizing the rich well information in the metadata when constructing the data, and proposing multimodal oil and gas well pinpointing tasks, such as referring segmentation, which may further contribute to the community. Questions For Authors: None Ethical Review Concerns: None Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful and constructive review. We're grateful for your recognition of the dataset's potential impact on climate change mitigation, as well as for your helpful suggestions. Below are our responses to the points you raised: ## Transformer-based models for well detection Thank you for bringing this up. In our initial detection benchmarks, we focused on lightweight models (like SSD Lite and FCOS) because of their practical deployment relevance. However, we agree it is valuable to include transformer-based detection models for a more comprehensive comparison. We have performed experiments with the transformer-based model DETR, using a ResNet50 backbone. As shown below, DETR achieves strong localization performance, further validating the relevance of transformer-based models for this task. Table R1 : Object detection results on the test set. We report Intersection over Union (IoU) at thresholds 0.1, 0.3, and 0.5, as well as mean Average Precision (mAP) at IoU = 0.5 and IoU ∈ [0.5, 0.95]. | Architecture | Backbone | IoU_0.1 | IoU_0.3 | IoU_0.5 | mAP_50 | mAP_50:95 | |--------------|-------------|------------------|------------------|------------------|------------------|------------------| | FCOS | ResNet50 | 34.79 ± 0.99 | 48.51 ± 0.59 | 62.66 ± 0.43 | 9.67 ± 1.47 | 30.46 ± 3.11 | | DETR | ResNet50 | **41.78 ± 0.11** | **51.15 ± 0.14** | **63.17 ± 0.11** | **15.22 ± 0.28** | **38.45 ± 0.31** | We also appreciate the reviewer’s point regarding the statement on Page 7, Lines 361–363 comparing object detection and segmentation performance. While earlier detection models such as RetinaNet and Faster R-CNN underperformed compared to segmentation counterparts, our updated DETR results demonstrate that transformer-based detectors can help close this gap. We will revise the manuscript to present this comparison more carefully and to avoid overgeneralizing across architectures. 
## Clustering algorithm The goal of our clustering algorithm is to allow for a dataset split that avoids spatial autocorrelation across splits while still including many different locations in each split. By construction, our clusters demonstrate spatial coherence, and as shown in Figure 2 they are geographically distributed. (Note that since the wells are unevenly distributed across Alberta, a simple gridding approach would lead to significant imbalances between splits.) While we would be interested in analyzing our algorithm’s efficacy across similar geospatial datasets, we feel this would distract from the principal focus of the present paper and is not necessary to establish that the dataset splits are reasonable. ## Annotation granularity and quality control Thank you for raising this important point. We use standardized 90m circular masks and square bounding boxes for several reasons. Firstly, well pads are relatively standardized in size and shape; the 90m circular shape is typical, not merely a placeholder. Secondly, the very large size of our dataset would make the creation of human-annotated masks challenging, especially given the expert knowledge required for annotation in many cases. Finally, the imagery we use includes a near-infrared channel (and certain well pads are not visible in RGB alone), which makes human interpretation of the images harder. Despite the coarse annotations, our results show that modern models achieve strong performance. This aligns with prior work (e.g., Rolnick et al., 2017) showing that deep learning models are robust to modest label noise. We will be exploring partial human annotations in future versions of the dataset, in collaboration with domain experts. ## Figure readability Thank you for the suggestion. We will revise Figure 3 and other figures by reorganizing subpanels, standardizing annotations, and increasing resolution to better highlight qualitative comparisons between the ground truth and predictions. 
## Use of metadata for multimodal extensions Thank you for the suggestion. However, due to licensing, we can release imagery but not metadata. We’re actively exploring similar tasks using open data sources (e.g., Sentinel) to support future multimodal benchmarks.
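The cluster-then-assign split defended in this rebuttal (spatially coherent clusters, never divided across train/val/test, assigned so each split reaches roughly its target share) can be illustrated with a small sketch. This is a hypothetical simplification, not the paper's actual 2-step algorithm; the cluster sizes and split fractions below are made up for illustration:

```python
def assign_clusters_to_splits(cluster_sizes, fractions):
    """Greedily assign whole spatial clusters to splits so each split approaches
    its target fraction of samples. Clusters are never divided across splits,
    which avoids spatial autocorrelation leaking between train/val/test."""
    total = sum(cluster_sizes.values())
    targets = {s: f * total for s, f in fractions.items()}
    counts = {s: 0 for s in fractions}
    assignment = {}
    # Largest clusters first: the remainder is easier to balance with small ones.
    for cid in sorted(cluster_sizes, key=cluster_sizes.get, reverse=True):
        # Send the cluster to the split with the largest remaining deficit.
        split = max(targets, key=lambda s: targets[s] - counts[s])
        assignment[cid] = split
        counts[split] += cluster_sizes[cid]
    return assignment, counts

# Toy example: 6 spatial clusters with uneven numbers of well patches,
# targeting a 70/10/20 train/val/test split over 1000 patches
sizes = {0: 400, 1: 250, 2: 150, 3: 100, 4: 60, 5: 40}
assignment, counts = assign_clusters_to_splits(
    sizes, {"train": 0.7, "val": 0.1, "test": 0.2}
)
```

Because whole clusters are indivisible, the final counts only approximate the targets (here 710/100/190 instead of 700/100/200), which is the usual trade-off of leakage-free geospatial splits; a naive uniform grid, as the rebuttal notes, would fare worse given how unevenly wells are distributed across Alberta.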
Contextual Linear Bandits with Delay as Payoff
Accept (poster)
Summary: This paper investigates contextual linear bandits in which the payoff (loss or reward) is observed after a delay proportional to the payoff itself. This extends prior research on multi-armed bandits (MAB) with payoff-dependent delays. The authors propose a phased arm elimination algorithm for the non-contextual setting, which selects actions from a *volumetric spanner* of the action set. ## update after rebuttal After reviewing the other reviewers' comments and the authors' responses, some of my initial concerns have been adequately addressed. I also note that prior work, such as "Multi-player Multi-armed Bandits with Delayed Feedback," considers sub-Gaussian delays. Given this, if extending the noise model to sub-Gaussian distributions is straightforward, I recommend incorporating this improvement in a subsequent version of the paper. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: No Experimental Designs Or Analyses: No Supplementary Material: No Relation To Broader Scientific Literature: This paper extends the understudied delay-as-payoff model from MAB to linear bandits. Essential References Not Discussed: No Other Strengths And Weaknesses: **Strengths**: This paper extends the understudied delay-as-payoff model from MAB to linear bandits. The work is well-structured, featuring a clear problem formulation, detailed algorithm pseudocode, and concise proof sketches. **Weaknesses**: The reliance on mature techniques, such as the spanner method and phased arm elimination, limits the novelty of the proposed approach. While the application of these techniques to the delay-as-payoff model is commendable, the paper does not introduce fundamentally new methodologies. Other Comments Or Suggestions: No. Questions For Authors: 1. The algorithm requires actions and parameters to lie in $\mathbb{R}^n_+$. How critical is this restriction? 2. About the mentioned challenges: The linear form of Eq. 
(2) remains an efficient LCB for linear bandits. Specifically, at round $t$, let $\hat{\theta}_t$ denote the estimator and $V_t$ represent the covariance matrix of historically selected arms. For any arm $a\in\mathcal{A}$, its estimated reward is given by: $$ \langle a, \hat{\theta}_t \rangle = a \cdot V_t^{-1} \cdot \left( \sum_{\tau=1}^{t-1} u_\tau \cdot a_\tau \right) = a \cdot V_t^{-1} \cdot \left( w_t(\mathcal{A} \setminus \{a\}) + a \cdot R_t(a) \right), $$ where $w_t(\mathcal{A} \setminus \{a\})$ is the weighted sum of historically selected arms excluding $a$, and $R_t(a)$ is the cumulative reward of arm $a$. Without loss of generality, we assume the rewards in $w_t(\mathcal{A} \setminus \{a\})$ are observed. Since $V_t$ is positive definite, $a \cdot V_t^{-1} a \geq 0$. Thus, for arm $a$, setting its unobserved rewards to zero yields a lower bound compared to the observed case. For another arm $a'\neq a$, the efficiency of the LCB requires $a' \cdot V_t^{-1} a \geq 0$. Given that $\mathcal{A} \subseteq \mathbb{R}^n_+$, the inner product of any two arms is non-negative, ensuring $a' \cdot V_t^{-1} a \geq 0$ holds. This raises the question: Is the spanner technique truly necessary? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thanks for your valuable comments. We address the issues mentioned in your review below. - **Q1: The reliance on mature techniques, such as the spanner method and phased arm elimination, limits the novelty of the proposed approach.** While we agree that neither volumetric spanner or phased arm elimination is new, combining them to solve the issues that other standard ideas for linear bandits do not seem to be able to solve in our model is a significant contribution in our opinion. In fact, we are not aware of algorithms using similar ideas even for standard linear bandit models. - **Q2: The algorithm requires actions and parameters to lie in $\mathbb{R}^n_+$.** As mentioned in Footnote 1, we enforce both the action $a$ and the underlying parameter $\theta$ to lie in $\mathbb{R}^n_+$ in order to make sure that the payoff $\langle a,\theta\rangle$, and hence the delay, is non-negative. It is just an important restriction to properly define our model, instead of an assumption to make the analysis work. - **Q3: An alternative way of estimating LCB of action $a$ using $V_t^{-1} = (\sum_{\tau=1}^ta_{\tau}a_{\tau}^\top)^{-1}$.** There is a critical issue in your proposal and reasoning. Specifically, you argued that for $a'\neq a$, we have $a^\top V_t^{-1} a' \geq 0$ since all actions are in $\mathbb{R}^n_+$. This is just simply wrong since $V_t^{-1}$ can have negative entries even though $V_t$ does not (for a simple example, consider $V_t=a_1a_1^\top+a_2a_2^\top$ where $a_1=[1;1]$ and $a_2=[1;0]$; then direct calculation shows that $V_t^{-1}=[1,-1;-1,2]$ and $a_2^\top V_t^{-1}a_3= -1<0$ for $a_3=[0;1]$). Therefore, ignoring these actions as you proposed does not lead to an LCB. We hope that this convinces the reviewer that constructing LCB in our problem is nontrivial and that there is clear novelty in our spanner-based approach. --- Rebuttal Comment 1.1: Comment: Thank you for your response. 
The assumption that the inherent parameters are strictly positive is a strict condition that may not hold in practice, as these parameters are often unobservable for a given problem. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the further response. We clarify that we in fact only require the realized payoff to be non-negative so that the delay is well-defined. For simplicity, we do so by enforcing parameters $\theta$ and actions $a$ to lie in $\mathbb{R}_+^n$, but again, all our results and analyses naturally follow as long as the realized payoff is non-negative. We believe this is both natural and practical. Also, if our response to Q3 addresses your concern about the necessity of the spanner technique, please kindly acknowledge that (so other reviewers know there are no trivial solutions to our problem) and consider re-evaluating the technical strength of our paper. We thank the reviewer again for your time and effort.
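The rebuttal's counterexample above — that $V_t^{-1}$ can have negative entries even when every action lies in $\mathbb{R}^n_+$, so the reviewer's claimed $a'^\top V_t^{-1} a \geq 0$ fails — is easy to verify numerically. A quick check with numpy, using exactly the arms from the rebuttal:

```python
import numpy as np

# Actions in R^2_+, as in the rebuttal's example
a1 = np.array([1.0, 1.0])
a2 = np.array([1.0, 0.0])
a3 = np.array([0.0, 1.0])

# V_t = a1 a1^T + a2 a2^T = [[2, 1], [1, 1]]; positive definite, all entries >= 0
V = np.outer(a1, a1) + np.outer(a2, a2)
V_inv = np.linalg.inv(V)  # [[1, -1], [-1, 2]]: negative off-diagonal entries

# a2^T V_t^{-1} a3 = -1 < 0, despite a2, a3 both lying in R^2_+
val = a2 @ V_inv @ a3
```

So non-negativity of the arms does not transfer to the quadratic form under $V_t^{-1}$, which is the crux of why the proposed zero-filled LCB construction breaks down.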
Summary: This paper studies a contextual linear bandit setting where the reward/loss is delayed by a length of time proportional to the realised reward/loss. For this problem, the authors propose an arm elimination strategy and analyse the regret (including the delay penalty) of the proposed algorithm. Experiments in a simulated environment are also provided to complement the theoretical results. Claims And Evidence: In line 156 (2nd column) it is claimed that it is hard to construct a lower bound equivalent to Eq. (2). It is not clear why this is, and the evidence is a bit vague and not convincing. In particular, it is not clear why we cannot minimize over an appropriately defined confidence set. Methods And Evaluation Criteria: It would potentially be good to see an experimental comparison to other work on delayed feedback in linear bandits, even if that work does not consider the delay-as-payoff setting. Theoretical Claims: The assumption of bounded noise seems very strong. In particular, it excludes the common linear-Gaussian regression setting. Can the results be relaxed to hold for sub-Gaussian rewards (this may require replacing the maximal-delay scaling with some sort of average, which I also think would help the strength and interpretability of the results; see below)? The proof sketch in Section 3.1 seems reasonable to me. The only question I have about the proof is how $\lambda_{m,i}$ being random affects the analysis. Experimental Designs Or Analyses: See above (methods section). The experiments also consider a fairly small number of arms and only one sort of reward distribution. Supplementary Material: I was convinced by the proof sketch in the main paper so did not check the supplementary. Relation To Broader Scientific Literature: There is some work on delayed feedback in (generalized) linear bandits missing, e.g. the below. Yang, Y., Zhong, H., Wu, T., Liu, B., Wang, L., & Du, S. S. (2023).
A reduction-based framework for sequential decision making with delayed feedback. Advances in Neural Information Processing Systems. Howson, B., Pike-Burke, C., & Filippi, S. (2023, April). Delayed feedback in generalised linear bandits revisited. In International Conference on Artificial Intelligence and Statistics. Essential References Not Discussed: See above Other Strengths And Weaknesses: I question the optimality of the $D\Delta_{\max}$ term in the regret penalty. In the case where the delays are independent of the reward, the delay penalty can be reduced to the expected delay, so why can we not get some sort of average appearing here? In particular, I imagine this average delay will relate to the average reward of the algorithm, which in turn could be related to the regret plus the average delay of the optimal algorithm. I would find these sorts of results much more insightful. If this cannot be done, it would be interesting to see a lower bound showing that this maximal delay penalty is unavoidable. As it is, I don't really find the results in this paper surprising given prior results in the MAB setting. The extension from linear bandits to contextual linear bandits seems to follow from a reduction in prior work. I therefore wonder whether there is enough novelty. On the positive side, I think the algorithm design is interesting, and it is pleasing to see that it works empirically. Other Comments Or Suggestions: na Questions For Authors: 1) Can the results be extended to capture more realistic noise models? 2) Can the dependence on the maximum delay be reduced to something more reasonable and more interpretable? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thanks for your valuable comments. We address the issues mentioned in your review. - **Q1: It is not clear why it is hard to construct a lower bound equivalent to Eq.(2). In particular, why we cannot minimize over an appropriately defined confidence set.** We emphasize again the difficulty of obtaining a lower bound similar to Eq.(2) using the classic LinUCB estimator $\hat{\theta}_{t}$. When the delay is payoff-independent, Theorem 1 of [Vernade et al., 2020] indeed shows that a certain construction of $\hat{\theta}_{t}$ ensures that the norm of $\hat{\theta}_{t} - \theta$ under $V_t^{-1}$ is controlled (where $V_t\approx\sum_{\tau=1}^t a_\tau a_\tau^\top$). However, their proof heavily relies on the independence between the delay and the payoff. In our model, the payoff and the delay are dependent, and we do not see a way to construct or even approximate a similar confidence set, so we resolve this issue by proposing a novel arm-elimination algorithm that constructs UCBs/LCBs based on volumetric spanners. - **Q2: Can the results be relaxed to hold for sub-Gaussian rewards? Can the results be extended to capture more realistic noise models?** Since the delay is essentially the same as the payoff in our model, it does not make sense to have negative payoff, which is why we only consider bounded noise. In other words, allowing sub-Gaussian rewards would require a different delay model, which might be an interesting future direction. - **Q3: How does $\lambda_{m,i}^{(a)}$ being random affect the analysis?** We are not sure we fully understand this question. Yes, the coefficients $\lambda_{m,i}^{(a)}$ are random, but this does not really introduce any complication to the analysis, and we only use the property $||\lambda_{m}^{(a)}||_2\leq 1$ to control the scale of the confidence range.
- **Q4: More experiment settings.** Following your (and Reviewer B3d9's) suggestion, we further tested our algorithm in a setting with a larger $K=70$ and $u_t$ drawn from the beta distribution with $\alpha=\mu_{a_t}$ and $\beta=1-\mu_{a_t}$. Moreover, we also added two baselines: OTFLinUCB and OTFLinTS from [Vernade et al., 2019], which are linear bandit algorithms with payoff-independent delay. The results (presented in https://anonymous.4open.science/r/PaperID-8852) show that our algorithm consistently outperforms all baselines in both the payoff-as-reward and payoff-as-loss settings. - **Q5: The optimality of the $D\Delta_{\max}$ terms in the regret penalty. In the case where the delays are independent of the reward, the delay penalty can be reduced to the expected delay, why can we not get some sort of average appearing here? Can the dependence on the maximum delay be reduced to something more reasonable and more interpretable?** Note that while $D$ represents the maximum per-round delay, the term $D\Delta_{\max} = \max_{a}D\mu_a - \min_aD\mu_a$ by definition is the gap of the **expected delay** between the best and the worst arms. Therefore, $D\Delta_{\max}$ already represents a kind of "expected delay" as mentioned by the reviewer. Whether the exact term $D\Delta_{\max}$ is necessary in the regret bound is indeed unclear, though. Thanks for the additional references you suggested; we will incorporate these in our next revision. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for addressing some of my questions - in particular, the interpretation of my question about lambda was correct. However, I am still concerned about the noise assumptions. I agree with the authors that we do not want negative delays. However, the fact that their model necessarily rules out Gaussian noise (which is the most common example of noise in linear bandits) is concerning.
This therefore suggests the model is perhaps not realistic enough / not capturing the problem completely. Therefore, I will keep my rating the same. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for elaborating on this concern, but we have to respectfully disagree with the statement that our model is not realistic enough. The fact that sub-Gaussian noise is common in the linear bandit literature does not mean that it is always the most realistic assumption. Indeed, in the applications that we care about, such as those in clinical studies and online advertising (see the 2nd paragraph of our introduction or our response to Reviewers PfQm and B3d9), delay and payoff are basically the same thing, so given that the reviewer agrees that negative delay does not make sense, it is clear that sub-Gaussian noise also does not make sense here. Moreover, several prior works also consider bounded payoff models in the delayed feedback setting, both in stochastic and adversarial environments (e.g., Vernade et al., 2020; Ito et al., 2020; Van Der Hoeven et al., 2023). Thus, we believe that our modeling choice is not only realistic for many important applications but also aligned with the existing literature on bandits with delayed feedback. [Ito et al., 2020]: Delay and Cooperation in Nonstochastic Linear Bandits, NeurIPS 2020 [Vernade et al., 2020]: Linear Bandits with Stochastic Delayed Feedback, ICML 2020 [Van Der Hoeven et al., 2023]: A Unified Analysis of Nonstochastic Delayed Feedback for Combinatorial Semi-Bandits, Linear Bandits, and MDPs, COLT 2023
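Regarding the beta-distributed payoffs in the Q4 response above: the parametrization $\alpha=\mu_{a}$, $\beta=1-\mu_{a}$ keeps realized payoffs in $[0,1]$ with mean $\mu_{a}$ (since a Beta($a$, $b$) variable has mean $a/(a+b)$), which is consistent with the bounded-noise model defended here. A quick sanity check; the value $\mu_a = 0.3$ is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
mu = 0.3                                  # illustrative mean payoff; any value in (0, 1) works
u = rng.beta(mu, 1.0 - mu, size=200_000)  # Beta(mu, 1 - mu) has mean mu / (mu + (1 - mu)) = mu

in_range = (u.min() >= 0.0) and (u.max() <= 1.0)
print(in_range)      # payoffs (and hence delays) stay bounded in [0, 1]
print(u.mean())      # close to mu
```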
Summary: The paper extends the delay-as-payoff model (Schlisselberg et al., 2024) from standard multi-armed bandits (MABs) to contextual linear bandits. This setup arises in practical situations such as clinical trials, time-to-event modeling for other medical procedures, and advertising, wherein the delay in observing a reward (or loss) actually depends on the reward (or loss) itself. More specifically, the authors assume that the delay at time $t$ is $d_t = D \cdot u_t$, where $u_t$ is the loss at time $t$ and $D>0$ is the maximum possible delay. The paper considers two setups: 1) non-contextual linear bandits, where the decision-maker selects actions from a fixed action set, and 2) contextual linear bandits, where the actions are not fixed but drawn from an unknown distribution $\mathcal{P}$. The objective is to minimize expected pseudo regret. In order to achieve this objective, the authors propose a volumetric-spanner-based phased elimination method instead of using standard confidence-ellipsoid-based methods such as LinUCB. They provide an instance-dependent regret bound for non-contextual bandits which matches the standard LinUCB regret bound in the no-delay case, and extend the analysis to contextual linear bandits by adapting the results of Hanna et al. (2023). Their experimental results show that their approach outperforms LinUCB in both delay-as-loss and delay-as-reward settings. Claims And Evidence: The paper provides the first regret guarantee for contextual linear bandits with payoff-dependent delays. The regret bound for the phased elimination algorithm in the non-contextual linear bandit setting is similar to the no-delay case and aligns with previous lower bounds in delayed bandit settings. This bound ensures that delay does not significantly increase regret when the optimal action has a small loss. The authors extend their approach to contextual bandits with time-varying action sets. The authors adapt the Hanna et al.
(2023) reduction technique to handle varying action sets in contextual bandits. This allows them to use their non-contextual algorithm as a subroutine in the contextual case. This extension provides the first regret guarantee for contextual linear bandits with payoff-dependent delays. The authors test their algorithm on synthetic linear bandit instances. The results show that their approach outperforms LinUCB in both delay-as-loss and delay-as-reward settings. In Proposition 3.2, the authors state that a volumetric spanner $\mathcal{S}$ of $\mathcal{A}$ with $|\mathcal{S}| = 3n$ can be computed in $O(K n^3 \log{n})$ time, which is high for large action sets (when $K$ is large); and since $\mathcal{S} \subset \mathcal{A}$, for large $K$, $n$ would potentially also be large, thus making the approach computationally demanding. There is no discussion of the scalability of this approach when $n$ (and $K$) is large. The terms $W_1$ and $W_2$ in the regret bound in Theorem 3.3 depend on the expected delay of the optimal action, $d^*$, and the maximum possible delay, $D$, scaled by $\Delta_\text{max}$. Thus, if all actions have distinct and large losses, performance could degrade. Although the paper varies the delay structure to check how the algorithm handles different delay settings, it would be nice to see how the algorithm performs under extreme delay settings, which are common in applications such as clinical trials. Methods And Evaluation Criteria: The use of volumetric spanners is a new exploration technique that reduces the need for explicit parameter estimation. The phased elimination strategy is well-motivated and theoretically justified, and is commonly used in the linear bandit literature. The primary evaluation metric is cumulative regret, which is standard in the bandit literature.
It is not clear how well the proposed methodology would generalize to cases with stochastic delays or more complex delay scenarios, such as one would expect in real life. Also, the method is only compared against LinUCB, but there is a vast literature on delayed bandits (reward-independent ones) and also on modifications of UCB-type algorithms to cater to delayed feedback in the contextual bandit framework, such as Zhou et al. (2019), Vernade et al. (2020), Lancewicki et al. (2021), and Vakili et al. (2023). *Zhou, Zhengyuan, Renyuan Xu, and Jose Blanchet. "Learning in generalized linear contextual bandits with stochastic delays." Advances in Neural Information Processing Systems 32 (2019).* *Vernade, Claire, et al. "Linear bandits with stochastic delayed feedback." International Conference on Machine Learning. PMLR, 2020.* *Lancewicki, Tal, et al. "Stochastic multi-armed bandits with unrestricted delay distributions." International Conference on Machine Learning. PMLR, 2021.* *Vakili, Sattar, et al. "Delayed feedback in kernel bandits." International Conference on Machine Learning. PMLR, 2023.* Theoretical Claims: While I haven’t verified every step of the proofs, the proof sketch appears correct and aligns with standard linear bandit theory. The authors do not claim minimax optimality for their results, which presents an interesting open question deserving further analysis. Experimental Designs Or Analyses: The paper compares the proposed phased elimination algorithm against LinUCB, a standard method for contextual bandits. The delay structure is systematically varied, allowing for a controlled evaluation of how delay impacts regret. The experiments test both delay-as-loss and delay-as-reward settings, covering different practical scenarios. Supplementary Material: No, only glanced through it. Relation To Broader Scientific Literature: There is a vast literature on bandits with delayed rewards.
While a lot of the literature assumes delays are independent of rewards, arms, and contexts, there is growing interest in studying the more practical setting where delayed feedback could be more complicated, such as arm-dependent delays (Gael et al., 2020), heavy-tailed delays (Blanchet et al., 2024), or delays that depend on a variety of factors such as payoffs (losses or rewards), as in this paper and others like Tang et al. (2024), as cited in this paper. In that light, to my knowledge this is the first paper studying payoff-dependent delays in a linear contextual bandit setting. *Gael, Manegueu Anne, et al. "Stochastic bandits with arm-dependent delays." International Conference on Machine Learning. PMLR, 2020.* *Blanchet, Jose, Renyuan Xu, and Zhengyuan Zhou. "Delay-adaptive learning in generalized linear contextual bandits." Mathematics of Operations Research 49.1 (2024): 326-345.* *Tang, Yifu, Yingfei Wang, and Zeyu Zheng. "Stochastic multi-armed bandits with strongly reward-dependent delays." International Conference on Artificial Intelligence and Statistics. PMLR, 2024.* Essential References Not Discussed: Phased elimination is used a lot in the best-arm identification literature, and there is also a lot of work on delayed rewards in that realm that might be cited, such as Grover et al. (2018); there is plenty of other literature, but I am not aware of work that studies reward-dependent delays in the contextual bandit setup. *Grover, Aditya, et al. "Best arm identification in multi-armed bandits with delayed feedback." International conference on artificial intelligence and statistics. PMLR, 2018.* Other Strengths And Weaknesses: Strengths: - While delayed feedback in bandits has been studied (e.g., Joulani et al., 2013), this paper is one of the first to formalize and analyze payoff-dependent delays in contextual linear bandits. - Standard delayed bandit methods (e.g., LinUCB) use confidence-ellipsoid-based exploration.
The paper instead leverages volumetric spanners to reduce exploration complexity and circumvents the challenges of using a confidence-ellipsoid-based approach in the presence of delayed feedback. This work generalizes prior results to dynamic action sets, which makes it a meaningful extension of bandit theory. - The delay-as-payoff model is clearly defined, with a rigorous problem setup. The regret analysis follows standard proof techniques in the bandit literature, making it easy to follow. Weaknesses: - There is no comparison with other delayed bandit methods, even those assuming delays are independent of payoffs. It would be valuable to see how they perform relative to the proposed algorithm. Additionally, a computational complexity comparison in terms of runtime would be beneficial. - The methods seem to have a high computational burden when large action sets are considered, especially in terms of the scalability of the volumetric spanner. - The reduction from contextual linear bandits to non-contextual linear bandits using Hanna et al. (2023) is not very clear, and the relationship with an $\epsilon$-misspecified model is not apparent to a reader unfamiliar with that paper. Other Comments Or Suggestions: It would be nice to see how robust your proposed algorithm is to delays that may also depend on other factors (like contexts) along with payoffs. Questions For Authors: 1) Why consider the expected pseudo regret and not the expected regret itself? 2) In the delay-as-reward setting, it is assumed that the noise $\epsilon_a \leq \epsilon$, which seems to be a strong assumption. Can this be relaxed? 3) How does one check if the delays actually scale with payoffs in real life? 4) Can you take into account contextual information in your framework? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for your valuable and positive comments, and for acknowledging that we initiate the study of linear contextual bandits with payoff-dependent delay. We address the issues mentioned in your review below. - **Q1: There is no comparison with other delayed linear bandit methods, even those assuming delays are independent of payoffs.** Following your (and Reviewer ddMd's) suggestion, we conducted extra experiments where we added two other baselines: OTFLinUCB and OTFLinTS in [Vernade et al., 2019], which are linear bandit algorithms with payoff-independent delay. The results (presented in https://anonymous.4open.science/r/PaperID-8852) show that our algorithm consistently outperforms all baselines in both the payoff-as-reward and payoff-as-loss settings. - **Q2: No discussion of the scalability/computational cost of this approach when $n$ (and $K$) is large.** We apologize for not discussing time complexity in the paper. However, we emphasize that our algorithm is in fact **even more computationally efficient** than the classic LinUCB algorithm. More specifically, the time complexity of our algorithm over $T$ rounds is $O(nT+Kn^3\log n\log(T/n))$ in total, since we only compute the volumetric spanner at the beginning of each epoch, and the total number of epochs is $\log(T/n)$. On the other hand, LinUCB's time complexity is $O(Kn^2T)$ since computing the UCB of each action requires $O(n^2)$ time. Therefore, our algorithm is in fact more efficient. We will add this discussion in the next revision. Thanks for pointing this out. - **Q3: Why expected pseudo regret?** This is standard in the stochastic bandit literature (such as the UCB paper), especially when the goal is to derive logarithmic regret. This is because if one were to consider expected regret, then even if the algorithm always picks the optimal arm, the deviation in the stochastic losses would still contribute $\sqrt{T}$ regret.
- **Q4: The reduction from contextual linear bandits to non-contextual linear bandits using Hanna et al. (2023) is unclear; assumption on the misspecification level $\epsilon_a\leq \epsilon$.** Note that the reason we consider the misspecified setting is mostly to enable the use of the contextual-to-non-contextual reduction proposed by [Hanna et al., 2023], and in that reduction, the misspecification level of each arm is indeed **uniformly bounded by a known value**. To be clear, we explain the high-level idea of the reduction and why it requires a misspecified model below, and we will add more discussion to the paper: at a high level, their reduction treats each model parameter $\theta$ as an action and proceeds in epochs. Since the distribution of the action set $\mathcal{A}_t$ at each round $t$ is unknown, at epoch $m$, they can only estimate each action $g(\theta) = E_{\mathcal{A} \sim \mathcal{P}} \left[ \arg\min_{a \in \mathcal{A}} \langle a, \theta \rangle \right]$ using historical data (denoted as $g^{(m)}(\theta)$ in Line 1 of Algorithm 2). This leads to a misspecified model since the true expected loss of picking $a_t$ is $\langle g(\theta_t), \theta\rangle$ while the algorithm considers the loss model $\langle g^{(m)}(\theta_t), \theta\rangle$. According to Lemma C.1, the gap between $\langle g(\theta_t), \theta\rangle$ and $\langle g^{(m)}(\theta_t), \theta\rangle$ is indeed bounded by a known value $O(\sqrt{1/2^m})$, which is the reason why we only require an algorithm with a uniform and known misspecification level.
- **Q5: How does one check if the delays actually scale with payoffs in real life?** As mentioned in our introduction, for some applications this is true by definition: in medical domains, the delay in observing progression-free survival (reward) or postoperative length of stay (loss) is exactly the metric itself; in advertising, the delay in observing average time on page (reward) or the time to re-engagement (loss) is also the metric itself. In other applications where this is less obvious, one possibility is to collect historical data and apply a linear regression of delays on payoffs to see if this is a good model. - **Q6: Can you take into account contextual information in your framework?** We assume that the reviewer is asking whether we can also allow context-dependent delay. However, note that our payoffs depend on the context, so the delay, being essentially the same as the payoff in our model, already depends on the context. Thanks for pointing out the other references. We will incorporate these into our next revision. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response. I would like to maintain my score.
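The complexity comparison in the Q2 response above can be illustrated with a back-of-the-envelope tally. The sizes $n=10$, $K=500$, $T=10^6$ below are arbitrary, and constant factors inside the $O(\cdot)$ bounds are ignored:

```python
import math

n, K, T = 10, 500, 10**6   # illustrative sizes only; big-O constants are ignored

# Proposed algorithm: O(nT + K n^3 log(n) log(T/n)) in total,
# since the spanner is recomputed only once per epoch.
spanner_total = n * T + K * n**3 * math.log(n) * math.log(T / n)

# LinUCB: O(K n^2 T), since each round computes a UCB for each of the K actions.
linucb_total = K * n**2 * T

print(spanner_total < linucb_total)   # True for these sizes
```

For these sizes the spanner-based count is on the order of $10^7$ while LinUCB's is on the order of $10^{10}$, consistent with the efficiency claim in the response.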
Summary: The authors try to extend the delay-as-payoff model to contextual linear bandits. The main novelty here is to apply a phased arm elimination procedure by only picking the **volumetric spanners** of the action set in order to handle both payoff-dependent delays and large action sets. A further extension is discussed for the case with varying action sets. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: I have checked the proof of the main theorem (Theorem 3.3), which reads correct to me. Experimental Designs Or Analyses: Yes. I checked the experimental setting, which looks good to me. Supplementary Material: I reviewed the supplementary materials and validated the replication code. Relation To Broader Scientific Literature: This work broadens the literature on delayed bandits. Essential References Not Discussed: Not aware of any. Other Strengths And Weaknesses: To Note: I am not an expert in the algorithmic theory of bandit analysis. Hence, I will provide more general questions regarding the work and less on the technical side. Strength: The authors did interesting extensions of the MAB setting for delay as payoff. The phased-elimination trick avoids the difficulty of estimating the LCB in the delayed case and leads to meaningful regret bounds. The technical discussion is quite deep. Weakness and general questions: 1. I noticed that the authors (as well as earlier literature) used a particular form for modeling the delay as a linear function of the loss. Can this be more general? Say, only assume the delay lies within some function class defined on the loss. It is hard to imagine that, in practice, delays always depend linearly on the loss. How much does the theory rely on such an assumption? 2. The authors did a great job explaining the derived regret bounds. Is there an optimality result (lower bound) as well? 3. The derived regret bound imposes bounded delay assumptions.
In practice, delays could be of unbounded scale, which is actually discussed in papers such as Gael et al. (2020). Is there a way to treat, for example, long delays vs. short delays to generalize the analysis? 4. From the simulation results and the discussion around them, LinUCB also performs fairly well as an ad-hoc strategy, especially when $n$ is small. Is there any quantification of the performance of LinUCB to show that it is theoretically worse or completely fails in certain delay-as-payoff settings? Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your valuable comments. We address the issues mentioned in your review below. - **Q1: Why assume the delay is a linear function of the payoff; other general models?** Our goal is to extend the same delay-as-payoff model of Schlisselberg et al. (2024) from MAB to contextual linear bandits, and as mentioned in the 2nd paragraph of our introduction (as well as in Schlisselberg et al.), there are indeed many applications where this model is valid. For example, in medical domains, the delay in observing progression-free survival (reward) or postoperative length of stay (loss) is exactly the metric itself; in advertising, the delay in observing average time on page (reward) or the time to re-engagement (loss) is also the metric itself. The analysis of both our work and Schlisselberg et al. (2024) does heavily rely on this model. That said, we agree that extending the results to more general models is definitely an interesting direction (as also pointed out in the conclusion of our paper). - **Q2: Is there an optimality result (lower bound) as well?** As also mentioned in the conclusion section, we do not know whether our bounds are optimal and leave this as a future direction. In fact, the optimal bounds in the simpler MAB case are also unknown. - **Q3: In practice, delays could be of unbounded scale, as discussed in papers such as Gael et al. (2020). Is there a way to treat, for example, long delays vs. short delays to generalize the analysis?** While this is indeed an interesting case, it unfortunately does not fit into the delay-as-payoff model, since an unbounded scale of delay also implies an unbounded scale of payoff, in which case no sublinear regret should be possible. - **Q4: Is there any quantification of the performance of LinUCB to show that it is theoretically worse or completely fails in certain delay-as-payoff settings?** To the best of our knowledge, this question is indeed still open even for the simpler MAB setting.
Our conjecture is that since these algorithms do not take into account the delay-as-payoff structure, they indeed could completely fail in some worst-case environments. --- Rebuttal Comment 1.1: Comment: Thanks for your responses. Q1 & Q2: I am happy to see future extensions of the work that address these questions. Q3: For unbounded delays, this is related to my question Q1 regarding the model of the delay-payoff relation. I would also like to see this extension. Q4: Thanks for clarifying this point. In general, I can tell that this is a hard yet valuable setup with many open questions. Although I am not an expert in bandit algorithm theory, I do respect the authors' efforts in providing theoretical insights into modeling/analyzing such a setting and will keep my rating at 3.
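The delay-as-payoff coupling discussed throughout this thread, where the delay of round $t$ is $D \cdot u_t$ for realized loss $u_t \in [0,1]$, can be illustrated with a toy sketch. The values below are made up for illustration and are not the authors' code:

```python
import heapq

D = 8                            # maximum possible delay (illustrative)
losses = [0.9, 0.1, 0.5, 0.0]    # realized losses u_t, one per round (toy values)

pending = []
for t, u in enumerate(losses):
    heapq.heappush(pending, (t + D * u, t, u))  # feedback for round t arrives at t + D*u_t

arrival_order = [heapq.heappop(pending)[1] for _ in range(len(pending))]
print(arrival_order)   # rounds with smaller losses are observed earlier: [1, 3, 2, 0]
```

Note how larger losses delay their own observation; this coupling between payoff and delay is exactly what breaks the payoff-independent delay analyses discussed above.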
Lightweight Protocols for Distributed Private Quantile Estimation
Accept (spotlight poster)
Summary: The authors study the problem of estimating quantiles under local differential privacy and under shuffle differential privacy, with applications in distributed and private quantile estimation. To do so, the paper presents new algorithms. The article presents both upper and lower bounds for the problems at hand. Finally, the article compares experimentally the new methods that it introduces with already existing methods. ## update after rebuttal After the rebuttal, I will maintain my original score. Claims And Evidence: The claims of the article are well-supported by clear proofs and empirical evaluation. Methods And Evaluation Criteria: The methods and evaluation criteria make sense for the problem at hand. Theoretical Claims: I checked the proofs that are presented in the main body of the article, and I looked at the proof techniques that are used in the appendix. No issue was found, yet I did not check every single detail. Experimental Designs Or Analyses: I checked the soundness of the experiments that are presented in the main body of the article. I did not, however, check the correctness of the code that was provided by the authors. Supplementary Material: I did read some of the proofs that are presented in the supplementary material, and I read the comments from the authors on continuous distributions. Relation To Broader Scientific Literature: The authors link their work to prior literature and claim (with theoretical evidence) that their new algorithm improves the prior state of the art by a polylogarithmic factor in the size of the set on which they work. Essential References Not Discussed: NA Other Strengths And Weaknesses: Strengths : - The article is theoretically sound and overall well written. - The problem that is tackled is of interest for the community of privacy-preserving machine learning.
I overall have a positive opinion of the paper, so the following weaknesses are more in order to improve the paper than they are to point out reasons to reject it. Weaknesses : - The structure of the paper makes it slightly difficult to follow, as the technical details of the algorithm are introduced quite late. This forces the reader to jump back and forth to understand how the challenge outlined in the introduction is actually addressed. Adding brief “how-to” explanations earlier could improve readability. - The definition of an approximate median in Definition 2.2 could be clarified. While it is defined for empirical distributions in the introduction, it is not explicitly extended to general distributions. A quick reminder of this concept in Definition 2.2 would be helpful. - The article would benefit from a brief conclusion summarizing key insights, highlighting open questions, and outlining possible directions for future work. Other Comments Or Suggestions: NA Questions For Authors: 1. The study focuses on "quantile error," which measures how much probability mass the estimate is off by. However, in some applications, the primary concern is the uncertainty on the actual value (especially when the grid is a discretization of the real line and inherits a distance). Can the results be extended to this setting? 2. In Theorem 1.4, the condition n=O(…) suggests that the procedure works as long as there isn’t too much data. This seems counterintuitive, as one might expect more data to make the problem easier. Could you clarify this? 3. In the same theorem, why do $\alpha$ and $\epsilon$ appear to interact independently in the bound? Is this a consequence of the specific regime being considered? 4. The paper states: “We are typically interested in the high-probability setting, where $\beta=1/\mathrm{poly}(B)$.” Is this because setting $\beta=1/B$ would be equivalent to making a random guess?
If so, adding a sentence to explain this, and clarifying that the polynomial must be greater than B for relevance, could be helpful. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for your questions and valuable feedback! > The structure of the paper makes it slightly difficult to follow, as the technical details of the algorithm are introduced quite late. This forces the reader to jump back and forth to understand how the challenge outlined in the introduction is actually addressed. Adding brief “how-to” explanations earlier could improve readability. We will do our best to add more such explanations. Feel free to let us know if there was any part that was particularly confusing. >The definition of an approximate median in Definition 2.2 could be clarified. While it is defined for empirical distributions in the introduction, it is not explicitly extended to general distributions. A quick reminder of this concept in Definition 2.2 would be helpful. Thanks for pointing this out. We will clarify in the paper. >The article would benefit from a brief conclusion summarizing key insights, highlighting open questions, and outlining possible directions for future work. Thanks for the suggestion. We agree, and will include it in the next version of the paper. >The study focuses on "quantile error," which measures how much probability mass the estimate is off by. However, in some applications, the primary concern is the uncertainty on the actual value (especially when the grid is a discretization of the real line and inherits a distance). Can the results be extended to this setting? Our results cannot be generalized to a domain-based error without making an assumption on the underlying data distribution. Lower bounds from [1] show that any median estimation protocol must have error that grows linearly with B when error is measured with the average distance from the estimate to each data point. Additionally note that minimizing the distance in the domain between the estimated and true median must also have error $\Omega(B)$, even under central DP. 
Indeed, if $n$ is odd and $(n+1)/2$ of the $x_i$’s are 0 and the remaining $(n-1)/2$ $x_i$’s are $B$, then the median is $0$, but changing the data of just a single user can change the median to $B$. We expect that similar lower bounds would hold for other data-domain based error functions. [1] Duchi, J. C., Jordan, M. I., & Wainwright, M. J. (2018). Minimax Optimal Procedures for Locally Private Estimation. Journal of the American Statistical Association, 113(521), 182–201. https://doi.org/10.1080/01621459.2017.1389735 >In Theorem 1.4, the condition $n=O(\dots)$ suggests that the procedure works as long as there isn’t too much data. This seems counterintuitive, as one might expect more data to make the problem easier. Could you clarify this? Thanks for pointing this out. The statement should be that only $n=O(\dots)$ users are needed to get an $\alpha$-approximate median. We will update the writing. >In the same theorem, why do $\alpha$ and $\epsilon$ appear to interact independently in the bound? Is this a consequence of the specific regime being considered? In the following, we ignore logarithmic factors and focus on the polynomial dependence on $\alpha$ and $\epsilon$. In order to apply privacy amplification by shuffling (Lemma G.5), we have to pick the number of users in a batch $n’\gg 1/\epsilon^2$. With this many users, shuffle DP ensures that essentially each user in the batch answers threshold queries correctly with probability $>3/4$. With $>3/4$ probability of correct answers, it suffices to have $O(1/\alpha^2)$ users in a batch to estimate the fraction of these users with $x_i$’s below a given threshold $t$ within $O(\alpha)$. Additionally, there is another reason that the batch has to be this big, namely to ensure that the threshold query of the batch is within $O(\alpha)$ of the threshold query of the full data set. Together, this gives sample complexity $O(1/\alpha^2+1/\epsilon^2)$. 
The additional logarithmic dependencies come from the fact that these events have to hold with less than constant error probability (since we union bound over $\lg B$ steps of the binary search). Finally, we need $\lg B$ times as many users, since there are $\lg B$ steps to the binary search. >The paper states: “We are typically interested in the high-probability setting, where β=1/poly(B).” Is this because setting β=1/B would be equivalent to making a random guess? If so, adding a sentence to explain this, and clarifying that the polynomial must be greater than B for relevance, could be helpful. We note that $\beta$ is the \emph{failure} probability. A random guess would thus give $\beta=1-\Theta(\alpha)$, since the probability of guessing an $\alpha$-approximate median is $\Theta(\alpha)$. Our algorithm guarantees that our median estimate is bad only with very low probability $1/poly(B)$, say $1/B^{10}$. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for their response and for integrating the modifications that they mentioned in the paper. I will maintain my score.
Summary: This paper considers the estimation of quantiles under the LDP framework with bounded integral data. It derives a series of lower bounds under both shuffle-DP and LDP, and proposes an LDP algorithm in an adaptive setting. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes. Relation To Broader Scientific Literature: See the pros below. Essential References Not Discussed: No. Other Strengths And Weaknesses: Pros: Quantile estimation under LDP is a significant problem and the non-asymptotic results appear to be interesting. Cons: The authors assume data are drawn from a bounded integral space (with finitely many values), which seems fairly restrictive and uncommon. Other Comments Or Suggestions: See the questions below. Questions For Authors: 1. For random variables with infinitely many possible values (e.g., Poisson), does the proposed algorithm or framework still apply? 2. Regarding footnote 1, I may be mistaken, but I believe some prior work considers privacy-preserving quantile estimation on a continuous interval (e.g., on $[0,1]$), such as [1] in CDP setting and [2] in the LDP setting. Could the authors clarify how these results relate to their work? 3. The paper specifies a particular range for $\epsilon$. For very large privacy budgets (i.e., $\epsilon \to \infty$), will the results degrade or converge to the non-private results (as seen in [3] and [4])? In other words, do these bounds become consistent with classical results in the absence of privacy constraints? [1] Lalanne, C., Garivier, A., & Gribonval, R. (2023). Private statistical estimation of many quantiles. In International Conference on Machine Learning (pp. 18399-18418). PMLR. [2] Liu, Y., Hu, Q., Ding, L., & Kong, L. (2023). Online local differential private quantile inference via self-normalization. In International Conference on Machine Learning (pp. 21698-21714). PMLR. 
[3] Chen, L., Keilbar, G., & Wu, W. B. (2023). Recursive quantile estimation: Non-asymptotic confidence bounds. Journal of Machine Learning Research, 24(91), 1-25. [4] Howard, S. R., & Ramdas, A. (2022). Sequential estimation of quantiles with applications to A/B testing and best-arm identification. Bernoulli, 28(3), 1704-1728. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your questions and your consideration of our paper. >For random variables with infinitely many possible values (e.g., Poisson), does the proposed algorithm or framework still apply? Without any assumptions on the distribution of the random variable, known lower bounds (as we discuss under related work) show that it is impossible to obtain any meaningful error guarantees, even in the central setting of DP. For concrete distributions (like Poisson with a reasonable bound on its rate, or even continuous distributions), one can usually apply our algorithm combined with bucketing as we discuss in Appendix H. The error guarantees will then depend on how much the CDF can change within a single bucket, which in turn depends on the parameter space of the class of distributions. While it is interesting how well classes of infinite or continuous distributions can be discretized, this direction of work is somewhat orthogonal to ours. >Regarding footnote 1, I may be mistaken, but I believe some prior work considers privacy-preserving quantile estimation on a continuous interval (e.g., on $[0,1]$), such as [1] in the CDP setting and [2] in the LDP setting. Could the authors clarify how these results relate to their work? [1] considers continuous distributions with the same error function as ours. However, related to our discussion in Appendix H, their results depend on a parameter $\Delta$ that expresses how well the continuous distribution can be discretized into a finite set of intervals on which the CDF does not increase too quickly. [2] extends a line of work (see e.g., Duchi et al. [3]) aiming to minimize the average distance between the estimated median and the data points (and a more generalized form for other quantiles). Lower bounds in [3] show that this error must grow linearly with the range in which the true median lies. In our setting, without further assumptions on the data distribution, this would be linear in $B$. 
We additionally note that minimizing the distance in the domain between the estimated and true median must also have error $\Omega(B)$, even under central DP. Indeed, if $n$ is odd and $(n+1)/2$ of the $x_i$’s are 0 and the remaining $(n-1)/2$ $x_i$’s are $B$, then the median is $0$, but changing the data of just a single user can change the median to $B$. On the other hand, our work aims to minimise the *rank error* of the quantile and is not subject to these lower bounds. [1] Lalanne, C., Garivier, A., & Gribonval, R. (2023). Private statistical estimation of many quantiles. In International Conference on Machine Learning (pp. 18399-18418). PMLR. [2] Liu, Y., Hu, Q., Ding, L., & Kong, L. (2023). Online local differential private quantile inference via self-normalization. In International Conference on Machine Learning (pp. 21698-21714). PMLR. [3] Duchi, J. C., Jordan, M. I., and Wainwright, M. J. Minimax optimal procedures for locally private estimation. Journal of the American Statistical Association, 113(521):182–201, 2018. >The paper specifies a particular range for $\epsilon$. For very large privacy budgets (i.e., $\epsilon\to \infty$), will the results degrade or converge to the non-private results (as seen in [3] and [4])? In other words, do these bounds become consistent with classical results in the absence of privacy constraints? Our result states that $O((\log B)/\alpha^2)$ users suffice to find an $\alpha$-approximate median under LDP with high probability in $B$ even for $\epsilon=O(1)$. This matches known lower bounds even absent privacy constraints for statistical median estimation (when requiring high probability in $B$). On the other hand, if we just want success probability $2/3$, absent privacy constraints, $O(1/\alpha^2)$ users suffice for statistical median estimation. 
It is an interesting open question to design a protocol with the correct convergence to $O(1/\alpha^2)$ as $\epsilon\to \infty$, but it would likely require quite different techniques than the ones presented in our paper. In the *empirical* setting, absent privacy constraints, an algorithm can simply output the median of the data set, so one should expect that as $\epsilon\to \infty$ the required number of users would converge to $1$. Again, it is interesting to understand this convergence. In any case, we focus on the common choice of $\epsilon=O(1)$. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' response and their efforts to address my concerns. It is indeed different to derive non-asymptotic rank error results, as compared to mean absolute error, which is a common choice in applications (as another reviewer also mentioned). This paper indeed presents many nontrivial results about LDP quantile estimation, except for the finite assumption about $B$, because the entire framework is based on given data sets rather than on potential distributions. Therefore, I will raise the score to 3. However, I still suggest that the authors add more explanation in the final version, regarding the proposed setting and error metrics, especially the rationale for choosing this particular quantile error and the additional challenges it poses compared to the traditional setting.
Summary: This paper studies quantile estimation under local differential privacy. They are interested in the sequentially adaptive local model, where the aggregator queries each user only once, but in rounds where the set of users and the randomizer they are asked to use can depend on information learned in previous rounds. They first argue via folklore tricks that estimating any quantile can be reduced to estimating the median with twice as many users. Hence, the main task is to estimate the median of either a distribution (where each user samples one example in an i.i.d. fashion from it), or of a dataset from a discrete domain B. They give a sequentially adaptive $\epsilon$-DP $n$-round algorithm in the local model that outputs the $\alpha$-approx median whp, as long as the number of users is $O(\log B / \alpha^2 \epsilon^2)$. This is the main contribution of the paper and follows by using the noisy binary search primitive considered in prior work (Gretta and Price 2024) where you have a number of coins with monotonically increasing success probabilities, a target probability, and can flip coins, with the goal of finding a pair of successive coins containing the target probability. The statistical setting is a straightforward reduction to this problem, but the empirical setting is more challenging since you can't simply sample with replacement from the empirical distribution (since then you might see the same user multiple times- resulting in an algorithm that is not sequentially adaptive). The authors instead use sampling without replacement, along with randomly permuting the users, and argue via martingale-based techniques that for any threshold in $[B]$, and any $t > 0$, the probability that we draw one of the remaining $n-t$ users that are less than the threshold remains almost unchanged from what it was when $t=0$. 
They then argue that the techniques of Gretta and Price for the noisy binary search problem satisfy a robustness property; they work even if we don't flip exactly the same coins, but rather ones with close probabilities. This is sufficient to solve the problem in the empirical setting. They also show a matching lower bound for sequentially adaptive local model algorithms using Fano's inequality-based techniques of Duchi et al. They also appeal to a result of Edmonds, Nikolov, and Ullman to argue that this gives a separation from the non-adaptive local model, where there is a lower bound of $\Omega(log^2 B / \epsilon^2)$ for this problem (for constant $\alpha$). They also give a result in the shuffle model that achieves better round complexity than the $n$-round local model protocol (with similar total number of users). Claims And Evidence: The claims seem correct and are largely well explained. Methods And Evaluation Criteria: The primary results are theoretical and experiments are secondary considerations in the paper - however, they do consider experiments where they compare their method to two other more standard baseline algorithms, and run on datasets of different sizes, drawn from two types of distributions- pareto and uniform. They evaluate the absolute quantile error and success rate (the fraction of times a good quantile is released). They also choose reasonable $\epsilon$ values in their evaluations. Overall, the experimental setup makes sense. Theoretical Claims: Yes, I checked all the proofs for correctness, and am convinced that the most significant results are correct. The only one I was more unsure of was the shuffle model result, where I didn't fully understand the algorithm that they were using (the proof of Theorem 1.4 is rather vague on this point so fully specifying the algorithm would make it easier to verify). Experimental Designs Or Analyses: Yes, see methods and evaluation criteria. 
Supplementary Material: Yes, I reviewed all the supplementary material (read the theoretical sections in detail, skimmed the experimental section in the appendix) Relation To Broader Scientific Literature: Prior work has obtained non-adaptive algorithms for quantile estimation in the local model of differential privacy with suboptimal dependence on the domain size; this is the first work to get the correct dependence (leveraging adaptivity). This work fits in the larger literature on the local model of differential privacy (using tools in the model like randomized response and mutual information-based lower bonds), as well as the larger literature on private quantile estimation (extensively explored for the central model of differential privacy as well and a fundamental problem both theoretically and practically). Essential References Not Discussed: Not essential, but Cohen et al. (STOC 2023) is the latest paper on quantile estimation/interior point in the central model that should be cited in the related work (where Kaplan et al. and others are mentioned) since it gets asymptotically tight bounds. Also I'd add in a more detailed exposition of work on adaptivity (and separations with non-adaptivity) in the local model (for e.g. Joseph Mao Neel Roth 2019, Joseph Mao Roth 2020, Daniely Feldman 2019, Acharya Canonne Sun Tyagi 22 for e.g.). Other Strengths And Weaknesses: This paper cleverly uses existing tools, and has to jump through a number of technical hoops in order to get them to work for empirical quantile estimation (in the sequentially adaptive local model). Demonstrating the robustness of the Gretta-Price noisy binary search algorithm may be of independent interest. Other Comments Or Suggestions: 1) instead of adversarial monotonic NBS I'd call the problem robust monotonic NBS (adversarial makes it sound like an adversary can choose coins maliciously). 2) The indices in Lemma 3.1 and 4.1 are overloaded (i and j) which was confusing on first read. 
Also in Lemma 4.1 the set in the theorem statement should have c_j = 0 not c_i=0. 3) in the proof of Lemma 3.1, the comment on conditioning on pi(1),...pi(i) is confusing because conditioning on X_js does not imply conditioning on pi(1)....pi(i) (but you don't need to condition on that part of the permutation for the argument to go through). Questions For Authors: See questions on shuffle model in the 'theoretical claims' section. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for your interest in our paper and your valuable feedback! > The only one I was more unsure of was the shuffle model result, where I didn't fully understand the algorithm that they were using (the proof of Theorem 1.4 is rather vague on this point so fully specifying the algorithm would make it easier to verify). Thanks for pointing this out. We agree that the writing surrounding the proof of theorem 1.4 can be improved. We will update the paper by describing the algorithm in detail and including pseudocode before proving the theorem in the appendix. To clarify, the algorithm randomly partitions the users into batches of size roughly $\frac{1}{\epsilon^2}+\frac{1}{\alpha^2}$. For a given threshold $t\in [B]$, our analysis then shows that the shuffled randomized responses to whether the users are above or below $t$ suffice to determine the cdf at $t$ within additive $\alpha$ with sufficiently high probability. Our algorithm thus uses $\log_2(B)$ such batches to perform a binary search on $t$. This is similar to the algorithm given by Karp and Kleinberg [1] (section 1.2), but with extra care needed to ensure privacy. > Not essential, but Cohen et al. (STOC 2023) is the latest paper on quantile estimation/interior point in the central model that should be cited in the related work (where Kaplan et al. and others are mentioned) since it gets asymptotically tight bounds. Also I'd add in a more detailed exposition of work on adaptivity (and separations with non-adaptivity) in the local model (for e.g. Joseph Mao Neel Roth 2019, Joseph Mao Roth 2020, Daniely Feldman 2019, Acharya Canonne Sun Tyagi 22 for e.g.). Thanks for drawing these to our attention. We will include a discussion of these in the paper. In particular, we agree that a more detailed exposition on adaptivity in LDP should be included. 
> instead of adversarial monotonic NBS I'd call the problem robust monotonic NBS (adversarial makes it sound like an adversary can choose coins maliciously). We note that the algorithm for NBS works even in the case where an adversary can change coin probabilities maliciously (by at most some amount), hence the name. There is indeed no malicious adversary when we apply the NBS algorithm in our main protocol, but on the other hand, the changing coin probabilities have a complicated distribution. The strong adversarial assumption allows us to handle this, and it is unclear how to analyse our LDP protocol without such an assumption. >The indices in Lemma 3.1 and 4.1 are overloaded (i and j) which was confusing on first read. Also in Lemma 4.1 the set in the theorem statement should have c_j = 0 not c_i=0. Thanks for making us aware of this, we will fix the $c_i$ typo and streamline the use of free variables in these lemmas. > in the proof of Lemma 3.1, the comment on conditioning on pi(1),...pi(i) is confusing because conditioning on X_js does not imply conditioning on pi(1)....pi(i) (but you don't need to condition on that part of the permutation for the argument to go through). Thanks for this point. In our first writeup, we defined the martingale as a Doob martingale, wrt the random choices of $\pi(1),...\pi(2n)$, but you are right that with the current definition, we should just condition on $(X_j)_{j<i}$. We will update the paper.
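For concreteness, the batched binary search over thresholds that we describe in our answer on Theorem 1.4 can be illustrated with the sketch below. This is an illustrative simplification only (plain randomized response per batch, no privacy amplification by shuffling, and none of the union-bound bookkeeping from the proof); all function and variable names are ours, not from the paper:

```python
import math
import random

def randomized_response(bit: int, eps: float) -> int:
    # Report the true bit with probability e^eps / (e^eps + 1), else flip it.
    p_truth = math.exp(eps) / (math.exp(eps) + 1.0)
    return bit if random.random() < p_truth else 1 - bit

def ldp_median(data, B, eps, batch_size):
    # Binary search for the median over thresholds t in [0, B]. Each search
    # step consumes a fresh batch of users, so every user is queried at most
    # once (the protocol is sequentially adaptive).
    p_truth = math.exp(eps) / (math.exp(eps) + 1.0)
    users = list(data)
    random.shuffle(users)
    lo, hi, idx = 0, B, 0
    while lo < hi:
        t = (lo + hi) // 2
        batch = users[idx:idx + batch_size]
        idx += batch_size
        # Noisy fraction of the batch with x_i <= t ...
        noisy = sum(randomized_response(int(x <= t), eps) for x in batch) / len(batch)
        # ... debiased to estimate the true CDF at t.
        cdf_est = (noisy - (1.0 - p_truth)) / (2.0 * p_truth - 1.0)
        if cdf_est < 0.5:
            lo = t + 1
        else:
            hi = t
    return lo
```

The debiasing step inverts the expected randomized-response flip rate before comparing to $1/2$; with batches of size roughly $1/\epsilon^2 + 1/\alpha^2$ per step and $\log_2 B$ steps, this recovers the sample complexity stated in the theorem (up to the logarithmic factors discussed above).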
Summary: The paper studies the problem of finding quantiles with constraints of differential privacy. More specifically, it studies shuffle and local differential privacy. The authors proved that the algorithms have utility higher than any known algorithm for the problem and also proved that the local DP algorithm’s bounds are tight and it is impossible to improve it further. Claims And Evidence: The evidence sufficiently supports the claims. Methods And Evaluation Criteria: Methods and evaluations are appropriate for the problem at hand. Theoretical Claims: The proofs presented in the paper are correct. Experimental Designs Or Analyses: The experimental design seems valid. Supplementary Material: I haven't reviewed supplementary material. Relation To Broader Scientific Literature: The problem of estimating quantiles in a differentially private way is well-studied in the literature since it is a common routine for many data analysis algorithms. Local and shuffle differential privacy are also well-studied fields since they allow simplifying privacy guarantees of the systems using DP. However, getting high utility from these algorithms is often hard. Unfortunately, the LDP algorithms for quantile estimation were not studied enough, so I am really glad that this gap is getting closed. Essential References Not Discussed: The paper doesn't lack any specific reference. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for your interest in our paper!
Uncertainty-aware Preference Alignment for Diffusion Policies
Reject
Summary: This paper proposes Diff-UAPA, focusing on handling inconsistent and diverse offline preference data across different user groups. Building upon diffusion policies, the authors first propose a maximum likelihood estimation (MLE) setup for preference alignment and then augment it with the Beta prior to capture the uncertainty, which is learned through variational inference. Empirical results on some robotic manipulation tasks and D4RL tasks demonstrate the improved performance and stability of the proposed method under noisy feedback. Claims And Evidence: The claims are clear and the paper provides results to show improved performance and robustness in general. However, the evidence could be more convincing if the authors conduct more ablation studies to illustrate the contribution of the modeling of the beta distribution or expand the range of benchmarks (e.g. medium-replay data in D4RL benchmarks). Methods And Evaluation Criteria: The proposed method aligns well with the problem. However, as the main components are each taken from other established works [1] and [2], it would be helpful if the authors clarified how their approach differs from or improves upon these earlier works. [1] Wallace, B., Dang, M., Rafailov, R., Zhou, L., Lou, A., Purushwalkam, S., ... & Naik, N. (2024). Diffusion model alignment using direct preference optimization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 8228-8238). [2] Xu, S., Yue, B., Zha, H., and Liu, G. A distributional approach to uncertainty-aware preference alignment using offline demonstrations. In International Conference on Learning Representations, 2025. Theoretical Claims: I have roughly checked the correctness of the proofs and did not find any obvious errors. However, the main theoretical claims seem largely derived from [1], weakening the contributions. [1] Wallace, B., Dang, M., Rafailov, R., Zhou, L., Lou, A., Purushwalkam, S., ... & Naik, N. (2024). 
Diffusion model alignment using direct preference optimization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 8228-8238). Experimental Designs Or Analyses: Regarding the experiments, although they demonstrate the method’s superior performance, they are somewhat limited in scope. More extensive ablation or sensitivity analyses would reinforce the paper’s claims. For instance, visualizing trajectories (e.g., in Maze2d) could more intuitively demonstrate the approach’s strengths. Supplementary Material: I have roughly reviewed the codebase provided by the authors. It would be more user-friendly if they included a README with clear instructions on how to run the code. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: 1. It provides an iterative training procedure that can naturally adapt to changing user groups or inconsistent labeling in offline PbRL, which is beneficial for real-world applications. 2. The method creatively merges two lines of work based on max entropy RL. 3. The method shows demonstrable performance gains on several tasks Weaknesses: 1. The key methodological components appear to be largely drawn from existing works, **without much additional design**. For example, the text from lines 220–257 on page 5 is very similar to Section 4 in [1]. 2. While the experimental results cover several tasks, the breadth of testing could be expanded, perhaps with additional experiments or deeper analyses in the supplementary material. 3. There are a few typos (see other comments). [1] Wallace, B., Dang, M., Rafailov, R., Zhou, L., Lou, A., Purushwalkam, S., Ermon, S., Xiong, C., Joty, S., and Naik, N. Diffusion model alignment using direct preference optimization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8228–8238, 2024 Other Comments Or Suggestions: Typos: 1. 
Lack of space in Line 66: (Diff-UAPA),a 2. Misuse of \citep and \citet in Line 173. 3. Wrong superscript in Line 174. 4. Abuse of notations: Should it be T instead of k in Equations 5, 6, 11, 12 ... Questions For Authors: Could you clarify how the two main methodological components (MLE-based preference alignment and the Beta prior) differ in this work compared to [1] [1] Wallace, B., Dang, M., Rafailov, R., Zhou, L., Lou, A., Purushwalkam, S., Ermon, S., Xiong, C., Joty, S., and Naik, N. Diffusion model alignment using direct preference optimization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8228–8238, 2024 [2] Xu, S., Yue, B., Zha, H., and Liu, G. A distributional approach to uncertainty-aware preference alignment using offline demonstrations. In International Conference on Learning Representations, 2025. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer, we sincerely value your time and effort in evaluating our work. We have prepared comprehensive responses and clarifications to address each point you raised. We hope these responses can resolve your concerns. > Q1. The authors could conduct more ablation studies or expand the range of benchmarks (e.g. medium-replay data in D4RL benchmarks). **A1.** Thanks for your valuable suggestions. We conducted additional experiments, including more tasks and ablation studies. Please refer to E.1 and E.3 at https://anonymous.4open.science/r/Diff-UAPA-Rebuttal. --- > Q2. Could you clarify how the two main methodological components (MLE-based preference alignment and the Beta prior) differ in this work compared to [1][2]. **A2.** Thank you for raising the question. We would like to provide a more detailed discussion highlighting the differences between this paper and the two prior works as follows. 1. **Difference between [1]**. We would like to highlight that the alignment for diffusion policy via MLE is not our primary contribution. Instead, we primarily follow the approach developed for LLMs in [1], and adapt it to the RL setting in our work. While we adopt some techniques from [1] (we have correctly cited [1]), the problem setting, preference model, and derivation are different. Specifically, - **Problem setting.** [1] is formulated in the context of **LLM alignment**, where rewards are assigned exclusively at the final step, and preferences are based solely on the **final output** of the LLM. This approach optimizes the LLM’s ultimate output (see Eq. 14 in [1]). In contrast, Eq. 12 in this paper extends the framework to a **trajectory-wise** setting within the **RL field**. The key distinction is that in RL, we incorporate intermediate rewards. As a result, this paper formulates preference alignment based on the entire trajectory rather than focusing solely on the final state-action pair. 
- **Preference model and derivation.** [1] employs a **reward-based preference** model while regularizing the **KL-divergence** with respect to a reference policy (see Eq. 3 in [1]). In contrast, this work adopts a **regret (advantage)-based preference** model (see Eq. 5) within the **maximum entropy RL** framework. To achieve trajectory-wise alignment under the advantage-based preference model, we define the chain advantage function (i.e., Eq. 8) and compute its expected value with respect to the diffusion latent variable. 2. **Difference between [2]**. While both works utilize the Beta prior with a MAP objective to model uncertainty during the alignment process, this work differs significantly regarding the problem formulation, motivation, and approach to incorporating the Beta prior. Specifically, - **Problem formulation and motivation**. [2] addresses **epistemic uncertainty** from an offline preference dataset with imbalanced comparison frequencies across trajectories, where fewer compared trajectories induce greater uncertainty in reward prediction. In contrast, our work targets **aleatoric uncertainty** in human preferences, which arises from inconsistent preferences across different annotator groups for the same trajectory pair. In other words, by interpreting the parameters $\alpha$ and $\beta$ of the Beta distribution as counts of 'vote' and 'unvote' feedback, [2] models the difference in their absolute values across trajectories (e.g., Beta(10,2) vs. Beta(100,20), where the former shows **greater uncertainty due to fewer counts**). In contrast, this work uses the Beta prior to model the relative strength of $\alpha$ and $\beta$ for different $\tau$ (e.g., Beta(6,6) vs. Beta(10,2), where the former shows **greater uncertainty due to vote inconsistency**). - **Approach.** [2] adopts a **two-step** procedure, proposing a MAP objective for learning a **distributional reward model**. 
To achieve this, [2] introduces an iterative update rule that refines the reward model using the learned Beta model, which is then used for policy learning. In contrast, this work derives a unified MAP objective for directly aligning the **diffusion policy** in a **single-step** process. By maximizing the likelihood of the diffusion policy's output under the learned Beta distribution, the process guides the policy to align the estimated $\phi(\tau)$ with their prior distribution $p_0(\phi(\tau))$, which is more straightforward and efficient. --- > Q3. It would be more user-friendly if they included a README with clear instructions on how to run the code. **A3.** We have included a README file to assist with running the code. Please check https://anonymous.4open.science/r/Diff-UAPA. --- > Q4. Some typos. **A4.** Thank you for your valuable suggestions. We have corrected them accordingly. --- **References** [1] Diffusion model alignment using direct preference optimization. CVPR 2024 [2] A Distributional Approach to Uncertainty-Aware Preference Alignment Using Offline Demonstrations. ICLR 2025. --- Rebuttal Comment 1.1: Comment: Thanks for the explanation. I will keep my recommendation for now, but will keep watching the progress of other reviewers' interactions. --- Reply to Comment 1.1.1: Comment: Thanks for your acknowledgment, and we appreciate the time and effort you have taken to review our work. Your insightful feedback has been invaluable in refining our research.
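As a numerical footnote to the Beta-prior comparison in A2 above, the two kinds of uncertainty can be told apart by the variance of the corresponding Beta distributions (an illustrative sketch only; the (α, β) pairs are the examples quoted in the rebuttal, not learned parameters):

```python
# Closed-form variance of Beta(a, b): a*b / ((a+b)**2 * (a+b+1))
def beta_variance(a: float, b: float) -> float:
    s = a + b
    return a * b / (s ** 2 * (s + 1))

# Epistemic uncertainty (the setting of [2]): same vote ratio, fewer counts
few_votes = beta_variance(10, 2)     # clear majority, few comparisons
many_votes = beta_variance(100, 20)  # clear majority, many comparisons

# Aleatoric uncertainty (this paper's setting): split votes at similar counts
split_votes = beta_variance(6, 6)    # inconsistent annotator feedback

# Fewer counts -> more uncertainty; vote inconsistency -> even more
assert many_votes < few_votes < split_votes
```

With these numbers, Beta(100,20) is the most concentrated, Beta(10,2) is wider, and Beta(6,6) is the widest, matching the two contrasts drawn in the rebuttal.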
Summary: This paper proposes Diff-UAPA, an uncertainty-aware preference alignment method for diffusion policy, designed to address inconsistencies in preference pairs. Diff-UAPA uses a maximum a posteriori (MAP) objective to align the diffusion policy with a regret-based preference model, incorporating an informative Beta prior to mitigate these inconsistencies. Extensive experiments demonstrate the effectiveness of the proposed method. ## Update After Rebuttal: I raised my score from 2 to 3 as the authors' rebuttal addressed most of my concerns. Claims And Evidence: Yes, the design of the proposed learning framework has the potential to address inconsistencies and noisy preference labels in the dataset. Methods And Evaluation Criteria: Yes, the methods and evaluation criteria are reasonable for real-world applicability. Theoretical Claims: The derivation of each loss term appears sound and valid. Experimental Designs Or Analyses: There are many methods designed to be robust against noisy preferences, but the paper does not consider them. Including these methods would improve the soundness and validity of the experimental results. Supplementary Material: I did not review the supplementary material. Relation To Broader Scientific Literature: The PbRL framework, designed to address noisy and inconsistent preferences, is an important contribution to the broader scientific literature. Essential References Not Discussed: There are several papers addressing noisy preference labels, including approaches such as data filtering, label smoothing, and robust loss functions. Other Strengths And Weaknesses: Strengths - The paper is well-written and easy to follow. - The final performance of the proposed method appears strong. Weaknesses - The proposed method incorporates multiple existing components, such as using a diffusion policy instead of a simple MLP and a beta prior instead of a uniform prior.
This complexity makes the algorithm difficult to interpret, particularly in understanding the contribution of each component. A more detailed ablation study with a decoupled analysis of each effect would help clarify the impact of these choices. - The use of a diffusion policy and beta prior introduces computational overhead. A head-to-head comparison with other baselines would provide a clearer assessment of the method’s effectiveness. - The paper lacks robust preference learning baselines (e.g., data filtering, label smoothing, and robust loss functions) and does not explore different types of noisy preference setups. Incorporating such setups, as discussed in [1], would strengthen the evaluation. Reference: [1] Robust Reinforcement Learning from Corrupted Human Feedback. Bukharin et al., NeurIPS 2024. Other Comments Or Suggestions: Line 174 & 293: Use textual citations instead of parenthetical citations. Line 175: The in-text math equation appears to be incorrect. Questions For Authors: Please refer to the "Other Strengths And Weaknesses" section above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer, we sincerely value your time and effort in evaluating our work. We have prepared comprehensive responses and clarifications to address each point you raised. We hope these responses can resolve your concerns. > Q1. There are many methods designed to be robust against noisy preferences; including robust preference learning baselines (e.g., data filtering, label smoothing, and robust loss functions) would improve the experiments. In addition, the paper does not explore different types of noisy preference setups. **A1.** We sincerely thank the reviewer for highlighting this valuable concern. In response to your suggestions, we have conducted **additional experiments across three distinct noisy preference setups** as described in [1], including stochastic noise, myopic noise, and irrational noise. We also incorporated a wider range of **baseline methods that are robust to noisy preferences**, such as $R^3M$ [1], RIME [2], and UA-PbRL [3]. For detailed results and discussions, please refer to Section E.4 at https://anonymous.4open.science/r/Diff-UAPA-Rebuttal. --- > Q2. There are several papers addressing noisy preference labels. **A2.** Thank you for raising this concern. We would like to emphasize that, as defined in Definition 3.1, this paper considers the iterative preference alignment setting, where the preference dataset is updated in each round (potentially with inconsistencies), thus requiring the learned policy to adapt to the new preferences progressively. As shown in Proposition 4.1, the proposed prior model could capture the uncertainty within the process. In contrast, prior works on robust PbRL typically assume a static preference dataset without updates, making them not directly applicable to this iterative setting. However, we acknowledge the importance of robustness and its tight relationship with uncertainty.
In the revised paper, we have expanded the related works section to provide a more detailed discussion of existing studies on robustness in the context of noisy preference labels, including data filtering, label smoothing, and robust loss functions [1-7]. --- > Q3. The proposed method incorporates multiple existing components, such as using a diffusion policy instead of a simple MLP and a beta prior instead of a uniform prior. A more detailed ablation study with a decoupled analysis of each effect would help clarify the impact of these choices. **A3.** Thank you for your valuable suggestions. We acknowledge the importance of a more detailed ablation study and analysis, and have conducted **ablation studies on the two components** individually. For further details, please refer to Section E.3 at https://anonymous.4open.science/r/Diff-UAPA-Rebuttal. --- > Q4. The use of a diffusion policy and beta prior introduces computational overhead. A head-to-head comparison with other baselines would provide a clearer assessment of the method’s effectiveness. **A4.** Thank you for raising this concern. The additional computational overhead can primarily be attributed to the following components: - **Diffusion policy**. While diffusion policies incur higher computational costs than simpler architectures like MLPs, this overhead is partially offset by the action sequence prediction strategy in [8]. More importantly, diffusion models are widely adopted in RL for their strong generative capabilities and superior performance. In practice, training time for diffusion is roughly twice that of the transformer in our experiments. - **Beta model**. In this work, we use efficient techniques like the reparameterization trick to improve scalability. In practice, the computational cost of training the Beta model is **similar to training a reward model** in traditional PbRL. Since our method avoids training a reward model, the net added cost compared to conventional PbRL is small.
Additionally, the extra computational cost only slightly increases training time—by a few minutes—while the subsequent RL phase is much more demanding, often taking several hours. --- > Q5. Some typos. **A5.** Thank you for your valuable suggestions. We have corrected them accordingly and thoroughly checked the paper to avoid similar issues. --- **References** [1] Robust reinforcement learning from corrupted human feedback. NeurIPS 2024. [2] Rime: Robust preference-based reinforcement learning with noisy preferences. ICML 2024. [3] A Distributional Approach to Uncertainty-Aware Preference Alignment Using Offline Demonstrations. ICLR 2025. [4] Corruption robust offline reinforcement learning with human feedback. arXiv:2402.06734. [5] Sample selection with uncertainty of losses for learning with noisy labels. arXiv:2106.00445. [6] A note on dpo with noisy preferences \& relationship to ipo. 2023, [7] Distributionally Robust Reinforcement Learning with Human Feedback. arXiv:2503.00539. [8] Diffusion policy: Visuomotor policy learning via action diffusion. IJRR 2023. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for the detailed and thoughtful rebuttal. I gained a clearer understanding from the additional experiments. However, I still have some uncertainties regarding the advantages of the proposed method compared to a DPO framework combined with robustness formulations. First, the authors argue that the proposed method outperforms robust PbRL methods due to its iterative preference alignment design. However, I believe that robust PbRL methods can also be adapted to the iterative setting by explicitly or implicitly retraining their reward models across rounds. Second, as I understand it, the proposed method is effectively a two-stage approach: (1) learning the Beta prior and (2) updating the policy. If that is the case, I am curious about the benefit of this two-stage formulation over a unified one-stage approach such as DPO. 
Any clarification or additional insight on these points would help me better understand the effectiveness and distinct advantages of the proposed method. --- Reply to Comment 1.1.1: Comment: Dear Reviewer, we sincerely appreciate your constructive feedback and are grateful for the time and effort you've invested in reviewing our work. We hope the following response can address your remaining concerns in two points. > *"First, the authors argue that the proposed method outperforms robust PbRL methods due to its iterative preference alignment design. However, I believe that robust PbRL methods can also be adapted to the iterative setting by explicitly or implicitly retraining their reward models across rounds."* **Response.** Thanks for your question. We would like to provide a more detailed explanation of the distinct mechanisms between the robust PbRL methods and our method, particularly in how each addresses the "outlier" samples in the dataset. - Robust PbRL methods (e.g., data filter), generally aim to **exclude noisy or inconsistent data from the training process**. While this may help reduce the impact of outliers, it also risks discarding valuable information if certain data points are mistakenly deemed as outliers. This filtering approach can result in lost opportunities for learning from diverse, potentially useful preferences. - In contrast, our method employs an uncertainty-aware framework, which utilizes the Beta prior for handling uncertainties. **Rather than discarding uncertain data points, our approach assigns them lower confidence values**, which effectively down-weights their influence on the policy learning. This means that outliers or uncertain samples are not removed outright but are treated more conservatively. By modeling uncertainties, our method ensures these samples contribute to learning while minimizing their negative impact on the policy. In the iterative alignment process, some data points may shift in each round. 
By simply "retraining their reward models across rounds", the reward model is very likely to disregard the "outliers" within the single round. However, the proposed method captures these potential inconsistencies **throughout the entire learning process** (i.e., across rounds, updated iteratively), enhancing overall performance. --- > *"Second, as I understand it, the proposed method is effectively a two-stage approach: (1) learning the Beta prior and (2) updating the policy. If that is the case, I am curious about the benefit of this two-stage formulation over a unified one-stage approach such as DPO."* **Response.** Thank you for your question. We appreciate the opportunity to clarify the benefits of our two-stage approach. - The two-stage formulation in our method—(1) learning the Beta prior and (2) updating the policy—offers a clear advantage in terms of **uncertainty modeling**. The Beta prior in the first stage helps to explicitly **capture and account for the uncertainties arising from inconsistent human preferences**. This prior serves as **a guidance for subsequent policy updates**, allowing the method to effectively handle noisy or conflicting data without discarding valuable information. - In contrast, a one-stage approach such as DPO **directly optimizes the policy without explicitly modeling the underlying uncertainties**. While DPO can be effective in many cases, it may struggle when dealing with noisy, inconsistent, or evolving preferences. Our approach allows for a more principled treatment of these challenges by handling the uncertainty (via the Beta prior) in the first stage, which leads to more stable and reliable policy updates in the second stage. Thus, the two-stage approach used in this work provides a more structured and robust framework, especially in dynamic environments with varying levels of preference inconsistency. In addition, as discussed in the previous rebuttal, **the computational cost of training the Beta model is slight**. 
We believe this separation of concerns is what enables our method to achieve superior performance in practice. Thank you once again for your thoughtful question and the opportunity to elaborate further.
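The down-weighting behaviour described in the first response above can be sketched in a few lines (a toy illustration only; the variance-based weighting rule and every name here are assumptions, not the paper's actual objective):

```python
def beta_variance(a: float, b: float) -> float:
    s = a + b
    return a * b / (s ** 2 * (s + 1))

def confidence_weight(a: float, b: float) -> float:
    # Map Beta variance into (0, 1]: inconsistent votes (high variance)
    # yield a lower weight. 0.25 is the supremum of the Beta variance.
    return 1.0 - beta_variance(a, b) / 0.25

# Per-trajectory losses are scaled rather than filtered out, so uncertain
# samples still contribute to learning, just with reduced influence.
losses = {"tau_1": 0.8, "tau_2": 0.8}
priors = {"tau_1": (10, 2),  # consistent annotator votes
          "tau_2": (6, 6)}   # conflicting annotator votes

weighted = {k: losses[k] * confidence_weight(*priors[k]) for k in losses}
assert weighted["tau_2"] < weighted["tau_1"]  # inconsistent pair down-weighted
```

The point of the sketch is the contrast with data filtering: the inconsistent pair keeps a nonzero weight instead of being dropped.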
Summary: This paper proposes a method to align RL policy using human demonstration and preference feedback. The method works as follows: (1) learn a reference policy from a set of human demonstration trajectories via behavior cloning; (2) learn a prior distribution about the probability that a trajectory is preferred using a set of human preferences of pairs of trajectories; (3) align a discrete-time RL policy, parameterized by θ, where each action is running a diffusion policy for a fixed amount of time. The key contribution is in step (2), where it learns a Beta prior distribution from the preference dataset that maps one trajectory to the probability that it is preferred, and this distribution is represented by a transformer-based neural network. The motivation of this contribution is to allow the human preference dataset to be inconsistent, potentially due to the different populations who provide preference data. Results are based on comparing the 2 implementations of the proposed method with 6 benchmark methods for simulated robotic tasks (3 tasks in Robomimic, 1 long horizon Franka Kitchen, 2 environments in D4RL) with real human preferences. The authors reverse some of the preference labels in the human preference dataset to simulate inconsistent preferences. The proposed method always achieved better performance. Also, as the number of inconsistent preference labels increases, performance drops, but the proposed method still performs best relative to the baselines. ## update after rebuttal I appreciate the authors' Rebuttal in addressing my concerns. I have adjusted my score accordingly. Claims And Evidence: * The main claim is that the proposed method can use inconsistent human preference data to learn RL policies where each action is one diffusion policy. This is supported by the empirical simulation study. * One limitation is that the proposed method also uses a reference policy, which is trained from some human demonstration data via behavior cloning.
So, the proposed method uses both the demonstration data and the preference data. However, when I first read this paper, I didn't realize this until I saw Alg.1. * Since I didn't expect that the proposed method also uses a reference policy, I got confused when reading Sec.3 and Sec.4. So it would be great to clarify, starting from the introduction, that the proposed method requires both types of human data. * A small claim made in the introduction is that one advantage of the proposed method is to "bypass the reward learning". This makes sense because the proposed method learns a prior distribution about the probability of a trajectory being preferred. However, it does not seem to be backed up by the empirical study. It might be useful to compare the performance of having a reward function vs not, or cite prior literature to show this. * The paper is motivated by the inconsistency of human preferences. However, the empirical study seems to use datasets of consistent human preferences and convert them to inconsistent datasets. It would be stronger to motivate this work by choosing problem domains where human preference datasets are originally inconsistent. Methods And Evaluation Criteria: Yes. The empirical study compares the proposed method with baselines to show the effectiveness of learning RL policies from demonstration and preference data. Theoretical Claims: I checked the proof of the key derivation for the proposed method (Prop.4.1), which is correct. I didn't check the derivation for Eq.12 and Eq.16. Experimental Designs Or Analyses: Yes. The simulation study makes sense. Supplementary Material: I checked App.C, which is the proof of Prop.4.1. Relation To Broader Scientific Literature: This paper proposed a new method for RL from preference feedback, where each discrete RL action is a diffusion policy. It situates well in the field of using diffusion policy and RL for robotic applications. Essential References Not Discussed: No.
Other Strengths And Weaknesses: ## Strength * The motivation is strong. * The method is sound. ## Weakness * One key weakness is that the paper is not very accessible, with several inconsistencies that make it hard for me to follow. * Eq.2 is very hard to understand. It has notations, such as ε_ref, that are not defined. I could not understand Eq.2 before understanding Eq.12. * Related to this, it seems that one loss function used in the proposed method, Eq.12, is a multi-step extension of Eq.2. If this is actually the key innovation behind Eq.12, then it would be great to clarify this early on when presenting Eq.2. * Sec.4.2 is also a bit hard to follow. I think it would be very helpful to explain what the prior p0(ϕ(τ)) and p0(A^πθ(τ)) means. When reading the paper, I got a bit confused about the notation p0(ϕ(τ)), because ϕ(τ) is a probability, and I guess p0(ϕ(τ)) is the density function about the probability ϕ(τ). It might be helpful to clarify that ϕ(τ) is the random variable, and p0(ϕ(τ))(.) is the density function. * Eq.6 is obtained by plugging Eq.4 into Eq.5. However, this plugging-in operation relies on that the human's reference policy is actually π_ref. However, everyone who contributes to the preference dataset might have different reference policies, and these reference policies might be different from π_ref (learned via behavior cloning). It would be more convincing to motivate this assumption. * Eq.13 is an assumption. It would be more convincing to discuss the implication or motivation of such an assumption. Other Comments Or Suggestions: NA Questions For Authors: * Line 278 says the equation: P_MAP(A(τ)) ∝ p0(A(τ)) · P_MLE(A(τ)). I am a bit confused. Is it directly coming from Bayes rule? * Sec.4.3 learns the prior p0(ϕ(τ) | D_pref). I wonder after you learn this prior as a transformer, how would you plug this learned function into Eq.16 to further optimize the policy? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer, we greatly appreciate your constructive comments. We have seriously considered your suggestions, and we hope the following response can address your concerns: > Q1. The proposed method uses both the demonstration data and the preference data. **A1.** Thank you for your comment. As shown in Definition 3.1 and Figure 1, the preference dataset is obtained by comparing samples from the trajectory dataset, so no additional demonstration dataset is required. Our pre-training is performed only with the trajectories in the preference dataset, following standard PbRL practices where pre-training is widely adopted (check "Pretraining" paragraph in [1]). To enhance clarity, we have updated the Introduction section to clarify this point. --- > Q2. One advantage of the proposed method is to "bypass the reward learning"...It is useful to compare the performance of having a reward function. **A2.** Thank you for raising this concern. The advantages of bypassing reward learning have been demonstrated in many studies, including robotics [1] and LLMs [2]. To better address your concern, we conducted **additional experiments with two-step PbRL baselines**. Please refer to E.2 at https://anonymous.4open.science/r/Diff-UAPA-Rebuttal. --- > Q3. It would be stronger to motivate this work to choose some problem domains where human preference datasets are originally inconsistent. **A3.** Thank you for highlighting this. The experiments on D4RL (Section 5.2) utilize real human preferences from the Uni-RLHF benchmark, which were collected from 100 annotators with diverse backgrounds. We believe these real human datasets are originally inconsistent. --- > Q4. It seems that Eq.12 is a multi-step extension of Eq.2...it would be great to clarify this early. **A4.** Sorry for the misunderstanding. $\epsilon_\theta$ represents the optimized diffusion policy parameters, and $\epsilon_{ref}$ represents the reference diffusion parameters.
Regarding Eq.12, it can be viewed as a trajectory-wise extension of Eq.2 (step-wise) in the RL setting. To ensure clarity, we have updated the paper with more detailed descriptions when presenting Eq. 2. --- > Q5. Sec.4.2 is hard to follow...It might be helpful to clarify that $\phi(\tau)$ is the random variable, and $p_0(\phi(\tau))(\cdot)$ is the density function. **A5.** We apologize for any confusion. As defined in the sentences following Eq. 14 and Eq. 15, $\phi(\tau)\in(0,1)$ is the probability that a trajectory $\tau$ wins against the average candidate in the dataset, and it is a **Bernoulli random variable**. The prior $p_0(\phi(\tau))$ is indeed the **probability density function** of $\phi(\tau)$, which follows a Beta distribution, serving as the conjugate prior for the Bernoulli variable $\phi(\tau)$. It reflects our initial belief on the **winning probability** of different trajectories. Based on it, $p_0(A^{\pi_\theta}(\tau))$ defines our initial belief on the **strength** of different $\tau$. We have enhanced our presentation accordingly. --- > Q6. Eq.6 is obtained by plugging Eq.4 into Eq.5...However, everyone who contributes to the preference dataset might have different reference policies. **A6.** Thank you for raising the point. We would like to emphasize that, when learning from a single preference dataset, the reference policy acts as a third-party baseline, not necessarily aligned with personal preferences. It can be any fixed policy used for applying constraints or regularization during training. For example, in LLMs, the reference policy is usually obtained via Supervised Fine-Tuning, while in RL, it's often derived from BC. --- > Q7. Eq.13 is an assumption...discuss the motivation. **A7.** Thanks for your valuable advice. The assumption is based on the fact that preferences align with the negative discounted regret (i.e., $regret(\tau)=\sum_{t}\gamma^t regret(s_t,a_t)=-\sum_{t}\gamma^t A(s_t,a_t)$). 
Intuitively, we can define $A(\tau)=\sum_{t}\gamma^t A(s_t,a_t)$ to represent the negative regret $-regret(\tau)$. More details can be found in [1]. --- > Q8. Line 278...Bayes rule? **A8.** Yes. --- > Q9. Sec.4.3 learns the prior $p_0(\phi(\tau)$...how to plug it into Eq.16 to further optimize the policy? **A9.** Thank you for your question. The prior model $p_0(\phi(\tau))$ encodes our learned belief about $\phi(\tau)$ for a given trajectory $\tau$. During policy optimization, for each input $\tau$, the diffusion policy computes $\phi(\tau)$ based on its definition, while the prior model outputs $[\alpha_\tau, \beta_\tau]$, which defines the target Beta distribution of $\phi(\tau)$. By maximizing the likelihood of the **diffusion model's output under this Beta distribution**, the process guides the policy optimization. --- **References** [1] Contrastive preference learning: Learning from human feedback without reinforcement learning. ICLR 2024. [2] Direct preference optimization: Your language model is secretly a reward model. NeurIPS 2023.
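To make the mechanism in A9 concrete, here is a minimal numerical sketch of scoring a policy's φ(τ) estimate under the prior model's Beta output (the parameter values and φ estimates are made up for illustration; this is not the paper's implementation):

```python
import math

def beta_log_pdf(x: float, a: float, b: float) -> float:
    # Log density of Beta(a, b) at x in (0, 1)
    log_norm = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    return log_norm + (a - 1) * math.log(x) + (b - 1) * math.log(1 - x)

# Hypothetical prior-model output [alpha_tau, beta_tau] for one trajectory:
# a belief that tau wins most of its comparisons (mode at 0.9).
alpha_tau, beta_tau = 10.0, 2.0

# Two hypothetical phi(tau) values computed from the diffusion policy.
phi_aligned, phi_misaligned = 0.83, 0.30

# Maximizing the log-likelihood under the Beta prior rewards estimates
# near the prior's mode, which is how the prior guides policy updates.
assert beta_log_pdf(phi_aligned, alpha_tau, beta_tau) > \
       beta_log_pdf(phi_misaligned, alpha_tau, beta_tau)
```

This also matches the Bayes-rule reading in A8: in log space, the MAP objective simply adds this prior term to the MLE term.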
Data-Juicer Sandbox: A Feedback-Driven Suite for Multimodal Data-Model Co-development
Accept (spotlight poster)
Summary: The paper introduces Data-Juicer Sandbox, an open-source suite that supports co-development of multimodal data and models. It proposes a feedback-driven approach in which data processing and model training are iterated together, rather than in isolation. A central component is the “Probe-Analyze-Refine” workflow, where data are filtered or enriched using various operators, then tested with smaller reference models. Feedback is collected on both data characteristics (such as quality and diversity) and model metrics (across multiple benchmarks), leading to insights on how specific data operations affect performance. These insights guide incremental recipe-building steps, combining or scaling up promising data operations to improve model outcomes at a lower computational cost. Through experiments on three multimodal tasks (I2T generation, T2V generation, and I-T pre-training), the paper reports consistent gains in performance (including a top entry on the VBench leaderboard). The work further highlights how these data-driven refinements can transfer to larger-scale models and training runs, showing data efficiency gains and reinforcing the synergy between well-designed data curation and model training. Claims And Evidence: The paper’s core claims—that a unified sandbox improves multimodal data and model co-development by (1) revealing useful data operations via small-scale experiments, (2) reducing trial-and-error costs through incremental “probe-analyze-refine,” and (3) boosting performance on tasks such as image-to-text, text-to-video, and image-text pre-training—are mostly backed by clear experimental outcomes. For example, the text-to-video experiments include leaderboard results on VBench, which supports the performance gains they claim. The authors also provide cost comparisons and scaling analyses, lending some credibility to the claim of more efficient and systematic co-development. 
Potential gaps lie in: 1) Generality: Most evidence comes from three tasks; more diverse tasks or larger sets of models could further strengthen or limit the conclusions. 2) Comparative baselines: Although they compare with existing methods (e.g., Runway’s Gen-3 or other pretraining strategies), it is not always clear that the baseline setups or alternative data-processing methods were exhaustively explored. Overall, while the main claims are supported by strong empirical findings on chosen tasks, further evidence across broader data types or tasks could make the presented framework’s benefits even more convincing. Methods And Evaluation Criteria: The paper’s proposed probe–analyze–refine workflow aligns well with the intended goal of iteratively improving data and model quality. The authors use task-relevant metrics (e.g., MMBench for image-to-text, VBench for text-to-video, and standard contrastive-learning metrics for image-text pretraining), which are fairly standard and reflect the models’ target abilities. These evaluation criteria effectively show how data interventions translate into performance gains. A minor limitation is that most experiments are within three canonical tasks, so the broad generalizability of the chosen metrics or methods could be explored further. Even so, the approach is well-reasoned and uses suitable benchmarks to highlight measurable improvements. Theoretical Claims: The authors do provide a concise statistical bound (in Appendix C) that links performance changes observed in small-pool experiments to those expected at full scale, using a standard concentration inequality. Their derivation assumes (1) a simple i.i.d. sampling scheme from the original dataset and (2) that the relevant performance changes can be bounded within a fixed interval [a, b].
Under these assumptions, their stated probability bound, $ P[\Delta_{\text{pool}} - \mathbb{E}[\Delta_{\text{full}}]\ge\epsilon] \le \exp\bigl(\tfrac{-2\epsilon^2}{(b-a)^2}\bigr), $ is a straightforward application of Hoeffding’s inequality. The argument appears correct on its face, though it is relatively high-level and does not deeply address issues like data distribution shifts or non-i.i.d. data. In other words: - If the dataset and performance metrics meet the i.i.d. assumptions, the bound is formally correct. - If there is strong distribution mismatch or outlier-heavy data, the theoretical guarantee would require additional considerations. The paper does not provide deeper or more elaborate proof, and no immediate errors stand out. The result is intended mainly as a heuristic justification for why small-pool insights might translate to the full dataset. Experimental Designs Or Analyses: The paper’s “probe–analyze–refine” workflow is generally well-conceived: they start with single-operator data pools to observe isolated effects, combine top-performing operators, and then scale data. This incremental approach reduces trial-and-error costs while producing comparative feedback. They also use a consistent set of baseline models and hyperparameters so that only the effect of data changes is measured. One caution is that they rely on single-pass experiments (with no mention of repeated runs) to quantify the impact of each operator or recipe. While this is typical for many large-scale training studies (due to resource limits), it means we don’t see variance estimates that might highlight sensitivity to random seeds or sampling. Additionally, there is an assumption that small-scale subset behavior extrapolates to larger scales. They do provide a brief theoretical justification and some empirical scaling analyses, but distribution shifts or non-i.i.d. sampling could reduce perfect transfer. 
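The Hoeffding-style bound quoted under Theoretical Claims can be sanity-checked with a short simulation (illustrative only; note that for the mean of n i.i.d. draws the exponent carries a factor of n, i.e. exp(-2 n ε² / (b - a)²)):

```python
import math
import random

def empirical_tail(n: int, eps: float, trials: int = 20000, seed: int = 0) -> float:
    # Estimate P[mean of n Uniform(0,1) draws >= E[X] + eps] by simulation.
    rng = random.Random(seed)
    exceed = sum(
        1 for _ in range(trials)
        if sum(rng.random() for _ in range(n)) / n - 0.5 >= eps
    )
    return exceed / trials

n, eps = 50, 0.1                         # X bounded in [a, b] = [0, 1]
hoeffding = math.exp(-2 * n * eps ** 2)  # exp(-2 n eps^2 / (b - a)^2)

# The simulated tail probability never exceeds the bound.
assert empirical_tail(n, eps) <= hoeffding
```

The bound is loose here (the simulated tail is well below exp(-1) ≈ 0.37), consistent with its role in the paper as a heuristic guarantee rather than a tight estimate.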
In short, the design is sensible and largely valid for controlled comparisons, but it would be stronger with additional runs or variance reporting to confirm each operator’s reliability in improving results. Supplementary Material: I looked over the appendices (A–E), focusing on the infrastructure details in Appendix B (including how the sandbox integrates with Data-Juicer), the cost analysis in Appendix C, the experimental setups in Appendix D, and the additional tables/plots in Appendix E (where most extended results and operator rankings are listed). These supplementary sections provide useful clarifications on implementation, extra experiments, and technical justifications. Relation To Broader Scientific Literature: ### Multimodal Model Training: Traditional vs. Co-Development - **Traditional Training:** Treats data and model design separately; data is fixed before training, limiting feedback. - **Co-Development:** Data-Juicer integrates data and model training through an iterative “Probe-Analyze-Refine” workflow, improving both simultaneously. --- ### Data-Centric AI vs. Data-Juicer - **Traditional Data-Centric AI:** Focuses on dataset curation and filtering before training, lacking real-time feedback. - **Data-Juicer:** Iteratively refines datasets using model performance, optimizing data selection dynamically. --- ### Cost-Efficient Co-Development - **Challenges:** Large models require costly computation; efficient iteration is crucial. - **Data-Juicer’s Solution:** Uses small-scale experiments before full training, reducing unnecessary compute. --- ### Optimizing Data Selection - **Data-Juicer’s Insight:** Task-specific data strategies—image-text models benefit from diversity, video models need high-quality samples. - **Compared to Other Methods:** Unlike RL and gradient-based selection, Data-Juicer enables empirical testing of data modifications. Both reinforce feedback-driven data selection, optimizing dataset quality alongside model training. 
Essential References Not Discussed: The paper does not cite Ling et al. (2025), "Diversity as a Reward" (DAAR), which introduces a structured framework for optimizing data diversity during fine-tuning. DAAR provides an automated approach to balance inter-domain and intra-domain diversity, using model-driven feedback to guide data selection. This contrasts with Data-Juicer's human-guided, iterative approach to dataset refinement. Additionally, JEST (Evans et al., NeurIPS 2024) formulates data selection as a dynamic optimization problem, significantly reducing training iterations by prioritizing high-value samples. This aligns with Data-Juicer’s goal of efficient data-model co-development but takes a different approach by embedding selection directly into the training loop.

Ling, Zhenqing, Daoyuan Chen, Liuyi Yao, Yaliang Li, and Ying Shen. "Diversity as a Reward: Fine-Tuning LLMs on a Mixture of Domain-Undetermined Data." arXiv preprint arXiv:2502.04380 (2025).
Evans, Talfan, Nikhil Parthasarathy, Hamza Merzic, and Olivier Henaff. "Data curation via joint example selection further accelerates multimodal learning." Advances in Neural Information Processing Systems 37 (2024): 141240-141260.

Other Strengths And Weaknesses:

Strengths:
1. The paper proposes a tangible, open-source sandbox framework that addresses data-model co-development – a recognized but underexplored area in multimodal research.
2. The “Probe–Analyze–Refine” strategy is clear, methodical, and well-presented. It extends beyond high-level theory by offering a practical pipeline (with code) that other researchers can use.
3. Empirical results underscore the potential cost savings of systematically iterating on small data subsets before scaling up – valuable in a field where full-model training costs can be huge.
4. Demonstrations in image-to-text, text-to-video, and image-text pretraining are reasonably diverse, strengthening the paper’s impact and showing general applicability.

Weaknesses:
1.
No Formal Validation of Combining Operators: While the paper clearly shows individual operators’ effects, the process of stacking multiple top operators sometimes degrades performance. It remains unclear why certain combinations interact negatively. For instance, a multi-operator strategy could preserve only a narrow slice of the data distribution or inadvertently filter out complementary samples. A deeper exploration of these “operator interaction” phenomena (e.g., synergy vs. conflict) would strengthen the paper’s argument that combining top operators is not trivially beneficial.
2. Limited Empirical Repetitions: The paper often uses single-run experiments for each operator/recipe, presumably to control computing costs. However, large-scale training can show variance in results (random seeds, sampling). Without repeated trials or uncertainty estimates, it is difficult to assess how reproducible (or fragile) these improvements are. The text occasionally notes that cost constraints make multiple runs challenging, but even small-scale tests repeated 2–3 times would add confidence that results do not hinge on luck.
3. Unclear Impact of Distribution Shifts: The framework assumes that small-pool “probe” experiments reliably anticipate behavior at full scale. Yet in practice, distribution shifts or domain mismatches could exist between the reduced sampling and the real-world training set. In particular, data rebalancing or filtering might produce biases that only manifest when the entire large dataset is used. More discussion or case studies on how best to mitigate such shifts would be valuable.

Other Comments Or Suggestions:
1. While the blueprint for combining data-centric methods (e.g., filtering, annotation generation) with model-centric improvements is very appealing, the Data-Juicer Sandbox as presented does not clearly include many mechanisms for the model’s evolving needs.
In other words, there could be additional iterative routines where model improvements inform new data requests or automated prompt adjustments. Explaining how the pipeline might be extended to facilitate continuous model-data evolution would strengthen the paper’s vision.
2. Consider adding a “best practices” subsection in the appendix, summarizing lessons about which operators and metrics are especially critical under different conditions. This might serve as a quick reference for new sandbox users.
3. The text occasionally references “industry-level” or “production-ready” usage; clarifying any special hardware or cluster requirements would make it clearer how to adopt this in large-scale settings.
4. Minor editorial note: Some references in the bibliography appear slightly inconsistent in formatting (e.g., capitalization in titles). It may help to standardize them.

Questions For Authors:
1. Why Are All Operators Filters? The paper positions several “operators” (OPs) as different filtering or perplexity-based selection procedures. It is not fully clear whether the pipeline includes other augmentation methods or more diverse operators (e.g., paraphrasing Q&As, image transformations). Highlighting (or adding) such diversity might boost the effectiveness and generality of this work.
2. Could you elaborate on why the highest-performing individual operators sometimes fail to improve results when combined? Are certain filters redundant, or do some systematically remove data that others rely on? How might future versions of the sandbox help diagnose when operators conflict?
3. Do you have preliminary variance estimates (e.g., dev set performance standard deviations) for a small subset of your probe experiments? If so, how large are the fluctuations? Knowing approximate variability would help judge whether a single-run improvement of, say, +1% is robust or possibly random.
4.
Have you found any real-world cases where an operator looks beneficial in small-pool settings but fails to extrapolate at full scale (or vice versa)? If so, how do you suggest researchers detect and mitigate such mismatches early on?
5. One appealing application would be continuously refining data as new samples appear (e.g., updated data streams). Would your sandbox accommodate dynamic additions to the dataset, or is it currently designed only for static snapshots?
6. You mention “industry-level” usage in passing. Do you envision a canonical workflow for large organizations to adopt Data-Juicer Sandbox, or do you foresee each group creating highly customized pipelines? A brief discussion of how you see this platform evolving in real deployments would be illuminating.
7. Because reproducibility and generalization are important for future work, could you share more implementation details or guidelines to ensure that this framework can be adapted to different tasks? How can you ensure the complex codebase is easy to adapt and reuse?
8. I intend to give a rating of "accept" due to the potential impact of this work. However, I am still concerned about the reusability of this work. I will consider raising my rating further if all concerns are addressed properly.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your time, thorough evaluation and valuable feedback! Below, we address all your raised Weaknesses (W), Comments (C), Questions (Q) and Suggestions (S).

> New results: https://anonymous.4open.science/r/icml_submission8414-7C89/rebuttal.pdf

---

## [C on Claims & Methods] "Potential Gaps in Generality ... are within three tasks"

We have added new experiments to enhance generalization verification, including scaling on the InternVL series (1B, 2B, 4B, 26B) and a new captioning task. For details, see reply to Reviewer CLJN.

---

## [C on Variance, W2 & Q3] "... Without repeated trials ... "

You may have missed this detail (e.g., error bars in Fig. 3). We ran experiments 2–5 times but included the std in the appendix (lines 901, 991, Table 6) due to space limits. Note the reported performance reflects changes relative to *random pools*. Consistent *positive values* across trials demonstrate reproducibility and generalizability.

---

## [C on Literature; S1 & Q5] DaaR & JEST; "Iterative routines..."

Thank you for pointing out these works! DaaR was released (Feb 5) after the ICML submission deadline (Jan 9) and already cites and builds upon our work (see its Sec B.4). JEST's dynamics and the iterative sandbox you mentioned align with future directions we aim to explore. We will cite both works in the final version and expand discussions accordingly.

Besides, during the rebuttal, we added new experiments supporting iterative training: starting with ckpt0 → identifying recipe1 → training ckpt1 → refining recipe2 → training ckpt2. Interestingly, while recipes evolved, ckpt2 still improved performance. For details, see reply to Reviewer 3zp6.

---

## [W1 & Q2] OP Combination: Synergy vs. Conflict, Failure vs. Detection

Great suggestion! In Appendix E.3–E.5 (Table 8, Figs. 5–8), we provide quantitative analyses using Pearson correlations and entropy. We plan to extend this suite with automated and visualized lineage analysis tools.
For example, t-tests can detect significant changes in filter outputs' stats dimensions after adjacent OPs. A practical use case: if an image-text matching filter notably reduces text length stats, it may indicate noisy captions affecting accuracy.

---

## [W3 & Q4] Impact of Distribution Shifts

Thank you for this important comment! While addressing distribution shifts robustly requires substantial effort (left for future work), we highlight:
1. Empirically, our results already cover challenging distribution shifts (Table 5), where small-scale recipes benefit larger scales across model architectures, data, & compute.
2. Methodologically, the framework’s open-source extensibility allows flexible customization of workflows while reusing data/model infra (see Sec. B.1–B.3). New hooks can integrate advanced strategies like dataset distillation [1] and intermediate tensor shaping [2]. Tools mentioned in [W1, Q2] also offer intuitive signals.

[1] Large scale dataset distillation with domain shift, ICML'24
[2] Scaling for Training Time and Post-hoc Out-of-distribution Detection Enhancement, ICLR'24

---

## [S2~S4] Writing Improvements

Thank you! We have provided hardware details (lines 927–992, 1037–1041) and will expand the "best practices" subsection with summarized OP-specific labels from Tables 6–7 and Figures 2, 9. We’ll also unify reference formatting as suggested.

---

## [Q1] OPs Beyond Filters (e.g., Paraphrasing Q&As, Image Transformations)

Insightful question! Per your request, we conducted new experiments on two representative Mappers using MGM:
- image_diffusion_mapper: Regenerates a new image for each sample based on the caption.
- image_captioning_mapper: Recaptions the existing image in each sample.

`Results`
- As shown in *Table 3* and *Fig. 3* of the new PDF, *image_diffusion_mapper* significantly improves performance compared to the original images.
- Fig. 3 highlights a visual example before and after processing with these mappers.
Core objects in captions are marked in red. We observe: (1) Diffusion models effectively locate and better present key objects (e.g., "setting sun") in generated images. (2) They also remove redundant information (e.g., watermarks, link text), likely contributing to performance gains.

---

## [Q6 & Q7] Deployment & Sys Details

- Appendix B discusses the architecture, capability factory, behavior hooks, extensibility. This decoupling enables rapid adaptation to new tasks while reusing workflows, helping us do quick new exps during rebuttal. The work is also already deployed in several industrial scenarios (specific entities undisclosed due to anonymity).
- To simplify complexity, we are service-ifying it: RESTful APIs & MCP servers for OPs, unified & transparent environment switch for third-party modules. These efforts reduce programming complexity.

---

## [Q8] Reusability & Rating

We will incorporate these improvements into the final version and hope they strengthen your confidence. Thanks again!

---

Rebuttal Comment 1.1: Comment: Thanks for the author's responses. Most of my concerns have been addressed. However, one of my personal questions has been ignored, i.e., "While the blueprint for combining data-centric methods with model-centric improvements is very appealing, the Data-Juicer Sandbox as presented does not clearly include many mechanisms for the model’s evolving needs. In other words, there could be additional iterative routines where model improvements inform new data requests or automated prompt adjustments. Explaining how the pipeline might be extended to facilitate continuous model-data evolution would strengthen the paper’s vision." Specifically, authors claimed that they present a new sandbox suite tailored for integrated model-centric and data-centric developments. So where is the model development part? I haven't seen any implementation in Data-Juicer Sandbox proposed for the convenience of model development yet.
If my understanding is biased, please point it out also.

---

Reply to Comment 1.1.1: Comment:

> Thanks for the author's responses. Most of my concerns have been addressed. However, one of my personal questions has been ignored ...

We appreciate your reply! Due to space limits, we consolidated answers under "[C on Literature; S1 & Q5]," which may have been too brief. Below, we address your question in more detail.

---

## Clarification on Model-Dev

> So where is the model development part? I haven't seen any implementation in Data-Juicer Sandbox proposed for the convenience of model development yet. If my understanding is biased, please point it out also.

We clarify that foundation model development involves three core phases: model selection (architecture), training (pre/post-training), and evaluation (foundational/downstream tasks). To streamline these stages, our sandbox acts as middleware that decouples and reuses existing infra from the open-source community into all-in-one solutions.

### Design Perspective

As detailed in Appendix B, we employ a modular architecture for these phases:
- *Bottom Layer*: Factory classes integrate diverse training and evaluation libraries.
- *Middle Layer*: Encapsulates model-agnostic behaviors like probe, train & evaluation hooks.
- *Top Layer*: Implements workflows via ordered job lists, enabling users to adjust configuration files to define scenarios, hooks, metrics, and workflows.

This design reduces the need to learn disparate libraries or start from scratch, allowing flexible reuse and combination of representative data-model dev modules:
- For example, our experiments cover rapid prototyping across 5 architectures, ~80 metrics, and diverse pre-/post-training setups.
- Thanks to its middleware design, the sandbox evolves with its underlying libraries, amplifying its utility through open-source contributions. For instance, with ModelScope-Swift & OpenCompass, it indirectly supports 500+ LLMs & 100+ eval datasets.
The proposed "probe-analyze-refine" workflow is just one instantiation of the sandbox's capabilities. More workflows can be extended while reusing pre-built code, as illustrated later.

### Code-Level Efforts

To simplify user actions and reduce programming complexity, we provide
- One-line scripts to run the sandbox and adjust cfgs.
- Dynamic dependency resolution and containerization for auto-setup.
- Lazy-loader of Python packages for easy module loading.
- Unified scripts to handle environment preparation, with third-party modules organized as services, each with its own entry program, and mapping to conda cfgs.

---

## Extending the Pipeline

> There could be additional iterative routines where model improvements inform new data requests or automated prompt adjustments. Explaining how the pipeline might be extended to facilitate continuous model-data evolution would strengthen the paper’s vision.

Beyond the flexibility of the layered architecture, we demonstrate extensibility with two examples aligned with your suggestion:

### Case 1. Model Improvements Inform New Data Requests

- `How to Extend` By wrapping a configurable loop around the top-level workflow (5–10 lines of code changes), we reuse the "probe-analyze-refine" workflow to enable iterative model improvements.
- `New Example Added During Rebuttal`
  - Setting: Starting with *ckpt0*, we refined *recipe1* → trained *ckpt1* → refined *recipe2* → trained *ckpt2*. The 1st iter aligns with single-OP exp from previous tasks. We then selected the best ckpt and applied the top OP to the original dataset as baseline.
  - Results (summarized in Table 2 of uploaded PDF):
    - The 2nd round further improves performance.
    - Compared to the 1st round:
      - *2 new OPs* emerge in the top-10.
      - *7/10 OPs* increase their rankings by an average of *6.14 positions*.
      - *3/10 OPs* decrease their rankings by an average of *3.33 positions*.
- Using the same OPs, the perf of data pools changed, providing actionable insights into data processing needs for dynamic environments.

### Case 2. Model Improvements Inform Automated Prompt Adjustments

- `How to Extend` By adding a model inference hook in the middle layer (~40 lines of code changes) and updating its prompt cfg file, we can reuse the infra to study and optimize prompts.
- `New Example Added During Rebuttal`
  - Setting: Using the top-1 recipes found in these two iters, we studied the effects of 10 different prompts, which were auto-generated by Qwen2.5-max with "You are a prompt expert, plz optimize the given one: {pmt}."
  - Results (summarized in Table 4 of uploaded PDF):
    - The ranking and performance of these prompts vary significantly across iterations (e.g., ranging from -1.673 to +0.944).
    - Some prompts beat the baseline.
    - Some prompts enhance/degrade performance in both iters, while others show inconsistent behavior: performing better in the 1st round (>0) but worse in the 2nd round (<0), or vice versa.
    - These findings highlight the sandbox's potential for auto-optimizing model configurations in the context of co-development.
Summary: This paper introduces the Data-Juicer Sandbox, an open-source suite that analyzes various metrics and makes use of heuristics to facilitate the integrated development of multimodal data and models. The proposed "Probe-Analyze-Refine" workflow was validated through image-text pre-training with CLIP, MLLMs, and text-to-video generation. The `feedback-driven` approach, manually executed rather than automated, yields performance improvements and provides insights into the interplay between data quality, diversity, model behavior, and computational costs.

Claims And Evidence: The primary contribution of the paper lies in its observations derived from comprehensive small-scale exploration. However, the generalizability of these conclusions, derived in low data regimes, is questionable, particularly during specific model and task stages (e.g., the pretraining of a 2B MLLM). It remains uncertain whether these insights can effectively scale up with increased model parameters and data volume. Furthermore, the conclusions are not expressed with precision or sufficient validation. For example, the definitions of quality and diversity are based on the performance/feedback of a minitest on the target tasks, with the number of combinations of different quality levels serving as the diversity metric, rather than widely acknowledged measures such as embeddings or domain specification. The definition of diversity, based on varying OP scores or the entropy used in Sec. E.3, works like simple bin sampling in traditional active learning; it may work but is not especially effective, which makes the claim less persuasive. Besides, the conclusions differ from previous findings, such as Scaling Laws for Data Filtering -- Data Curation cannot be Compute Agnostic, CVPR24.
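For concreteness, the kind of binned-entropy diversity proxy being critiqued here can be sketched in a few lines; the function name, bin count, and sample scores below are my own assumptions for illustration, not the paper's implementation:

```python
import math
from collections import Counter

def binned_entropy(scores, num_bins: int = 10) -> float:
    """Shannon entropy (bits) of scores bucketed into equal-width bins over [0, 1],
    i.e. a bin-sampling-style diversity proxy over one OP's score distribution."""
    counts = Counter(min(int(s * num_bins), num_bins - 1) for s in scores)
    n = len(scores)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

spread = [i / 100 for i in range(100)]  # scores spread evenly over [0, 1)
narrow = [0.55] * 100                   # scores piled into a single bin

# The proxy rewards spread over bins, regardless of semantic variety.
assert binned_entropy(spread) > binned_entropy(narrow)  # ~log2(10) bits vs 0 bits
```

This makes the bin-sampling analogy explicit: the measure only sees how scores spread over bins, not semantic variety, which is why embedding- or domain-based measures are suggested as alternatives.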
Methods And Evaluation Criteria: The cost of using the sandbox to identify high-quality training data is described as (1 + mr) ≤ M, where M represents the epochs required for training on the original data, m is the number of epochs times the number of OPs on the subset, and r is the subset ratio. The authors claim this cost is "feasible," yet it appears impractical. In fact, the cost of searching for optimal hyper-OPs might exceed that of simply training on the full dataset. This issue is common in data selection work, such as Diddee et al., "Chasing Random: Instruction Selection Strategies Fail to Generalize," 2024, and Xia et al., "Rethinking Data Selection at Scale: Random Selection is Almost All You Need," 2024.

Theoretical Claims: There are no theoretical claims in the manuscript.

Experimental Designs Or Analyses: Yes, I've checked the validity of the proposed experimental designs. The experiments conducted are comprehensive. My concerns are listed above.

Supplementary Material: I've reviewed Sec. C of the cost analysis, Sec. D for evaluated metrics, statistics and training details, and Sec. E for the diversity analysis and data recipes.

Relation To Broader Scientific Literature: The task and the motivation behind the work are both significant. However, there are doubts regarding the practical applicability of the experimental design and the generalizability of the conclusions drawn from the study, as discussed above.

Essential References Not Discussed: Scaling Laws for Data Filtering -- Data Curation cannot be Compute Agnostic, CVPR24. Active learning for convolutional neural networks: A core-set approach. ICLR 2018.

Other Strengths And Weaknesses: The authors' engineering implementation and comprehensive experiments covering three different tasks should be acknowledged.

Other Comments Or Suggestions: no further comments

Questions For Authors: no further questions

Code Of Conduct: Affirmed.

Overall Recommendation: 2
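As a toy illustration of the cost concern raised in the review above: with the review's notation, the sandbox's probing cost in units of one full-data training epoch is 1 + m·r, to be compared against M. All concrete numbers below are hypothetical, chosen only to show when the inequality flips:

```python
def probe_cost(num_ops: int, epochs_per_pool: float, subset_ratio: float) -> float:
    """Sandbox cost in full-dataset-epoch units: 1 + m*r,
    with m = epochs_per_pool * num_ops and r = subset_ratio."""
    m = epochs_per_pool * num_ops
    return 1 + m * subset_ratio

M = 3  # hypothetical epochs needed to simply train on the original data

# Tiny 1% pools keep probing cheap even with 40 OPs ...
assert probe_cost(num_ops=40, epochs_per_pool=1, subset_ratio=0.01) < M   # 1.4
# ... but 10% pools already cost more than full training, echoing the concern.
assert probe_cost(num_ops=40, epochs_per_pool=1, subset_ratio=0.10) > M   # 5.0
```

So whether (1 + mr) ≤ M holds depends entirely on the subset ratio and the number of operators probed, which is exactly what the feasibility dispute turns on.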
Rebuttal 1: Rebuttal: We sincerely appreciate your time, insightful feedback, and recognition of the work's significance! Below, we address your comments point by point.

> The new results: https://anonymous.4open.science/r/icml_submission8414-7C89/rebuttal.pdf

### [Para 1 in Claims part] "generalizability is questionable, ..., particularly during specific model and task stages (eg the pretraining of a 2B MLLM)"

Per your suggestion, we conducted new experiments on a larger model suite (*1B, 2B, 4B, 26B InternVL*) and a different task (*fine-tuning for captioning*). For details, please see our response to Reviewer cLJN.

Collectively, extensive experiments demonstrate the generalizability, covering different
- *Scenarios*: Text-to-video generation, image-text pre-training, image-to-text generation for general image understanding & image captioning, iterative enhancement (see response to Reviewer 3zp6).
- *Stages*: both pre-training and fine-tuning.
- *Model Architectures*: EasyAnimate, T2V-Turbo, MGM, InternVL2, and CLIP.
- *Scales*: Model parameters, data sizes, and compute FLOPs.
- *Data Processing Solutions*: over 40 Data-Juicer Filters and Mappers (see response to Reviewer 5ZDF)
- *Model Feedback*: 78 metrics.

---

### [Para 2 In Claims part] "... measures such as embedding or domain specification ... " "differ from previous findings ... CVPR24"

`More acknowledged measures`

Following your suggestion, we conducted new experiments to dive into the training data in the text-to-video task with a tagging model, extracting core objects, concepts, and domains. We analyzed the domain-specific distribution of selected examples (*video_nsfw_filter*) via word clouds of these tags. Results are in Fig. 2 of the uploaded PDF.
As we can see, although the data pool with high video NSFW scores is the most diverse one, our model achieves the best performance on the data pool with low video NSFW scores, which is consistent with the argument presented in the left column of lines 296-298 in our submission.

`The CVPR24 work`

Thank you for bringing this work to our attention. We respectfully highlight that the differences in conclusions stem from distinct experimental settings, which we believe lead to complementary insights:
1. **Data Curation Setup:** In their most relevant scaling experiments (Figs 5 & 11), the authors examine data filtering strategies by varying the *"quantile thresholds"* while keeping the filtering operator fixed. In contrast, we explore *different filtering operators* while maintaining the same "quantile threshold."
2. **Task and Compute Setup:** Their work mainly focuses on DataComp and the CLIP pretraining task, scaling compute with duplicated data. Beyond CLIP, we validate our data recipes across more tasks, including text-to-video generation, image-to-text pretraining, and post-training.

For a detailed comparison, please refer to Fig. 3 & Table 5 in our original paper, and the newly added InternVL experiments mentioned above.

---

### [Methods part] "feasible, ... yet it appears impractical." "This issue is common in data selection work, such as ..."

`HPO considerations`

We acknowledge that the cost analysis in our claim omits the coefficient introduced by HPO, under the practical assumption that *HPO is equally required for fair comparisons in large-scale scenarios*. In fact:
- Large-scale systems often introduce additional complexities (e.g., debugging in distributed clusters, hardware/software errors), which can inflate the HPO coefficient significantly.
- Industry practices typically rely on small-scale experiments to predict or generalize HPO for larger settings, as discussed in [1].
[1] *Predictable Scale: Part I — Optimal Hyperparameter Scaling Law in Large Language Model Pretraining.*

`The work "Random Selection is Almost All You Need"`

We thank the reviewer for pointing it out. However, our work differs from it notably in both scope and baseline comparisons:
1. **Different Scenarios:**
   - Our work focuses on *four diverse multimodal tasks* (text, image, video), whereas the cited works primarily study *pure text post-training*.
   - Our data selection strategy is derived from *40+ operators and 70+ performance metrics*, offering a broader and more adaptive framework compared to the specific strategies studied in this work, like GPT-based scoring or heuristic methods (e.g., longest instruction).
2. **Empirical Evidence of Feasibility:**
   - All numerical results in our small-scale experiments are benchmarked against a *Random baseline pool of equivalent data size* (lines 216 & 227).
   - The positive values reported in Table 3 and the Flops comparison across different scales demonstrate the effectiveness of our method in identifying solutions that consistently outperform random selection across various scenarios.

---

We appreciate the valuable feedback and hope that these responses address your comments, leading you to consider an increase in the rating. Thank you once again!
Summary: This paper introduces a sandbox suite with a feedback-driven experimental platform, which supports cost-effective iteration and guided refinement of both data and models. The authors conduct experiments on image-to-text generation, text-to-video generation, and image-text pretraining. The results demonstrate that the proposed sandbox can effectively improve datasets and models, and they also give valuable insights into the relationship between data processing and model performance.

Claims And Evidence: The paper claims that the proposed sandbox effectively achieves data-model co-development. However, only the contribution to data refinement has been proved, while how the sandbox improves model development compared to other platforms remains unclear.

Methods And Evaluation Criteria: The proposed method for data and model development is reasonable. However, the evaluation mainly focuses on the sandbox's application in data refinement, which may not be enough to demonstrate its superiority over other platforms.

Theoretical Claims: I have checked the correctness of theoretical claims in the submission.

Experimental Designs Or Analyses: The evaluation mainly focuses on the sandbox's application in data refinement, which may not be enough to demonstrate its superiority over other platforms.

Supplementary Material: I have reviewed the appendix and the anonymous code repository. There is no additional supplementary material of this submission.

Relation To Broader Scientific Literature: The authors introduce a new sandbox for cost-effective data and model refinement, which can enhance the efficiency of model development and is valuable to the research community.

Essential References Not Discussed: No.

Other Strengths And Weaknesses:

Strengths:
1. The paper proposes a platform for efficient data and model development, which is valuable for improving research efficiency.
2.
The paper analyzes the impact of different data recipes on model performance and provides several valuable insights.

Weaknesses:
1. Although the paper claims the proposed sandbox effectively enables data-model co-development, only the contribution to data refinement has been proved; how the sandbox improves model development compared to other platforms remains unclear.
2. Data recipes obtained from small-scale experiments may not generalize to large-scale experiments, thus it is challenging to determine the effectiveness of the sandbox on large-scale training.
3. The organization of the paper is somewhat confusing. I think the sandbox's properties should be the core focus, but there are many paragraphs talking about observations from data experiments. This makes it hard for readers to focus on the key advantages of the sandbox.

Other Comments Or Suggestions: It would be better to improve the organization of the paper to enhance readability.

Questions For Authors:
1. I am curious to learn about how the data recipes obtained from this sandbox compare to the optimal data recipes in large-scale experiments.
2. What are the advantages of the sandbox in model refinement?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for recognizing our work as "valuable for improving research efficiency" and providing "valuable insights"! Below, we address the raised weaknesses (W) and questions (Q) with point-by-point clarifications.

> The mentioned new results: https://anonymous.4open.science/r/icml_submission8414-7C89/rebuttal.pdf

---

## [W1 & Q2] "how the sandbox improves model development compared to other platforms remains unclear." "What are the advantages of the sandbox in model refinement?"

- In the perspective of **development process**: The sandbox is designed for *cost-sensitive* model optimization, enabling broader and more quantitative experiments within the same resource constraints. Compared to naive trial-and-error, this allows *more possibilities* for model improvement with less computational cost.
- In the perspective of **transferable mechanism**: Our sandbox is insight-driven and emphasizes *analysis and scaling laws* derived from extensive experimental feedback. This approach has been validated across 40+ data processing operations and 70+ performance metrics, with empirical success on 5 cross-task, cross-scale models (e.g., EasyAnimate, T2V-Turbo, MGM, CLIP, InternVL).
- **Rapid Expansion & New Experiments**: The framework is highly flexible and extensible:
  - We added experiments on a new model series, *InternVL (1B, 2B, 4B, 26B)*, to verify generalization in captioning tasks. Details are in our reply to Reviewer CLJN.
  - We demonstrated iterative training: starting with ckpt0 → refining recipe_1 → training ckpt1 → refining recipe_2 → training ckpt2. Notably, ckpt2 improved performance despite evolving recipes. See our reply to Reviewer 3zp6.
- **Evidence Summarization**: Collectively, extensive experiments demonstrate the effectiveness and usability of the sandbox, covering different
  - *Scenarios*: Text-to-video generation, image-text pre-training, image-to-text generation for general image understanding, image captioning, and iterative enhancement.
  - *Stages*: Both pre-training and fine-tuning.
  - *Model Architectures*: EasyAnimate, T2V-Turbo, MGM, InternVL2, and CLIP.
  - *Scales*: Model parameters, deduplicated data sizes, and compute FLOPs.
  - *Data Processing Solutions*: Over 40 Data-Juicer Filter OPs and Mappers (see response to Reviewer 5ZDF)
  - *Model Feedback*: 78 metrics.

---

## [W2 & Q1] "Data recipes obtained from small-scale experiments may not generalize to large-scale" "I am curious to learn about how the data recipes obtained from this sandbox compare to the optimal data recipes in large-scale experiments."

- **Clarification on Workflow**: Recall that our workflow begins with small-scale experiments (Sec 4.2–4.3) but rigorously validates findings on larger scales (Sec 4.3–4.4). Empirically:
  - Small-scale recipes benefit larger scales across architectures, data, and compute (see structural summarization in Table 5).
  - Scaling behavior (Fig. 3 & new InternVL exps), high data efficiency (Table 2), and SOTA performance (Table 3, with many *SOTA* large-scale recipes used by baselines) are also validated even under challenging distribution shifts.
- **Methodological Flexibility**: The open-source framework supports custom workflows while reusing data/model infra (Sec B.1–B.3). Advanced strategies like dataset distillation [1] and tensor shaping [2] can be integrated via new hooks. We leave theoretically addressing distribution shifts as future work, which requires substantial new effort.
[1] Large scale dataset distillation with domain shift, ICML'24
[2] Scaling for Training Time and Post-hoc Out-of-distribution Detection Enhancement, ICLR'24

---

## [W3 & Appendix] "There is no supplementary material of this submission", "I think the sandbox's properties should be the core focus, but there are many paragraphs talking about observations from data experiment"

We agree that the sandbox's properties are critical and clarify potential misunderstandings:
- **Extensive Appendix**: Contrary to the comment (these details may have been overlooked), we provide a dedicated appendix (~1.5 pages) focusing on infrastructure:
  - Overview architecture (Sec B.1),
  - Capability factory and behavior hooks (Sec B.2),
  - Extensibility (Sec B.3).
- **Main Paper Focus**: Due to space constraints, we prioritize showcasing practical results and insights in the main text. Without concrete demonstrations, readers may struggle to appreciate the sandbox's utility and understand its usability. System details are thus deferred to the appendix and source code for a smooth and progressive reading experience.

---

Thanks again for your time and valuable comments! We hope that you can consider an increase in the rating if these responses and clarifications address your comments. If you have any additional comments, we warmly invite you to share them with us.

---

Rebuttal Comment 1.1: Comment: Thanks for the authors' responses. I still have some concerns that the experiments presented in the main paper may not effectively demonstrate the advantages of the proposed sandbox compared to existing approaches, particularly regarding the claimed property of integrated data-model co-development. Nonetheless, I acknowledge the sandbox's contribution to the multimodal research community and would like to raise my score to 3.

---

Reply to Comment 1.1.1: Comment: We appreciate your acknowledgment of the sandbox's contributions!
To address the remaining comments on its advantages, we provide clarifications below.

---

## Key Differentiations from Existing Works

In Appendix A (lines 672–752), we outlined how the sandbox differs in three areas:

### 1. Model-Centric Approaches

Existing efforts focus on refining training algorithms, architectures, and applications. However:
- They rely heavily on scaling laws and optimization techniques, which are computationally expensive and dataset-specific.
- They rarely analyze how data processing impacts downstream performance.

Our advantage: The sandbox links data processing effects to model performance through systematic experiments, offering actionable insights into data-model interactions.

### 2. Data-Centric Approaches

Recent trends emphasize data quality and scale. However:
- These methods isolate data processing from model training.
- They depend on heuristic methods, such as filtering based on human intuition.

Our advantage: The sandbox provides a systematic framework for data-model co-development, treating both as equally important. For example:
- We analyze correlations between data pools and reference model metrics across tasks.
- We extend beyond CLIP-like models to include LLaVA-like and DiT-based models for broader applicability.

### 3. Open-Source Infrastructures

While strong infrastructures exist for model training and evaluation, multimodal data-model co-development remains underdeveloped:
- Current tools focus on single-modal data or dataset-specific preprocessing.
- No dedicated open-source platform exists for foundation model co-development.

Our advantage: The sandbox integrates cutting-edge model-centric infrastructures with the Data-Juicer system, creating a streamlined environment for co-development.

---

## Key Differentiations in System Design

Our design is modular and one-stop. Appendix B details its distinct architecture:
- *Bottom Layer*: Factory classes integrate diverse training/eval libraries.
- *Middle Layer*: Encapsulates model-agnostic behaviors like probing, training, and evaluation hooks.
- *Top Layer*: Implements workflows via ordered job lists, enabling users to adjust configurations via files.

This design reduces the need to learn disparate libraries or start from scratch, allowing flexible reuse of representative modules:
- Our experiments span 5 architectures, ~80 metrics, and diverse pre-/post-training setups for multimodal models.
- Thanks to its middleware design, the sandbox evolves with underlying libraries, amplifying utility through open-source contributions (e.g., supporting 500+ LLMs and 100+ eval datasets).

At the code level, we simplify user actions by providing:
- One-line scripts to run and adjust configurations.
- Dynamic dependency resolution and containerization for auto-setup.
- Lazy-loading of Python packages for easy module loading.
- Unified scripts for environment preparation, with third-party modules organized as services.

---

## Key Differentiations in Extensibility

Another strength is the sandbox's high extensibility, adaptable to various scenarios. Below are two examples of extending workflows.

### Case 1: New Data Requests

- How to Extend: By wrapping a configurable loop (~5–10 lines of code changes), we reuse the "probe-analyze-refine" workflow for iterative model improvements.
- New Example Added During Rebuttal:
  - Setting: Starting with *ckpt0*, we refined *recipe1* → trained *ckpt1* → refined *recipe2* → trained *ckpt2*. The first iteration aligns with single-OP experiments from previous tasks. We then selected the best checkpoint and applied top OPs to the original dataset as a baseline.
  - Results (summarized in Table 2 of the uploaded PDF):
    - The second round further improves performance.
    - Compared to the first round: 2 new OPs emerge, and OP rankings changed by an average of -3.33~6.14 positions.
    - Using the same OPs, data pool performance changes, providing actionable insights for dynamic environments.
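To make the "probe-analyze-refine" loop of Case 1 concrete, here is a minimal, self-contained toy sketch. All filter names, the data pool, and the scoring proxy are invented for illustration; this is not the actual Data-Juicer Sandbox API.

```python
import random

random.seed(0)

# Toy data pool: each sample has a text length; shorter texts are more likely "clean".
pool = []
for _ in range(1000):
    length = random.randint(1, 100)
    good = random.random() < (0.8 if length <= 60 else 0.3)
    pool.append({"len": length, "good": good})

# Hypothetical single-OP filters (stand-ins for Data-Juicer OPs).
ops = {
    "text_length_filter": lambda s: s["len"] <= 60,
    "keep_long_filter": lambda s: s["len"] > 60,
    "char_filter": lambda s: s["len"] >= 10,
}

def train_score(data):
    # Toy proxy for checkpoint quality: fraction of clean samples kept.
    return sum(s["good"] for s in data) / len(data) if data else 0.0

recipe, history = [], []
for _ in range(2):  # two probe-analyze-refine iterations: ckpt0 -> ckpt1 -> ckpt2
    scores = {}
    for name, op in ops.items():
        if name in recipe:
            continue
        # Probe: apply the candidate OP on top of the current recipe, then "train".
        kept = [s for s in pool if op(s) and all(ops[r](s) for r in recipe)]
        scores[name] = train_score(kept)
    # Analyze & refine: keep the OP whose probe checkpoint scored best.
    best = max(scores, key=scores.get)
    recipe.append(best)
    history.append(scores[best])
```

With these toy settings, the loop first selects the short-text filter and then the minimum-length filter, mirroring how each iteration re-ranks the remaining OPs on top of the current recipe.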
### Case 2: Automated Prompt Adjustments

- How to Extend: By adding a model inference hook (~40 lines of code changes) and updating its prompt config file, we reuse prior code to study and optimize prompts.
- New Example Added During Rebuttal:
  - Setting: Using top-1 recipes from two iterations, we studied the effects of 10 different prompts auto-generated by Qwen2.5-max with the prompt: "You are a prompt expert, please optimize the given one: {pmt}."
  - Results (summarized in Table 4 of the uploaded PDF):
    - Prompt rankings and performance vary significantly across iterations (e.g., ranging from -1.673 to +0.944).
    - Some prompts outperform the baseline, while others show inconsistent behaviors.
    - These findings highlight the sandbox's potential for model configuration optimization.

---

We hope this response clarifies the sandbox's unique advantages for advancing integrated data-model co-development. Thank you again for your valuable feedback!
Summary: In their work, the authors describe and implement a method and procedures for data-model co-development, aiming at improving pretraining of various foundation model types (language-vision CLIP, diffusion-based text-to-video generative models, a LLaVA-based image-text generative model). The framework the authors introduce is Data-Juicer Sandbox. It sets up a pipeline for data composition, making use of available data pools to apply various data processing operators (OPs) aiming for a data composition that will improve model training. The authors define a probe-analyse-refine workflow, which applies OPs to data pools, employing low-cost model training and evaluation, which provides dataset quality scores. Based on those, OPs leading to top-performing evals can be combined into processing pipelines. Those are used to create datasets at larger scales, which should further improve model pre-training. The authors test their approach by performing dataset composition for the aforementioned model types and showing that it leads to strong scores on standardized benchmarks like MMBench, VBench, and ImageNet zero-shot classification, such that the obtained models compare well to the known strong reference baselines.
Claims And Evidence: The main claim of the study is to be able to perform model-dataset co-development. The authors argue that dataset composition performed by their method leads to stronger models by comparing the models they obtain to well-known references. The comparison is done on fixed compute scales for each model (for video models) or on a few reference scales (for CLIP). For a full comparison, I think it would require a proper scaling law per model, as taking fixed reference compute scales might be misleading (e.g., differences models show at certain fixed scales can change across scales, so the trend across scales for each tested model is the actually interesting measure for comparison).
Another problematic issue, I think, is that to claim co-development, the authors would have to show at least two iteration cycles of dataset composition and model improvement. In its current form, it seems to me that while the evidence might be enough to claim that dataset composition via the described procedure leads to training a strong model, it is not clear whether the obtained model can then be used to further improve dataset composition, or at least that the dataset can be further improved from insights of the conducted experiments, and so on (which would be the case of co-development).
Methods And Evaluation Criteria: The authors make use of a number of standardized benchmarks to measure model quality, which makes sense.
Theoretical Claims: The authors make a theoretical argument to back up the approach of carrying over observations made in small-scale pool experiments to larger dataset composition. It seems to me the derivation makes sense.
Experimental Designs Or Analyses: The experimental design that describes the dataset composition testing pipeline seems sound to me.
Supplementary Material: The supplementary material contains some crucial parts, e.g., detailed experiments with CLIP reference models, that should rather be included in the main text. It seems one important figure mentioned in the text related to CLIP experiments is missing - Figure 12: "For the CLIP experiments, refer to Figure 12" (p. 26, L. 1424).
Relation To Broader Scientific Literature: The work continues tracks set by important previous works like DataComp, emphasizing its difference in the attempt to incorporate model signals during dataset composition. However, I think it is not correct that previous works were not doing so; e.g., DataComp incorporated CLIP models to filter the data, which can then also be considered a case of model-data co-development.
As the authors do not show many iterations of such a cycle, they do not really push the approach further compared to DataComp, in my opinion (which would be different if the authors had demonstrated iterative improvement of the model over some cycles of their co-development).
Essential References Not Discussed: Relevant scientific literature is well covered.
Other Strengths And Weaknesses: A strength of the work is the variety of different models the authors consider and the attempt to define a generic pipeline for dataset composition improvement. A weakness is the missing scaling laws, which would be necessary to see whether improvements are indeed happening across various scales and not just on some fixed reference scale the authors happen to choose.
Other Comments Or Suggestions: A lot of important information on CLIP experiments is in the Appendix. It would be good to have it more visible in the main part, as this is an important reference model class and showing improvements there provides strong evidence for the validity of the dataset composition approach.
UPDATE: increasing score to 3 after clarifications on model-data co-development.
Questions For Authors: The scaling law derivation is performed on smaller scales. Would it be possible to conduct dataset composition at smaller scales and show that compositions made by the approach result in a consistent trend improving across scales, without investing much compute?
Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1: Rebuttal: We are grateful for your recognition of the *method, evaluation, theoretical argument & experimental design* of our work. Regarding your raised concerns, we address all of them point by point as follows:

> New results added: https://anonymous.4open.science/r/icml_submission8414-7C89/rebuttal.pdf

---

### [In Evidence, Weakness & Question parts] "experiments conducted on *per model* with *various scales*; consistent trend improving across scales?"

Following your suggestion, we conducted additional experiments on a larger model suite (*1B, 2B, 4B, 26B InternVL*) and a new task (*fine-tuning for captioning*). These results further validate the generalizability of our work. For details, please refer to our response to Reviewer cLJN.

---

### [In Evidence & Literature parts] "iterative co-development, ... push further than DataComp"

Thank you for your insightful comments! To address this, we performed new experiments to demonstrate iterative training:
- *Setting*: Starting with `ckpt0`, we refined `recipe_1` → trained `ckpt1` → refined `recipe_2` → trained `ckpt2`. The first iteration aligns with the single-operation (OP) experiments from previous tasks. Based on the first-round results, we selected the best checkpoint and applied the identified top OP, *text_length_filter*, to the original training dataset as our base model. We then conducted a second iteration of single-OP continuous training to evaluate its impact.
- *Results*: Summarized in Table 2 of the uploaded PDF, the second iteration continues to improve performance. Notably, while the ranking of OP effects changes slightly between iterations, most effective OPs remain consistent: 8 out of the top-10 OPs in the second iteration overlap with those from the first iteration.
- *Remark*: This highlights promising future directions and underscores the distinction between our work and DataComp.
While DataComp primarily focuses on CLIP pretraining and scaling compute with duplicated data, our data recipes are validated across a broader range of tasks, including text-to-video generation (*EasyAnimate, T2V-Turbo*), image-to-text pretraining (*MGM*), post-training (*newly added InternVL in the rebuttal*), and this iterative co-development. For a more detailed comparison from the scaling perspective (including scaling across compute and number of distinct data samples), please see Fig. 3 and Table 5 in the original paper, and the newly added InternVL experiments mentioned above.

---

### [Writing Suggestion & Appendix] "have CLIP info more visible in the main part", "one important figure 12 is missing"

We appreciate these constructive suggestions! Regarding "Figure 12" mentioned in L1424, it should indeed be "Table 12." For CLIP-related information, we will move more detailed settings and results analysis from the Appendix into Section 4.1 and the first paragraph of Section 4.5.

---

**Empirical Evidence Summarization**

Collectively, extensive experiments demonstrate the effectiveness and usability of the sandbox, covering different
- *Scenarios*: Text-to-video generation, image-text pre-training, image-to-text generation for general image understanding, image captioning, and iterative enhancement (see details above).
- *Stages*: Both pre-training and fine-tuning.
- *Model Architectures*: EasyAnimate, T2V-Turbo, MGM, InternVL2, and CLIP.
- *Scales*: Model parameters, deduplicated data sizes, and compute FLOPs.
- *Data Processing Solutions*: Over 40 Data-Juicer Filter OPs and Mappers (see response to Reviewer 5ZDF).
- *Model Feedback*: 78 metrics.

---

Thank you again for your helpful feedback! We will incorporate these improvements into the final version, and kindly request you to review these responses and re-evaluate the merits of this work. Your feedback is highly anticipated and greatly valued.
---

Rebuttal Comment 1.1: Comment: I am pleased to see that one of the central questions, on the model-data co-development character, was addressed in an insightful manner. I will thus raise the score to 3, as I think the work is now substantial enough to be of interest for the ICML community.
Summary: This paper introduces Data-Juicer Sandbox, a feedback-driven suite for multimodal data-model co-development. The system integrates the data processing system with model-centric infrastructure, and designs a "Probe-Analyze-Refine" workflow to systematically explore the relationship between data processing operators and model performance. The main contributions of the paper include: (1) the first open-source sandbox platform supporting joint optimization of multimodal data and models; (2) empirical validation demonstrating significant performance improvements in image-to-text generation, text-to-video generation, and image-text pretraining tasks; (3) scalability verification showing that optimization strategies obtained from small-scale experiments can be transferred to large-scale scenarios, thereby reducing computational costs.
Claims And Evidence: The paper's main claims are supported by experimental results. The authors assert that Data-Juicer Sandbox can optimize data processing strategies through feedback-driven approaches, thereby enhancing model performance, which has been validated through experimental results across multiple tasks.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are generally reasonable.
Theoretical Claims: The paper does not present theoretical claims requiring rigorous mathematical proof. Its core contribution is a practical system framework and methodology, primarily validated through experimental results.
Experimental Designs Or Analyses: Experimental results in Table 3 showcase the significant performance gain by distillation on T2V-Turbo. There could be additional experiments on other models to explore the broader applicability and benefits of Data-Juicer.
Supplementary Material: I have reviewed the experimental details and additional results sections in the supplementary materials.
The supplementary materials provide more details about the experimental setup and data processing workflow, which helps to better understand the results presented in the main text. Relation To Broader Scientific Literature: This work is related to the broader research field of multimodal data processing and model training. Essential References Not Discussed: no Other Strengths And Weaknesses: Strengths: - The system design has practical value, especially for researchers and engineers who need to rapidly iterate on multimodal model development. - Experimental results indicate that the method is effective across multiple tasks, demonstrating its versatility. Weaknesses: - Baseline comparisons are not comprehensive enough. Other Comments Or Suggestions: I suggest adding experiments with models of different scales (from small to large) to better demonstrate the scalability of the method. Questions For Authors: no Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your acknowledgment and constructive feedback on our work! We respond to the only raised concern with the following new experiments.

> The mentioned new results: https://anonymous.4open.science/r/icml_submission8414-7C89/rebuttal.pdf

---

### [Exp Design, Weakness & Suggestion] "additional experiments on other models," "... with models of different scales (from small to large), ..."

Thank you for your valuable suggestion! Per your comments, we have conducted new experiments on an additional model, *InternVL2.0*, across different scales (1B, 2B, 4B, 26B), covering a new task (*image captioning*) and a new stage (*fine-tuning*).

`Settings` To ensure consistency with our proposed workflow:
- We first experiment with the smallest model size (1B parameters) on *23 single-op* and *6 multi-op data pools*, deriving the optimal data recipe.
- The identified recipe is then applied to all selected model scales. Specifically:
  - We fine-tune InternVL2.0 for image captioning using the *COCO Caption training dataset* (567k image-caption pairs).
  - Each single-op data pool contains ~189k samples, while multi-op pools are reduced to ~24k samples due to intersections.
  - Evaluation metrics include *Bleu-1/2/3/4, METEOR, ROUGE_L, and CIDEr*, along with the average performance change relative to baseline models trained on randomly sampled data pools.

For the fine-tuning implementation, we set the global batch size to 512. We take only 1 epoch for each data pool with the 1B-parameter model. For all experiments, we use H20 GPUs to fine-tune and evaluate the models. Training on one single-op or multi-op data pool with the 1B-parameter model takes about 8 or 1.5 GPU hours, respectively. Training the optimal recipe, including about 24k samples, on models with 1B/2B/4B/26B parameters takes about 1.3/2/4/18 GPU hours, respectively. Due to time constraints, experiments on various scales using the optimal recipe were repeated twice (random seeds: 42, 1024).
Other experiments were conducted once.

`Key Results`
1. *Single-OP Results*:
   - Full results are ranked in *Table 1* of the updated PDF.
   - Models trained on shorter captions (*Text Length Low*) achieve the best performance.
   - Most OPs show positive improvements.
2. *Multi-OP Results*:
   - Top-3 OP combinations do not yield further gains, likely due to low correlation between these OPs (please see *Figure 4* in the updated PDF for Pearson correlation coefficients).
3. *Scaling Behavior*:
   - Using the top-3 combination recipes, we fine-tuned InternVL models across all scales.
   - Results are shown in *Fig. 1(b)* of the updated PDF (x-axis in log scale).
   - *Highlight*: All three recipes maintain consistent and steady performance advantages as the model scale increases from 1B to 26B, demonstrating clear *scaling law behaviors*.

In conclusion, our sandbox can consistently achieve performance improvements across these studied scales.

---

Thank you again for your time and support! We hope these new results and revisions can address your concern and strengthen the work's applicability. If you have further questions or suggestions, we warmly invite your input.
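The log-scale trend claimed above can be checked quantitatively by fitting a power law `y = a * x^b` in log-log space, as is standard for scaling-law analysis. The sketch below uses invented numbers generated from a known power law, not the paper's measurements, to show that the fit recovers the trend parameters.

```python
import math

# Illustrative (made-up) points: model size in billions of parameters vs. a
# loss-like metric, generated from y = 2.5 * N^(-0.3) to mimic a clean trend.
sizes = [1.0, 2.0, 4.0, 26.0]
losses = [2.5 * n ** -0.3 for n in sizes]

def fit_power_law(xs, ys):
    """Least-squares fit of y = a * x^b via linear regression in log-log space."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    b = sum((x - mx) * (y - my) for x, y in zip(lx, ly)) / \
        sum((x - mx) ** 2 for x in lx)
    a = math.exp(my - b * mx)
    return a, b

a, b = fit_power_law(sizes, losses)
# On exact power-law data the fit recovers a ≈ 2.5 and b ≈ -0.3.
```

Fitting one such curve per recipe (rather than comparing at a single scale) is one way to address the reviewer's point that trends across scales, not fixed-scale gaps, are the interesting measure.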
Let LLM Tell What to Prune and How Much to Prune
Accept (poster)
Summary: This paper proposes a pruning method that targets multiple LLM modules with dynamic pruning ratios. It finds that intrinsic properties of the LLM can help determine the importance of each module and thus distribute the pruning load on demand, i.e., what to prune and how much to prune. Extensive experiments on multiple benchmarks and LLM variants demonstrate that the proposed method effectively resolves the trade-off between efficiency and performance.
Claims And Evidence: YES
Methods And Evaluation Criteria: YES
Theoretical Claims: N/A
Experimental Designs Or Analyses: YES
Supplementary Material: YES
Relation To Broader Scientific Literature: This paper proposes an effective improvement to existing LLM pruning methods.
Essential References Not Discussed: NO
Other Strengths And Weaknesses:
**Strengths:**
1. The paper is clearly written and easy to understand.
2. The proposed method outperforms existing LLM pruning techniques.
3. The experimental results are comprehensive, demonstrating the method's effectiveness across various LLM sizes.
4. The paper also shows actual inference speed improvements, highlighting its practical application value.
**Weaknesses:**
1. While the core idea of the paper is effective, it does not introduce much novelty.
2. The primary contribution appears to be a combination of several minor enhancements.
Other Comments Or Suggestions: N/A
Questions For Authors: Please see the weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer 3YW3:

Thank you for your insightful review. All experimental result tables have been compiled and are available at the following link: https://anonymous.4open.science/r/5BDF/README.md. We have thoroughly considered your concerns and respond to them as follows:

---

### **[W1]: _While the core idea of the paper is effective, it does not introduce much novelty._**

Our hierarchical pruning strategy differs significantly from existing pruning approaches, particularly in its use of dynamically assigned pruning ratios and multi-structure pruning. Unlike methods such as SliceGPT, which necessitate a uniform and prescribed pruning ratio across all blocks, our approach dynamically allocates pruning ratios to each block based on the transfer entropy metric. Furthermore, we perform pruning across multiple structure units simultaneously, including blocks, layers, and rows or columns of weight matrices, while previous work mainly focuses on a single unit.

As shown in Table 1 and Table 2 in the paper, our method achieves strong performance across different pruning ratios in multiple LLMs. In terms of inference speed, our method outperforms approaches that only prune rows or columns of weight matrices, such as SliceGPT and FLAP. Compared to Table 2 (LLaMA2-7B), Table 5 of the Appendix shows our method achieves greater performance gains on larger models (LLaMA2-70B) and at higher pruning ratios, demonstrating its scalability. These findings demonstrate our insight: **pruning multiple structure units with dynamic pruning ratios can lead to a balance between efficiency and effectiveness**.

We adopt information entropy as the pruning criterion, which distinguishes our method from commonly used activation-magnitude and gradient-magnitude approaches.
The ablation study in Table 3 in the link demonstrates that entropy provides a more effective measurement of structural importance, leading to improved pruning performance compared to activation- or gradient-based methods.

---

### **[W2]: _The primary contribution appears to be a combination of several minor enhancements._**

Our method is not a simple aggregation of several heuristics but a systematic pruning framework grounded in information-theoretic principles. Indeed, our approach fundamentally challenges the prevailing philosophy in prior works, which involves **pruning individual units based on a predefined ratio**.

The pruning process under our framework is divided into two stages: In the first stage, we introduce the metric of transfer entropy to analyze the interaction among blocks in LLMs. This enables us to dynamically determine the pruning ratio for each block and perform pruning on coarse-grained structure units such as blocks and layers. In the second stage, we further allocate the pruning load within each block based on the information entropy of individual structural units, enabling fine-grained pruning of weight matrix rows and columns. Finally, our method introduces bias compensation to enhance the pruned model without the need for post-training.

The ablation studies further validate the critical role and necessity of each component in the proposed framework, including the entropy-based importance metric, the hierarchical pruning strategy, and the dynamic ratio assignment. The results of these ablation studies are presented in Tables 3, 4, and 5 in the linked document. This hierarchical pruning strategy not only significantly improves the inference speed of the pruned model, but also maximally preserves its original performance.

---

We welcome any further questions or points of clarification the reviewer may have regarding our responses.

Thank you very much,
Authors

---

Rebuttal Comment 1.1: Comment: Thanks for the rebuttal.
My concerns have been addressed. However, I strongly encourage the authors to reorganize the Method section. The current writing makes it difficult to recognize that the proposed approach is "a systematic pruning framework grounded in information-theoretic principles". Overall, I would raise my score to 4 (accept).

---

Reply to Comment 1.1.1: Comment: We sincerely thank you for your constructive feedback. We will incorporate your suggestions and thoroughly revise the Method section in our manuscript.
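The entropy-based importance criterion debated in this thread can be illustrated with a minimal sketch. Under a Gaussian fit, the differential entropy of a 1-D activation channel is `0.5 * log(2 * pi * e * var)`, so entropy estimation reduces to a variance computation; this is our illustration of the idea, not the paper's exact Eq. (3), and the two "channels" below are synthetic.

```python
import math
import random

def gaussian_entropy(samples):
    """Differential entropy of a 1-D Gaussian fit to the samples:
    h = 0.5 * log(2 * pi * e * var). Higher variance means higher entropy,
    so low-entropy units are natural pruning candidates under this criterion."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    return 0.5 * math.log(2 * math.pi * math.e * var)

random.seed(0)
# Two hypothetical activation channels: one near-constant, one spread out.
quiet = [random.gauss(0.0, 0.1) for _ in range(10_000)]
loud = [random.gauss(0.0, 1.0) for _ in range(10_000)]

h_quiet = gaussian_entropy(quiet)
h_loud = gaussian_entropy(loud)
# The near-constant channel carries less information under this criterion,
# so it would be pruned first.
```

Because entropy here depends only on the estimated variance, the criterion stays well-defined even when activations are only approximately Gaussian, which is the empirical argument made in the rebuttal below.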
Summary: The paper proposes a structured pruning framework for large language models (LLMs) that dynamically determines "what to prune" (specific modules) and "how much to prune" (pruning ratios) based on their importance. Specifically, the method employs transfer entropy (TE) to quantify block-layer interaction and information entropy to guide pruning. A hierarchical strategy allocates pruning ratios to blocks and layers, balancing efficiency (inference speed) and performance (perplexity).
Claims And Evidence:
- The Gaussian assumption for entropy estimation (Eq. 3) is not validated, raising questions about its applicability to non-Gaussian activations.
- Some baselines (e.g., ShearedLLaMA, LayerDrop, OWL, DSA) are missing, limiting context for dynamic pruning comparisons.
OWL: https://arxiv.org/abs/2310.05175
DSA: https://proceedings.neurips.cc/paper_files/paper/2024/file/ff997469ac66cf893c4183efeb22212a-Paper-Conference.pdf
Methods And Evaluation Criteria: Methods and evaluation criteria seem reasonable.
Theoretical Claims: The paper focuses on empirical validation. There are no theoretical proofs.
Experimental Designs Or Analyses: Code availability is unclear, affecting reproducibility (no additional supplementary provided).
Supplementary Material: I have checked Appendix A for GQA and MHA and Appendix B for additional experiments.
Relation To Broader Scientific Literature: The work extends the structured pruning literature (e.g., SliceGPT, LLM-Pruner) by introducing dynamic multi-unit pruning.
Essential References Not Discussed: Some baselines (e.g., ShearedLLaMA, LayerDrop, OWL, DSA) are missing, limiting context for dynamic pruning comparisons.
DSA: https://proceedings.neurips.cc/paper_files/paper/2024/file/ff997469ac66cf893c4183efeb22212a-Paper-Conference.pdf
OWL: https://arxiv.org/abs/2310.05175
Other Strengths And Weaknesses: This paper is generally well-written, but the algorithm steps (e.g., depth-first search compensation) could use more intuition.
Also, this paper ignores previous layer-wise assignment methods. Ablation studies are missing:
- over the metrics TE and entropy
- over the search method (DFS)
- over the bias compensation
Other Comments Or Suggestions: Typo in Table 3: WADNDA → Wanda
Questions For Authors:
- How does the Gaussian assumption in Eq. 3 impact results if activations are non-Gaussian? Validation could strengthen the method's generality.
- Why weren't dynamic pruning methods like ShearedLLaMA included as baselines? Their inclusion would clarify novelty against related work.
Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer nYT6,

Thank you very much for your valuable feedback. We first address the questions raised in the "Questions for Authors" and "Other Strengths and Weaknesses" sections. For any additional concerns mentioned in other parts of the review (if they are distinct from those already covered), we also provide detailed responses. All experimental result tables have been compiled and are available at the following link: https://anonymous.4open.science/r/5BDF/README.md.

---

### **[Q1]: _How does the Gaussian assumption in Eq. 3 impact results if activations are non-Gaussian? Validation could strengthen the method's generality?_**

We acknowledge that the true feature distribution in an LLM is unlikely to be perfectly Gaussian. However, the law of large numbers [1] suggests that the distribution can often be approximated by a Gaussian. We also find that the Gaussian assumption has been successfully applied to network acceleration, such as network quantization [2]. Empirically, we find that this approximation is sufficient to guide network pruning in the ablation study of the pruning metrics. The detailed results are presented in Table 3 at the provided link.

#### [1] Mean field analysis of neural networks: A law of large numbers. SIAM Journal on Applied Mathematics, 2020
#### [2] Entropy-driven mixed-precision quantization for deep network design. NeurIPS 2022

---

### **[Q2]: _Why weren't dynamic pruning methods like ShearedLLaMA included as baselines? Their inclusion would clarify novelty against related work._**

Dynamic pruning iteratively identifies the component with the least impact before permanently removing it. In contrast, our method follows a **static pruning strategy**, which determines the pruning configuration for all structure units in a **single preprocessing stage**, avoiding repeated evaluations of the LLM's internal state.
Moreover, methods like **ShearedLLaMA** typically require **additional fine-tuning or retraining** after the pruning process, while our method requires no post-training. Our paper already includes comparisons with dynamic pruning methods such as **SLEB** and **BlockPruner**, and we have additionally compared to OWL and DSA using their settings. The corresponding results are shown in Table 7 at the link.

---

### **[W1]: _This paper is generally well-written but algorithm steps (e.g., depth-first search compensation) could use more intuition._**

We provide a detailed explanation and an additional ablation study in **W3**.

---

### **[W2]: _Also, this paper ignores the previous layer-wise assignment methods._**

We add the "MultiPruner" [3] pruning method as a baseline and evaluate its effectiveness on LLaMA2. The corresponding results are shown in Table 6 at the link.

#### [3] MultiPruner: Balanced Structure Removal in Foundation Models. arXiv 2025

---

### **[W3]: _The ablation study is missing:_**

_Over the metric TE:_ We use the Frobenius norm [4] as a new criterion to measure the change in the LLM hidden state. The results are presented in Table 8 at the link.

#### [4] Channel pruning for accelerating very deep neural networks. ICCV 2017

---

_Over the metric entropy:_ We compare the proposed method to activation-magnitude and gradient-magnitude methods in Table 3 at the link.

---

_Over the search method:_ Our work adopts the depth-first search (DFS) strategy, with the goal of fully removing certain blocks/layers to fulfill the remaining pruning ratio defined in Equation (5), thereby solving the optimization problem formulated in Equation (6). Here we compare to an intuitive alternative: greedy search. We observe that greedy search tends to prioritize removing entire blocks during the compensation phase to quickly satisfy the remaining pruning ratio.
However, due to the lack of a backtracking mechanism, greedy search selects the seemingly optimal option at each step and is prone to getting stuck in local optima. Our ablation study in Table 9 at the link further supports this observation, demonstrating that DFS consistently outperforms greedy search in terms of pruning effectiveness.

---

_Bias compensation:_ We have conducted an ablation study on the bias compensation strategy in Figure 6 of the paper.

---

### **[Other1]: _Code availability is unclear, affecting reproducibility. (no additional supplementary provided)_**

We used the official checkpoints (LLaMA2, LLaMA3, and Vicuna) and followed the evaluation protocol of previous works such as SliceGPT. Hence, our results are reproducible. We will release the codebase upon publication.

### **[Other2]: _Typo in Table 3: WADNDA → Wanda_**

Thank you for pointing this out. We apologize for the typo and will correct “WADNDA” to “Wanda” in Table 3 in the revised version.

---

We would like to encourage the reviewer to ask questions on anything that may still be unclear in our responses or which we should clarify further.

Thank you very much,
Authors

---

Rebuttal Comment 1.1: Comment: Thanks for the detailed and comprehensive rebuttal, especially the ablation study part. I have raised my score.

---

Reply to Comment 1.1.1: Comment: Thank you very much for your feedback. We will incorporate your suggestions.
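The greedy-vs-DFS behavior described in the W3 discussion above can be illustrated with a toy subset-selection problem (a hypothetical sketch, not the paper's actual algorithm): greedy commits to the largest removable block at each step and may miss the remaining budget, while DFS backtracks until the budget is met exactly.

```python
# Toy illustration (hypothetical sizes/budget): pick blocks whose sizes sum
# exactly to a remaining pruning budget.

def greedy_select(sizes, budget):
    """Remove largest blocks first; no backtracking, so it can dead-end."""
    remaining = budget
    chosen = []
    for i in sorted(range(len(sizes)), key=lambda j: -sizes[j]):
        if sizes[i] <= remaining:
            chosen.append(i)
            remaining -= sizes[i]
    return sorted(chosen) if remaining == 0 else None

def dfs_select(sizes, budget, start=0):
    """Depth-first search with backtracking over subsets of blocks."""
    if budget == 0:
        return []
    for i in range(start, len(sizes)):
        if sizes[i] <= budget:
            rest = dfs_select(sizes, budget - sizes[i], i + 1)
            if rest is not None:
                return [i] + rest
    return None
```

Here `greedy_select([5, 4, 3, 3], 6)` fails (returns `None`) because after taking the size-5 block nothing fits the leftover budget of 1, while `dfs_select` backtracks and finds the two size-3 blocks.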
Summary: The paper introduces a new approach to pruning large language models (LLMs) that dynamically assigns pruning ratios to different components based on their importance. There are two issues with the current pruning methods: (1) focusing on just one structure of the model; (2) using a prescribed pruning ratio. To address these two issues, they developed a more flexible approach that targets multiple parts of the model simultaneously and varies how much to prune based on what's actually important. They use something called "transfer entropy" to figure out which transformer blocks matter most, and then "information entropy" to determine which parts within those blocks to keep or remove. Their method works in two main steps: first, decide how much to prune each block based on its importance to the model's overall function; then, distribute that pruning load across different components within each block. They tested their method on various model sizes including LLaMA2-7B/13B/70B, LLaMA3-8B/70B, and Vicuna-7B/13B at different pruning levels (30%, 40%, 50%). Their pruned models had better perplexity scores and zero-shot accuracy than other methods. The models also ran faster and used less memory. Their approach worked well regardless of the training samples used or how many samples they had. Claims And Evidence: Supported Claims: The authors claim their method outperforms existing approaches. This is well-supported by comprehensive experimental results in Tables 1-9, showing better perplexity scores and zero-shot accuracy across multiple models (LLaMA2, LLaMA3, Vicuna) and pruning ratios (30%, 40%, 50%). Figure 6 adequately demonstrates how their bias compensation method helps maintain model performance after pruning. Potentially Problematic Claims: The paper claims that "at a pruning ratio of 30%, our method outperforms semi-structured pruning techniques, including SparseGPT and Wanda, across LLMs." However, this comparison is misleading and unfair. 
Looking at Table 1, we can see that SparseGPT and Wanda were only evaluated at 50% sparsity, not at 30%. The authors are comparing their method at a lower pruning ratio (30%) against other methods at a much higher pruning ratio (50%). This creates an unfair advantage for their approach, since models with less pruning naturally tend to perform better. A fair comparison would require all methods to be evaluated at the same pruning ratio. The paper assumes a Gaussian distribution for hidden states in Equation 3 without providing evidence that this assumption holds in practice for transformer blocks. This is problematic because the entire entropy calculation, which forms the foundation of their pruning strategy, depends on this distributional assumption. If the hidden states don't actually follow a Gaussian distribution, the entropy calculations could be inaccurate, potentially undermining the theoretical basis of their method. Methods And Evaluation Criteria: Methods The paper proposes a hierarchical pruning strategy that targets multiple structure units in LLMs with dynamic pruning ratios. While the approach is innovative, the theoretical foundation for using entropy as the key metric for importance could be strengthened. The authors don't sufficiently justify why entropy specifically is better than other potential metrics (such as activation magnitude or gradient-based importance) for identifying less important components. The transfer entropy concept for quantifying block importance is interesting, but lacks thorough theoretical connection to the actual functionality of transformer blocks in language modeling. Evaluation Criteria The paper's use of perplexity as a primary evaluation metric makes sense for measuring language model performance after pruning. The authors also appropriately evaluate their method on common benchmarks including WINOGRANDE, PIQA, HELLASWAG, ARC-E, and ARC-C, which are standard datasets for measuring commonsense reasoning capabilities. 
However, the evaluation could be strengthened by including more diverse real-world benchmarks that specifically measure different capabilities such as mathematical reasoning, text completion, and other practical tasks. Theoretical Claims: The paper lacks substantial theoretical analysis to support its claims. While the authors introduce transfer entropy as a key metric for determining block importance in Section 3.1, they provide no theoretical proof establishing a relationship between transfer entropy and model performance after pruning. This is a significant gap, as the entire hierarchical pruning strategy depends on this correlation. The formulas presented (particularly Equations 1-3) appear mathematically correct in isolation, but the authors make assumptions without rigorous justification - for example, assuming Gaussian distribution for hidden states in Equation 3 without verifying this distributional assumption holds in practice for transformer blocks. Experimental Designs Or Analyses: Overall, the experimental design in this paper is sound, with appropriate evaluations across multiple models, pruning ratios, and datasets. However, there are notable inconsistencies in the results that warrant further explanation. In Table 1, the performance patterns across different LLMs and pruning methods appear somewhat inconsistent - in some cases, the proposed method outperforms all others, while in other cases, different methods perform better for specific model sizes or types. The authors don't adequately analyze or explain these variations, which raises questions about the generalizability of their approach. For instance, why does their method work particularly well on LLaMA2-70B but show less improvement over alternatives on some other model variants? A deeper analysis of why certain methods perform better on specific architectures would strengthen the paper's contribution and help readers understand when to apply which pruning strategy. 
Supplementary Material: Yes, I reviewed the supplementary material in the document, which includes Appendix A and Appendix B. Relation To Broader Scientific Literature: This paper advances LLM pruning by introducing an entropy-based approach that quantifies information content across model components, allowing for dynamic, principled pruning decisions. While previous work like Wanda (2023) considered activation values and OWL (2023) implemented non-uniform layer pruning, this research provides a more sophisticated framework by using transfer entropy to determine component importance. Unlike single-structure methods such as SLEB (2024) or SliceGPT (2024), the authors' approach simultaneously targets multiple structural units (blocks, layers, weight matrices) with dynamically determined pruning ratios, creating a more holistic pruning strategy that better balances performance and efficiency. Essential References Not Discussed: Recent work on efficiently pruning attention heads and MLP layers in transformers like "Structured Pruning of Large Language Models" (Wang et al., 2023) and "Are Sixteen Heads Really Better than One?" (Michel et al., 2019) should be included, as they directly relate to the paper's multi-structure pruning approach. Other Strengths And Weaknesses: Strengths: The paper introduces a novel hierarchical pruning approach that targets multiple structural components simultaneously, which differentiates it from previous methods that focus on pruning a single structure. The dynamic pruning ratio allocation based on component importance is a significant innovation that allows for more flexible and efficient pruning. The bias compensation technique effectively helps maintain model performance after aggressive pruning. The experimental evaluation is comprehensive, covering multiple model families (LLaMA2, LLaMA3, Vicuna) and sizes, which demonstrates the broad applicability of their approach. 
Weaknesses: The paper lacks strong theoretical foundations for its entropy-based importance metrics. While the authors introduce transfer entropy and information entropy as key metrics, they don't adequately justify why these specific information-theoretic measures are optimal for pruning decisions compared to alternatives. The assumption of Gaussian distribution for hidden states needs validation. The description of the K-means clustering implementation lacks details on parameter selection (e.g., number of clusters). The paper would benefit from ablation studies isolating the impact of different components of their approach (hierarchical pruning, dynamic ratios, entropy-based importance) to better understand which elements contribute most to the performance improvements. Other Comments Or Suggestions: There is inconsistency in the decimal notation throughout the paper. For example, in Table 2 and other zero-shot performance tables, some accuracy percentages have two decimal places (e.g., "68.98"), while others have different formats (e.g., 76, 52.8, 62.40). Questions For Authors: In Section 4.6 and Figure 8, there appears to be a discrepancy between the figure captions and the content. Figure 8(a) is labeled as "Perplexity results on WikiText2" and Figure 8(b) as "Perplexity results on C4," yet both figures show bars for both WikiText2 and C4. Could you clarify what each subfigure is actually showing? In Section 3.3, you mention using K-means clustering to group structure units based on their entropy values before pruning. What is the motivation for using a learning-based clustering algorithm like K-means instead of simpler approaches such as averaging or thresholding based on entropy values? Does K-means provide specific advantages for identifying pruning candidates compared to non-learning approaches? Additionally, how do you determine the optimal number of clusters for the K-means algorithm in your implementation? 
Throughout the paper, you use entropy as the foundation for measuring importance in different structure units. While entropy is a measure of information complexity, the paper lacks theoretical justification for why entropy specifically is an appropriate metric for determining what to prune in LLMs. Could you provide more theoretical explanation for why entropy is better than other potential metrics (such as activation magnitude, gradient-based importance, or contribution sufficiency) for identifying less important components in large language models? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer FHHv,

We appreciate the reviewer’s constructive suggestions. Experimental results are available at the link: https://anonymous.4open.science/r/5BDF/README.md. We address the questions in the “Questions for Authors” and “Other Strengths and Weaknesses” sections. For any additional concerns mentioned in other parts, we provide detailed responses below:

---

### **Q1: _Clarification to Figure 8_**

In Fig. 8(a), we use the training sets of C4 and WikiText2 as calibration datasets and evaluate the models on the WikiText2 test set. For Fig. 8(b), we use the same calibration datasets but evaluate the models on the C4 test set. The results show that our approach is less sensitive to the calibration dataset.

---

### **Q2: _Motivation of K-means_**

Using average entropy as a threshold tends to mix the units with low entropy and those close to the mean, failing to distinguish between components of different importance levels.

### **Q3: _K-means vs. Average threshold_**

K-means allows us to partition structure units into distinct groups. Pruning is then applied to the group with the lowest entropy in each layer, enabling a more fine-grained pruning strategy. We validate the superiority of K-means over the averaging approach in Table 1 at the link.

---

### **Q4: _Number of K-means clusters_**

As shown in Table 2 at the link, for 30% and 50% pruning ratios, K is fixed at 3 and 6, respectively. For the 40% pruning ratio, we conducted a hyperparameter search.

---

### **Q5: _Theoretical explanation of entropy_**

We learned from [1] that prior works [2, 3] have found that in over-parameterized LLMs, network weights tend to remain close to their initialization throughout training. As a result, the magnitude of gradient updates is relatively small, and activation values remain nearly the same in the “lazy regime”, making it difficult for them to reflect the true contribution of each unit. Please refer to Table 3 at the link for the detailed ablation results.
[1] Junk DNA hypothesis: Pruning small pre-trained weights Irreversibly and Monotonically impairs “difficult” downstream tasks in LLMs. ICML 2024
[2] Sparsity May Cry: Let us Fail Sparse Neural Networks Together. ICLR 2023
[3] A Kernel-Based View of Language Model Fine-Tuning. ICML 2023

---

### **W1: _Theoretical foundations of entropy-based metrics_**

TE [4] measures how much *unique* information a source process (blocks) provides about a target process (output layer). If removing a block dramatically reduces the unique information available to the downstream layers, the network’s overall performance should drop. Thus, blocks with high TE are deemed more critical for maintaining the model’s predictive capacity. A block with low TE essentially replicates or redundantly encodes information in the network; pruning such a block has minimal impact because little unique signal is lost. Information entropy as a key metric is discussed in **Q5**.

[4] Measuring information transfer. Physical Review Letters, 2000.

---

### **W2: _Assumption of Gaussian distribution_**

Please refer to Reviewer nYT6's Q1.

---

### **W3: _Number of K-means clusters_**

Please refer to Q4.

---

### **W4: _Ablation studies_**

_Entropy metric:_ We compare to activation and gradient magnitude in Table 3 at the link.

_Hierarchical design:_ Hierarchical pruning targets multiple structures. To validate it, we implement a baseline that only prunes rows and columns of the weight matrix, in Table 4 at the link.

_Dynamic ratios:_ We compare to a fixed ratio in Table 5 at the link.

---

### **O1: _SparseGPT and Wanda_**

SparseGPT and Wanda are two semi-structured methods that rely on special hardware, while our method is hardware-friendly. As a result, our model with 30% sparsity has faster inference while offering better results, as shown in Table 3.

---

### **O2: _Real-world tasks_**

Results on MathQA, OpenBookQA, and SciQ are included at the link.
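As a side note on the Gaussian assumption (W2/Q1) and the entropy metric (Q5): under a Gaussian model, a unit's differential entropy has a closed form depending only on its variance, so low-variance units score low. A minimal sketch of that closed form (our own illustration, not Eq. 3 from the paper):

```python
# Differential entropy of a Gaussian fit to 1-D samples:
# H = 0.5 * ln(2 * pi * e * sigma^2). Illustrative sketch only.
import math
import statistics

def gaussian_entropy(samples):
    """Differential entropy (in nats) of a Gaussian fit to the samples."""
    sigma2 = statistics.pvariance(samples)  # population variance
    return 0.5 * math.log(2 * math.pi * math.e * sigma2)
```

Under this model, doubling the spread of a unit's hidden-state values raises its entropy by ln 2, which is the sense in which higher-variance units carry more information.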
### **O3: _Noticeable fluctuations across different settings_** In Table 1, our method outperforms other structured pruning methods in 18 out of 21 cases. Beyond the Perplexity, we evaluate the model in a zero-shot setting in Table 2, showing consistent improvement over prior works. ### **O4:_Particularly well on LLaMA2-70B_** We guess you were mentioning Table 5 of the Appendix. Our core motivation is that **pruning single unit with fixed ratio cannot handle complex LLM**. The result in Table 5 aligns with our insight: When it comes to larger model size and higher pruning ratio, our hierarchical strategy achieves a better balance between efficiency and performance. ### **O5:_Missed references_**: The two references you suggested are not proposed for LLM pruning. We compare to [5] targeting multiple structures in Table 6 in the link. [5] MultiPruner: Balanced Structure Removal in Foundation Models. arxiv 2025 Decimal notation: We will fix it We encourage the reviewer to ask questions on anything that may still be unclear. Thank you so much, Authors
Generalists vs. Specialists: Evaluating LLMs on Highly-Constrained Biophysical Sequence Optimization Tasks
Accept (poster)
Summary: This paper tackles the problem of biophysical sequence optimization - a task where even small deviations from stringent constraints (e.g., protein stability or solubility) can render a solution unusable. To bridge the gap between generalist LLM-based methods and specialist solvers, the authors introduce a synthetic test suite and an optimization framework that continuously optimizes protein sequences using LLMs. Claims And Evidence: Yes, the proposed methods seem to be well-motivated and well supported. Methods And Evaluation Criteria: Strengths: - The paper presents a well-motivated and technically sound approach. The whole framework is self-contained, borrowing insights from discrete optimization, LLM fine-tuning, and preference learning. - The benchmark (test suite) design is novel Weakness/Question: - Are the synthetic benchmarks generally applicable to real-world cases? - What is the computational cost of the LLMs? Theoretical Claims: The paper does not contain any theoretical claims. The preference learning objectives are not new (in the appendix) so they don't really require further examination. Experimental Designs Or Analyses: The overall experimental design is comprehensive and verifies the effectiveness of the proposed method Supplementary Material: I checked the proof and the details of the algorithm (A.3 and A.5) Relation To Broader Scientific Literature: The key contribution can be seen as an application of LLMs to the task of biophysical sequence optimization; it would be interesting to researchers who work in this field Essential References Not Discussed: N/A Other Strengths And Weaknesses: See Methods And Evaluation Criteria Other Comments Or Suggestions: N/A Questions For Authors: See Methods And Evaluation Criteria Ethics Expertise Needed: ['Other expertise'] Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your positive assessment of our work. We appreciate your recognition that our approach is well-motivated and technically sound.

## On the applicability of synthetic benchmarks to real-world cases

You raised an important question about whether our synthetic benchmarks apply to real-world cases. To address this directly, we conducted new experiments comparing optimizer performance on Ehrlich functions versus established lookup-based biological test functions (TFBind8, DhfR, TrpB -- see our response to reviewer b3Qx). We found strong rank correlations (0.61-0.89) between algorithm performance across these benchmarks, confirming that Ehrlich functions effectively capture the structure of real biological optimization problems.

This validation supports our benchmark design principles, which were carefully crafted to capture key properties of real biophysical sequence optimization:

1. **Feasibility constraints**: The vast majority of random sequences fail to express or fold properly
2. **Epistasis**: Non-additive effects between sequence positions
3. **Position-dependent sensitivity**: The importance of specific residues at specific positions
4. **Motif constraints**: The need for functional motifs to appear with proper spacing

By deliberately incorporating these properties, Ehrlich functions provide meaningful insights into algorithm performance on real-world biological optimization tasks while maintaining computational accessibility.

## On the computational cost of LLMs

You also asked about the computational cost of using LLMs. We address this important practical consideration in Section 6.1:

> "For relatively easy optimization problems, since the performance of various methods is similar, using a specialized model with 0.01% of the parameters of an LLM may be more practical."

Our experiments revealed an interesting nuance: the optimal choice between generalist LLMs and specialist models depends on problem difficulty.
For medium-difficulty problems, LLMs with appropriate training can significantly outperform specialized models, potentially justifying their higher computational cost. For very easy or very difficult problems, however, specialized models offer comparable performance with substantially lower computational requirements. This insight provides practical guidance for practitioners choosing between approaches based on their specific constraints and objectives. We have additionally conducted new LLOME-MargE experiments with a very small LLM (~226K params) trained from scratch (see our response to reviewer b3Qx). We found that even for a very small LLM with no pre-training, LLOME-MargE is significantly more sample efficient than LaMBO-2, despite having fewer model parameters than LaMBO-2. As such, the computational costs of LLOME need not always be a concern. Thank you again for your thoughtful review. We believe our work contributes valuable insights to both the machine learning and biological sequence design communities, and we appreciate your recognition of its potential impact.
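The rank correlations cited in this rebuttal (0.61-0.89 between Ehrlich-function and biological-benchmark performance) are rank-based statistics. As an illustration of what such a number measures, a Spearman rank correlation can be computed as follows (a minimal sketch with hypothetical score lists, not the authors' evaluation code, and with no tie correction):

```python
# Spearman rank correlation between two lists of algorithm scores
# (illustrative; assumes no tied scores).

def spearman(xs, ys):
    """Spearman rho via the classic formula 1 - 6*sum(d^2)/(n*(n^2-1))."""
    def ranks(vs):
        order = sorted(range(len(vs)), key=lambda i: vs[i])
        r = [0] * len(vs)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))
```

A value near 1 means the two benchmarks order the optimizers almost identically, which is the sense in which a 0.61-0.89 correlation supports transferring conclusions from Ehrlich functions to the biological test functions.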
Summary: This paper investigates the use of large language models (LLMs) as black-box sequence optimizers for biophysical sequence design and optimization. The authors compare generalist LLM-based approaches with specialized optimization methods, such as LaMBO-2, to determine whether LLMs can efficiently optimize under strict biophysical constraints. The study introduces new benchmarks, novel training objectives, and a bilevel optimization framework to enhance LLM-based sequence optimization. Claims And Evidence: The majority of claims in this paper are supported by thorough experiments, well-defined benchmarks, and ablation studies. However, some claims would benefit from additional validation. The only benchmark used is the Ehrlich functions, which are synthetic test functions; no real-world biological datasets (e.g., protein sequences, DNA regulatory elements) are tested. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are generally well-designed for assessing LLMs in biophysical sequence optimization. Theoretical Claims: All correct. Experimental Designs Or Analyses: No issues. Supplementary Material: No Relation To Broader Scientific Literature: This paper contributes to LLM-based sequence optimization, preference learning for biophysical design, and comparisons between generalist (LLM-based) and specialist (model-based) solvers. Essential References Not Discussed: No significant references are missing. Other Strengths And Weaknesses: See above Other Comments Or Suggestions: Please consider referencing some prior work on protein sequence optimization: Chen, A., Stanton, S. D., Alberstein, R. G., Watkins, A. M., Bonneau, R., Gligorijević, V., ... & Frey, N. C. (2024). LLMs are highly-constrained biophysical sequence optimizers. arXiv preprint arXiv:2410.22296. Gomez-Uribe, C. A., Gado, J., & Islamov, M. (2024). Designing diverse and high-performance proteins with a large language model in the loop. bioRxiv, 2024-10.
Subramanian, J., Sujit, S., Irtisam, N., Sain, U., Islam, R., Nowrouzezahrai, D., & Kahou, S. E. (2024). Reinforcement Learning for Sequence Design Leveraging Protein Language Models. arXiv preprint arXiv:2407.03154. Wang, Y., He, J., Du, Y., Chen, X., Li, J. C., Liu, L. P., ... & Hassoun, S. (2025). Large Language Model is Secretly a Protein Sequence Optimizer. arXiv preprint arXiv:2501.09274. Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your positive assessment and constructive feedback. We appreciate your recognition of our thorough experiments and well-defined benchmarks.

## On testing with real-world biological datasets

You noted that our evaluation relies on synthetic Ehrlich functions rather than real-world biological datasets. While direct evaluation on real biological data would indeed be valuable, there are two key challenges:

1. **Training data contamination**: Using widely available biological datasets risks contamination with LLM training data, which would invalidate our assessment of LLMs as black-box optimizers.
2. **Accessibility**: Real biological datasets and simulators often require specialized software, significant computational or experimental resources, or domain expertise, creating barriers to reproduction and wider adoption.

To address these concerns while maintaining biological relevance, we conducted new experiments comparing optimizer performance on Ehrlich functions versus established lookup-based biological test functions (TFBind8, DhfR, TrpB -- see our response to reviewer b3Qx). We found strong rank correlations (0.61-0.89) between algorithm performance across these benchmarks, confirming that Ehrlich functions effectively capture the structure of real biological optimization problems.

This validation approach allows us to demonstrate biological relevance while avoiding contamination in LLM training data. It also preserves the computational accessibility that makes Ehrlich functions valuable for algorithm development.

## On additional references

Thank you for suggesting additional relevant references. We will include them in the revised version to better situate our work within the growing literature on protein sequence optimization using LLMs. We believe our work provides several unique contributions to this field:

1. A systematically designed benchmark that balances realism, computational accessibility, and difficulty
2. A bilevel optimization framework that effectively leverages LLMs for constrained optimization
3. A novel preference learning objective that outperforms SFT, DPO, and REINFORCE when rewards are observed.

These contributions advance both the theoretical understanding and practical application of LLMs for biological sequence optimization. Thank you again for your thoughtful review and suggestions for improvement.
Summary: The authors introduce Ehrlich functions, a novel synthetic function suite designed to simulate the properties of biological sequences and to facilitate benchmarking of generative algorithms for sequence optimization. They also propose a bilevel LLM-based solver, LLOME, which leverages a new preference loss called MargE. Experimental results, benchmarked against LAMBO-2 and GA, suggest that LLOME holds promise for biological sequence optimization tasks. Claims And Evidence: The authors’ claims regarding LLOME, Ehrlich functions, and the MargE preference loss are well substantiated by the experimental results, which clearly demonstrate the efficacy and potential of these methods for biological sequence optimization. Methods And Evaluation Criteria: One key contribution of this work is the introduction of Ehrlich functions for evaluating generative algorithms in biological sequence optimization tasks. While these functions may oversimplify the complexity of real biological sequences, they provide a practical starting point, enabling rapid approximation and assessment of model performance. Theoretical Claims: Both MargE and the construction of Ehrlich functions are sound and correct. Experimental Designs Or Analyses: One crucial consideration missing from the experimental design is the success rate. With unlimited time and resources, nearly any optimization method could eventually produce a sequence meeting the required criteria. However, in real-world settings—especially those involving costly wet-lab validation—it’s important to ensure a high success rate in the final (or a limited number of) round(s) of experiments. Supplementary Material: The supplementary is well written and comprehensive. Relation To Broader Scientific Literature: This work holds significant value for a broad range of scientific fields, including protein optimization, mRNA design, and antibody/CAR T-cell engineering.
The design of Ehrlich functions is especially noteworthy for accelerating the development of advanced optimization algorithms. However, to strengthen its relevance for biologists, it would be helpful to validate these functions against existing antibody maturation datasets—demonstrating that, with proper parameter settings and initial seed sequences, they can closely simulate real biological processes. Additionally, expanding the discussion to address broader applications in other biological contexts could further underscore the method’s versatility. Essential References Not Discussed: There have been several other attempts at using protein language models to simulate antibody maturation, with effectiveness validated through wet-lab experiments. The authors may discuss the links between the proposed method and the ones that biologists are interested in. See https://doi.org/10.1038/s41587-023-01763-2 and DOI: 10.1126/science.adk8946 Other Strengths And Weaknesses: It appears that the authors rely on a single model for both scoring and generation. What are the advantages of using a unified approach compared to the more traditional setup in protein engineering, where one model serves as the ‘oracle’ (scoring function) and a separate LLM is responsible for sequence generation? Other Comments Or Suggestions: See comments above. Questions For Authors: Success rates: It would be highly informative to report the success rate of each optimization algorithm, as these metrics are critical for gauging practical utility—particularly when transitioning to costly wet-lab validation. Ehrlich function applicability: Demonstrating whether Ehrlich functions can simulate real antibody maturation datasets would help validate their biological relevance. Such an evaluation could illustrate how well the functions capture key evolutionary or selection pressures inherent in the maturation process.
Protein language models in antibody maturation: A deeper discussion of how protein language models (PLMs) apply to antibody maturation would strengthen the manuscript’s impact. One model vs. two models: Explain the benefit of using one model for joint scoring and optimization. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your positive assessment and thoughtful questions. We're pleased you recognize the value of our contributions and appreciate your suggestions for strengthening our work. ## On success rates and feasibility You raised an important point about success rates in real-world settings with costly wet-lab validation. We directly address this through our measurements of feasibility rates over time in Figures 4 and 10, which show how each method improves in generating valid sequences as optimization progresses. Figure 12 provides additional insight by analyzing the relationship between edit distance and feasibility, showing that LLOME-MargE achieves the best balance between exploration (making meaningful changes) and constraint satisfaction. Our simple regret plots (Figure 3) also demonstrate sample efficiency, which is critical when optimization is constrained by laboratory resources. LLOME-MargE consistently finds high-quality solutions with fewer function evaluations than other methods, particularly on medium-difficulty problems. ## On Ehrlich function applicability to real biology We share your interest in validating Ehrlich functions against real biological data. To address this, we conducted new experiments comparing optimizer performance on Ehrlich functions versus established biological test functions (TFBind8, DhfR, TrpB -- see the response to Reviewer b3Qx). We found strong rank correlations (0.61-0.89) between algorithm performance on these benchmarks, confirming that Ehrlich functions effectively capture the structure of real biological optimization problems. This validation approach allows us to demonstrate biological relevance while avoiding contamination in LLM training data, which could compromise benchmark integrity. ## On unified vs. separate models for scoring and generation You asked about the advantages of our unified approach compared to the traditional setup with separate scoring and generation models. 
The main benefits are: 1. **Improved ranking and guidance**: When a model is jointly trained on both tasks, its internal representations develop a better understanding of the relationship between sequence features and objective values. This improves the model's ability to generate and rank candidates effectively. 2. **Computational efficiency and search depth**: By unifying generation and evaluation in one model, we can perform deeper, more focused exploration with the same computational budget. 3. **Overcoming distributional limitations**: Traditional "generate-and-filter" approaches assume the desired outputs already exist in a mode of the generative model's training distribution. Scientific discovery inherently requires finding solutions in low-density regions or even outside the training distribution entirely. Our unified approach enables the model to progressively learn to generate such solutions. 4. **Simplified training and deployment**: Using a single model reduces engineering complexity and maintenance overhead in real-world applications. Thank you for your valuable suggestions. We agree that further validation on real antibody maturation datasets would strengthen our work's impact, and we're actively pursuing this direction for future research. We believe our current contributions provide a solid foundation for advancing the application of LLMs to biological sequence optimization problems. --- Rebuttal Comment 1.1: Comment: Dear authors, Thank you for your comments and revisions. I still have concerns regarding the success rate -- it seems that the model uses more than 20k evaluation steps to find a practical solution -- but in the work I mentioned, biologists may have fewer than 24/96 "evaluation steps" to get a practical sequence. This could be a big issue. I hope more could be discussed here and, if necessary, a small experiment to minimize this gap would be important. Best, mMvu
Summary: This paper introduces a new synthetic test suite (Ehrlich functions) that captures the geometric structure of biophysical sequence optimization problems, proposes a framework LLOME (Language Model Optimization with Margin Expectation), a bilevel optimization routine for online black-box optimization, and uses a preference learning loss called MargE. To evaluate LLMs on biophysical sequence optimization, the paper conducts comparative evaluation of LLMs against specialized solvers like LaMBO-2. Claims And Evidence: Claims: 1. Off-the-shelf LLMs struggle to optimize Ehrlich functions with prompting alone (Yes) 2. LLOME with MargE can learn to solve some Ehrlich functions (Yes) 3. LLOME can outperform LaMBO-2 on moderately difficult Ehrlich variants (Yes) 4. LLMs show limited extrapolative capabilities without further training (Yes) The evidence could be strengthened by including comparisons with more baselines. Methods And Evaluation Criteria: The proposed methods are reasonably sound for the problem as defined. Ehrlich functions provide a controlled environment for testing optimization capabilities with well-defined constraints. The evaluation criteria (regret metrics, feasibility, and diversity) are appropriate for measuring optimization performance. However, the paper only compares against two baselines (genetic algorithm and LaMBO-2) and there is no evaluation on real biophysical optimization tasks to validate transferability. Theoretical Claims: The derivation of MargE in Appendix A.3 appears mathematically sound, though relatively straightforward compared to prior work in preference learning. The proof of Lemma A.1 establishing properties of Bradley-Terry models with reward functions helps justify the design choices in the reward function but is not especially novel. 
Experimental Designs Or Analyses: Only two baselines are considered; it is not clear to me whether it is fair to compare the proposed LLOME, which starts from a pretrained LLM, with LaMBO-2, which is trained from random initialization. Supplementary Material: I briefly reviewed the details regarding the algorithms in Appendix A. Relation To Broader Scientific Literature: The paper builds on directed evolution and genetic algorithms for black-box optimization, adding LLM-based approaches to this domain. Essential References Not Discussed: More baseline works regarding LLMs and evolutionary algorithms can be considered. Here is a survey paper for reference: Evolutionary Computation in the Era of Large Language Model: Survey and Roadmap (https://arxiv.org/pdf/2401.10034) There are other standard sequence optimization benchmarks like ProteinGym that may need to be considered too. Other Strengths And Weaknesses: Pros: 1. The paper identifies an important challenge in comparing general-purpose and specialized models 2. The analysis of preference learning calibration provides useful insights Cons: 1. The significance of Ehrlich functions is not convincingly established; it's unclear why existing benchmarks weren't sufficient. It would be better to motivate why Ehrlich functions are specifically representative of biophysical sequence optimization rather than generic constrained optimization 2. The design of Ehrlich functions is not well motivated and not clearly explained in relation to biological sequence optimization tasks. It's unclear if the sequence nature of the problem is essential or just incidental to the optimization task 3. Only two baselines are compared, and the comparison with LaMBO-2 does not seem fair Other Comments Or Suggestions: See above Questions For Authors: 1. How do the authors justify that performance on Ehrlich functions would translate to real biophysical sequence optimization tasks? 
Could the authors provide evidence of correlation between performance on Ehrlich functions and established benchmarks? 2. Why develop a new synthetic benchmark rather than using established ones like those in ProteinGym? What specific limitations of existing benchmarks necessitated this approach? 3. Given that LLMs require significantly more computational resources, how would the authors characterize the trade-offs between performance and efficiency when choosing between generalist and specialist approaches? At what point would the performance improvements justify the increased computational costs? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your thoughtful assessment of our work. We appreciate your recognition of our technical contributions and would like to address your concerns. ## On the choice of Ehrlich functions over existing benchmarks To validate the real-world applicability of Ehrlich functions, we conducted new experiments comparing optimizer performance on Ehrlich functions vs. 3 established lookup-based test functions: - [TFBind8](https://www.science.org/doi/10.1126/science.aad2257): DNA transcription factor binding optimization (8-base sequence) - [TrpB](https://www.science.org/doi/10.1126/science.adh3860): Tryptophan synthase β-subunit protein optimization (4-amino acid sequence) - [DhfR](https://www.pnas.org/doi/10.1073/pnas.2400439121): Dihydrofolate reductase DNA binding optimization (9-base sequence) For each benchmark, we evaluated the median cumulative regret (estimated from 8 trials) of 64 variants of our GA with different hyperparameter settings. We then computed rank correlations between algorithm performance on these biological benchmarks vs. comparable Ehrlich functions. The strong rank correlations confirm that Ehrlich functions effectively capture the structure of real biological optimization problems: | Spearman Corr. with → | DhfR | TFBind8 | TrpB | |--------------------|------|---------|------| | Ehr(4, 4)-2-2-2 | 0.75 | 0.75 | 0.61 | | Ehr(20, 8)-2-2-2 | 0.86 | 0.89 | 0.73 | ## On the design of Ehrlich functions We developed Ehrlich functions after carefully analyzing the limitations of existing benchmarks. As we explain in Section A.2.2, current benchmarks fall into several categories, each with significant drawbacks. **Database lookup** benchmarks are costly to construct and unnecessarily restrictive of the search space. **Empirical function approximations** often have spurious optima that are easy to find but not reflective of real solutions for the biological problem. 
**Physics-based simulations** are slow to evaluate, difficult to install/run correctly, and admit trivial solutions that score well but are not desirable. Instead, we designed the Ehrlich suite to satisfy the following criteria: (1) low compute cost, (2) well-characterized solutions, (3) non-trivial difficulty, (4) similarity to real-life applications, and (5) not already seen in training data. To the best of our knowledge, this is currently the only biosequence optimization benchmark to possess all 5 attributes. ## On the relationship between Ehrlich functions and biological sequence optimization Ehrlich functions capture four key properties of real biophysical sequence optimization: 1. **Feasibility constraints**: The vast majority of random sequences are non-viable/non-expressible 2. **Epistasis**: Non-additive effects between sequence positions 3. **Position-dependent sensitivity**: The importance of specific residues at specific positions 4. **Motif constraints**: The need for functional motifs to appear with proper spacing As we illustrate in Fig. 6, these properties are directly related to antibody-antigen binding. ## On the choice of comparisons We have added a comparison against LLOME-MargE with a smaller LLM (~226K params), trained from scratch. Although pre-training should not offer any additional advantages due to the lack of overlap between Ehrlich functions and the pre-training data, we chose this setting to be similar in model size and training to LaMBO-2. We evaluate on the f2 function (i.e. Ehr(32, 32)-4-4-4). Despite being ~2/3 the size of LaMBO-2, this model achieves the same min. regret using several thousand fewer test function evaluations (please compare to LaMBO-2 results in Fig. 3b): | # Test Function Evals | 10000 | 12000 | 14000 | 16000 | 18000 | 20000 | 22000 | 24000 | 26000 | 28000 | |:-----------|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|
| Min. Regret | 0.906 | 0.859 | 0.789 | 0.719 | 0.625 | 0.5 | 0.438 | 0.438 | 0.438 | 0.438 | This illustrates the strength of LLOME-MargE, even in small models without any pre-training. ## On trade-offs You raise an important question about the trade-offs between LLMs and specialized models. We explicitly address this in Section 6.1: > "For relatively easy optimization problems, since the performance of various methods is similar, using a specialized model with 0.01% of the parameters of an LLM may be more practical." Our results show that the choice between generalist and specialist models depends on problem difficulty: for medium-difficulty problems, the performance improvements of LLMs may justify their cost. For very easy or difficult problems, specialized models offer comparable performance with much less compute. ## On essential references We have an extended Related Work section in A.1, which discusses many LLM + evolution works. Also, ProteinGym is not a sequence generation benchmark.
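The rank-correlation validation described in the rebuttal above can be sketched in a few lines of plain Python (a minimal sketch: the regret values below are hypothetical placeholders rather than the paper's measurements, and tied ranks are not handled, unlike a full Spearman implementation):

```python
def spearman_rho(xs, ys):
    # Spearman rank correlation = Pearson correlation of the ranks.
    # Note: ties are not averaged here, so inputs are assumed tie-free.
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0.0] * len(vals)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r

    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical median cumulative regrets of 5 optimizer variants on two benchmarks.
ehrlich_regret = [12.0, 7.5, 9.1, 15.2, 6.3]
tfbind8_regret = [0.80, 0.55, 0.60, 0.90, 0.50]
print(spearman_rho(ehrlich_regret, tfbind8_regret))  # 1.0: identical ranking
```

In the rebuttal's setting, the two lists would hold the median cumulative regrets of the 64 GA variants on an Ehrlich function and on a biological benchmark such as TFBind8.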
KVTuner: Sensitivity-Aware Layer-Wise Mixed-Precision KV Cache Quantization for Efficient and Nearly Lossless LLM Inference
Accept (poster)
Summary: This paper proposes a multi-objective optimization-based algorithm to search for the optimal layer-wise mixed precision KV cache quantization configuration. The authors observe that key caches generally require more bits for quantization than value caches, and thus propose allocating more bits to the key cache. Additionally, they find that different layers exhibit varying sensitivity to KV cache quantization. To address this, they apply mixed precision across different layers and employ a search algorithm to identify the optimal configuration. To reduce the search space, they incorporate pruning and clustering strategies. Experimental results across various LLMs and tasks demonstrate the accuracy and latency benefits of their approach. ## update after rebuttal The authors addressed my concerns regarding the baseline comparison and provided additional experimental results to support the effectiveness of their method. Therefore, I will raise my rating to a weak accept. Claims And Evidence: The authors make three claims: 1. The key cache is more important than the value cache. 2. Different layers exhibit varying sensitivity to KV cache quantization. 3. Their proposed method outperforms baseline methods that use static quantization bits for all KV caches, such as KIVI. However, claims 1 and 3 are problematic. For claim 1, Table 4 shows that in some layers, the value cache is actually more important than the key cache. The authors should explain why this occurs and clarify whether claim 1 holds universally or if it depends on specific conditions. For claim 3, it is not entirely clear that their method outperforms KIVI. In Table 5, KVTuner-C4.90 performs slightly worse than KIVI-4 for Llama-3.1-8B-Instruct, and KVTuner-C3.44 is worse than KIVI-4 for Qwen2.5-3B-Instruct. Moreover, the authors do not include a baseline of KIVI-3, making it difficult to provide a fair comparison between KIVI and the proposed method. 
For instance, although KVTuner-C3.25 outperforms KIVI-2 for Llama-3.1-8B-Instruct, KVTuner-C3.25 uses a higher bitwidth than KIVI-2, making the comparison unfair. This issue also applies to Table 6. Regarding latency, the baseline is unclear. Is KV8 referring to KIVI-8 or just standard 8-bit quantization? KIVI-n is used in Tables 5 and 6 but not in Table 7, which adds to the confusion. If KV8 is not KIVI-8, why not compare with KIVI-8? Furthermore, Llama2-7B is used for the latency comparison, but it is not included in the accuracy comparison, leaving the accuracy difference between KV8 and KVTuner-C6 unknown. Without this information, the latency comparison lacks context. It is difficult to assess whether the proposed method provides a better latency-accuracy trade-off than the baselines. Methods And Evaluation Criteria: The proposed method is reasonable, given the varying importance of key/value caches and the different layer-wise sensitivity to KV cache. However, the evaluation lacks clarity, as the accuracy benefits are subtle, and the latency comparison, as previously mentioned, is problematic. Additionally, since KV cache quantization is particularly important for long-context generation, the authors should evaluate their method on long-context benchmarks, such as LongBench, to more effectively validate its performance. Theoretical Claims: All proofs are correct. Experimental Designs Or Analyses: The accuracy experiments have the issue of lacking appropriate baselines with roughly the same quantization bitwidth as their proposed method. The latency experiment in Section 6.3 is problematic, as it does not provide the accuracy results for Llama2-7B, making it difficult to assess the accuracy-latency trade-off. In Section 6.5, it is unclear which points in Figure 18 correspond to the unified precision configuration. Supplementary Material: Yes, I have reviewed all parts. 
Relation To Broader Scientific Literature: The attempt to apply a multi-objective optimization algorithm to KV cache quantization in this work has the potential to broaden the application of MOO. Essential References Not Discussed: All essential references are discussed. Other Strengths And Weaknesses: 1. Most experiments in this paper mention the use of per-token KV cache quantization, such as in Figures 2, 5, and 6. However, as demonstrated in KIVI, per-token key quantization performs significantly worse than per-channel quantization. Why not consistently apply per-channel quantization to the key? 2. Table 2 compares word-perplexity for the KIVI-HQQ implementation. However, in the HQQ implementation, both the key and value are quantized per-token. Since using per-token quantization for the key naturally leads to greater loss compared to per-channel quantization, it is unreliable to draw conclusions about whether the key is more important than the value when per-channel quantization is not applied to the key and per-token quantization is used for the value. 3. Section 4.5 aims to conclude that layer-wise sensitivity to KV cache quantization is an inherent characteristic of LLMs. However, the analysis is based only on prompts for math problems, and it is unclear whether this finding applies to general prompts, such as non-math tasks like retrieval and summarization. 4. Although the search space is greatly reduced, it is still considerably large (15,625, as mentioned in Line 319). The search cost may still be high, yet there is no explanation provided regarding the search cost, such as the total search time. Other Comments Or Suggestions: Table 4 typo: Pateto -> Pareto Questions For Authors: 1. What is the search cost of the proposed method? 2. The accuracy comparison is unreliable, as it lacks appropriate baselines with competitive quantization bitwidths for a fair comparison. Additionally, the accuracy improvement appears to be subtle. 3. 
The latency comparison is unreliable, as it lacks the accuracy results for the Llama2-7B model, making it impossible to assess the latency-accuracy trade-off. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank you for your thorough feedback. Below, we address the concerns raised and outline revisions to improve the clarity and rigor of our work. --- # 1. Key Cache Importance * The reviewer correctly observes that in certain layers (e.g. Layer 0,1,2,31 of Llama-3.1-8B-Instruct), the proposed intra-layer KV cache precision pair pruning algorithm selects K4V8 rather than K8V4. We will clarify in the revised manuscript that Claim 1 ("key cache is generally more important") reflects an overall trend across layers. KVTuner assigns more bits to value cache than key cache in specific layers. Those layers where higher value bitwidths outperform higher key bitwidths normally have more streaming-head patterns than retrieval heads; in such layers the key is more robust to low-bit quantization. Therefore, more bits should be assigned to the value cache, which is more sensitive in this case. * **This phenomenon verifies our observation and theoretical analysis about the strong correlation of KV cache quantization errors and attention patterns in Section 4.4.** In addition, the proposed KVTuner is adaptive to the inherent model structure patterns, and the proposed layer-wise KV cache quantization precision tuning makes sense. --- # 2. Comparison with KIVI We appreciate the reviewer’s attention to fairness in comparisons. **KVTuner offers more flexible accuracy-efficiency tradeoffs, which are not available in the static, uniform, non-mixed-precision baselines KIVI and per-token-asym.** * The accuracy of KVTuner-C3.44 only decreases by 0.52%, but the memory usage is reduced by 14% compared with KIVI-4 in Qwen2.5-3B-Instruct. We also push the frontier of nearly lossless KV cache compression (only 0.04% accuracy loss relative to the BF16 baseline) to 3.44-bit in this model. * In Table 11 (Page 24~25), we compare more uniform precision including K8V4 (C6), K8V2 (C5), and K4V2 (C3). 
Llama-3.1-8B-Instruct and Mistral-7B-v0.3 are more robust to low-bit key quantization than Qwen2.5-7B-Instruct. However, Qwen2.5-7B-Instruct outperforms others in terms of accuracy. * In Tables 5 and 6 (Page 8), the baselines INT4 KIVI and per-token-asym quantization lead to significant (67% and 26%) accuracy degradation in Qwen2.5-7B-Instruct. However, **KVTuner successfully reduces the accuracy loss to 16% with lower 3.92-bit memory usage and 0.18% with similar 4-bit memory usage, respectively. It indicates that the accuracy improvement of KVTuner is noticeable and KVTuner offers more robustness and flexibility to sensitive but powerful models.** * Figure 5 also visualizes the accuracy and equivalent KV cache bitwidth of different layer-wise KV cache precision pairs during KVTuner offline searching. The red circles are uniform precision across all layers. From Figure 5, we can easily observe more Pareto-optimal layer-wise KV cache precision pairs than uniform ones. **Especially, the accuracy of the uniform KV4, K4V2, and K2V4 is around 0% in Qwen2.5-7B-Instruct, while the searched equivalent 4-bit and 3-bit configs of KVTuner achieve 80% and 50% accuracy, respectively. It is a huge improvement in terms of accuracy with similar memory usage.** * Latency Baseline Clarification In Table 7, "KV8" refers to 8-bit KIVI quantization. We thus report the total model-level throughput comparison of Llama-3.1-8B-Instruct using the searched config in Table 5. The hardware is an Nvidia RTX 4090 24G. Compared with KIVI-KV8, the throughput of KVTuner-C3.25 can be improved by 16.79%~21.25%. |BS, inputLen|KV8(baseline)|K8V4|KV4|K4V2|KVTuner-C4.92|KVTuner-C3.25| |-|-|-|-|-|-|-| |64,128|3836|4193|4567|4697|4240(10.53%)|4652(**21.25%**)| |8,1024|549|597|632|645|600(9.22%)|641(**16.79%**)| --- # 3. Long context evaluation We compare KVTuner with the baselines on the 20 LongBench datasets, and the averaged scores are given below. 
The conclusion is that KVTuner pushes nearly lossless long-context generation to 3.25-bit. ### Qwen2.5-7B-Instruct KIVI |BF16|KIVI8|KIVI-K8V4|KIVI4|KVTuner-C4.92|KVTuner-C3.25| |-|-|-|-|-|-| |0.7956|0.7992|0.8001|0.7723|0.7956|0.7903| ### Qwen2.5-7B-Instruct per-token-asym |BF16|KV8|K8V4|KV4|KVTuner-C5.0|KVTuner-C4.0| |-|-|-|-|-|-| |0.7956|0.7971|0.7953|0.6343|0.8005|0.7960| --- # 4. Per-channel vs. per-token quantization KIVI-HQQ also supports key per-channel quantization by tuning the axis_key config. KIVI requires new operators and careful management as discussed in Line 58~69. In contrast, per-token-asym can be easily implemented and is supported in common inference frameworks such as LMDeploy. --- # 5. Layer-wise sensitivity analysis We also study the layer-wise sensitivity to KV cache quantization of Llama-3.1-8B-Instruct and Qwen2.5-7B-Instruct with both KV per-token-asym and KIVI-like quantization modes on the non-math AIGC multiturn softage dataset in Figures 9 (Page 17), 10 (Page 18), 12 (Page 20), and 13 (Page 21). The sensitive layers are consistent with those in math tasks, e.g., Figure 8. --- Rebuttal Comment 1.1: Comment: Thank you for your response—it addressed all of my concerns, and I will be raising my score. --- Reply to Comment 1.1.1: Comment: Dear Reviewer, Thank you sincerely for your thoughtful feedback and for recognizing our efforts to address the concerns raised in your initial review. We deeply appreciate your constructive critique, which has significantly strengthened the rigor and clarity of our work. Your insights, particularly on baseline comparisons, evaluation fairness, and long-context validation, have been invaluable in refining our methodology and presentation. We are committed to incorporating all promised revisions into the final manuscript, ensuring the paper meets the high standards of the publication. Your expertise and time invested in reviewing our work are greatly acknowledged and appreciated. 
Thank you once again for your support and for guiding us toward a stronger contribution to the field. Best regards, Authors of the paper 11535
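The per-token vs. per-channel distinction discussed in point 4 of the rebuttal above can be illustrated with a toy asymmetric uniform quantizer (a minimal sketch, not KIVI's actual grouped CUDA implementation; the matrix below is synthetic, with one large-magnitude "outlier" channel of the kind observed in key caches):

```python
def asym_quantize(vec, n_bits):
    # One quantization group: zero point = min, scale from the (min, max) range.
    lo, hi = min(vec), max(vec)
    levels = 2 ** n_bits - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    q = [round((x - lo) / scale) for x in vec]
    return [lo + qi * scale for qi in q]  # dequantized values

def quantize_matrix(mat, n_bits, per_channel):
    # mat is tokens x channels; per-token groups rows, per-channel groups columns.
    if not per_channel:
        return [asym_quantize(row, n_bits) for row in mat]
    qcols = [asym_quantize(list(col), n_bits) for col in zip(*mat)]
    return [list(row) for row in zip(*qcols)]

def mse(a, b):
    n = sum(len(r) for r in a)
    return sum((x - y) ** 2 for ra, rb in zip(a, b) for x, y in zip(ra, rb)) / n

# Three tokens, four channels; channel 3 is a large-magnitude outlier channel.
keys = [[0.10, 0.20, 0.30, 10.0],
        [0.12, 0.22, 0.32, 11.0],
        [0.14, 0.24, 0.34, 12.0]]
err_token = mse(keys, quantize_matrix(keys, 2, per_channel=False))
err_chan = mse(keys, quantize_matrix(keys, 2, per_channel=True))
print(err_chan < err_token)  # True: per-channel grouping isolates the outlier channel
```

When a whole channel is an outlier, a per-token (row-wise) scale is stretched by the outlier and crushes the small channels, whereas per-channel grouping keeps each channel's range tight, which is the motivation for KIVI's per-channel key quantization.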
Summary: This paper proposes an innovative quantization technique for KV caches, which can improve inference throughput with a negligible quality drop in the output. This paper's key insight is that the key cache is more important than the value cache in terms of reducing the quantization error. Its key contribution is an adaptive framework called KVTuner that can tune the KV quantization configurations offline and use them online for different objectives. The experiments show that the solution can produce similar quality compared with the state-of-the-art solutions, and its inference efficiency is higher than default KV8 quantization. ## Update after rebuttal I thank the authors for the clarifications. I will maintain the decision of "weak accept". Claims And Evidence: The key insight of the paper (the key cache is more important than the value cache in terms of reducing the quantization error) is backed up with good experimental results (20 samples from the standard dataset on an up-to-date llama 8b model). Methods And Evaluation Criteria: The evaluation section contains 2 parts: quality evaluation and efficiency evaluation. The quality evaluation is comprehensive and the metrics are suitable for the use case. However, the efficiency evaluation needs to be improved on the following points: 1. The setup of the benchmark is not clear. The paper does not mention how the baseline and their solution are implemented. Is there any new CUDA kernel in the paper's solution? Is the baseline using state-of-the-art attention frameworks like FlashAttention? 2. The definition of "throughput" is not clear. Does it refer to the speed of token generation (i.e., tokens per second), or number of finished requests per second, or something else? Theoretical Claims: The theoretical claims and the algorithm design make sense and there are no obvious problems. Experimental Designs Or Analyses: The experimental design of quality evaluation is comprehensive and good. 
However, the design of the efficiency evaluation is not clear (because of the problems mentioned above) Supplementary Material: The author provides a bunch of attention pattern analysis and experimental results, making the claims and the quality evaluation more solid in this paper. Relation To Broader Scientific Literature: There are plenty of works focusing on KV cache compressions. Some of them suggest the key cache is less important than the value cache [1], and some of them suggest we should have different quantization methods for the key cache and value cache respectively [2]. The key claim in this paper seems to conflict with those prior works. Please discuss the difference and the potential reason. References ---- [1] Zhao, Yilong, et al. "Atom: Low-bit quantization for efficient and accurate llm serving, 2024." URL https://arxiv. org/abs/2310.19102. [2] Liu, Zirui, et al. "Kivi: A tuning-free asymmetric 2bit quantization for kv cache." arXiv preprint arXiv:2402.02750 (2024). Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: In Table 7, the 8K FP16 setup has an OOM error. I'm wondering why it could happen. 8K context length only corresponds to ~1GB of KV cache when using llama-7B models. If the GPU is 48GB, how could it be OOM? Could it be because there is something wrong with the experiment? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thorough review and constructive feedback. We sincerely appreciate your acknowledgment of the innovativeness and feasibility of the proposed methodology and the theoretical analysis relating attention patterns to KV cache quantization, as well as your recognition of the comprehensive implementation of the experimental investigations. Below, we address your concerns and suggestions point by point: --- # 1. Efficiency evaluation clarifications We acknowledge the lack of clarity in our efficiency evaluation setup. Here are key clarifications: * Baseline Implementation: In Table 7, the static SOTA baseline KIVI and our solutions are both based on the official KIVI code with FlashAttention enabled in their kernels during decoding. KV8 and KV4 are KIVI-8 and KIVI-4, respectively. We will correct them in the revised version. We slightly modify the KIVI CUDA kernels to support INT8 and mixed precision of key and value cache. KIVI, per-token-asym, and our KVTuner are online static methods compatible with FlashAttention. In addition, KVTuner uses lightweight post-training calibration and offline bit-width selection. This design choice ensures compatibility with existing inference frameworks. However, other online mixed-precision KV cache quantization methods with attention-score-based online token importance estimation are normally not compatible with FlashAttention. * New GQA efficiency results with KVTuner: We modify the KIVI code to support GQA models including Llama-3.1-8B-Instruct. We thus report the total model-level throughput comparison of Llama-3.1-8B-Instruct using the searched config in Table 5. The hardware is an Nvidia RTX 4090 24G. 
**Compared with KIVI-KV8, the throughput of KVTuner-C3.25 can be improved by 16.79%~21.25%.** |BS, inputLen|KV8(baseline)|K8V4|KV4|K4V2|KVTuner-C4.92|KVTuner-C3.25| |-|-|-|-|-|-|-| |64,128|3836|4193|4567|4697|4240(10.53%)|4652(21.25%)| |16,512|1102|1205|1275|1304|1239(12.41%)|1296(17.55%)| |8,1024|549|597|632|645|600(9.22%)|641(16.79%)| * Throughput definition: We follow the same settings and definitions as KIVI. Throughput is defined as the number of tokens generated per second (measured end-to-end, including quantization/dequantization overhead). For example, if the batch size is 128 and one generation step takes 50ms, the throughput is 128 * 1000 / 50 = 2560 tokens/s. --- # 2. Discussion of conflicting literature We appreciate the opportunity to clarify our position relative to prior works (Atom and KIVI): Atom claims that KV cache is more amenable to quantization than activation matrices in Section 4.4 and utilizes INT4 precision for both key and value. **The importance difference of key and value is not clearly discussed in Atom. KIVI implies that key is more important than value by applying more complex per-channel per-group quantization to key and simple per-token quantization to value.** The conclusion that key cache is normally more important than value cache is validated with extensive empirical studies, which include final perplexity on the **wikitext** dataset in Table 2, layer-wise attention errors on the **GSM8K** dataset in Table 3 (Page 4), final model accuracy on the **general CEVAL, MMLU, TriviaQA, RACE, and TruthfulQA** datasets with both **per-token-asym and KIVI** quantization modes in Table 11 (Page 25), and layer-wise attention score and output errors with key per-channel-asym and value per-token-asym quantization on the **AIGC** multiturn softage dataset of **Llama-3.1-8B-Instruct and Qwen2.5-7B-Instruct** in Figure 10 (Page 18) and Figure 13 (Page 21). --- # 3. OOM with 8K sequence We use batch size 4 with 8K tokens (~4GB). 
The OOM error for the 8K FP16 setup may arise from an unoptimized memory allocation strategy in our prototype implementation (e.g., redundant intermediate tensors were not freed). The default efficiency testing in the KIVI repo does not enable FlashAttention during prefilling, which may also result in the OOM issue with 8K-long sequences. --- # Conclusion The proposed layer-wise KV cache precision pair tuning naturally suits the layer-wise sensitivity to KV cache quantization, making KVTuner a practical solution for reducing the memory usage and improving the inference efficiency of LLMs with various sensitivities. KVTuner successfully pushes nearly lossless KV cache quantization in complex mathematical and scientific tasks to 3.25-bit for Llama-3.1-8B-Instruct and 4-bit for the sensitive Qwen2.5-7B-Instruct. KVTuner also greatly narrows the performance difference between the simple per-token-asym and accurate KIVI quantization modes, even when using overall similar low-precision settings. Many KV cache quantization approaches have been proposed recently, but the correlation with LLM attention patterns is not well studied. We theoretically prove that sparse streaming heads are more robust to KV cache quantization than sensitive retrieval heads, which is the cause of the layer-wise sensitivity to KV cache quantization.
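The per-token-asym quantization mode discussed above can be illustrated with a minimal sketch. This is a plain-Python toy, not the KIVI CUDA kernel; the example vector and the bit-widths tried are invented for illustration:

```python
# Illustrative sketch of per-token asymmetric integer quantization (the
# simple "per-token-asym" mode): each token's values get their own scale
# and zero-point, then are rounded to unsigned integers.

def quantize_per_token_asym(token, bits):
    """Map one token's values onto unsigned `bits`-bit integers."""
    qmax = (1 << bits) - 1
    lo, hi = min(token), max(token)
    scale = (hi - lo) / qmax if hi > lo else 1.0
    q = [min(qmax, max(0, round((x - lo) / scale))) for x in token]
    return q, scale, lo        # lo acts as the zero-point offset

def dequantize(q, scale, zero):
    return [v * scale + zero for v in q]

token = [0.12, -0.57, 0.33, 0.91, -0.08]   # invented example vector
for bits in (8, 4, 2):
    q, scale, zero = quantize_per_token_asym(token, bits)
    rec = dequantize(q, scale, zero)
    err = max(abs(a - b) for a, b in zip(token, rec))
    print(f"{bits}-bit max abs error: {err:.4f}")
```

Lower bit-widths shrink the cache proportionally but widen the per-token reconstruction error, which is exactly the accuracy/memory trade-off that the layer-wise precision pairs navigate.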
Summary: The authors propose KVTuner, a sensitivity-aware layer-wise mixed-precision KV cache quantization framework for LLM inference. KVTuner addresses key challenges in KV cache quantization, including layer-wise sensitivity to quantization errors, high overhead of fine-grained online adjustments, and inflexibility across different LLM architectures. Instead of applying uniform quantization across all layers, KVTuner performs an offline search for optimal layer-wise key and value precision pairs (e.g., K8V4, K4V2) using multi-objective optimization (MOO). This search considers both memory constraints and model accuracy. The precomputed precision pairs are then applied directly during inference, reducing computational overhead while maintaining nearly lossless accuracy. Claims And Evidence: The paper claims that KVTuner significantly improves LLM inference efficiency while maintaining accuracy close to full-precision KV caching. As shown in Table 7, KVTuner-C6 achieves a 38.3% throughput improvement compared to KV8, and KVTuner-C3 achieves an even higher 76.4% improvement. However, the selection method may introduce additional computational complexity. In lines 275-295, the authors discuss how KVTuner avoids online decision-making overhead. Methods And Evaluation Criteria: The methodology is well-structured and based on a layer-wise sensitivity analysis of KV cache quantization. The evaluation uses standard mathematical reasoning benchmarks such as GSM8K and GPQA, which are appropriate for testing the impact of quantization errors. However, as noted in lines 220-250, additional profiling of the computational overhead per layer and the impact on inference latency in real-world applications (e.g., batched inference on vLLM) would strengthen the claims. A comparison of layer-wise FLOP costs before and after applying KVTuner’s selection would provide a clearer picture of its computational efficiency. 
Theoretical Claims: The paper correctly identifies that key cache quantization errors accumulate across both model layers and generation steps, leading to significant degradation in long-context inference. The discussion in lines 330-350 formalizes the optimization problem for selecting layer-wise precision pairs, but it does not analyze whether the proposed selection strategy guarantees global optimality. Additionally, while KVTuner reduces memory usage, it does not completely eliminate online computational overhead. Experimental Designs Or Analyses: The experiments comprehensively evaluate KVTuner across different models (Llama-3.1-8B, Qwen2.5-7B, Mistral-7B) and various KV precision configurations. The results in Table 5 show that KVTuner maintains accuracy while achieving significant memory savings. However, a few aspects could be further explored. Supplementary Material: I reviewed the supplementary material, which provides additional ablation studies and sensitivity analysis. Relation To Broader Scientific Literature: The paper is well-situated in the literature on KV cache quantization and memory-efficient LLM inference. It correctly cites works on uniform KV quantization (KV8, KV4) and hybrid eviction strategies. However, as discussed in lines 275-295, it does not sufficiently compare with recent approaches that integrate quantization with eviction (e.g., SnapKV). A direct comparison would strengthen the positioning of KVTuner as a practical alternative to existing methods. Essential References Not Discussed: The paper does not discuss alternative mixed-precision approaches that incorporate token-importance ranking for KV selection. Other Strengths And Weaknesses: The paper makes an important contribution to memory-efficient LLM inference with strong empirical results. However, there are areas for improvement: 1) The additional computational cost is not fully analyzed. 2) The practical impact on multi-head attention efficiency is unclear.
3) The method’s effectiveness in extremely long-context settings (e.g., 100K+ tokens) is not evaluated. Other Comments Or Suggestions: Including a runtime profiling analysis of KVTuner’s selection method would strengthen claims about efficiency. Questions For Authors: What is the additional FLOP overhead per generation step compared to traditional KV quantization methods? How does KVTuner scale with batch size increases? Can KVTuner be integrated with KV cache eviction methods like SnapKV for improved memory efficiency? Have you considered hybrid approaches? How does KVTuner handle extreme long-context inference (e.g., 100K+ tokens)? Does performance degrade due to accumulated quantization errors? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We sincerely thank you for the thoughtful feedback and constructive critiques. Below, we address each concern and outline planned revisions to strengthen the paper: --- # 1. Computational cost **The profiling and layer-wise KV cache precision tuning are completely offline, and no online overhead for precision selection is introduced.** Only online quantization of the newly generated key and value tokens using the calibrated precisions is applied, similar to uniform KV precision. We only perform 200 rounds of offline multi-objective optimization search with limited data, which is efficient and a **one-time cost**. This is a huge advantage compared with other mixed-precision approaches that incorporate token-importance ranking for KV selection. **The two-stage search space pruning only takes seconds.** The MOO search over the pruned search spaces is the most time-consuming part, which runs the target LLM for 200 rounds on the selected 200 prompts. The optuna framework is also quite efficient. **Simple Hugging Face transformers may take hours** in the above settings with single-batch inference on an Nvidia RTX 4090. The offline tuning cost is acceptable compared with the cost saved during large-scale online serving and with model pretraining cost. In addition, better hardware, torch graph compilation, and multi-batch inference during tuning may reduce the tuning cost to less than one hour. --- # 2. Latency experiment The layer-wise FLOP cost difference is mainly caused by the efficiency difference of the KV cache precision pairs. The model-level efficiency reflects the overall effect of the layer-wise efficiency of all KV cache precision pairs. The memory movement cost from CPUs to GPUs increases linearly with the KV cache size in most cases, and attention is normally memory-bound. We also report the total model-level throughput comparison of Llama-3.1-8B-Instruct using the searched config in Table 5 below. The hardware is an Nvidia RTX 4090 24G.
**Compared with KIVI-KV8, the throughput of KVTuner-C3.25 can be improved by 16.79%~21.25%.**

|BS, inputLen|KV8(baseline)|K8V4|KV4|K4V2|KVTuner-C4.92|KVTuner-C3.25|
|-|-|-|-|-|-|-|
|64,128|3836|4193|4567|4697|4240(10.53%)|4652(**21.25%**)|
|16,512|1102|1205|1275|1304|1239(12.41%)|1296(**17.55%**)|
|8,1024|549|597|632|645|600(9.22%)|641(**16.79%**)|

--- # 3. Long-context effectiveness We compare KVTuner with the baselines KIVI-8, KIVI-4, our proposed variant KIVI-K8V4, and the per-token-asym ones on the 20 LongBench datasets; the averaged scores are below. **KVTuner pushes nearly lossless long-context generation down to 3.25-bit KV cache quantization, outperforming uniform KV precision.**

### Qwen2.5-7B-Instruct KIVI

|BF16|KIVI-8|KIVI-K8V4|KIVI-4|KVTuner-C4.92|KVTuner-C3.25|
|-|-|-|-|-|-|
|0.7956|0.7992|0.8001|0.7723|0.7956|0.7903|

### Qwen2.5-7B-Instruct per-token-asym

|BF16|KV8|K8V4|KV4|KVTuner-C5.0|KVTuner-C4.0|
|-|-|-|-|-|-|
|0.7956|0.7971|0.7953|0.6343|0.8005|0.7960|

--- # 4. Global optimum Due to the complex and nonlinear dependency of error accumulation, there is no theoretical guarantee of global optimality for this NP-hard problem. The MOO formulation seeks Pareto-optimal solutions rather than a global optimum. The two-stage search space pruning helps the MOO converge and reduces the search cost; compare Figures 5 and 6. --- # 5. Scaling over batch size When the input and output sequence lengths are fixed at 512 and 128, respectively, the following table presents the throughput (in tokens/s) of KVTuner.

|BS|4|8|16|32|64|80|
|-|-|-|-|-|-|-|
|Tokens/s|494|938|1776|2889|4655|4964|

--- # 6. Integration with eviction We agree that integrating with eviction methods is promising and leave it to future work. KVTuner is fully compatible with KV cache eviction methods including StreamingLLM, H2O, and SnapKV, because quantization and eviction are two orthogonal approaches. --- # 7. MHA We mainly test on LLMs with grouped query attention (GQA), because most recent and powerful models are GQA-based and GQA is a variant of MHA. The layer-wise sensitivity to KV cache quantization is an inherent property of LLMs with multi-layer transformers. In addition, key and value in attention layers have different sensitivities to quantization. These are the only two assumptions of KVTuner. We also analyzed MLA models such as Deepseek-v2-lite-chat, in which we observed similar layer-wise patterns. --- # Conclusion KVTuner’s practical impact lies in its deployability: it requires no inference-time overhead and achieves near-lossless accuracy, making it a compelling solution for production LLM systems and most LLM acceleration hardware. In addition, we study the underlying mechanism of the higher importance of the key cache. We also theoretically show that the layer-wise sensitivity of attention heads to KV cache quantization strongly correlates with attention patterns, which is novel and may provide more insights for model design and compression.
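The offline multi-objective precision search described in point 1 can be sketched in miniature. Everything below is invented for illustration: the per-layer sensitivities, the toy accuracy-loss model, and plain random search standing in for the optuna-based MOO:

```python
import random

# Toy stand-in for the offline layer-wise precision-pair search: pick a
# (key_bits, value_bits) pair per layer to trade off mean KV cache bits
# against an invented accuracy-loss model, then keep the Pareto front.

random.seed(0)
N_LAYERS = 4
PAIRS = [(8, 8), (8, 4), (4, 4), (4, 2)]   # candidate (key_bits, value_bits)
SENS = [1.0, 0.3, 0.6, 0.2]                # invented per-layer sensitivity

def evaluate(config):
    """Return (mean bits per KV element, toy accuracy loss) for a config."""
    mem = sum(k + v for k, v in config) / (2 * N_LAYERS)
    # Value bits weighted more here purely to make the front non-trivial.
    loss = sum(s * ((8 - k) + 2 * (8 - v)) for s, (k, v) in zip(SENS, config))
    return mem, loss

trials = [tuple(random.choice(PAIRS) for _ in range(N_LAYERS))
          for _ in range(200)]
scored = {c: evaluate(c) for c in trials}   # dedupe repeated configs

def dominated(a, b):
    """True if objective vector b is at least as good as a everywhere and differs."""
    return b[0] <= a[0] and b[1] <= a[1] and b != a

pareto = [c for c, obj in scored.items()
          if not any(dominated(obj, other) for other in scored.values())]
print(len(pareto), "Pareto-optimal configs out of", len(scored))
```

The real search replaces the toy loss with calibrated accuracy measurements and the random sampler with optuna's multi-objective sampler, but the output has the same shape: a Pareto set from which a deployment picks the config meeting its memory budget.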
Can Large Language Models Understand Intermediate Representations in Compilers?
Accept (poster)
Summary: This paper presents an empirical study of the capability of LLMs to understand intermediate representations (IRs) of code. The LLMs are evaluated on 4 types of IR understanding tasks: control-flow graph (CFG) reconstruction, IR decompilation, code summarization, and execution reasoning. The results indicate that while LLMs are capable of parsing IR syntax and recognizing high-level structures in code tasks, they struggle with control flow reasoning, execution semantics, and loop handling. Claims And Evidence: The paper provides experiments to support the claims, using the HumanEval dataset and five LLMs. Methods And Evaluation Criteria: The method is only evaluated on the HumanEval dataset. This dataset is at the code function level, with fewer than 10 lines of code on average. The paper investigates GPT-4, GPT-3, Gemma 2, LLaMA 3.1, and Code Llama in understanding IRs. Since the capabilities of LLMs are evolving rapidly, SOTA LLMs should be used for evaluation. Theoretical Claims: This paper is an empirical study. There is no proof of theoretical claims. Experimental Designs Or Analyses: The designs and analysis of the experiments make sense for all 4 types of tasks, demonstrating the LLMs’ capability of understanding IRs. The evaluation of LLMs is usually conducted over multiple runs, and the results are often presented with statistical metrics such as pass@k, but whether the results in this paper are obtained over multiple runs is not specified. Supplementary Material: I reviewed the appendix of the paper, including related work and prompt design. The paper did not provide other supplementary materials. Relation To Broader Scientific Literature: This paper lists multiple relevant papers and analyzes their differences. For example, Meta’s LLM Compiler offers pre-trained models for code optimization tasks.
While prior work has explored IR representation learning for code optimization and analysis, no studies have systematically examined how LLMs comprehend IR syntax, CFG structures, execution behavior, and semantic relationships. This paper addresses this gap by providing the first empirical evaluation of LLMs’ IR comprehension across these dimensions. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: 1. This paper presents an empirical study to investigate the capabilities of LLMs, including GPT-4, GPT-3, Gemma 2, LLaMA 3.1, and Code Llama, in understanding IRs. 2. The IR comprehension of LLMs is analyzed across four tasks: Control Flow Graph (CFG) reconstruction, decompilation, code summarization, and execution reasoning. Weaknesses: 1. The paper only uses the HumanEval dataset, which is at the code function level with few lines on average. 2. The paper evaluates the capabilities of LLMs including GPT-4, GPT-3, Gemma 2, LLaMA 3.1, and Code Llama. More SOTA LLMs should be included, such as GPT-4o, DeepSeek-V3 or R1, and Qwen. 3. There are non-neural tools developed for these tasks, many of them integrated into commonly used IDEs or software testing applications. If the performance of these tools were listed alongside, it would show the advantages and disadvantages of LLM-based methods. 4. The evaluation of LLMs is usually conducted over multiple runs, and the results are often presented with statistical metrics such as pass@k, but whether the results in this paper are obtained over multiple runs is not specified. Other Comments Or Suggestions: No. Questions For Authors: Please refer to the weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 3
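The pass@k statistic this review asks about has a standard unbiased estimator, 1 - C(n-c, k) / C(n, k), computed from n sampled generations of which c pass. A minimal sketch (the numbers plugged in are invented):

```python
from math import comb

# Unbiased pass@k estimator: probability that at least one of k samples
# drawn (without replacement) from n generations is correct, given c of
# the n generations are correct.

def pass_at_k(n: int, c: int, k: int) -> float:
    if n - c < k:            # fewer than k incorrect samples: always a hit
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

print(round(pass_at_k(10, 3, 1), 6))   # with k=1 this is just the pass rate: 0.3
print(round(pass_at_k(10, 3, 5), 4))   # -> 0.9167
```

Reporting pass@k over several runs, as the review suggests, reduces the variance that a single zero-shot pass would carry.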
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the thoughtful and constructive feedback. Below, we address the key concerns: **W1:** The paper only uses HumanEval, which contains code functions with an average of fewer than 10 lines. **A1:** Though HumanEval consists of relatively short functions, our results reveal fundamental limitations in LLMs’ IR comprehension that extend beyond dataset scale, for three reasons: **(1) Controlled Complexity:** Even small programs require precise reasoning about IR constructs; **(2) Task Difficulty Beyond Code Size:** Despite their brevity, small programs exhibit complex low-level semantics; **(3) Systemic Failures:** Consistent errors across tasks—such as CFG Construction and Execution Reasoning—indicate systemic deficiencies in IR understanding. --- **W2:** The paper evaluates GPT-4, Gemma, and so on. More SOTA LLMs, such as GPT-4o and DeepSeek-V3 (or R1), should be included.\ &\ **W4:** It is not specified whether the results are obtained over multiple runs. **A2:** We agree that including SOTA models is essential. We supplemented our experiments with DeepSeek R1. **A4:** We fully agree that multi-run evaluations using metrics like pass@k enhance robustness. We ran DeepSeek R1 on all four tasks three times (R1–R3) to ensure stability.

T1: CFG Construction

| | Comp. | Node Acc. | Full Accu. | Partial Accu. |
|:-:|:-:|:-:|:-:|:-:|
| R1 | 69 | **55** | 53 | 2 |
| R2 | 77 | **64** | 57 | 6 |
| R3 | 73 | **62** | 60 | 2 |
| GPT-4 | 164 | 50 | 39 | 11 |

T2: IR Decompilation

| | Comp. | Re-exe. Comp. | Re-exe. Success |
|:-:|:-:|:-:|:-:|
| R1 | 72 | 36 | 18 |
| R2 | 77 | 38 | 17 |
| R3 | 75 | 39 | 14 |
| LLaMA 3.1 | 77 | 23 | 14 |

T3: Code Summarization

| | Task Comp. | BLEU > 0.8 | METEOR > 0.8 | ROUGE > 0.8 | Avg. BLEU | Avg. ROUGE | Avg. METEOR |
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| R1 | 49 | 1 | 5 | 8 | 0.413 | 0.637 | 0.699 |
| R2 | 49 | 1 | 5 | 10 | 0.420 | 0.639 | 0.692 |
| R3 | 49 | 1 | 4 | 10 | 0.433 | 0.640 | 0.705 |
| LLaMA 3.1 | 81 | 1 | 5 | 11 | 0.39 | 0.61 | 0.67 |

T4: Execution Reasoning

| | IR Com. | SC Com. | IR Pass | SC Pass | IR Partial Pass | SC Partial Pass | IR Pass % | SC Pass % |
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| R1 | 164 | 164 | 31 | 133 | 133 | 31 | 18.9 | 81.1 |
| R2 | 164 | 164 | 30 | 146 | 134 | 18 | 18.3 | 81.7 |
| R3 | 164 | 164 | 32 | 139 | 132 | 25 | 19.5 | 80.4 |
| LLaMA 3.1 | 164 | 164 | 31 | 119 | 114 | 35 | 18.9 | 72.0 |

DeepSeek R1 performs comparably to LLaMA 3.1 on T2–T4 but excels in T1 (CFG Construction) due to its integrated chain-of-thought mechanism that helps identify basic blocks and critical control flow instructions (e.g., "br" and "jmp"). **Revised Finding 1:** LLMs generally struggle with detecting basic blocks and constructing control flow edges; however, chain-of-thought prompting modestly enhances the recognition of key control flow instructions. **A4:** Notably, the three runs yielded consistent results, demonstrating the reliability of DeepSeek R1’s performance. We will include these multi-run results in the revised manuscript. **In the final version, we will include comprehensive analyses, additional experiments, and concrete examples to support these findings.** In future work, we will systematically explore additional prompting techniques to further validate and extend these findings. --- **W3:** There exist non-neural tools for IR tasks in common software testing applications that could help highlight the dis/advantages of LLM-based methods. **A3:** We appreciate the reviewer’s point that established non-neural methods serve as valuable baselines.
Tools such as Ghidra, various IDE tools, and symbolic execution engines like KLEE excel at CFG construction, decompilation, and execution reasoning through extensive domain-specific engineering. In contrast, our study uses these tasks solely as diagnostic benchmarks to evaluate the raw, untuned IR comprehension of LLMs. Our goal is not to outperform specialized tools but to reveal intrinsic LLM limitations—such as in control flow inference, granular semantic understanding, and loop handling—that can guide future optimizations. In the final manuscript, we will include a detailed discussion comparing our results with these non-neural baselines. --- W4 & A4 are included in A2. --- Rebuttal Comment 1.1: Comment: Thanks for the clarification. I have updated the score. --- Reply to Comment 1.1.1: Comment: Thank you very much for your updated evaluation and for taking the time to review our rebuttal. We greatly appreciate your feedback and value your insights into our work. Your constructive comments are crucial for us, and we welcome any further concerns or suggestions you may have as we continue to refine our research. Best Regards, The Authors
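The CFG-construction accuracy discussed in this rebuttal can be computed with a simple set-overlap scorer over the DOT output the prompts request. The sketch below is illustrative only; the exact matching rules used in the paper may differ, and the two DOT snippets are invented:

```python
import re

# Minimal edge-level scorer for predicted vs. reference CFGs given as DOT
# text: extract "a -> b" edges and compute F1 over the two edge sets.

EDGE_RE = re.compile(r"(\w+)\s*->\s*(\w+)")

def edges(dot: str) -> set:
    return set(EDGE_RE.findall(dot))

def edge_f1(pred_dot: str, ref_dot: str) -> float:
    p, r = edges(pred_dot), edges(ref_dot)
    if not p or not r:
        return 0.0
    tp = len(p & r)                      # edges recovered correctly
    prec, rec = tp / len(p), tp / len(r)
    return 2 * prec * rec / (prec + rec) if tp else 0.0

ref = "digraph { entry -> loop; loop -> loop; loop -> exit }"
pred = "digraph { entry -> loop; loop -> exit; entry -> exit }"
print(round(edge_f1(pred, ref), 3))      # -> 0.667 (two of three edges recovered)
```

A node-level score can be built the same way from the set of block names, mirroring the "Node Acc." vs. "Full/Partial Accu." split in the T1 tables.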
Summary: The paper experiments with applying LLMs to the control flow graph of programming code, identifying key challenges in control flow, semantic understanding, and loop handling. These challenges, as analyzed through 4 tasks, seem to persist across a variety of language models including Code Llama, Gemma 2, and GPT-4. The paper claims to be the first to analyze how LLMs perform on intermediate representations of the compiler. Claims And Evidence: The main claim is negative, that LLMs struggle with CFG-based tasks. This negative claim receives support from the results, with some limitations (as described below) -- as is common with negative claims -- where a comprehensive study of all possibilities is difficult to bring forward. It is difficult to put the results that the LLMs currently achieve into context without seeing some non-LLM baselines for these specific tasks. Methods And Evaluation Criteria: The dataset is a processed version of a subset of HumanEval, which is an established benchmark for evaluating LLMs. The tasks around the CFGs are newly designed and seem to make sense. There are technically no new proposed methods, but a new application of LLMs to these newly designed tasks. Theoretical Claims: Not applicable. Experimental Designs Or Analyses: The experimental design seems appropriate for the studied research question of how LLMs would perform on CFG-related tasks. Supplementary Material: I reviewed the appendices on comprehensive related work and on the employed prompts to understand that evaluation is done zero-shot and w/o any chain-of-thought prompting techniques. Relation To Broader Scientific Literature: The key contributions match the general trend of applying LLMs to a variety of problems. The paper covers a particular topic within applying LLMs to code. Essential References Not Discussed: None that I'm aware of, yet it is not well argued why methods from the related work were not considered as baselines.
Other Strengths And Weaknesses:

## Strengths
- The paper provides an in-depth analysis of applying LLMs to control flow graphs.
- The importance of analyzing control flow graphs is nicely motivated.
- Specific findings are emphasized and justified by the results.
- Applying LLMs to the intermediate representations of a compiler is, to the best of my knowledge, a novel application of LLMs.

## Weaknesses
- The paper doesn't consider any non-LLM baselines, e.g. graph neural nets (such as those mentioned in the related work) that would arguably be a more natural fit for the task. Even if LLMs currently don't compete with GNNs and other graph representation learning methods, it would be interesting to see how big the margin is.
- Evaluation is limited to zero-shot experiments. There are no few-shot experiments and no basic CoT prompting was tried. Both of these are fairly established standard techniques and could very well influence the conclusions.

Other Comments Or Suggestions:
- The paper could be strengthened by efforts to improve the basic LLM performance with well-established basic prompting techniques (e.g., chain-of-thought, few-shot prompting).
- The paper could be improved by adding task-specific baselines.

Questions For Authors: Are there any non-LLM approaches that would be applicable to the introduced tasks, e.g., fine-tuning or training a classifier on top of IR2vec / Bert-style models and/or applying graph representation learning methods? Or is there any particular reason that those are not included/applicable? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful feedback and recognition of our novel application of LLMs to compiler IRs. We also appreciate the acknowledgement of our focus on task-specific analysis. We address the key concerns below. **Q1 & W1:** Are there any non-LLM approaches that would be applicable to the introduced tasks, e.g., fine-tuning or training a classifier on top of IR2vec / Bert-style models and/or applying graph representation learning methods? Or is there any particular reason that those are not included/applicable\ &\ **W3:** The paper does not include non-LLM baselines. **A1:** We appreciate the reviewer’s point that established non-LLM methods can serve as valuable baselines. For instance, pre-trained models such as FAIR [1] have shown impressive results in semantic summarization. However, while non-LLM approaches (e.g., GNNs, fine-tuned BERT-style models) excel on specific tasks, they require extensive dataset-specific training and are typically applied to higher-level applications. In contrast, our study focuses on the raw, untuned IR comprehension of general-purpose LLMs. By using tasks such as CFG reconstruction, IR decompilation, code summarization, and execution reasoning, we establish diagnostic benchmarks that evaluate the zero/few-shot and chain-of-thought (CoT) prompting capabilities of current LLMs without any task-specific fine-tuning. This approach not only reveals the inherent limitations of existing LLMs in understanding intermediate representations but also provides actionable insights for future improvements. Moreover, as detailed in **A2**, we will incorporate additional CoT prompting in the revised manuscript. In future work, we plan to explore non-LLM baselines and a broader range of prompting techniques to further enhance our analysis. --- **W2 & W3:** Evaluation is limited to zero-shot experiments; few-shot and basic CoT prompting have not been explored. **A2:** Our evaluation is not strictly zero-shot. 
We use zero-shot prompting for CFG Construction and IR Decompilation, few-shot examples for Code Summarization, and chain-of-thought prompting for Execution Reasoning. Although these techniques improve performance, fundamental challenges in IR understanding—particularly in control flow and execution semantics—persist, with model rankings remaining unchanged. To assess the benefits of chain-of-thought prompting, we conducted supplementary experiments using DeepSeek R1. We selected DeepSeek R1 because (1) it is among the state-of-the-art LLMs, and (2) it features an inherent chain-of-thought mechanism that decomposes complex IR tasks into intermediate reasoning steps, showing potential advantages for our IR tasks. We ran DeepSeek R1 on all four tasks three times to ensure stability. Our preliminary results (one run as an example) are as follows:

T1: CFG Construction

| | Comp. | Node Acc. | Full Accu. | Partial Accu. |
|:-:|:-:|:-:|:-:|:-:|
| R2 | 77 | **64** | 57 | 6 |
| GPT-4 | 164 | 50 | 39 | 11 |

T2: IR Decompilation

| | Comp. | Re-exe. Comp. | Re-exe. Success |
|:-:|:-:|:-:|:-:|
| R1 | 72 | 36 | 18 |
| LLaMA 3.1 | 77 | 23 | 14 |

T3: Code Summarization

| | Task Comp. | BLEU > 0.8 | METEOR > 0.8 | ROUGE > 0.8 | Avg. BLEU | Avg. ROUGE | Avg. METEOR |
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| R1 | 49 | 1 | 5 | 8 | 0.413 | 0.637 | 0.699 |
| LLaMA 3.1 | 81 | 1 | 5 | 11 | 0.39 | 0.61 | 0.67 |

T4: Execution Reasoning

| | IR Completed | SC Completed | IR Pass | SC Pass | IR Partial Pass | SC Partial Pass | IR Pass rate | SC Pass Rate |
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| R1 | 164 | 164 | 31 | 133 | 133 | 31 | 0.189 | 0.811 |
| LLaMA 3.1 | 164 | 164 | 31 | 119 | 114 | 35 | 0.189 | 0.72 |

DeepSeek R1 performs comparably to LLaMA 3.1 on T2–T4 but excels in T1 (CFG Construction) due to its integrated chain-of-thought mechanism that helps identify basic blocks and critical control flow instructions (e.g., "br" and "jmp").
**Revised Finding 1:** LLMs generally struggle with detecting basic blocks and constructing control flow edges; however, chain-of-thought prompting modestly enhances the recognition of key control flow instructions. **In the final version, we will include comprehensive analyses, additional experiments, and concrete examples to support these findings.** In future work, we will systematically explore additional prompting techniques to further validate and extend these findings. [1] Niu, Changan, et al. "Fair: flow type-aware pre-training of compiler intermediate representations." ICSE’24.
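The T3 summary metrics above are aggregated by counting how many per-sample scores exceed 0.8. A toy sketch of that aggregation: the unigram-precision scorer below is a deliberate simplification (real BLEU uses clipped n-gram precisions up to 4-grams plus a brevity penalty), and the candidate/reference pairs are invented:

```python
from collections import Counter

# Simplified per-sample summary scorer plus the "score > 0.8" tally used
# in the T3 tables. Unigram precision only; not real BLEU/ROUGE/METEOR.

def unigram_precision(candidate: str, reference: str) -> float:
    cand, ref = Counter(candidate.split()), Counter(reference.split())
    overlap = sum(min(c, ref[w]) for w, c in cand.items())  # clipped matches
    return overlap / max(1, sum(cand.values()))

pairs = [  # (model summary, reference summary), both invented
    ("adds two numbers", "adds two numbers"),
    ("sorts a list in place", "returns a sorted copy of the list"),
]
scores = [unigram_precision(c, r) for c, r in pairs]
high = sum(s > 0.8 for s in scores)      # count of samples above threshold
print(scores, high)                      # -> [1.0, 0.4] 1
```

The real tables substitute full BLEU/ROUGE/METEOR scores for `unigram_precision`, but the thresholding and averaging are the same shape.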
Summary: The paper provides an empirical evaluation of current LLMs on IR understanding tasks, namely:
- CFG reconstruction
- decompilation
- code summarization, and
- execution reasoning

and finds that models struggle with complex reasoning about IRs. Claims And Evidence:
- Pioneering empirical study to investigate the capabilities of LLMs -- first work to evaluate LLMs on IR-related tasks
- Empirical findings
  - LLMs recognize syntax but struggle with control flow and semantics
  - Loop handling remains a fundamental challenge

These are demonstrated with comprehensive evaluations on the 4 tasks. Methods And Evaluation Criteria: They propose new benchmarks (derived from compiling HumanEval problems to IRs) on four diverse tasks to evaluate the IR capabilities of models. The tasks are not necessarily novel and have been studied for programs in high-level languages, but are new in the context of LLVM code. The benchmark is collected from HumanEval problems, which might not provide insights toward understanding real-world programs. In particular, given that a key challenge with IR/assembly programs is their terseness, it is unclear whether those effects are accounted for here. Theoretical Claims: none Experimental Designs Or Analyses:
- The experimental design and evaluation are well presented. Each task is described clearly with appropriate metrics and prompts.
- Given that IR understanding can be seen as an OOD task, it would be useful to further describe the prompting effort applied across different models.
- Reasoning models (such as O1 or R1) are not discussed. Given the considerable performance gains witnessed on programming tasks, they deserve analysis in this paper.

Supplementary Material: only skimmed the prompts Relation To Broader Scientific Literature: The paper contextualizes the research with relevant work in LLMs for intermediate/assembly language.
Essential References Not Discussed: The related work for intermediate/assembly language is well provided; however, a rich body of work about code understanding is not discussed. For example, [1] introduced code execution evaluations, which have been followed up by many other works attempting to use different static and dynamic approaches to evaluate code understanding (including CFG path analysis in [2]). [1] CRUXEval: Code Reasoning, Understanding, and Execution Evaluation [2] LLMs: Understanding Code Syntax and Semantics for Code Analysis Other Strengths And Weaknesses:

Strengths.
- The paper is well written and the findings are easy to understand. The recommendations for improving IR reasoning capabilities would also be useful for future work.

Weaknesses.
- The choice of HumanEval problems for constructing the tasks likely limits the generalizability of the findings.
- Current LLMs are likely not heavily optimized for IR programs. It is unclear how to calibrate the findings once we develop models explicitly optimized for IR programs. It is possible that the findings would change considerably for such models.

Other Comments Or Suggestions: none Questions For Authors:
- Can you provide more details on the level of prompting required for models to work with IR programs?
- Can you provide the statistics of the IR HumanEval programs?

Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the thoughtful and constructive feedback. We appreciate your recognition of the novelty of applying LLMs to compiler IRs and the value of our in-depth evaluation. Below, we address the key concerns: **Q1:** Can you provide more details on the level of prompting? **A1:** In our paper, we adopted two prompting strategies: **(1) Task-Specific Expert Prompts.** For each IR task, prompts assign a role (e.g., “CFG analyzer for IRs”) and clearly define the input, task objective, and required output format (e.g., DOT files with nodes and edges); **(2) Advanced Prompting Techniques.** We also applied *Zero/Few-Shot Prompting* and *Chain-of-Thought (CoT) Prompting* in Tasks 1–3 and Task 4, respectively. Note that the main text summarizes our approach, and complete prompt templates are provided in Appendix B (extended examples omitted for brevity). --- **Q2:** Can you provide the statistics of the IR HumanEval programs? **A2:** Key statistics of the 164 IR programs across optimization levels are as follows:

| OPT Level | LoC | Tokens | Functions | Loops | Basic Blocks |
|:-:|:-:|:-:|:-:|:-:|:-:|
| -O0 | 162,389 | 1,079,290 | 5,195 | 463 | 13,228 |
| -O1 | 66,598 | 454,954 | 393 | 577 | 7,708 |
| -O2 | 69,102 | 478,002 | 374 | 625 | 8,048 |
| -O3 | 75,519 | 526,395 | 367 | 827 | 8,917 |

At -O0, minimal optimization yields verbose IR with duplicate functions to preserve debugging (‘linkonce_odr’). -O1 and -O2 remove redundant elements, reducing size but slightly increasing loop counts, while -O3 further simplifies the structure at the cost of more loops. These trends explain why decompilation performs best at -O1/O2. We will include a detailed description and a summary table in the revised manuscript, along with data-driven explanations to substantiate our findings. --- **W1:** The choice of HumanEval problems for constructing the tasks likely limits the generalizability.
**AW1:** We acknowledge that the HumanEval dataset may limit generalizability. However, our results reveal “general limitations” in LLMs’ IR comprehension that extend beyond dataset scale or diversity from two aspects: **(1) Controlled Complexity:** Although the programs are small, the extracted IRs involve common operations—including nested loops, conditional statements, and function calls—that pose significant challenges; **(2) Systemic Failures:** Consistent errors across tasks, such as CFG reconstruction and execution reasoning, indicate systemic deficiencies in IR understanding. These issues represent fundamental “general challenges” that LLMs have yet to overcome for most programs. Once these preliminary obstacles are addressed, we plan to expand our evaluations to larger, more diverse datasets to continuously enhance generalizability. We will also discuss future directions for incorporating more diverse IR samples in the revised manuscript. --- **W2:** It is unclear how to calibrate the findings once we develop models explicitly optimized for IR programs **AW2:** We agree that fine-tuning on IR-specific datasets may improve performance. Our study is primarily exploratory, positioned to assess the untuned performance of current LLMs in understanding IRs without relying on benchmark-specific fine-tuning. We acknowledge that fine-tuning LLMs on specialized datasets could boost their performance on IR-related downstream tasks. However, by focusing on their untuned performance, our work establishes a clear baseline that exposes key challenges. Our findings pinpoint areas where current LLMs fall short—such as control flow comprehension (Finding 1), granular semantic understanding (Findings 4–5), and loop handling—while also offering guidance for targeted fine-tuning strategies. 
We are committed to pursuing fine-tuning as a central focus of our future research, leveraging our insights to significantly enhance models' abilities to tackle more complex IR-related tasks. --- **W3**: The related work for intermediate/assembly language is well provided; however, a rich body of work about code understanding is not discussed. **AW3:** We appreciate the suggestion to contextualize our work further within the broader code understanding literature. Although our manuscript comprehensively covers IR-level work, we agree that expanding the discussion of static/dynamic analysis methods (e.g., CRUXEval [2] and CFG path analysis in [3]) would enrich our related work section. In the revised manuscript, we will incorporate these references to better position our study within the broader landscape of code understanding research. [1] Chris C., et al. LLM Compiler: Foundation Language Models for Compiler Optimization. CC’25 [2] CRUXEval: Code Reasoning, Understanding, and Execution Evaluation [3] LLMs: Understanding Code Syntax and Semantics for Code Analysis
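A minimal illustration of the CFG-reconstruction task evaluated above (parsing basic-block labels and branch edges out of LLVM IR text) might look like the following sketch; the regexes and the example IR snippet are assumptions for illustration, not the authors' pipeline:

```python
import re

def extract_cfg(ir_text):
    """Parse basic-block labels and branch targets from LLVM IR text
    into an edge list {block: [successors]}."""
    edges = {}
    current = None
    for line in ir_text.splitlines():
        stripped = line.strip()
        # A block label at line start, e.g. "entry:" or "7:".
        m = re.match(r'^([A-Za-z0-9_.]+):', stripped)
        if m:
            current = m.group(1)
            edges.setdefault(current, [])
            continue
        if current is None:
            continue
        # Branch targets: "br label %x" or "br i1 %c, label %a, label %b".
        for target in re.findall(r'label %([A-Za-z0-9_.]+)', stripped):
            edges[current].append(target)
    return edges

ir = """
entry:
  %cmp = icmp slt i32 %n, 10
  br i1 %cmp, label %loop, label %exit
loop:
  br label %exit
exit:
  ret i32 0
"""

print(extract_cfg(ir))
# → {'entry': ['loop', 'exit'], 'loop': ['exit'], 'exit': []}
```

A ground-truth CFG extracted this way can then be compared edge by edge against the DOT graph an LLM emits for the same IR.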
Summary: The authors explored the capabilities of large language models (LLMs) in understanding intermediate representations (IRs), primarily for applications such as code comprehension, optimization, and automated reasoning. Their findings indicate that while LLMs are proficient in understanding static IR features and basic control flow structures, they struggle with more complex representations, such as loop reasoning and execution simulation. Additionally, LLMs perform better at capturing semantic-level behavior rather than instruction-level details. Their findings are highly significant for the broader research community in the field of LLMs, particularly in the domain of software analysis and handling IR-related tasks. Claims And Evidence: The claims and research findings are highly interesting and novel, supported by an extensive experimental setup and validated with convincing results. The findings are realistic and impressive. Methods And Evaluation Criteria: Yes, the proposed methods, target study tasks/categories, and corresponding evaluation criteria are well-defined and clearly presented. The research is thorough and well-structured, providing an in-depth validation of the authors' claims and findings. The authors benchmarked the performance of selected but widely used LLMs, including GPT-3/4, Gemma 2, LLaMA 3.1, and Code Llama, on key IR-related tasks: (a) CFG construction, (b) decompilation ability, (c) code summarization, and (d) execution reasoning. Their findings effectively highlight the relative strengths and limitations of these LLMs for the aforementioned tasks. Theoretical Claims: Yes, I have already elaborated on this in the "Methods and Evaluation Criteria" section. However, I also considered the research limitations highlighted by the authors. They acknowledged that their study is constrained by a limited benchmark dataset, as the HumanEval-derived IRs do not fully reflect data diversity. 
Additionally, the impact on model performance due to the lack of exploration into advanced prompting techniques and the omission of fine-tuning strategies—such as IR-specific dataset augmentation and fine-tuning—was not accounted for. Experimental Designs Or Analyses: I have already mentioned and discussed this in the "Methods And Evaluation Criteria" and "Theoretical Claims" sections. Supplementary Material: I thoroughly reviewed the supplementary material and each section to gain a deeper understanding of the main research concept. This section is comprehensive and highly insightful, providing a clear understanding of the overall research flow. Relation To Broader Scientific Literature: Their findings are highly significant for the broader research community in the field of LLMs, particularly in the domain of software analysis and handling complex IR-related tasks. Essential References Not Discussed: NA Other Strengths And Weaknesses: Strength: 1. The manuscript is well-written, thoroughly analyzed, and presents a detailed experimental setup to highlight the shortcomings of LLMs in reasoning and understanding complex IRs. 2. It provides a comprehensive benchmarking of widely used LLMs, including GPT-3/4, Gemma 2, LLaMA 3.1, and Code Llama, on key IR-related tasks: (a) CFG construction, (b) decompilation ability, (c) code summarization, and (d) execution reasoning. Weakness: The study does not consider fine-tuning on a benchmark-specific dataset, which limits its ability to reflect true benchmarking metrics. This omission is a drawback, as it does not fully showcase the potential capabilities of LLMs, particularly in the domain of software analysis. Other Comments Or Suggestions: Several typos have been noticed. Questions For Authors: Please see the Weaknesses and other comments. Ethical Review Concerns: NA Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you very much for your detailed review and positive feedback on our work. We greatly appreciate your recognition of the novelty and significance of our study, as well as your thorough evaluation of our experimental setup. We would like to address your comments as follows: **W1:** The study does not consider fine-tuning on a benchmark-specific dataset, which limits its ability to reflect true benchmarking metrics. **AW1:** We appreciate the reviewer’s comment. Our study is primarily exploratory, positioned to assess the untuned performance of current LLMs (including zero/few-shot and chain-of-thought prompting) in understanding intermediate representations (IRs) without relying on benchmark-specific fine-tuning. This approach enables us to use tasks such as CFG reconstruction, IR decompilation, code summarization, and execution reasoning as benchmarks for evaluating LLMs' understanding of intermediate representations. Moreover, our findings reveal inherent limitations in how these models understand and process IRs. We acknowledge that fine-tuning LLMs on specialized datasets could boost their performance on IR-related downstream tasks. However, by focusing on their untuned performance, our work establishes a clear baseline that exposes key challenges. Our findings pinpoint areas where current LLMs fall short—such as control flow comprehension (Finding 1), granular semantic understanding (Findings 4–5), and loop handling—while also offering guidance for targeted fine-tuning strategies. We are committed to pursuing fine-tuning as a central focus of our future research, leveraging our insights to significantly enhance models' abilities to tackle more complex IR-related tasks. **Typos and Formatting Issues** We appreciate you highlighting some spelling and formatting errors. We have reviewed the manuscript carefully and will correct these issues in the final version. Again, thank you for your constructive comments.
Beyond One-Hot Labels: Semantic Mixing for Model Calibration
Accept (poster)
Summary: This paper proposes Calibration-aware Semantic Mixing (CSM), a model calibration approach using diffusion-based data augmentation, akin to a “semantic mixup”. Unlike traditional one-hot labeling, CSM generates mixed samples with soft labels derived from CLIP. The authors introduce a reannotation technique using CLIP features and investigate the influence of loss functions, showing that the L2 loss is well suited to enhancing calibration. Claims And Evidence: 1. CSM improves model calibration by introducing semantically meaningful augmentations, validated through ECE and AECE reductions. 2. L2 loss leads to balanced learning, demonstrated by both theoretical insights and empirical results. Methods And Evaluation Criteria: 1. The paper introduces a diffusion-based augmentation method using L2 loss, supported by theoretical evidence. 2. The method is evaluated with ECE and AECE on CIFAR-10, CIFAR-100, and Tiny-ImageNet, using ResNet-50/101, Wide-ResNet-26-10, and DenseNet-12. Theoretical Claims: The authors provide a theoretical justification for their reannotation strategy and choice of loss functions. Experimental Designs Or Analyses: 1. Extensive comparisons with other train-time calibration techniques. 2. AUROC (%) for robustness evaluation under distribution shifts. 3. Calibration performance with post-hoc calibration methods. 4. Reliability diagrams and ablation studies. Supplementary Material: The supplementary material provides proofs for the equations and propositions, plus other reproduction details. Relation To Broader Scientific Literature: This paper is highly related to the field of calibration, with connections to test-time augmentation and label smoothing. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Weaknesses: 1. Limited discussion on computational efficiency. 2. Hyperparameter sensitivity analysis is not well explored. 3. No Transformer model is included for comparison. Other Comments Or Suggestions: 1.
Provide comparisons of computational cost with existing calibration techniques. 2. Include failure cases where CSM does not improve calibration performance. Questions For Authors: 1. How does the choice of diffusion model affect the performance of CSM? What about using other generative models? Some generative models, such as GANs, also support interpolation. Would they help? 2. How does CSM compare to Mixup in terms of training efficiency and memory usage? Code Of Conduct: Affirmed. Overall Recommendation: 4
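For context on the Mixup baseline raised in Question 2: classic input-space Mixup (Zhang et al., 2018) interpolates both the images and their one-hot labels, which is the soft-label mechanism CSM replaces with semantic mixing. A minimal sketch with toy arrays (the `alpha` value is an assumed hyperparameter):

```python
import numpy as np

rng = np.random.default_rng(0)

def mixup(x1, y1, x2, y2, alpha=0.2):
    """Classic Mixup: convex combination of two inputs and their
    one-hot labels with lambda ~ Beta(alpha, alpha)."""
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

x1, y1 = np.ones((4, 4)), np.array([1.0, 0.0])   # class 0
x2, y2 = np.zeros((4, 4)), np.array([0.0, 1.0])  # class 1
x_mix, y_mix = mixup(x1, y1, x2, y2)
assert np.isclose(y_mix.sum(), 1.0)  # soft label stays a distribution
```

Because the soft label is a fixed function of the mixing coefficient, pixel-space Mixup needs no extra annotation step; CSM instead generates semantically mixed images and then reannotates their confidence, which is where the extra memory and training cost discussed in the rebuttal comes from.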
Rebuttal 1: Rebuttal: ## Response to Reviewer jJxJ Thanks for your helpful suggestions! Here’s our response: **Q4-1**: Training computational cost and memory usage compared to existing methods. **A4-1**: As also analyzed in **A2-3**, we compare computational efficiency in terms of training time in **A2-3 Table B**: We can conclude that the **number of augmented samples per batch** is the **major factor** in training time. Our CSM maintains reasonable efficiency considering its calibration effectiveness. When compared on the EQ-DATA setting in the main paper Table 2, CSM achieves competitive calibration results with an equalized training time. The same factor also determines memory usage: CSM uses roughly $\frac{3}{2}\times$ the memory of Mixup/RegMixup, and roughly the same memory as RankMixup (depending on the setup). Regarding resource consumption, we generate augmented samples and train our model on one A4000 device. It's worth noting that CSM **needs no re-generation** when switching models/objectives/re-annotation methods, making it more efficient for decoupled study of these modules. We will include the key information in the main paper. **Q4-2**: Hyperparameter sensitivity. **A4-2**: We have analyzed the hyperparameter $s$ and the number of augmented samples in Appendix C. We provide further analysis of **the number of augmented samples per training sample** (denoted as $N_{aug}$) here. **Table C:** | $N_{aug}$ | 1 | 2 | 3 | | - | - | - | - | | ECE: CIFAR-10 | 0.83 | 0.54 | 0.39 | | ECE: CIFAR-100 | 2.07 | 1.29 | 1.74 | It can be observed that increasing the number of augmentations per dataset sample generally improves the final calibration performance. This is because a larger $N_{aug}$ samples more proximal data for training, better filling the domain space and providing more accurate confidence estimation.
For **sensitivity analysis**, we have the following key observations to complement our experiments, considering the **existing results** in Appendix C: 1. Our proposed CSM is not sensitive to the total number of augmented samples; a relatively small quantity can still make CSM effective. 2. CSM is relatively sensitive to the scaling factor $s$, as it is related to the temperature. Nevertheless, within an appropriate range of $s$, the method performs consistently well. **Q4-3**: Evaluation on the transformer architecture. **A4-3**: Evaluations on the Swin-Transformer architecture verify our method's equal or stronger effectiveness. Please refer to **A1-3 Table A**. **Q4-4**: Failure cases of CSM. **A4-4**: While our method performs well on all the evaluated datasets, we admit there are some cases where CSM fails. For instance, although CSM surpasses the compared methods on ECE/AECE, some post-temperature results on the CIFAR-100 dataset are not competitive with the SOTA methods. Specifically, they exhibit balanced pre- and post-temperature results (searched $T = 1$) with remarkable pre-temperature ECE values but slightly larger calibration errors after temperature scaling. This phenomenon is also noted in the calibration literature [1], which found that a balance exists between the two results. Considering both results together, our CSM still achieves satisfactory model calibration. **Q4-5**: Will the choice of generative models affect the performance? **A4-5**: The choice of generative model does influence the final prediction results. Existing work [2] has already shown that adopting a better generative model can improve a classification model's robustness. We anticipate that a better generative backbone would achieve superior confidence calibration with our CSM. This aligns with our empirical experience that, using the ordinary Stable Diffusion architecture, we only achieve suboptimal results, as shown in the table: | Gen.
Model | ACC | ECE$\downarrow$ | AECE$\downarrow$ | | - | - | - | - | |SD [3]|76.87|1.93|1.78| |EDM [4] (in our CSM)|**78.84**|**1.29**|**1.63**| Therefore, we anticipate that a typical GAN with less parameters or lower fidelity compared to diffusion models would yield worse results. **References** [1] Wang, D. B., Feng, L., & Zhang, M. L. (2021). Rethinking calibration of deep neural networks: Do not be afraid of overconfidence. Advances in Neural Information Processing Systems, 34, 11809-11820. [2] Wang, Z., Pang, T., Du, C., Lin, M., Liu, W., & Yan, S. (2023, July). Better diffusion models further improve adversarial training. In International conference on machine learning (pp. 36246-36263). PMLR. [3] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., & Ommer, B. (2022). High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 10684-10695). [4] Karras, T., Aittala, M., Aila, T., & Laine, S. (2022). Elucidating the design space of diffusion-based generative models. Advances in neural information processing systems, 35, 26565-26577.
Summary: This paper introduces a novel framework, Calibration-aware Semantic Mixing (CSM), designed to improve model calibration. The key contribution lies in addressing the limitations of one-hot labeled datasets by proposing a data augmentation technique that leverages semantic mixing to generate diverse samples via diffusion models. The paper also introduces reannotation techniques to enhance confidence annotation accuracy and explores different loss functions to achieve confidence-balanced learning. Experimental results demonstrate that CSM surpasses existing calibration methods, delivering superior performance across multiple benchmarks and tasks. ## update after rebuttal My concerns have been addressed and would like to recommend accept. Claims And Evidence: The claims presented in the paper are well-supported by empirical evidence from the experiments. Methods And Evaluation Criteria: The proposed method is both novel and well-motivated, offering fresh insights into model calibration. The evaluation follows standard practice in this field, employing accuracy, Expected Calibration Error (ECE), and post-temperature scaling as key metrics. Theoretical Claims: One concern arises in Proposition 3.4, where the paper claims that L2 loss outperforms cross-entropy (CE) and focal loss. The reasoning behind this claim is unclear and requires further clarification. Experimental Designs Or Analyses: The overall experimental design and analysis are well-structured and reasonable. The authors compare the proposed calibration technique against several widely-used calibration algorithms on diverse models and datasets. Ablation studies effectively highlight the contributions of different components. However, I have two major concerns: - Calibrated Reannotation – The authors utilize CLIP’s visual encoder for reannotation, but they do not discuss or compare it with a simple baseline that directly adopts CLIP outputs as labels. 
Evaluating this baseline would help assess the added benefit of the proposed reannotation approach. - Calibration-aware Data Augmentation – The study proposes a semantic mixing strategy for generating calibrated samples using diffusion models. However, a crucial baseline is missing: directly using generated images from diffusion models without semantic mixing. Evaluating this approach would provide a clearer understanding of semantic mixing’s contribution. Supplementary Material: The supplementary material appropriately includes proofs of propositions, detailed descriptions of the experimental setup, and additional results. Relation To Broader Scientific Literature: Addressing model calibration from a data-driven perspective is an interesting and promising direction. The results across different models and datasets suggest strong potential for real-world applications. The proposed algorithm could contribute significantly to the field of trustworthy AI, enhancing model reliability and confidence estimation in diverse applications. Essential References Not Discussed: None. Other Strengths And Weaknesses: Overall, the paper is well-written and easy to follow. The proposed semantic mixing framework is conceptually sound and coherently presented. The evaluation is comprehensive, and the method demonstrates superior calibration performance compared to existing baselines. However, as noted earlier, some concerns remain regarding the theoretical claims (Proposition 3.4) and the experimental design (missing baselines for reannotation and augmentation methods). Other Comments Or Suggestions: None. Questions For Authors: - Could you discuss or evaluate simple baselines for calibrated reannotation, such as directly adopting CLIP outputs as labels? - Could you discuss or evaluate simple baselines for calibration-aware data augmentation, such as using generated images without semantic mixing? 
- Could you clarify Proposition 3.4, particularly regarding why L2 loss outperforms cross-entropy and focal loss? Code Of Conduct: Affirmed. Overall Recommendation: 3
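The ECE metric used as the key criterion throughout these reviews can be computed with the standard binned estimator; a minimal sketch follows (the 15-bin equal-width setup is a common convention, not necessarily the paper's exact setting):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=15):
    """Standard binned ECE: weighted average of |accuracy - confidence|
    over equal-width confidence bins."""
    confidences = np.asarray(confidences)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            acc = correct[mask].mean()    # empirical accuracy in bin
            conf = confidences[mask].mean()  # mean confidence in bin
            ece += mask.mean() * abs(acc - conf)
    return ece

# A toy batch that is perfectly calibrated at 0.8 confidence: ECE = 0.
conf = np.full(10, 0.8)
corr = np.array([1, 1, 1, 1, 1, 1, 1, 1, 0, 0])  # 80% accurate
print(expected_calibration_error(conf, corr))  # → 0.0
```

AECE (adaptive ECE) differs only in using equal-mass bins, i.e. bin edges chosen so each bin holds the same number of samples.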
Rebuttal 1: Rebuttal: ## Response to Reviewer WPRF Thank you for your encouraging feedback on the clarity, soundness, and comprehensive evaluation of our work. We truly appreciate your thoughtful suggestions for clarity and comprehensive validation. Here are our responses to the suggestions: **Q3-1**: Clarify Proposition 3.4 (L2 loss vs. CE/FL). **A3-1**: Thank you for commenting on the clarity issue. We need to clarify that there is a **typo** in Proposition 3.4 that may have hindered understanding. Proposition 3.4 should have been presented as - $\forall \delta \ge 0$, we have $\beta(p^{L2}_1, p^{L2}_2) = 0$, which is an equation rather than an inequality for the $\mathcal{L}_2$ loss's balance function, meaning that when two similar samples exceed the model's discriminability, the $\mathcal{L}_2$ loss tends to **balance the learned labels** of the harder and softer instances, rather than tending to fit one specific instance. Note that the proof of Proposition 3.4 we provided in Appendix A.3 does prove that $\beta(p^{L2}_1,p^{L2}_2) = 0$. As proved in Appendix A, easier samples with $\delta \ge \|q^{L2}_1, q^{L2}_2\|$ would generally have $\|p^{L2}_i, q^{L2}_i\| = 0, i=1,2$. Smaller $\delta$s indicate difficulty for the learned model in separating the outputs of the two samples, hence introducing a balancing problem. A theoretically non-zero balancing score $\beta$ means one of $\|p^{L2}_i, q^{L2}_i\|, i=1,2$, is minimized more completely, indicating the imbalanced nature of the objective. Among the three, only the L2 loss theoretically zeros the $\beta$ balancing score, indicating its ability to balance over- and under-confidence of difficult soft-labeled sample pairs, hence being a superior loss for calibration with our augmented samples. In practice, the soft-label distribution may disturb this balance, which we leave for future study. We will correct this typo in the revised main paper. **Q3-2**: Baseline for directly adopting CLIP labels. (Var.
1) **Q3-3**: Baseline for diffusion-based sample augmentation without semantic mixing. (Var. 2) **A3-2, 3-3**: We evaluate these two variants and compare them with our proposed CSM as follows: CIFAR-10: | Variant | ACC | ECE$\downarrow$ | AECE$\downarrow$ | | - | - | - | - | |Var. 1|*92.13*|2.68(0.92)|2.67(0.88)| |Var. 2|**96.12**|2.45(0.96)|2.44(1.13)| |CSM (Ours)|95.79|**0.54(0.54)**|**0.33(0.33)**| CIFAR-100: | Variant | ACC | ECE$\downarrow$ | AECE$\downarrow$ | | - | - | - | - | |Var. 1|*66.60*|52.78(1.36)|52.78(**1.11**)| |Var. 2|**79.24**|10.84(2.48)|10.84(2.41)| |CSM (Ours)|78.84|**1.29(1.29)**|**1.63**(1.63)| From these results, we make the following observations: 1. The vanilla CLIP annotation method yields the worst ACC and pre-temperature calibration errors, primarily due to the noisy information introduced by annotating over all classes. This degradation is significant for CIFAR-100, which has more classes, so the noise is more severe. 2. Directly adopting class-conditioned augmentations from the diffusion model can slightly raise prediction accuracy, as also evidenced by the literature on generative-model-augmented classification. However, as it does not contain soft-labeled samples, Var. 2 fails to improve model calibration. Therefore, we conclude that models are effectively calibrated only when adopting the proper data and re-annotation scheme.
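The balance property of the L2 loss discussed in A3-1 can be illustrated numerically under a simplifying assumption introduced here for illustration: if a model must emit one shared output for two near-indistinguishable samples, the L2-optimal output is the midpoint of their soft targets, fitting both equally rather than fitting one more completely (a zero balance score, in the rebuttal's terms). The target vectors below are toy values:

```python
import numpy as np

# Two soft-label targets for near-indistinguishable samples (toy values).
q1 = np.array([0.7, 0.2, 0.1])
q2 = np.array([0.4, 0.5, 0.1])

# If the model must emit one shared output p for both samples, the L2
# objective ||p - q1||^2 + ||p - q2||^2 is minimized at the midpoint ...
p_star = (q1 + q2) / 2

# ... which leaves identical residuals for the two targets.
r1, r2 = np.linalg.norm(p_star - q1), np.linalg.norm(p_star - q2)
assert np.isclose(r1, r2)

def l2_obj(p):
    return np.sum((p - q1) ** 2) + np.sum((p - q2) ** 2)

# Sanity check: the midpoint beats random nearby perturbations.
rng = np.random.default_rng(0)
assert all(l2_obj(p_star) <= l2_obj(p_star + 0.01 * rng.standard_normal(3))
           for _ in range(100))
```

This only illustrates the shared-output special case; the full proof in the paper's Appendix A covers the general setting with separate outputs.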
Summary: This paper presents Calibration-aware Semantic Mixing (CSM), a novel approach to improving model calibration by generating high-quality augmented data with soft labels. Unlike traditional augmentation methods that rely on one-hot labels, CSM leverages diffusion models to create semantically mixed images with confidence scores. The authors introduce a reannotation strategy based on CLIP features and explore different loss functions, demonstrating that L2 loss leads to better calibration. Experiments on CIFAR-10, CIFAR-100, and Tiny-ImageNet show that CSM surpasses existing calibration techniques. Claims And Evidence: CSM enhances calibration by generating realistic semantically mixed samples, as evidenced by Figure 1. Reannotating confidence scores improves performance, which is validated through ablation studies. L2 loss provides a better balance in learning, leading to improved calibration, supported by both theoretical analysis and empirical results. Methods And Evaluation Criteria: The paper evaluates model calibration using standard metrics, including Expected Calibration Error (ECE) and Adaptive ECE (AECE), across multiple datasets (CIFAR-10, CIFAR-100, Tiny-ImageNet). Additionally, Reliability diagrams and ablation studies are conducted for further comparison. Theoretical Claims: The authors provide theoretical justifications for their reannotation strategy and choice of loss function. While the analysis appears rigorous, additional details and proofs in the supplementary material could further strengthen their claims. Experimental Designs Or Analyses: The experiments primarily focus on ResNet-based models, and additional evaluations on other architectures (e.g., Transformer-based models) would be beneficial to confirm the generalizability of CSM. Computational overhead is not explicitly discussed—more details on efficiency and resource consumption would enhance the paper. 
Supplementary Material: The paper provides sufficient methodological details, but a more thorough review of supplementary material would be helpful to assess the depth of theoretical and experimental justifications. Relation To Broader Scientific Literature: The work builds on existing research in model calibration and data augmentation, presenting a novel approach by incorporating diffusion models for calibration-aware augmentations. However, the discussion could benefit from additional comparisons to post-hoc calibration methods. Essential References Not Discussed: The paper should consider discussing and comparing its approach with existing post-hoc calibration methods, particularly [1] Test Time Augmentation Meets Post-hoc Calibration, which is closely related to data augmentation for calibration. Other Strengths And Weaknesses: Strengths: Novel method integrating diffusion models for calibration. Strong empirical results demonstrating superior performance over existing techniques. Comprehensive evaluation using standard calibration metrics. Weaknesses: Limited hyperparameter analysis—the sensitivity of CSM to different configurations is not well explored. Unclear computational cost—the efficiency trade-offs of using diffusion models for augmentation should be discussed. Other Comments Or Suggestions: N/A Questions For Authors: Since semantic mix augmentation effectively fills sparse regions in the data space and improves local data proximity, could this approach be integrated with [1] to enhance not only calibration but also model robustness? Exploring this synergy could yield further improvements in generalization and uncertainty estimation. Reference: [1] Proximity-Informed Calibration for Deep Neural Networks Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: ## Response to Reviewer xs6o Thank you for your positive and insightful feedback! Here are our responses: **Q2-1**: Additional details and proofs in the supplementary material. **A2-1**: Thank you for the kind comments on the theoretical soundness. The claims made in our paper (including the derived results in Eq.(6)-(10) and Prop. 3.2-3.4) are sufficiently proved in Appendix A. In detail: - Eq.(6)-(7) correspond to Eq.(14)-(15) with - Assumption 1: Classification optimal classifier $\operatorname{E}(\cdot)$ ensures the likelihood ratio of different classes; - Assumption 2: Regarding the feature as the affine set elements of class prototypes with an orthogonal deviation. - Eq.(8)-(9) are proved by Eq.(15)-(19). - The result of Eq.(10) follows from Eq.(9) with the class factor invariance assumption. - Proposition 3.4 is first proved through Eq.(22)-(26) with the assumptions given in Definition 3.1. - To prove Propositions 3.2 and 3.3, we first give a general analysis of the problem in Line 712-791 (or Eq.(27)-(37)), then prove them by Lemma A.1 and Lemma A.2, respectively. Note that the proof for CE is unconditional, while FL is proved under the assumption that $\gamma_{FL}=1.0$, and we empirically find FL more imbalanced with larger $\gamma_{FL}$. These detailed descriptions illustrate the overall framework of the theoretical analysis. We will include connective details and key deductions in the main paper. **Q2-2**: Additional evaluations on other architectures. **A2-2**: Our method performs equally or more effectively than others with the Swin-Transformer architecture. Please refer to **A1-3 Table A**. **Q2-3**: More details on efficiency and resource consumption. **A2-3**: Thank you for your suggestion.
For efficiency analysis, we compare explicitly in terms of the training time as follows: **Table B:** |Methods|CE|MbLS|Mixup|RegMixup|RankMixup|Ours|Ours(EQ-DATA)| |-|-|-|-|-|-|-|-| |Training Time|2.63h|2.65h|3.48h|3.50h|4.30h|4.28h|2.64h| One can see the **number of augmented samples per batch** is the **major factor** for training time. CSM outperforms others in ECE/AECE while maintaining reasonable speed. Even with equalized training samples (EQ-DATA, Table 2), it achieves competitive calibration. CSM runs on a single A4000. Augmented samples need no re-generation across model/loss/annotation changes, enabling efficient modular study. Key details will be added to the main paper. **Q2-4**: Comparison with [1], a test-time augmentation (TTA) post-hoc calibration method. **A2-4**: We compare with [1] by evaluating CSM + [1] as follows: |Metrics|ECE|AECE| |-|-|-| |Ours|**1.29**|1.63| |Ours+[1]|1.39|**1.53**| Our integrated method balances ECE and AECE, achieving an optimized AECE of **1.53** on CIFAR-100. Compared to [1] using test-time sample-wise scaling, CSM employs training-time augmentation with inter-sample augmentations to expand the proximity space, enhancing calibration robustness. We will cite [1] and provide full comparisons in the main paper. **Q2-5**: Computational overhead of diffusion-based augmentation. **A2-5**: Our analysis in Appendix C shows that CSM requires **few augmented samples** to launch effectively, ensuring low computational costs. Generating augmented sets takes **~4 hours** (CIFAR-10/100) or **~9 hours** (Tiny-ImageNet) on an RTX4090 GPU, comparable to typical training times. Crucially, CSM **eliminates re-generation** when model architectures/parameters change, further enhancing efficiency through its decoupled design. This validates CSM's computational efficiency. **Q2-6**: Hyperparameter sensitivity analysis. **A2-6**: We have analyzed parameter $s$ and No. of augmented samples in Appendix C. 
We analyze $N_{aug}$ (refer to **A4-2 Table C**) and check sensitivity in **A4-2**. Larger $N_{aug}$ generally enhances our method, and within appropriate ranges of the other parameters, it yields stable results. **Q2-7**: Possibility of integrating our CSM with [2] to enhance calibration and robustness. **A2-7**: Thank you for this insightful question. We conduct post-hoc experiments integrating our method's outputs with [2], obtaining the following result: |Errors$\downarrow$|ECE|MCE|AECE|PIECE| |-|-|-|-|-| |Ours|**1.29**|**0.21**|**1.62**|3.16| |Ours+[2]|1.89|0.73|1.82|**3.11**| Due to limited time, we simply integrate CSM with [2] without further adjustments. Although a simple combination of the two does not yield superior ECE/AECE results, we find that the proximity-informed metric PIECE improves, which validates the proximity-related robustness gains from the integration. We will cite [2] in our analysis. **References** [1] Hekler, A., Brinker, T. J., & Buettner, F. (2023, June). Test time augmentation meets post-hoc calibration: uncertainty quantification under real-world conditions. AAAI. [2] Xiong, M., Deng, A., Koh, P. W. W., Wu, J., Li, S., Xu, J., & Hooi, B. (2023). Proximity-informed calibration for deep neural networks. NeurIPS.
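The "post-temperature" results discussed in these exchanges refer to post-hoc temperature scaling (Guo et al., 2017), which rescales logits by a scalar $T$ fitted to minimize validation NLL. A minimal grid-search sketch on synthetic data follows; all numeric values here are assumptions for illustration:

```python
import numpy as np

def softmax(z, T=1.0):
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 91)):
    """Post-hoc temperature scaling: pick the T minimizing validation
    NLL of softmax(logits / T). Grid search instead of LBFGS for brevity."""
    def nll(T):
        p = softmax(logits, T)
        return -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()
    return min(grid, key=nll)

# Synthetic "overconfident" model: its logits are the true ones scaled
# by 3, so the NLL-optimal correction is a temperature near 3.
rng = np.random.default_rng(0)
true_logits = rng.normal(size=(2000, 10))
labels = np.array([rng.choice(10, p=p) for p in softmax(true_logits)])
T_hat = fit_temperature(3.0 * true_logits, labels)
assert 2.0 < T_hat < 4.0
```

A searched $T = 1$, as reported for CSM on some settings, means the model is already calibrated before any post-hoc correction.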
Summary: Model calibration typically assumes full certainty in datasets with one-hot labels, limiting accurate uncertainty estimation. To address this, the paper introduces Calibration-aware Semantic Mixing (CSM), a data augmentation framework that synthetically generates diverse training samples annotated with explicit confidence scores using diffusion models. Additionally, the authors propose a calibrated reannotation method and explore suitable loss functions for this new data paradigm. Experimental results show CSM significantly improves model calibration over existing state-of-the-art methods. ## update after rebuttal Thank you for the author rebuttal. The major concerns regarding clarifications and additional experiments have been addressed. I will maintain my current rating. Claims And Evidence: - The motivation and necessity of semantic mixing from the perspective of network calibration are well articulated. Additionally, the drawbacks of existing data-driven methods (mixup-based approaches) are clearly defined. Methods And Evaluation Criteria: - Leveraging conditional diffusion models, specifically via a pre-trained diffusion network, to generate semantically mixed images is technically novel within the context of network calibration. - Further innovation is demonstrated through the identification and resolution of limitations associated with generated labels by introducing a calibration-oriented reannotation process. Theoretical Claims: Further clarification and verification are needed regarding the balanced loss section. Specifically, more clarification on why the proposed L2 loss functions as a balanced loss would be helpful. Experimental Designs Or Analyses: - The experimental results across various networks and datasets demonstrate superior performance compared to existing state-of-the-art methods. 
- Although generalization capability is emphasized, experimental validation on larger datasets such as ImageNet and on different network architectures such as Transformers appears insufficient. Additionally, comparisons with recent state-of-the-art methods like CALS (CVPR 23) and ACLS (ICCV 23) are missing.
- The experiment described in Table 2, which compares training times under identical conditions, is commendable, considering the potential increase in training duration due to the diffusion network. The necessity and effectiveness of reannotation are well-demonstrated in Figure 3.
- In the ablation study (lines 382–384), please confirm whether the explanations regarding CE and FL are reversed, particularly concerning temperature.
- It would also be beneficial to provide a comparison illustrating the degree of confidence-balancing achieved by using CE, FL, and L2 losses.

Supplementary Material: I have thoroughly reviewed the code, including both the sample mixing and reannotation modules.

Relation To Broader Scientific Literature: This approach effectively improves not only the network’s calibration capability but also its interpretability and accuracy.

Essential References Not Discussed: Recent state-of-the-art methods, such as ACLS: Adaptive and Conditional Label Smoothing for Network Calibration (ICCV 2023) and Class Adaptive Network Calibration (CVPR 2023), have not been included in the reference list. These works offer significant contributions to network calibration and should be considered for inclusion to provide a more comprehensive and up-to-date overview of current methodologies.

Other Strengths And Weaknesses: N/A

Other Comments Or Suggestions: N/A

Questions For Authors: N/A

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: ## Response to Reviewer TsGd

Thank you for your kind suggestions on clarity and experimental thoroughness. Below are our responses:

**Q1-1**: Clarification on the reason that the proposed L2 loss is a balanced loss.

**A1-1**: Thank you for raising this concern. We need to clarify that there exists a **typo in Proposition 3.4** which makes the conclusion confusing. Proposition 3.4 should have been presented as

- $\forall \delta \ge 0$, we have $\beta(p^{L2}_1,p^{L2}_2)=0$,

which is an equation rather than an inequality for the $\mathcal{L}_2$ loss's balance function, meaning that when two similar samples exceed the model's discriminability, the $\mathcal{L}_2$ loss tends to balance the learned labels of the harder and softer instances, instead of tending to fit a specific one of them. Note that the proof of Proposition 3.4 provided in Appendix A.3 indeed establishes that $\beta(p^{L2}_1,p^{L2}_2) = 0$. With such theoretical justification, we also present empirical evidence in **A1-5** regarding the confidence balance score. Also refer to **A3-1**. We will correct this typo in the revised main paper.

**Q1-2**: Missing comparisons with ACLS [1] and CALS [2].

**A1-2**: We compare our results with theirs in the following tables. The results demonstrate the competitive or superior performance of our method compared to the state of the art. We will cite these compared methods and include these results in the final paper.
|ResNet-50|CIFAR-10|||\||Tiny-ImageNet||| |-|-|-|-|-|-|-|-| |Metrics|ACC|ECE$\downarrow$|AECE$\downarrow$|\||ACC|ECE$\downarrow$|AECE$\downarrow$| |ACLS|95.40|1.12|2.87|\||64.84|**1.05**|**1.03**| |Ours|**95.79**|**0.54**|**0.33**|\||**66.99**|1.29|1.19| |ResNet-50|Tiny-ImageNet|||\||ImageNet||| |-|-|-|-|-|-|-|-| |Metrics|ACC|ECE$\downarrow$|AECE$\downarrow$|\||ACC|ECE$\downarrow$|AECE$\downarrow$| |CALS|65.03|1.54|1.38|\||76.44|1.46|**1.32**| |Ours|**66.99**|**1.29**|**1.19**|\||**79.87**|**1.32**|1.35| **Q1-3**: Insufficient validation on ImageNet and Transformers. **A1-3**: We compare our result with representative methods on ImageNet with the ResNet-50 and Swin-Transformer architectures. Our method performs equally or more effectively compared to these methods, especially to the mixup-based methods. We will include these results in the final paper. |ResNet50|ImageNet||| |-|-|-|-| |Metrics|ACC|ECE$\downarrow$|AECE$\downarrow$| |CE|73.96|9.10|9.24| |Mixup|75.84|7.07|7.09| |CRL|73.83|8.47|8.47| |MbLS|75.39|4.07|4.14| |RegMixup|75.64|5.34|5.42| |RankMixup|74.86|3.93|3.92| |CALS|76.44|1.46|**1.32**| |Ours|**79.87**|**1.32**|1.35| **Table A:** |SwinTransformerV2|ImageNet||| |-|-|-|-| |Metrics|ACC|ECE$\downarrow$|AECE$\downarrow$| |CE|75.60|9.95|9.94| |LS|75.42|7.32|7.33| |FL|75.60|3.19|3.18| |FLSD|74.70|2.44|2.37| |MbLS|77.18|1.95|1.73| |CALS|77.10|1.61|**1.69**| |Ours|**81.08**|**1.49**|1.86| **Q1-4**: Potential reversal of the following CE/FL explanations in ablation (lines 382–384). > "In contrast, CE and FL losses often require temperature adjustments, with CE favoring sharper labels and FL for softer ones, aligning with our theoretical expectations from Section 3.3." 
**A1-4**: The analysis corresponds to the searched temperature values of Mixup and CSM in Table 4, where CE sometimes yields a searched temperature larger than 1.0 when used with Mixup (Mixup (**CE**): **T = 1.3**), while FL yields a searched temperature of **T = 0.9 < 1** when used with our CSM (CSM (**FL**)). These two specific results highlight the nature of the CE and FL losses. As studied by existing works, a higher post-hoc temperature can indicate over-confidence of the pre-temperature model, while a lower one suggests under-confidence. Therefore, with soft labels during training, these phenomena indicate that the adopted losses are biased toward fitting different kinds of samples, *i.e.*, harder labels (*e.g.*, close to one-hot) vs. softer labels (*e.g.*, mixup labels with $\lambda=0.5$).

**Q1-5**: Confidence-balancing comparison across CE, FL, and L2.

**A1-5**: We explicitly compute the average balance scores to illustrate the confidence balancing results here:

|Loss Objectives|CE|Our Loss|FL|
|-|-|-|-|
|CIFAR-100|-0.1438|-0.1330|-0.0393|
|Relative Value|**-0.0108**|**0.0000**|**+0.0937**|

Our loss shows a clear confidence balance between CE and FL, confirming its effectiveness, though empirical values are typically negative due to the rarity of indistinguishable pairs and the easier learning of high-confidence samples in practical experiments.

**References**

[1] Park, H., Noh, J., Oh, Y., Baek, D., & Ham, B. (2023). ACLS: Adaptive and conditional label smoothing for network calibration. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 3936-3945).

[2] Liu, B., Rony, J., Galdran, A., Dolz, J., & Ben Ayed, I. (2023). Class adaptive network calibration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 16070-16079).
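For context on the searched temperatures in A1-4, post-hoc temperature scaling rescales logits by $1/T$ before the softmax and selects the $T$ that minimizes negative log-likelihood on a held-out set; a minimal grid-search sketch (the grid and function names are illustrative assumptions, not the paper's code):

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fit_temperature(logits, labels, grid=np.linspace(0.5, 3.0, 26)):
    """Return the T in `grid` minimizing held-out NLL; T > 1 softens, T < 1 sharpens."""
    def nll(T):
        p = softmax(logits, T)[np.arange(len(labels)), labels]
        return -np.log(p + 1e-12).mean()
    return min(grid, key=nll)
```

An overconfident model (predictions sharper than its accuracy warrants) is assigned $T > 1$, matching the interpretation of the searched temperatures discussed above.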
Reliable Image Quality Evaluation and Mitigation of Quality Bias in Generative Models
Reject
Summary: This paper introduces the Difference in Quality Assessment (DQA) score, which is designed to evaluate the reliability of evaluation metrics such as the Fréchet Inception Distance (FID). Additionally, the DQA framework aids in identifying more reliable image encoders, thereby enhancing the robustness of evaluation metrics. Furthermore, the proposed DQA-Guidance method not only improves the quality of pretrained diffusion models but also advances fairness. ## update after rebuttal Overall, the paper is well-written. The method is novel and the results are good. However, I don’t think it quite reaches the level for an "Accept" (score 4). I keep my original score. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: No, I have not checked the theoretical proofs. Experimental Designs Or Analyses: Yes, I have carefully reviewed the experimental section and all the figures and tables in the main body of the paper. I believe the authors’ experimental design is sound and addresses my concerns step by step. However, I have one question: Can the DQA framework be used to evaluate the performance of currently popular text-to-image diffusion models, such as Flux? Supplementary Material: No. Relation To Broader Scientific Literature: The main contributions of this paper can be broadly summarized as the introduction of the Difference in Quality Assessment (DQA) score to evaluate the reliability of existing evaluation metrics, followed by the application of the DQA score to enhance diffusion models. To my knowledge, other related works tend to focus more on constructing comprehensive benchmarks for evaluating generative models. In this regard, the approach presented in this paper appears to be novel. 
Essential References Not Discussed: Regarding energy guidance, the paper 'Egsde: Unpaired Image-to-Image Translation via Energy-Guided Stochastic Differential Equations' should be cited, as it introduces the use of energy guidance to improve generative models.

Other Strengths And Weaknesses:

Strengths:
- The FID (Fréchet Inception Distance) is a widely used metric for evaluating generative models. However, this paper highlights some unreliable phenomena associated with FID during the evaluation process, particularly its mismatch with generation quality and fairness. Notably, better FID scores may sometimes correspond to worse generation quality.
- The introduction of the DQA (Difference in Quality Assessment) method is a significant contribution, as it helps identify more reliable image encoders. This is crucial for the appropriate selection of evaluation metrics.
- The proposal of DQA-Guidance is another key strength, as it enhances generative models by improving generation quality while maintaining fairness.

Weaknesses: see the Questions section.

Other Comments Or Suggestions: No.

Questions For Authors:
1. In the DQA-Guidance section, regarding Equation (3), could you provide further clarification on how Group A and Group B are determined?
2. Can the DQA framework be used to evaluate the performance of currently popular text-to-image diffusion models, such as Flux?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: ## Additional Reference Thanks for suggesting a missing reference. Although our paper already includes references related to energy-based guidance in text-to-image models—such as Composing Diffusion Models [1], Self-Guidance [2], and Universal Guidance [3]—the suggested reference [4] is indeed a valuable cornerstone in the energy-based guidance literature. We will include this reference in the revised version of our paper. [1] Liu, N., Li, S., Du, Y., Torralba, A., & Tenenbaum, J. B. (2022, October). Compositional visual generation with composable diffusion models. In European Conference on Computer Vision (pp. 423-439). Cham: Springer Nature Switzerland. [2] Epstein, D., Jabri, A., Poole, B., Efros, A., & Holynski, A. (2023). Diffusion self-guidance for controllable image generation. Advances in Neural Information Processing Systems, 36, 16222-16239. [3] Bansal, A., Chu, H. M., Schwarzschild, A., Sengupta, S., Goldblum, M., Geiping, J., & Goldstein, T. (2023). Universal guidance for diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 843-852). [4] Zhao, M., Bao, F., Li, C., & Zhu, J. (2022). Egsde: Unpaired image-to-image translation via energy-guided stochastic differential equations. Advances in Neural Information Processing Systems, 35, 3609-3623. ## Clarification on Equation 3 Thank you for pointing out the lack of explicit notation in Equation 3. While Groups A and B are introduced as two demographic groups in Section 3.1, the page distance between that section and Equation 3 may cause confusion. To improve clarity, we agree that the definitions of Groups A and B should be stated explicitly near Equation 3. Specifically, Group A and Group B refer to demographic groups such as *male* and *female*. 
The terms $z_t^A$ and $z_t^B$ represent latent variables derived from the input prompt: *“a photo of a {GENDER} who works as a {PROFESSION}.”* Moreover, Groups A and B can also represent other demographic attributes, such as different races, as demonstrated in our rebuttal to Reviewer h2hy. We expect that this formulation can be extended further to accommodate any desired quality bias to mitigate, depending on the fairness objective. ## Extension to Other Generative Models Yes, the DQA framework can be applied to any generative model such as Flux. Since DQA serves as a reliability measure for evaluation metrics such as FID, it does not rely on the performance of the generative model itself—as long as the model produces images and their quality is evaluated using FID. Moreover, the DQA-Guidance approach is also model-agnostic and can be applied to any diffusion-based generative models. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I maintain my original score.
Summary: This paper proposes a Difference in Quality Assessment (DQA) measure that quantifies the reliability of existing quality evaluation metrics for generative models. The authors present a problem in generation model evaluation, i.e., the demographic bias. They find that conventional quality assessment measures are biased across groups, and the reasons can lie in the inappropriate reference selection or the inherent bias in the FID image encoder. They further apply the proposed DQA for guiding diffusion models to reduce cross-group quality discrepancies. Claims And Evidence: The claims in this paper are well-supported by literature and analyses, making them reasonable. Methods And Evaluation Criteria: The proposed DQA and DQA-guidance are applicable to the research questions. Theoretical Claims: NA Experimental Designs Or Analyses: In section 5.3, the authors provide the performance evaluation of DQA-guidance on two sub-categories, i.e., male and female. Can you further provide more extensive results on other demographic groups? Supplementary Material: NA Relation To Broader Scientific Literature: NA Essential References Not Discussed: NA Other Strengths And Weaknesses: This paper provides comprehensive experimental results to support their findings and solutions. The article is well-written and easy to follow. A very good paper. Other Comments Or Suggestions: NA Questions For Authors: See above. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: ## Extension of Demographic Groups Thank you for raising this point. Quality bias is not limited to gender; it also extends to other demographic attributes such as race. In our study, we consider four racial groups: Asian, Black, Caucasian, and Indian. We explore two possible directions for extending DQA-Guidance to handle multi-racial bias: 1. **Pairwise Group Comparison** In this approach, we select two racial groups (e.g., Asian vs. Black) as Group A and Group B and apply DQA-Guidance in the same manner as the gender-bias case. This enables a detailed, pairwise analysis of racial bias and its mitigation. 2. **All-at-Once Comparison** Alternatively, we can modify the DQA-Guidance formulation to consider all racial groups simultaneously by replacing the DQA term in Equation 4 with the average pairwise DQA across all race pairs, as defined below: \\[ \tilde{\epsilon}_{\theta} (z_t) = \epsilon\_{\theta} (z_t) + \sigma\_t \nabla\_{z_t} \left(\lambda\_1 \, \text{AvgDQA} + \lambda\_2 \, D\big(f^*(\mathcal{I}\_{\text{gen}}), f^*(\mathcal{I}\_{\text{ref}})\big)\right) \\] where \\[ \text{AvgDQA}(\mathcal{G}) = \frac{1}{\binom{n}{2}} \sum\_{1 \leq i < j \leq n} \text{DQA}(G\_i, G\_j; f^*) \\] This formulation generalizes fairness evaluation across all $n$ racial groups by averaging over all possible group pairs. For ease of implementation and to enable a more detailed analysis of individual racial quality disparities, we adopt the first approach in our additional experiments. |Stable Diffusion| Avg. MMD | Avg. 
MMD Disparity | Max MMD Disparity | Worst Case | |----------------|----------|----------------|----------------|------------------------------------| | **Baseline** | 118.36 | 14.49 | 38.14 | Caucasian vs Indian, Nurse | | **DQA-Guidance** | 96.68 | 10.16 | 25.38 | Caucasian vs Indian, Nurse | In our experiments, DQA-Guidance improves both the overall image quality and reduces quality disparity across racial groups, consistent with the results reported for the gender case.
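The all-at-once variant above is just an average of a pairwise score over all unordered group pairs; a minimal sketch (the pairwise `dqa` callable is supplied by the caller and should follow Equation 1; names are illustrative):

```python
from itertools import combinations

def avg_dqa(groups, dqa):
    """Average pairwise DQA over all unordered pairs of n groups, i.e. C(n, 2) terms."""
    pairs = list(combinations(groups, 2))
    return sum(dqa(a, b) for a, b in pairs) / len(pairs)
```

With $n = 4$ racial groups this averages 6 pairwise scores, matching the $\binom{n}{2}$ normalization in the AvgDQA definition.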
Summary: The paper aims to address the issue of quality disparities in image generation models, proposing the DQA score as a method for assessing the reliability of evaluation metrics, and introducing DQA-Guidance to mitigate quality bias in diffusion models. The core contributions are the DQA metric and its application to identify reliable image encoders and guide the diffusion sampling process. ## update after rebuttal ## Thank you for the author's response. The limited scope of the comparison experiments remains my major concern. Therefore, I will maintain my original score. Claims And Evidence: - The idea of fairness in generative models and the biases of evaluation metrics have been discussed in previous works. The paper does not make a big step beyond the existing literature. Methods And Evaluation Criteria: - The method for creating controlled datasets with varying degrees of image quality (Section 4.2 and Appendix C) is not sufficiently detailed. The specific hyperparameter adjustments and their impact on perceived image quality need to be better explained and justified. It's unclear if these adjustments consistently produce the intended quality gradations across different demographic groups. Theoretical Claims: - The DQA score, as defined in Equation (1), lacks strong theoretical justification. The normalization by the denominator D(f(Igen), f(Iref)) is not adequately explained. It's unclear why this specific normalization is appropriate or how it ensures a reliable measure of bias. The paper needs to provide a more rigorous mathematical justification for the DQA formulation. Experimental Designs Or Analyses: - The experimental results for DQA-Guidance (Section 5.3 and Figure 6) are not compelling. The plots show only marginal improvements in image quality and quality disparity. The qualitative results in Figure 7 are subjective and do not provide strong evidence of the effectiveness of DQA-Guidance. 
The paper needs to provide a more comprehensive and objective evaluation of DQA-Guidance, including quantitative metrics beyond MMD. - The paper does not adequately compare DQA and DQA-Guidance to existing fairness-aware evaluation metrics and mitigation techniques. It's unclear whether the proposed approach offers any significant advantages over existing methods. Supplementary Material: - The paper claims DQA is validated through a classification task (Section 4.5 and Appendix A), but the results in Table 1 are not convincing. The improvements in fairness metrics with the "Fair Subset" are marginal and could be due to random variation. The algorithm for selecting fair and unfair subsets (Algorithm 1) is complex and lacks clear motivation. The use of influence functions seems like an unnecessary complication, and the sampling process is not well-explained. Relation To Broader Scientific Literature: The DQA score attempts to formalize this by quantifying the discrepancy in quality assessments across demographic groups. However, the novelty lies in the specific formulation of DQA, not in the general concept of biased metrics. The paper needs to clearly articulate how DQA differs from and improves upon existing methods for detecting biases in evaluation, even if those methods are not directly applied to image quality. Is it more sensitive? More robust? Easier to interpret? Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: + The paper identifies a significant and underexplored problem of quality bias in generative models, which can have important practical implications. + The idea of using a controlled dataset to assess the reliability of evaluation metrics is promising, as it can uncover biases in the encoders. Weaknesses - The paper should include a more comprehensive set of evaluation metrics, including both quantitative and qualitative measures. 
- The paper should compare the performance of DQA-Guidance to existing fairness-aware methods. Other Comments Or Suggestions: - In several places, the wording is awkward or unclear. For example, the sentence "DQA serves not only as a reliability indicator for the evaluation metric but can also act as an energy function in generative models to regularize equal image quality across demographic groups" could be rephrased for clarity. Questions For Authors: See above Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: ## Novelty of the Paper To the best of our knowledge, this paper is the first to address fairness issues in the evaluation metrics used for generated images. We distinguish two types of bias: - **(a) Bias in the evaluation metric** - **(b) Bias in quality in the generated image** Although (b) has been studied, (a) has not received adequate attention. In this work, we identify bias in a widely used evaluation metric, FID, and propose **DQA** as a reliability measure to detect and quantify such bias. By isolating evaluation bias through DQA by choosing a reliable image encoder, we can more accurately reveal the quality bias in generative models, (b). Furthermore, we introduce **DQA-Guidance**, a mitigation strategy that guides generative models toward reducing quality bias in the generated images. Taken together, our contributions are the first to highlight and address the two-fold fairness challenge in generative modeling. ## Regarding Comparison Method As ours is the first work to identify bias in evaluation metrics and to propose a framework for mitigating quality bias in generative models, we were unable to include direct comparisons with existing methods. ## Adjustment in the Controlled Dataset Please see the rebuttal for Reviewer 4STN. ## Theoretical Analysis for Equation 1 The denominator captures the total generation shift, i.e., how far the generated distribution is from the reference distribution across all groups. It serves two purposes. 1. DQA measures how large the inter-group disparity is relative to the overall deviation. If both group-specific shifts are small, then even a small difference between them may be meaningful. Conversely, if the model globally generates low-quality outputs, a larger group disparity might be expected and less concerning. 2. Different generative models, encoders, and data domains may exhibit widely varying absolute distances. 
Without normalization, it is hard to determine whether a group-level bias in the numerator is negligible or severe. Therefore, the denominator anchors the numerator to this global scale.

## Improvement via DQA-Guidance

While Figures 6 and 7 illustrate the improvements in both performance and fairness of image quality, we add a table below for better clarity. This table explicitly demonstrates that an appropriate choice of $\lambda_1$ and $\lambda_2$ leads to substantial improvements in both fairness and overall image quality.

||Avg. MMD|Avg. MMD Diff.|Max MMD Diff.|
|-|-|-|-|
|Baseline|109.93|12.57|17.77|
|Ours ($\lambda_1=20,\lambda_2=100$)|103.89|6.21|6.94|
|Ours ($\lambda_1=20,\lambda_2=1000$)|85.72|10.16|11.87|

While MMD with DINO serves as our primary quantitative evaluation metric for image quality, here we also report Fréchet Distance (FD) for generated images w/ and w/o DQA-Guidance.

||Avg. FD|Avg. FD Diff.|Max FD Diff.|
|-|-|-|-|
|Baseline|29.09|1.26|1.77|
|Ours ($\lambda_1=20,\lambda_2=100$)|28.53|0.09|0.12|
|Ours ($\lambda_1=20,\lambda_2=1000$)|26.27|0.29|0.44|

Regarding the qualitative results in Figure 7, while some aspects of visual quality are subjective, the improvements are evident. For example, DQA-Guidance helps remove visual artifacts such as extra limbs (e.g., three hands), enhances image characteristics like color (e.g., from grayscale to natural color), and improves coherence between the prompt and the image (e.g., ensuring the nurse is male when the prompt specifies a male).

## Regarding Algorithm 1

While Algorithm 1 may initially appear complex, its underlying logic is straightforward. It identifies subsets of generated images that most strongly increase or decrease the group-level discrepancy in perceived image quality, as measured by DQA. These subsets support a downstream diagnostic evaluation. Specifically, we use the fair/unfair subsets in classification as a data augmentation to assess whether the quality discrepancy is practically meaningful.
We find that models trained on the unfair subset exhibit larger fairness gaps in downstream classification, while those trained on the fair subset show more equitable performance. This demonstrates that the discrepancy captured by DQA is predictive of real fairness issues, thereby validating its utility as a diagnostic signal for metric reliability.

## Results in Table 1

To show the validity of our experimental results in Table 1, we report the confidence intervals of the results.

||Overall AUC|AVG($\Delta$AUC)|Max($\Delta$AUC)|
|-|-|-|-|
|Fair Subset|53.91$\pm$0.26|6.18$\pm$0.36|15.65$\pm$2.25|
|Unfair Subset|54.32$\pm$0.24|6.83$\pm$0.39|17.19$\pm$2.76|

## Regarding the Vague Content

Thank you for pointing this out. The sentence in Section 5 was intended as an introduction to the extension of DQA as a guidance term for diffusion models. We agree that the current phrasing may be vague and potentially confusing. We will revise this section to more clearly explain how DQA can be extended and applied in the context of guidance for diffusion models.

---

Rebuttal Comment 1.1: Comment: Thank you for your response. Though the proposed image quality evaluation is new, the bias in the quality of the generated image has been widely explored. Why does the proposed method not compare with the existing methods [1,2,3,4] on publicly available datasets rather than the self-constructed dataset? Thus, more experimental results are needed to support the superiority of the proposed DQA-Guidance. Here is a partial list of the references:

[1] Finetuning Text-to-Image Diffusion Models for Fairness
[2] INFELM: In-depth Fairness Evaluation of Large Text-To-Image Models
[3] Instructing Text-to-Image Generation Models on Fairness
[4] Unlocking Intrinsic Fairness in Stable Diffusion

---

Reply to Comment 1.1.1: Comment: Thank you for raising this important point.
However, the references cited by the reviewer—[1], [2], [3], and [4]—address a different type of bias than the one investigated in our paper. Specifically, these works focus on **distributional bias** in generative models, where the concern lies in the demographic distribution of generated images given a neutral prompt (e.g., aiming for a 50/50 balance between "male nurse" and "female nurse" for the prompt "a photo of a nurse"). In these studies, the evaluation metrics typically take the form of demographic parity, often expressed as "Ratio of Major Attribute" or "Rate of Female-Appearing" images. In contrast, our study investigates **quality bias**—disparities in the image quality of outputs across demographic groups when the group is explicitly specified in the prompt (e.g., higher-quality outputs for "a photo of a female nurse" than for "a photo of a male nurse"). We demonstrate that such bias not only exists in the generated outputs, but also that the commonly used evaluation metric, FID, fails to reliably detect these disparities. To address this dual issue, we propose DQA, which evaluates the reliability of image quality assessments using a self-constructed dataset that enables precise control over image quality for measuring reliability. We also introduce DQA-Guidance, which promotes fairness in image quality. To the best of our knowledge, prior work has not addressed fairness from the perspective of image quality, nor has it examined the reliability of image quality evaluation. For this reason, no suitable baseline methods were available for comparison, as briefly mentioned in our rebuttal.
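For completeness, the MMD values used as the primary quality metric in this thread follow the standard kernel MMD between feature sets; a minimal unbiased-estimator sketch (the RBF kernel and `gamma` are illustrative assumptions; the paper computes MMD over DINO features):

```python
import numpy as np

def rbf_kernel(x, y, gamma):
    # pairwise squared distances, then Gaussian kernel
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2_unbiased(X, Y, gamma=1.0):
    """Unbiased estimate of squared MMD between feature sets X (m, d) and Y (n, d)."""
    m, n = len(X), len(Y)
    Kxx, Kyy, Kxy = rbf_kernel(X, X, gamma), rbf_kernel(Y, Y, gamma), rbf_kernel(X, Y, gamma)
    # exclude diagonal terms for the unbiased within-set averages
    xx = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
    yy = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    return xx + yy - 2.0 * Kxy.mean()
```

Two feature sets drawn from the same distribution yield an estimate near zero, while a shifted set yields a clearly positive value.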
Summary: This paper introduces DQA, a novel scoring method designed to assess the reliability of image quality evaluation metrics, particularly in the context of generative models. DQA aims to address the bias present in metrics like FID when evaluating image quality across different demographic groups. The core idea is to use carefully constructed, controlled datasets with comparable quality across groups and then measure the consistency of the image encoder. Furthermore, the paper proposes DQA-Guidance, a regularization technique applied during diffusion model sampling to mitigate quality disparities. The authors present empirical results demonstrating the effectiveness of DQA in identifying biased metrics and DQA-Guidance in improving fairness and overall image quality. Claims And Evidence: While the paper presents a compelling motivation and a novel approach, the evidence supporting the practical advantages of DQA over existing methods like FID is not entirely convincing. The claims regarding the superior fairness and reliability of DQA hinge on the construction of the controlled datasets. It's unclear how robust these datasets are to different types of quality degradation and whether they truly capture the nuances of real-world image quality variations across demographic groups. The gains achieved with DQA-Guidance, while present, appear somewhat marginal and their significance needs further validation. Methods And Evaluation Criteria: The proposed methods are well-defined and logically sound. However, the evaluation criteria rely heavily on the artificially constructed datasets. The paper would benefit from a more rigorous evaluation using real-world datasets with inherent demographic biases, even if it's challenging to establish ground truth quality. The choice of MMD as the distance metric is justified, but exploring alternative distance metrics and analyzing their impact on DQA scores would strengthen the analysis. 
Theoretical Claims: N/A Experimental Designs Or Analyses: The experimental designs are generally well-structured, but the analysis could be more in-depth. For instance, a more detailed ablation study exploring the impact of different components of DQA-Guidance (e.g., the regularization parameters) would be valuable. The paper should also provide more insights into the computational cost associated with DQA and DQA-Guidance compared to standard FID-based evaluations and sampling. Supplementary Material: N/A Relation To Broader Scientific Literature: The paper builds upon a foundation of existing work on fairness in machine learning and image quality assessment. It correctly identifies the limitations of FID in the context of demographic biases and proposes a novel approach to address these limitations. The related work section is comprehensive. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: The paper addresses an important and timely problem: fairness in image generation. The DQA metric provides a useful tool for evaluating the reliability of image encoders. The DQA-Guidance method offers a practical way to mitigate quality biases in diffusion models without retraining. Weaknesses: The reliance on artificially constructed datasets limits the generalizability of the findings. The experimental results are not entirely convincing, and the computational cost of DQA is not adequately addressed. The paper could benefit from a more in-depth analysis of the limitations of the proposed approach. Other Comments Or Suggestions: The authors should clearly articulate the assumptions underlying the construction of the controlled datasets. The paper should include a discussion of the potential limitations of DQA when applied to datasets with complex or unknown demographic biases. 
Questions For Authors: How do you ensure that the artificially constructed datasets used for DQA accurately reflect real-world image quality variations across different demographic groups? How sensitive are the DQA scores to the specific choices made in constructing these datasets (e.g., the types and magnitudes of the quality degradations)? A response demonstrating the robustness of the dataset construction would strengthen my confidence in the reliability of DQA. Can you provide a more detailed analysis of the computational overhead associated with DQA and DQA-Guidance compared to standard FID and diffusion model sampling? Quantifying the extra cost would help assess the practicality of the proposed methods. What are the limitations of DQA-Guidance? Under what circumstances might it fail to improve or even worsen the fairness of the generated images? Addressing this limitation would show a more balanced perspective. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: ## Adjustment in the Controlled Dataset The degradations we introduce are well established in the diffusion-based generative modeling literature. - **Weak Classifier-Free Guidance (CFG)** In CFG, using weak guidance simulates a scenario where the generated image loses coherence with the prompt. - **Fewer Inference Steps** As noted in [1], images generated with fewer diffusion steps typically exhibit lower visual quality by leaving more residual noise and artifacts. - **Stronger Initial Noise** In our method, we use in-painting to construct the controlled dataset. We first generate images from text prompts and then modify them to reflect different attributes to maintain contextual consistency. In in-painting, a stronger noise level preserves more of the original image. As a result, the model struggles to apply the desired attribute modification, leading to poor coherence with the target attribute. - **No Refiner** The SDXL paper explicitly states that the use of a refiner network improves visual quality, meaning removing the refiner leads to a noticeable degradation in image quality. Each of these modifications is grounded in existing literature, ensuring that the controlled degradations reflect realistic variations in generation quality. [1] Kim et al., 2024, Model-Agnostic Human Preference Inversion in Diffusion Models ## Hyperparameter Sensitivity As shown in the figure at the following link: https://drive.google.com/file/d/1Ot1FkMuPYmb0-6vFZZ5vUHmtcw6xgGAq/view?usp=sharing we investigate the impact of hyperparameter variation in the controlled dataset by adjusting the guidance scale in CFG. The right subfigure shows a clear degradation in the generated images as controlled. In the main paper, we argue that a lower average DQA across degraded images indicates higher reliability of an evaluation metric. 
Although this conclusion remains consistent (DINO-RN50 as the most reliable), this additional analysis highlights that robustness to degradation could serve as another criterion for evaluating the reliability of an image encoder. We will include this analysis in the revised version to better emphasize the desired characteristics of reliability for quality evaluation. ## Analysis for Experimental Results Please see the rebuttal for Reviewer yKK1. ## Regarding Ablation Study In Figure 6, the left subfigure shows the performance trend when varying $\lambda_1$ while keeping $\lambda_2$ fixed, and the right subfigure shows the trend when varying $\lambda_2$ with $\lambda_1$ held constant. Therefore, the experimental results in Figure 6 already serve as an ablation study, demonstrating the individual effects of each component of DQA-Guidance. ## Cost for DQA Evaluation Since DQA is a **reliability score** for evaluation metrics, its cost is not directly comparable to performance metrics such as FID. However, to clarify the computational requirements: DQA involves three quality evaluations, two for the subgroup qualities (numerator) and one for the overall quality (denominator). Importantly, DQA needs to be computed only once for each encoder to determine the reliability for evaluation. Once a reliable score is identified, there is no need to recompute DQA during future model evaluations. On the other hand, MMD with DINO is our primary quantitative evaluation metric. Notably, MMD is reported to be approximately 1,000 times faster than the Fréchet Distance used in FID, making it a more efficient choice for large-scale or repeated evaluations. ## Cost for DQA-Guidance We observe an increase in memory usage. DQA-Guidance utilizes an additional image encoder along with gradient computation for the guidance term. As a result, memory usage increased from 10,124MB to 18,626MB due to the gradient computation. 
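As a rough illustration of the cost description above (three MMD-based quality evaluations: two subgroup terms forming the numerator and one overall term forming the denominator), here is a minimal numpy sketch. The disparity-ratio form of the score, the RBF kernel choice, and all function names are illustrative assumptions for this sketch, not the paper's actual DQA definition.

```python
import numpy as np

def rbf_kernel(x, y, gamma=0.5):
    # Pairwise RBF kernel matrix between rows of x and rows of y.
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd(x, y, gamma=0.5):
    # Biased (V-statistic) estimate of squared MMD between two
    # sets of encoder features; always non-negative.
    return (rbf_kernel(x, x, gamma).mean()
            + rbf_kernel(y, y, gamma).mean()
            - 2.0 * rbf_kernel(x, y, gamma).mean())

def dqa_sketch(feats_a, ref_a, feats_b, ref_b, eps=1e-8):
    # Subgroup quality: MMD of each group's generations to its
    # reference set (the two numerator evaluations).
    q_a = mmd(feats_a, ref_a)
    q_b = mmd(feats_b, ref_b)
    # Overall quality: pooled generations vs. pooled references
    # (the single denominator evaluation).
    q_all = mmd(np.vstack([feats_a, feats_b]),
                np.vstack([ref_a, ref_b]))
    # One plausible disparity ratio: subgroup gap relative to
    # overall quality (lower = smaller measured quality bias).
    return abs(q_a - q_b) / (q_all + eps)
```

Since each term is a single MMD over precomputed features, the three evaluations are cheap relative to generation, consistent with the point that DQA is computed once per encoder.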
## Potential Limitation ### Computational Cost As shown above, DQA-Guidance introduces additional computational cost. However, since ours is the first work to demonstrate that quality bias-based guidance can steer diffusion models toward fairer outputs, it opens a promising direction for fairness-aware guidance. Future work can further explore cost-efficient implementations of such approaches. ### Lack of Real Reference Dataset Although the controlled dataset contains high-quality images, they may not fully align with human-perceived realism or quality standards. A promising future direction is to develop human-validated reference datasets, where quality judgments are collected through perceptual surveys. This would enhance the validity of DQA as a reliability indicator and offer a more robust benchmark for auditing image evaluation metrics. ### Possibility of Overfitting DQA-Guidance leverages group-specific references. However, if the reference set is narrow or unrepresentative, the model might overfit to a particular visual style, reducing diversity or realism. Constructing a more diverse and well-controlled reference dataset would improve the generalizability of DQA-Guidance.
Cross-environment Cooperation Enables Zero-shot Multi-agent Coordination
Accept (oral)
Summary: This paper proposes to train agents in self-play on a large distribution of environments to enhance the agents' coordination ability with unseen teammates in unseen environments. Experiments on a toy grid-world game and Overcooked demonstrate the effectiveness of the proposed method. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: There are no formal theoretical claims in this paper. Experimental Designs Or Analyses: Yes. I checked Section 5. Experiments and Section 6. Results. Supplementary Material: There is no supplementary material. Relation To Broader Scientific Literature: The key contributions extend and refine ideas from MARL, cooperative AI, game theory, robustness in machine learning, emergent communication, theory of mind, and open-ended learning. By addressing the challenge of generalization to unseen partners and environments, this paper advances these fields and provides a framework for developing more adaptable and collaborative AI systems. Essential References Not Discussed: No. Other Strengths And Weaknesses: ### Strengths 1. **Clarity and Simplicity:** The paper is well-written and easy to follow. The proposed method, CEC, is presented in a simple and straightforward manner, making it accessible to readers. 2. **Empirical Validation:** The paper provides extensive experimental results that demonstrate the effectiveness of CEC in enhancing coordination generalization. 3. **Interesting Insight:** The observation that training on multiple environments (initial states) improves coordination with unseen teammates is intriguing and potentially impactful for zero-shot coordination (ZSC) tasks. ### Weaknesses 1. **Lack of Analysis:** The fundamental reason why training on multiple environments enhances coordination generalization remains unclear. The paper would benefit from a more rigorous theoretical analysis or analytical experiments to explain this phenomenon. 
Without such analysis, it is difficult to assess whether CEC can generalize to more complex tasks. 2. **Limited Novelty:** The proposed method appears to be a direct application of multi-task RL to ZSC, which lacks significant novelty. Additionally, key methodological details, such as the task sampling strategy, are not clearly explained. 3. **Narrow Experimental Scope:** CEC is evaluated only on two simple discrete grid-world environments, where enumerating initial states is straightforward. In more complex scenarios, the cost of creating diverse environments may outweigh the benefits of training with unseen teammates, potentially limiting the practical applicability of CEC. Other Comments Or Suggestions: None. Questions For Authors: See Weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your time and effort in reviewing our paper, and for recognizing its strengths in clarity, empirical validation, and the intriguing insights it provides. # New experiments Based on your suggestions, we’ve added the following results to broaden the experimental scope to more complex scenarios, and improve the analytical strength of our work: - **[CEC in Partially Observable Environments](https://rb.gy/iwg3wx).** Testing CEC in a partially-observed version of the Dual Destination problem yielded similar results: CEC achieved high cross-play performance on novel environments (0.74), outperforming population-based methods (0.61) and naive self-play (0.03). - **Combining Task and Partner Diversity.** Following reviewer mZRY and 11py’s suggestions, we tested combining [CEC with E3T](https://rb.gy/g015me). While it performed poorly on the original Overcooked layouts (CEC=130.51, CEC-E3T=28.21), it outperformed CEC-Finetune on held-out grids (CEC-FT=41.73, CEC-E3T=58.13). We hypothesize the E3T partner noise will require larger networks and more training time when combined with CEC’s diverse environments. - **Analyzing the effect of RNNs in CEC.** Following reviewer SmWr’s suggestions, we provide new experiments ablating the use of an RNN for CEC, where the RNN is used to provide a simple meta-learning algorithm (Wang et al, 2018; Rabinowitz et al, 2018). In Overcooked and the Dual Destination problem, agents without recurrence failed to converge or adapt effectively. As shown in the [learning curves for the Dual Destination problem](https://rb.gy/nti7xf), CEC with LSTMs successfully converged, while the non-recurrent version could not even achieve positive rewards. # Overcooked benchmark Following many prior papers published at top conferences like ICML (Li et al (2023); Mahlau et al. (2024)), NeurIPS (Carroll et al. (2020), Strouse et al. (2022), Yan et al. (2023), Sarkar et al. (2023), Myers et al. (2024), Liang et al. 
(2024)) and ICLR (Yu et al. (2023), Gessler et al. (2025)), we focus on the Overcooked benchmark as a challenging human-AI cooperation task. *A core strength of Overcooked is that it enables real-time evaluation with actual human players, and shows that even for the simplest layouts coordinating with heterogeneous and unpredictable human players requires a level of robustness that even state-of-the-art AI algorithms typically do not achieve.* **Since your review did not mention our human-study results** (Figures 9-11), **we would like to draw your attention to the fact that CEC achieves performance equivalent to state-of-the-art techniques in real human evaluations *without using population-based training,*** a surprising finding that contradicts much of the human-AI coordination literature. # Novelty While multi-task RL is well-established (Tobin et al., 2017), **our work is, to our knowledge, one of the first to show that environment diversity can outperform partner diversity for human-AI coordination, challenging the dominance of Population-Based Training (PBT) in ZSC** (Vinyals et al., 2019; Carroll et al., 2020; Strouse et al., 2022; Zhao et al., 2022; Sarkar et al., 2023; Liang et al., 2024). To support this claim, we reference Reviewer 11py’s comment, “I really like the idea of training the learning agent across variations of environments, which **has not been investigated much, if at all, in the literature. I think the authors could also go as far as claiming cross-environment cross-play evaluation as a novel evaluation setup, which would serve as another contribution of the paper.”** # Methodological details Our task sampling strategy is described in Section 4 and Appendix A.1. 
Figure 12 details how we determine which items to sample to create solvable environments, and how this leads to procedural generation of over $1.16 \times 10^{17}$ diverse, solvable coordination challenges by sampling wall structures and randomizing features like goals, plates, pots, and onions. Our paper provides several **insights into how and why training in multiple environments enhances coordination with many partners:** - **Real Human Experiments:** Quantitative and qualitative assessments (Section 6, Figures 9-11, 16-21) indicate agents learn to "adapt to partners in service of completing the task." - **Heat maps** (Figures 16-18) visualize the frequency of states covered by naive IPPO agents and CEC. CEC agents are able to adapt to novel partner strategies better by covering a greater distribution of states in self-play. - **Empirical game-theoretic analysis** (Section 5) demonstrates CEC agents forming more robust equilibria with novel partners. - **Learning curves** (Figure 7) illustrate how CEC agents improve differently on held-out tasks based on optimal strategy diversity. Thank you again for your feedback, which will help us improve the final version of our paper. --- Rebuttal Comment 1.1: Comment: My concerns are largely addressed. I raise my score from 1 to 3. --- Reply to Comment 1.1.1: Comment: Thank you for taking the time to engage with our paper and updating your score! Please let us know if there are any additional questions or concerns we can address. We wanted to highlight an additional experiment we ran during this second rebuttal period that might be of interest. # CEC in multi-task environments Following reviewer 11py’s suggestions, we explored whether CEC would be beneficial for cross-partner and cross-environment generalization when there are multiple solutions to a task. 
The intuition here is that, with multiple possible optimal responses a team of agents could adopt for collaboration, PBT or self-play methods with sufficient exploration might be able to form robust, object-oriented representations without the need for CEC. To test this, [we extended the Dual Destination environment](https://rb.gy/4f9hmh) to have two possible valid solutions to reward agents. Now, agents are rewarded if they are on opposite green or opposite pink squares. As shown in the attached Figure, in the single-task variant, both valid squares remain equidistant from the agents so that there are now 4 strategies which could be rewarded. For the procedural generator, just as in the original Dual Destination problem we randomly shuffle agent and goal locations so that they all lie on unique grid cells. We show, even **in the multi-task setting, CEC agents (0.404) outperform PBT methods (0.251) and naive self-play (0.083) when collaborating with novel partners on tasks PBT and naive self-play methods were trained on. Just as in the single-task setting, PBT (0.005) and naive self-play (0.004) cannot generalize to novel partners on novel environments, whereas CEC can (0.446)**, albeit with a slight performance reduction from the single-task setting in Figure 3 of our paper (CEC=0.931 and 0.966 on fixed and procedurally generated single-task problems respectively). As the reviewer insightfully hinted at, this finding illustrates that additional work is needed to understand the impacts of task complexity and procedural environment generation, and we will use these results to create a more nuanced characterization of the generalizability of our work.
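The cross-play numbers quoted throughout this thread (e.g., CEC=0.404 vs. PBT=0.251) come from pairing independently trained policies and averaging their joint return. A minimal sketch of that evaluation step over a seed-vs-seed payoff matrix follows; the function name and matrix setup are illustrative assumptions, not the authors' code.

```python
import numpy as np

def cross_play_score(payoff, include_self=False):
    """Average return when pairing independently trained seeds.

    payoff[i, j] is the episode return when seed i's policy is
    paired with seed j's policy; the diagonal is self-play.
    """
    payoff = np.asarray(payoff, dtype=float)
    n = payoff.shape[0]
    if include_self:
        return payoff.mean()
    # Off-diagonal entries only: cross-seed (zero-shot) pairings.
    mask = ~np.eye(n, dtype=bool)
    return payoff[mask].mean()
```

A large gap between the diagonal (self-play) and off-diagonal entries signals brittle, seed-specific conventions, which is the failure mode the rebuttal reports for PBT and naive self-play on novel environments.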
Summary: This work presents cross-environment coordination as an alternative to population based training for enabling smooth coordination with unseen partners. They find that (pre-)training on a diverse set of environment configurations on Overcooked with a single learning partner enables agents to work in new environments with new partners. A human-AI user study demonstrates the effectiveness of their training approach for human-AI coordination. ## Updates after Rebuttal The authors have clarified the ZSC vs AHT distinction I emphasized in my original review, and added new results to demonstrate that CEC requires an RNN to train. The experimental results in this paper are very interesting, so I strongly recommend this paper be accepted. Claims And Evidence: I detail specific claims that I find problematic in later sections of the review. Methods And Evaluation Criteria: - This may just be a misunderstanding, but it is incorrect to claim that training multiple agents using the “same algorithm” suffices for XP evaluation. For instance, the first cited work in the XP Evaluation section (Strouse et al., 2022) measures performance by explicitly testing against a held-out set of agents (H_proxy, diverse SP, and random agents) and does not test against the same algorithm being trained. - Cross-seed XP on the same algorithm is only valid in the ZSC setting, not ad-hoc teamplay (see my comments in “other comments or suggestions” for the distinction). Theoretical Claims: N/A Experimental Designs Or Analyses: The experiments and analyses are valid (with the exception of the issue mentioned earlier in the evaluation criteria) Supplementary Material: I read the appendix Relation To Broader Scientific Literature: The key contribution is applying domain randomization to the multi-agent setting for the purposes of ad-hoc coordination, which is novel to my understanding. 
Essential References Not Discussed: The related works section is comprehensive (assuming the setting is ad-hoc coordination and not ZSC) Other Strengths And Weaknesses: Strengths: - This is a very interesting work, demonstrating strong results for cross-environment coordination helping for ad-hoc coordination. - The human-AI experiments are principled and demonstrate strong transfer to human partners. - The empirical game-theoretic analysis is a very creative and interesting way to demonstrate cross-algorithm performance. Weaknesses: - The most critical weakness is the confounding of ZSC and ad-hoc coordination. It seems like this paper motivates itself under the (harder) ad-hoc coordination paradigm but evaluates itself under the (easier) ZSC paradigm. - I would typically request using held-out partners for these results (i.e. the human BC model used for evaluation in Overcooked on earlier works), but the human user study is sufficient. I’d instead like to see the ZSC vs ad-hoc coordination point clarified throughout the text. - This work only studies fully observable, simultaneous action settings, but coordination challenges occur in partially observable settings (like Hanabi) so we cannot generalize the results of this paper to the broader ZSC community. Other Comments Or Suggestions: - I think the references to ZSC should be replaced with ad-hoc coordination, based on conventions from the MARL literature. Although similar, ZSC refers to the setting where we assume partners follow the same algorithm and attempt to maximize cross-play across initializations, while ad-hoc coordination refers to the ability to adapt to new partners who may not share the same learning algorithm. In the context of human-AI coordination, it seems like this paper cares more about the latter. Please refer to “‘Other-Play’ for Zero-Shot Coordination” in ICML 2020 for more information. 
- Line 161 column 1: “we how training” typo - A should not be duplicated in the Markov Game tuple definition. The horizon should also be defined in the Markov Game tuple, especially since it is used in the score definition (though I would advise against using T for both transitions and horizon). Questions For Authors: - My interpretation of the key results is that the pair of agents trained via CEC do not learn consistent “conventions” across environments, so they “give up” and don’t learn any conventions. Are there experiments to indicate that CEC models do have some underlying conventions consistently across environments? - In particular, I am wondering about settings where forming conventions is more strictly necessary (i.e. settings with partial observability), and whether CEC helps create consistent conventions. - Given that Overcooked is fully observable, how important is the recurrent core for performance and generalization? This seems to be a key difference between this work and (Yan et al., 2023), and this may be hurting the human-AI performance of your methods since it is easier to move off distribution at test time with recurrent inputs. Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their thoughtful and constructive feedback on our paper, and address your comments below: # ZSC vs Ad-hoc Teamplay We sincerely thank you for clarifying the distinction between these two evaluation settings. From our understanding, our use of the empirical game theoretic analysis to compare how agents trained with different algorithms cooperate with each other, as well as our human study, both fall within the ad-hoc teamplay setting. However, we acknowledge that some of our experiments (particularly Figures 3, 5, and 7) address the ZSC setting, where we evaluate performance against the same algorithm with different random seeds. We will follow your suggestions to clarify our references and experimental analysis to distinguish between the two, and make a more nuanced characterization of the conclusions we can draw. We will also make sure to reference "Other-Play.” # CEC in Partially Observable Environments Following your suggestions, we tested [CEC with partial observability](https://rb.gy/iwg3wx), by modifying the Dual Destination problem. We trained CEC, FCP, and IPPO using the same architectures and 300 million steps of training. The challenge of breaking handshakes when learning multi-agent policies is even more pronounced in the partially observable case, as agents may form arbitrary conventions to handle high uncertainty about the state of the world. **From our results below, we find the same conclusions in the partially observable setting as we did in the fully observable results described in the paper: CEC has high cross-play performance in ZSC with other agents on novel environments (0.74), outperforming population based methods (0.61) and naive self-play (0.03).** We believe this finding strengthens our paper, and thank the reviewer for providing this suggestion. 
# New experiments with no RNN We include recurrent networks with CEC to enable a basic meta-learning algorithm (Wang et al, 2018; Rabinowitz et al, 2018). Following your suggestion, we tested whether it is possible for CEC to retain reasonable performance without using recurrent policies. In Overcooked and the Dual Destination problem, agents without recurrence failed to converge or adapt effectively. As shown in the [learning curves for the Dual Destination problem](https://rb.gy/nti7xf), CEC with LSTMs successfully converged, while the non-recurrent version couldn't even achieve positive rewards. # Combining Task and Partner Diversity Following reviewer mZRY and 11py’s suggestion, we tested [combining partner diversity algorithms with environment diversity](https://rb.gy/g015me) using E3T under the CEC paradigm. Results showed CEC-E3T performed worse than other models on Overcooked’s five original layouts (CEC=130.51, CEC-E3T=28.21) but outperformed CEC-Finetune on 100 held-out grids (CEC-FT=41.73, CEC-E3T=58.13). The linked learning curves reveal that noisy partners combined with dynamic environments introduced additional noise, suggesting larger networks and longer training times for convergence may be needed compared to vanilla CEC. # Learned Conventions in CEC You raised an interesting point about learned conventions from CEC. As noted in our qualitative analysis, conventions like "move out of the way" emerged from CEC agents trained across diverse environments. Unlike fixed strategies (e.g., "red agent cooks onions while blue delivers"), this adaptability reflects a general form of learned convention that enhances transferability and generalization to novel scenarios. We appreciate your attention to detail in pointing out the typos and suggestions for improving our Markov Game tuple definition. We will address these in our revision. 
We thank you for recognizing the strengths in our work, particularly the demonstration of cross-environment coordination's benefits for ad-hoc coordination, the principled human-AI experiments showing strong transfer to human partners, and the creative use of empirical game-theoretic analysis to demonstrate cross-algorithm performance. We thank the reviewer again for their valuable feedback, which will help us improve the clarity and rigor of our paper. Ref: Rabinowitz, N., Perbet, F., Song, F., Zhang, C., Eslami, S. A., & Botvinick, M. (2018, July). Machine theory of mind. In International conference on machine learning (pp. 4218-4227). PMLR. Wang JX, Kurth-Nelson Z, Kumaran D, Tirumala D, Soyer H, Leibo JZ, Hassabis D, Botvinick M. Prefrontal cortex as a meta-reinforcement learning system. Nat Neurosci. 2018 Jun;21(6):860-868. doi: 10.1038/s41593-018-0147-8. Epub 2018 May 14. PMID: 29760527. --- Rebuttal Comment 1.1: Comment: Thank you for the new experiments and for updating the text to clarify the ZSC vs AHT distinction. As a follow-up, it is *extremely* surprising to me that the LSTM is necessary for CEC given that the settings are fully observable and there is only one partner. Even complex, multi-task, fully-observable, single-agent environments don't need recurrence ("Kinetix: Investigating the Training of General Agents through Open-Ended Physics-Based Control Tasks" in ICLR 2025 comes to mind), so there may be something special about MARL that necessitates the "meta-learning" capabilities enabled by recurrence. --- Reply to Comment 1.1.1: Comment: Thank you for your engagement with our rebuttal and for updating your review! We believe the reason the RNN is necessary in the MARL case is that additional non-stationarity beyond the environment changing is introduced through a partner causally impacting subsequent observations. 
With many equally valuable strategies a cooperative partner may adopt (such as move clockwise vs counterclockwise), conditioning on the history of past states is needed to overcome difficulties in predicting the future that only focusing on the current state faces. This does not necessarily mean using an RNN: as you pointed out in your initial review, E3T did not use any recurrence in their architecture. However, they still conditioned on the past 5 states to form a character embedding that was used to model another agent’s actions, then conditioned on the current state and the partner character embedding to perform well in a highly non-stationary learning environment, effectively conditioning on the history of states. Here, we see the meta-learning problem as using the first few steps of the episode to adapt to a new partner in a new environment, which is only possible for models that can condition on the episode history to revise their policy.
Summary: This paper studies a novel multi-agent training paradigm, Cross-Environment Cooperation (CEC), where the learning agent learns to work with a single partner across different variations of the environment. This is in contrast with prior work in the literature that focuses on training an agent that can adapt to unseen partners/strategies under a fixed environment. Despite not training with diverse partners, the CEC agents outperform state-of-the-art baselines under fixed and procedurally generated layouts. Additionally, the paper utilizes a newly developed Jax-based procedural 2-player Overcooked environment for efficient training. Claims And Evidence: All the claims are well supported by convincing evidence. Methods And Evaluation Criteria: The proposed method is technically sound and has strong implications for the MARL literature. The evaluation protocol is clear and reasonable. All methods use the same or similar computation budget for a fair comparison. Theoretical Claims: N/A Experimental Designs Or Analyses: All experiments are well designed and provide good intuition for the reader. The analyses are sound and based on best statistical practices. Supplementary Material: The supplementary includes additional experimental details, results, and analyses. It is also useful for future reproduction. Relation To Broader Scientific Literature: The findings in this paper will be impactful in the field of ad-hoc teamwork and multi-agent reinforcement learning in general. The idea of positive transfer between the two axes of generalization (environment and partner) is an intriguing finding. This paper serves as a good first step towards jointly achieving environment and partner generalization. Essential References Not Discussed: - Other Strengths And Weaknesses: Strengths - The paper is well written and easy to follow. - The core idea of the paper is novel, simple and very effective. - The toy example provides good intuition. 
- The experiments are well designed and thorough. Weaknesses - The environments used in this work are limited and relatively simple. Other Comments Or Suggestions: Suggestions - I really like the idea of training the learning agent across variations of environments, which has not been investigated much, if at all, in the literature. I think the authors could also go as far as claiming cross-environment cross-play evaluation as a novel evaluation setup, which would serve as another contribution of the paper. In its current form, the paper reads like CEC is purely a novel training paradigm. - In Section 4, since there are only two possible strategies, it would be beneficial to include a fixed task oracle baseline. The oracle cooperator would be trained against the two strategies. This would make it clear to the reader how much environment diversity helps partner generalization relative to this oracle (which is trained with "maximally diverse partners"). - Is it possible to show some combination of a partner diversification method (e.g., FCP) and CEC? Analyzing whether the diversity generated from the two sources is additive or redundant would provide a very valuable insight - Since the Overcooked environment used in the paper has only one recipe, it is quite straightforward to see why varying the environment helps: the agents learn to do the recipe in different orders/combinations under different environment variants. I wonder if CEC would work well in environments with multiple solutions (e.g., multiple recipes in Overcooked [1,2,3]). typos - line 161: "In contrast, we how training ..." [1] Wu, Sarah A., et al. "Too many cooks: Bayesian inference for coordinating multi‐agent collaboration." Topics in Cognitive Science 13.2 (2021): 414-432. [2] Charakorn et al. "Generating diverse cooperative agents by learning incompatible policies." ICLR. 2023. [3] Yu, Chao, et al. "Learning Zero-Shot Cooperation with Humans, Assuming Humans Are Biased." ICLR. 2023. 
Questions For Authors: - There are two axes of generalization considered in this paper. It shows that environment diversity helps partner generalization. I wonder if the opposite is true: does training with diverse partners help environment generalization? - Why does E3T have lower evaluation XP performance than IPPO in Fig. 6 (left)? - How come E3T gets the lowest XP reward while performing best with humans? - Will the code be open source? Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: Thank you for your interest in our work and for recognizing the novelty of our training and evaluation approaches. We are glad you found our writing to be clear and our experiment section to be thorough. Your idea of framing cross-environment cross-partner evaluations as a novel contribution is one we will take on board in our revision of the paper, and we feel this will make our paper even stronger. We would now like to address some of your questions below: # Combining Task and Partner Diversity Following your suggestion, we tested the impact of [combining partner diversity with environment diversity](https://rb.gy/g015me) using E3T under the CEC paradigm. We set the partner policy randomness to 0.5, consistent with human experiments and E3T’s original design. Results in simulation showed CEC-E3T performed worse than other models on Overcooked’s five original layouts (CEC=130.51, CEC-E3T=28.21) but outperformed CEC-Finetune on 100 held-out grids (CEC-FT=41.73, CEC-E3T=58.13). The attached learning curves reveal that noisy partners introduced additional training noise, which, combined with dynamic environments, likely requires larger networks and longer training times for convergence compared to vanilla CEC. While the vanilla E3T struggled in simulation, it excelled in human trials, outperforming all other models in terms of reward. # Fixed Task Oracle In Figure 3 of the paper, we report the score in Dual Destination normalized by the theoretical maximum possible score, which would be achieved if both agents spawned on the correct goals in the first timestep. Figure 3 plots these normalized cross-play rewards for CEC, FCP, and IPPO. 
To obtain the **oracle score that you suggested on the Fixed Task,** we calculate that **an oracle agent** that perfectly responds to either optimal strategy would incur a -1 step cost for 3 steps while moving to the target location, then receive a net (+3 reward - 1 step cost) on each of the remaining 97 steps, for a return of 2*97 - 3 = 191, i.e., 191/200 = **0.955 normalized reward**. **CEC scored 0.931 normalized reward** with a standard error of 0.013, indicating it underperforms the oracle’s cross-play performance by about 2.5%. # CEC in Partially Observable Environments We tested [CEC with partial observability](https://rb.gy/iwg3wx) by modifying the Dual Destination problem. We trained CEC, FCP, and IPPO using the same architectures and 300 million steps of training. The challenge of breaking handshakes when learning multi-agent policies is even more pronounced in the partially observable case, as agents may form arbitrary conventions to handle high uncertainty about the state of the world. **From our results below, we find the same conclusions in the partially observable setting as we did in the fully observable results described in the paper: CEC has high cross-play performance in ZSC with other agents on novel environments (0.74), outperforming population-based methods (0.61) and naive self-play (0.03).** We believe this finding strengthens our paper, and thank the reviewer for providing this suggestion. # New experiments with no RNN Following reviewer SmWr’s suggestions, we provide new experiments ablating the use of an RNN for CEC, where the RNN is used to provide a simple meta-learning algorithm (Wang et al, 2018; Rabinowitz et al, 2018). We find that in Overcooked and the Dual Destination problem, agents without recurrence failed to converge or adapt effectively. As shown in the [learning curves for the Dual Destination problem](https://rb.gy/nti7xf), CEC with LSTMs successfully converged, while the non-recurrent version could not achieve positive rewards. 
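As a quick sanity check, the Fixed Task oracle arithmetic quoted earlier in this rebuttal can be reproduced in a few lines of Python. This is a minimal sketch: the 100-step effective horizon and the per-step reward values are inferred from the 3 + 97 step breakdown and the 200-point normalizer quoted above, not taken from the released code.

```python
# Sanity check of the Fixed Task oracle reward calculation.
# Assumed setup (inferred from the rebuttal's numbers): a 100-step episode,
# a -1 cost on every step, a +3 reward on each step spent on the correct
# goal, and 3 steps for the oracle to reach its goal. The normalizer of 200
# is the return of an agent that nets +2 on all 100 steps.
STEP_COST = 1
GOAL_REWARD = 3
HORIZON = 100
STEPS_TO_GOAL = 3

# 3 steps paying only the step cost, then 97 steps of net (+3 - 1) reward.
oracle_return = (GOAL_REWARD - STEP_COST) * (HORIZON - STEPS_TO_GOAL) - STEP_COST * STEPS_TO_GOAL
max_return = (GOAL_REWARD - STEP_COST) * HORIZON
normalized_oracle = oracle_return / max_return

print(oracle_return, max_return, normalized_oracle)  # 191 200 0.955
```

Against this 0.955 oracle ceiling, CEC's 0.931 normalized reward corresponds to the roughly 2.5% gap reported above.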
# E3T cross-play performance difference in simulation vs. with humans During training, E3T uses a partner policy defined as π_p = (1-ε) * learned_policy + ε * uniform_policy, maintaining entropy to induce greater strategy coverage without requiring diverse co-players like population-based methods. Yan et al. (2023) found ε = 0.5 most effective for collaborating with humans, which we used for our baseline. For AI zero-shot coordination, ε = 0.3 performed best, while ε = 0.0 excelled in low-exploration layouts like Forced Coordination. **The code will be open-sourced, from the environment code to training scripts to the human evaluation interface, which supports running experiments on arbitrary Jax-based reinforcement learning environments.** --- Rebuttal Comment 1.1: Comment: I appreciate the detailed response from the authors. I find the "Combining Task and Partner Diversity" and "Fixed Task Oracle" experiments very informative. I strongly suggest the authors put these results in the paper. I also really appreciate that the source code will be fully open source. There are still some questions and concerns not addressed. I keep my score as is. --- Reply to Comment 1.1.1: Comment: Thank you for engaging in the rebuttal process. We will follow your suggestions and include the new results from our previous response in the paper. We would like to address some of the questions and concerns from your initial review below: # CEC in multi-task environments Following your suggestions, we explored whether CEC would be beneficial for cross-partner and cross-environment generalization when there are multiple solutions to a task. The intuition here is that, with multiple possible optimal responses a team of agents could have for collaboration, PBT or self-play methods with sufficient exploration might be able to form robust, object-oriented representations without the need for CEC. 
To test this, [we extended the Dual Destination environment](https://rb.gy/4f9hmh) to have two possible valid solutions that reward agents. Now, agents are rewarded if they are on opposite green or opposite pink squares. As shown in the attached figure, as in the single-task variant, both valid squares remain equidistant from the agents, so that there are now 4 strategies which could be rewarded. For the procedural generator, just as in the original Dual Destination problem, we randomly shuffle agent and goal locations so that they all lie on unique grid cells. We show that **even in the multi-task setting, CEC agents (0.404) outperform PBT methods (0.251) and naive self-play (0.083) when collaborating with novel partners on the tasks the PBT and naive self-play methods were trained on. Just as in the single-task setting, PBT (0.005) and naive self-play (0.004) cannot generalize to novel partners on novel environments, whereas CEC can (0.446)**, albeit with a slight performance reduction from the single-task setting in Figure 3 of our paper (CEC=0.931 and 0.966 on fixed and procedurally generated single-task problems respectively). As the reviewer insightfully hinted at, this finding illustrates that additional work is needed to understand the impacts of task complexity and procedural environment generation, and we will use these results to create a more nuanced characterization of the generalizability of our work. 
# Does partner diversity improve environment generalization As discussed in Section 6 question 2 of our paper, and demonstrated in Figures 3, 6, 8, 9, and 14, we found that **partner diversity on its own does not improve environment generalization.** This is likely due to the fact that on a single task, irrespective of the number of diverse partners an ego cooperator is exposed to, it will struggle to form robust representations of the optimal policy in a way that supports state generalization, since it can associate its learned behaviors with brittle functions such as “move in pattern A then B” rather than in object-oriented or task-centric ways. There is much additional work needed on how to combine partner and environment diversity to realize the full benefits of both.
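For readers unfamiliar with E3T's partner construction, the ε-mixture policy quoted in the rebuttal above (π_p = (1-ε) * learned_policy + ε * uniform_policy) can be sketched in plain Python. The function names here are illustrative, not taken from the E3T codebase:

```python
import random

def e3t_mixture(learned_probs, epsilon):
    """Mix a learned action distribution with a uniform one:
    pi_p = (1 - eps) * learned_policy + eps * uniform_policy."""
    n = len(learned_probs)
    return [(1.0 - epsilon) * p + epsilon / n for p in learned_probs]

def sample_partner_action(learned_probs, epsilon, rng=random):
    """Draw one partner action from the mixture policy."""
    probs = e3t_mixture(learned_probs, epsilon)
    return rng.choices(range(len(probs)), weights=probs, k=1)[0]

# With epsilon = 0.5 (the human-experiment setting quoted above), half of
# the probability mass is redistributed uniformly over the action set.
partner_probs = e3t_mixture([0.7, 0.2, 0.1, 0.0], epsilon=0.5)
```

Higher ε keeps more entropy in the partner's behavior, which is how E3T induces strategy coverage without a population of co-players.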
Summary: This paper proposes Cross-Environment Cooperation (CEC) as a way of improving agents’ generalization to unseen agents (the ad hoc teamwork problem) and unseen environments. The proposed method consists of a procedural generator that varies Overcooked initial states, over which an IPPO team learns via self-play. The method is evaluated on Overcooked, and demonstrates improved task generalization compared to FCP and E3T, and generalization to human teammates that improves over FCP, but is equivalent to E3T. Claims And Evidence: - **Key Claim**: training agents across procedurally generated, randomized environments improves agents’ ability to generalize to new environments and new cooperation partners. - This is the core hypothesis/claim of the paper, but I think it is overstated. The experiments only analyze CEC’s performance on Overcooked, and do not consider any other environments. The efficacy of environment randomization in inducing diverse partner strategies might be specific to Overcooked, so I think the authors should weaken their claims. - **Secondary Claim**: The authors claim that their procedural environment generation method is superior to randomly generating Overcooked environments, due to the problem of generating unsolvable environments. - However, there is no empirical evidence provided that shows that random level generation is problematic. Can the authors empirically test what percentage of randomly generated environments would be unsolvable in Overcooked? - **Secondary Claim**: training on a large set of procedurally generated environments is easier/results in a more computationally efficient algorithm than training a population of agents on a small set of environment configurations: - CEC is trained for 3 billion steps on Overcooked. 
Isn’t this a much larger training duration than conventional teammate generation approaches such as LIPO and CoMeDi (generally around a couple million timesteps, with a population size < 10) on Overcooked? Methods And Evaluation Criteria: The proposed method, CEC, is framed as an algorithm for training ad hoc agents, but environment diversity seems somewhat orthogonal to teammate diversity, even if environment diversity can induce teammate diversity. As such, it seems unfair to evaluate FCP/E3T on environment generalization. I am also wondering how CEC would perform if combined with a teammate diversity method such as E3T. Theoretical Claims: N/A Experimental Designs Or Analyses: Overall, the empirical design/analysis is strong, other than the issues I described in Claims/Evidence. Supplementary Material: Yes, all of it. Relation To Broader Scientific Literature: The authors cite all relevant work that I am aware of, but they could improve the contextualization of their work w.r.t. the UED literature, in particular with respect to MAESTRO (Samvelyan et al. 2023) and Domain Randomization (Tobin et al. 2017) (already cited in the paper). - MAESTRO: MAESTRO examines both environment and co-player autocurricula in 2p0s game settings, and is a prioritized-level-replay style method that leverages a randomized environment generator. How do the insights from MAESTRO relate to this paper, which addresses the fully cooperative setting? - Domain Randomization: it’s not clear to me that the proposed environment randomization in this paper is any different from domain randomization. The authors should also discuss the MADRID paper (Samvelyan et al. 24), which also examines environment diversity as a way of exposing weaknesses in teammate policies. Samvelyan, Mikayel, Davide Paglieri, Minqi Jiang, Jack Parker-Holder, and Tim Rocktäschel. 2024. “Multi-Agent Diagnostics for Robustness via Illuminated Diversity.” arXiv. 
[http://arxiv.org/abs/2401.13460](http://arxiv.org/abs/2401.13460). Essential References Not Discussed: MADRID (Samvelyan et al. 24) - see above for citation. Other Strengths And Weaknesses: - Strengths: - The paper is clear and well-written. - Statistical significance tests are provided - Core finding that environment randomization improves both task and cooperative generalization ability is interesting and somewhat surprising. - Method is evaluated with humans, and performs strongly - Weaknesses: - Method is specific to Overcooked, and cannot be applied out-of-the-box on other domains, without a procedural generator. Can the authors discuss what would be needed for their method to apply more broadly? - Method is extremely computationally expensive, requiring training for 3B steps, compared to conventional teammate generation methods. - No formal characterization of the relationship between environment layout diversity and strategic diversity in Overcooked, which might explain why increasing environment diversity leads to improved ZSC. - No explanation of why humans find CEC more enjoyable/adaptive compared to baseline agents Other Comments Or Suggestions: - Typo on Line 161: “In contrast, we how…” - Figure 3: the discussion of this figure on pg. 4, right, is very confusing because I’m not sure if the discussion refers to the right or left subfigure in Fig. 3. Can the authors also specify what the “Fixed Task” is in Fig. 3, left? Questions For Authors: 1. How is the procedural environment generation method proposed in this paper different from domain randomization (Tobin et al. 2017)? 2. CEC is implemented using a self-play algorithm to isolate the effect of task diversity and partner diversity. Have the authors considered an algorithm that combines task and partner diversity? 3. Human-evaluation experiments (Q3): in these experiments, CEC falls short of E3T in cooperation with humans, but scores highest (by a small margin) according to human preferences. 
According to the analysis of Figure 19, the authors state that this occurs because CEC is more adaptable to user behaviors, more strategically consistent, and better at avoiding collisions (Figure 11). However, the training process for CEC does not include any human data, so why would it learn any human norms? 4. Figure 13: why is it that the self-play score of each algorithm (on the diagonal) is no higher than any of the cross-play scores on the original 5 layouts? Typically, an algorithm's self-play score is higher than its cross-play scores. Tobin, Josh, Rachel Fong, Alex Ray, Jonas Schneider, Wojciech Zaremba, and Pieter Abbeel. 2017. “Domain Randomization for Transferring Deep Neural Networks from Simulation to the Real World.” In *2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)*, 23–30. [https://doi.org/10.1109/IROS.2017.8202133](https://doi.org/10.1109/IROS.2017.8202133). Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed feedback and recognition of our work’s novelty and clarity. Below, we address key concerns and how we plan to integrate this feedback: # Experiments Beyond Overcooked To help address your suggestions, we have conducted **3 new experiments showing that our method generalizes beyond Overcooked.** First, we would like to highlight the “Dual Destination” environment (Section 4, Figures 2-3 of the submission), where rewards are only given when both agents occupy separate green squares. For the rebuttal, we have conducted new experiments in a [partially observable version of this environment](https://rb.gy/iwg3wx), and found that CEC once again achieved high cross-play performance in novel environments (0.74), outperforming population-based methods (0.61) and naive self-play (0.03). # Combining Task and Partner Diversity Following your suggestion, we tested [combining partner diversity algorithms with environment diversity](https://rb.gy/g015me) using E3T under the CEC paradigm. Results showed CEC-E3T performed worse than other models on Overcooked’s five original layouts (CEC=130.51, CEC-E3T=28.21) but outperformed CEC-Finetune on 100 held-out grids (CEC-FT=41.73, CEC-E3T=58.13). The linked learning curves reveal that noisy partners combined with dynamic environments introduced additional noise, suggesting that larger networks and longer training times may be needed for convergence compared to vanilla CEC. # New experiments without RNNs Per reviewer SmWr, we ablated RNNs in CEC (used for meta-learning per Wang et al, 2018; Rabinowitz et al, 2018). In Overcooked and Dual Destination, non-recurrent agents failed to converge or adapt. [Learning curves](https://rb.gy/nti7xf) show CEC with LSTMs converged successfully, while non-recurrent versions could not achieve positive rewards. 
# DR generates unsolvable environments We also appreciate the opportunity to clarify the distinction between CEC and standard domain randomization (DR). While CEC uses environment randomization, it employs a procedural generator to ensure all tasks are solvable, unlike naive randomization, which we found in our experiments often produces unsolvable layouts and poor learning signals. This mirrors the conclusions of the Overcooked Generalisation Challenge (Ruhdorfer et al., 2024), which shows that random/UED approaches often fail to generate solvable layouts or train agents capable of completing tasks. This highlights the need for structured procedural generation like ours. # Computational Feasibility of CEC We acknowledge that CEC’s 3 billion training timesteps exceed the computational budget of prior work like CoMeDi for a single layout. However, population-based training (PBT) scales poorly when the goal is to cooperate on many levels: training eight agents (5M steps each) plus an ego cooperator (10M steps) results in a 50-million-step budget per layout. Scaling to 100 layouts would require 5 billion steps but would still only generalize to those fixed levels. CEC generalizes better to thousands of unseen layouts with lower total compute costs (Figures 3, 6, 9), making it more efficient for broad adaptability. We also note that for our experiments, we ensure that all algorithms are able to use the same compute budget of 3 billion steps, and we find that CEC is able to better use this compute to improve performance. We agree that **formalizing procedural generators** for naturalistic settings is a challenging direction for future work. One idea could involve combining program induction with procedural generation, as in WorldCoder (Tang et al., 2024), where LLMs generate OpenAI Gym-like environments with unknown dynamics. 
For CEC, inferred dynamics could be encoded into simulators like Unity before generating diverse scenes for RL training, a form of real-to-sim-to-real transfer (Torne et al., 2024). # CEC and related work Thank you for highlighting MAESTRO and MADRID, which we will include in our related work section. MAESTRO complements our work by focusing on zero-sum settings, and showing that in that case, jointly considering environment and partner diversity is optimal, while focusing on either alone leads to suboptimal outcomes. While we used uniform sampling over grids irrespective of the agent’s abilities during training, we will add to our discussion how future work can explore autocurricula for partners and tasks to improve sample efficiency through strategies such as regret minimization, à la MADRID and MAESTRO. # Analyzing CEC’s Norms The reviewer astutely questions how CEC learns human-like norms without explicit instruction. As noted in Section 6, we believe these norms reflect principles for cooperation in dynamic environments, such as “stay out of each other’s way.” These norms may be perceived as more human-like because humans also follow them. **Figures 13 and 14** show cross-play scores across all algorithms, forming the payoff matrix used to create Figure 8. The diagonal reflects results from Figure 6. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for providing a comprehensive rebuttal, and apologies for the late response. The additional results on the Dual Destination environment somewhat address my concerns about the generalizability of the approach to other domains, but not fully, as the Dual Destination problem was designed to illustrate the authors' point. It would be more convincing if the authors presented additional experiments on another actual AHT domain such as Hanabi. The main argument that's not convincing to me is the argument on the computational feasibility of CEC. 
The authors argue that CEC is a more efficient approach than population-based AHT methods when it comes to training a policy that can deal with 100 layouts, because if we train a population-based AHT method separately for each layout with a population size of 8, it would take 5 billion timesteps total (compared to 3B for CEC). This argument has a couple of holes: (1) the numbers were specifically chosen to make CEC more efficient -- if the population-based AHT method was trained with a population size of 4, then the PBT method would only need a computational budget of 2.5B steps; (2) it's unreasonable to assume that we would start training the policy from scratch for each layout. The two issues I pointed out above are somewhat pedantic though, and I do not view computational feasibility as a key issue for the paper. In my opinion, the key contribution of this paper/CEC is demonstrating that cross-task generalization and cross-team generalization (AHT) are not orthogonal axes, which is an important finding on its own. However, the paper reads to me as though it is presenting CEC as a novel AHT algorithm. This actually weakens the impact of the main contribution, since characterizing CEC as an AHT algorithm is challenging --- as the argument on the computational feasibility of CEC vs AHT demonstrates, constructing a totally fair comparison between CEC and existing AHT methods is difficult. I encourage the authors to discuss limitations of the comparison between CEC and AHT methods in future iterations of the paper. --- Reply to Comment 1.1.1: Comment: Thank you for engaging in the rebuttal process and providing a more nuanced perspective on CEC as an AHT algorithm. We agree that in our example we tried to illustrate how, with a large population trained from scratch on each new environment, CEC proves to be more efficient in cross-partner cross-environment generalization than population-based methods. 
Due to the current limitations of PBT methods, namely that it is hard to guarantee diversity in partner strategies if the only variation between populations of self-play agents is the random initialization or network architecture, a larger population size increases the likelihood of new strategies being found during training, making the ego cooperator more robust to novel partners at test time. For instance, in continuous domains you might require a very large population to obtain sufficient strategy coverage, since each policy might diverge by a very small amount but be semantically similar. However, should there be a method for accurately estimating differences between partner strategies, then the reviewer correctly points out that PBT methods can be more sample efficient, albeit if they are trained from scratch on every new environment they will likely prove to be more computationally expensive than CEC. For instance, if we use the reviewer’s numbers of training a population of size 4 and an ego cooperator for 25 million steps per environment, retraining an agent for each of 120 environments would take 3 billion steps, but would lack the generality to thousands of levels that CEC boasts without any additional training. If we assume we are not going to retrain a PBT method from scratch when exposed to a new level, this becomes murkier territory for evaluating the differences between CEC and PBT. As we demonstrate in Figures 6 and 9 of our paper, the CEC-Finetune experiments show that sequentially training on a set of levels (first learn on a set of levels A, then specialize on level B) leads to catastrophic forgetting in agents if they try to play levels in the distribution of level set A. 
As evidenced from the continual learning literature, there is a fundamental tradeoff between stability and plasticity (Mermillod et al, 2013; Kim et al, 2023; Elsayed and Mahmood, 2024) when trying to deal with sequential learning tasks, a problem we showed to persist in Figures 6 and 9 (high plasticity but low stability). An alternative to sequentially learning tasks with partner diversity methods is to do both forms of learning simultaneously, that is, combine partner and environment diversity. However, as depicted in our additional experiments, this is a non-trivial problem to solve, since naive combinations of partner and environment diversity methods lead to a [highly noisy training procedure](https://rb.gy/g015me) where agents fail to learn or generalize effectively (CEC=130.51, CEC-E3T=28.21). Based on the experiments we conducted and our understanding of the literature, which we acknowledge may have limitations, we observed that retraining a PBT method on every level can be less efficient than CEC when aiming to ensure that the ego cooperator is robust to new partners on new environments. Specifically, increasing the number of partners in a population improves robustness but also increases inefficiency in PBT methods. We appreciate the reviewer’s perspective that integrating population-based methods with cross-environment training can blur the distinction between the two approaches. In response, we will incorporate their suggestion to reframe our contributions as a more detailed exploration of how cross-task and cross-partner generalization can complement each other rather than being mutually exclusive. Ref: Elsayed, Mohamed, and A. Rupam Mahmood. "Addressing loss of plasticity and catastrophic forgetting in continual learning." arXiv preprint arXiv:2404.00781 (2024). Kim, Sanghwan, et al. "Achieving a better stability-plasticity trade-off via auxiliary networks in continual learning." 
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023. Mermillod, Martial, Aurélia Bugaiska, and Patrick Bonin. "The stability-plasticity dilemma: Investigating the continuum from catastrophic forgetting to age-limited learning effects." Frontiers in psychology 4 (2013): 504.
Survival Analysis via Density Estimation
Accept (poster)
Summary: The authors consider the problem of survival analysis with competing and potentially dependent risks. The paper has two main contributions. First, the authors propose a two-step plug-and-play method which uses the output of a generic density estimator and transforms it into an estimate of the joint survival function. Their method relies on knowledge of a copula describing the dependence structure between the competing risks. When the independence copula is used, this reduces to the standard assumption of conditional independence among the competing risks. When the copula is unknown, they also provide upper and lower bounds on the resulting survival function. Second, the authors introduce a proper scoring rule for competing risks which is valid for any number of competing risks. They use this proper scoring rule both as an evaluation metric, and as a training objective for a neural network-based model of the joint survival function. Finally, they compare several instantiations of their two-step method, their neural network approach, and several standard survival analysis baselines on two real datasets. They obtain consistent improvements in terms of both accuracy and calibration compared to the baselines. Claims And Evidence: The only obvious shortcoming of the paper is the lack of baseline methods in the experiments. The authors compare against a Cox model, a random survival forest, and a neural network-based method (DeepHit). However, there are many other methods from the literature which are applicable to this problem. It would greatly strengthen the paper to compare against these as well. Some examples include: Katzman, Jared L., et al. "DeepSurv: personalized treatment recommender system using a Cox proportional hazards deep neural network." BMC medical research methodology 18 (2018): 1-12. Kvamme, Håvard, Ørnulf Borgan, and Ida Scheel. "Time-to-event prediction with neural networks and Cox regression." 
Journal of Machine Learning Research 20.129 (2019): 1-30. Hu, Bingqing, and Bin Nan. "Conditional distribution function estimation using neural networks for censored and uncensored data." Journal of Machine Learning Research 24.223 (2023): 1-26. It seems it would also be reasonable to include results for some of the methods which are already discussed in the related work, in particular the monotone neural network model of Rindt et al. (2022). Methods And Evaluation Criteria: The two-step method proposed by the paper is flexible and innovative. The ability to turn any density estimator into a method for survival analysis with competing risks greatly expands the toolkit available to practitioners, and the experimental results so far indicate that it obtains consistent improvement over existing methods. While the assumption of a known copula may seem restrictive at first, the authors clearly explain that the more common assumption of conditional independence of the competing risks is actually a special case of this copula assumption, so it is never a more severe modeling restriction than the independence assumption. The derived upper and lower bounds on the survival function even go a step further in alleviating this modeling assumption. The introduced strictly proper scoring rule is helpful as an evaluation metric in the setting of this paper, especially when $K>2$. However, the introduction of a neural network model based on this score feels a bit out of place in the paper, as relatively little space is devoted to it and its performance is not up to par with the other methods. The suite of evaluation metrics is extensive and well-explained in the paper. The datasets used are relevant, though somewhat limited (there are only two). Theoretical Claims: I checked the proof of Theorem 5.3 in Appendix F and did not find any errors. A minor comment: I assume $v$ and $\hat{v}$ are the distributions over $(t, k)$ induced by each of the $v_k$ and $\hat{v}_k$. 
This may be apparent but I did not see it explicitly stated anywhere. Experimental Designs Or Analyses: The experimental design is sound. As mentioned in the Claims and Evidence section, while the structure of the experiments is sound, it is missing some relevant baselines and could benefit from experiments on more datasets. Supplementary Material: I reviewed the proof in Appendix F and did not find any errors. Relation To Broader Scientific Literature: The authors give a very clear description of the relationship between their work and the prior literature. Specifically: - Previous extensions of density estimation to survival analysis are bespoke for that particular density estimator, whereas their method works for any density estimation method. (A remark on this point: it would be helpful for the authors to provide some references for the prior works they're talking about.) - Previous works rely on stronger assumptions on the competing risks, such as conditional independence or proportional hazards. - Previous work has shown that the survival function is not identifiable without additional assumptions, but it has not examined whether or not the survival function can be *bounded* even if it is not exactly identifiable. - Previous works established strictly proper scoring rules in more restricted settings ($K=2$ competing risks and conditional independence), but a strictly proper scoring rule has not been established in the more general setting of this paper. Essential References Not Discussed: See the Claims and Evidence section for a collection of baselines which should be added. Please also provide some references for prior works which used bespoke extensions of specific density estimators to survival analysis (these are mentioned in the related work without references). Other Strengths And Weaknesses: This paper is exceptionally well-written. 
I was especially impressed by the color-coded visual explanation of the core formula for their two-step method, represented in equations (6)-(10) and Figure 3. Other Comments Or Suggestions: N/A Questions For Authors: - Is there an intuitive explanation for condition (iii) in the definition of a copula (pg. 3, just above equation (1))? - Can the authors provide the derivation for the equation of the derivative $v_k(t|x)$ of the cumulative incidence function (pg. 7, just above Assumption 5.2)? - How are the class probabilities $f_k(x^{(i)})$ which are used to compute the KS calibration error (bottom of pg. 7) recovered from the survival function estimates? --- After the rebuttal phase, the authors have addressed my main concerns. I looked at the other reviews and author responses, and it seems that these concerns have been addressed as well. I have raised my score accordingly. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your comments. We appreciate your remark, "The two-step method proposed by the paper is flexible and innovative," and we are grateful for the suggestions to improve the presentation of our paper. > The suite of evaluation metrics is extensive and well-explained in the paper. The datasets used are relevant, though somewhat limited (there are only two). > The only obvious shortcoming of the paper is the lack of baseline methods in the experiments. As we state in the last paragraph of Section 6, we include additional evaluation results in the appendix. Specifically, we present our experimental results (Fig. 6) on four datasets with $K=2$ in Section G, in addition to the experimental results (Fig. 5) on two datasets with $K=2$ in the main body of our paper. Furthermore, Section G includes experimental results (Fig. 7) on two datasets with $K=3$. In summary, we used eight datasets in total, as summarized in Table 2 (page 27). We would also like to note that we used different sets of baseline methods for $K=2$ and $K=3$, and we used six baseline methods (i.e., not only three) in total. > I checked the proof of Theorem 5.3 in Appendix F and did not find any errors. A minor comment: I assume $v$ and $\hat{v}$ are the distributions over $(t,k)$ induced by each of the $v_k$ and $\hat{v}_k$. This may be apparent but I did not see it explicitly stated anywhere. We will update the description based on your comments if our paper is accepted. > Please also provide some references for prior works which used bespoke extensions of specific density estimators to survival analysis (these are mentioned in the related work without references). We would like to remind you that we show some examples of the bespoke extensions with references in the second paragraph of Section 1. > Is there an intuitive explanation for condition (iii) in the definition of a copula (pg. 3, just above equation (1))?
When $K=2$, this condition is $C(v_1,v_2) - C(u_1,v_2) - C(v_1,u_2) + C(u_1,u_2) \geq 0$. This is a necessary condition to represent a probability $\Pr(\zeta_1 < T_1 \leq \zeta_2, \zeta_3 < T_2 \leq \zeta_4)$ as $\Pr(\zeta_1 < T_1 \leq \zeta_2, \zeta_3 < T_2 \leq \zeta_4) = C(v_1,v_2) - C(u_1,v_2) - C(v_1,u_2) + C(u_1,u_2)$, where $v_1 = F_1(\zeta_2)$, $v_2 = F_2(\zeta_4)$, $u_1 = F_1(\zeta_1)$, and $u_2 = F_2(\zeta_3)$, because the probability must be non-negative. > Can the authors provide the derivation for the equation of the derivative $v_k(t|x)$ of the cumulative incidence function (pg. 7, just above Assumption 5.2)? The proof is immediate from Theorem 1 of (Tsiatis, 1975). Note that $H^{(k)}(t_1,t_2,...,t_k)$ in (Tsiatis, 1975) corresponds to $\overline{C}(1-F_{1}(t_{1}),1-F_{2}(t_{2}),\ldots,1-F_{K}(t_{K}))$ in our paper and $Q_i(t)$ in (Tsiatis, 1975) corresponds to $V_k(\infty) - V_k(t)$ in our paper. We will cite (Tsiatis, 1975) in our future revision. > How are the class probabilities $f_k(x^{(i)})$ which are used to compute the KS calibration error (bottom of pg. 7) recovered from the survival function estimates? They can be computed by using $r_{k}(t|x)$, and we can compute each $r_{k}(t|x)$ by using equation (14) (equivalently, equations (6)-(13)) since we have an estimate $F_{k}(t|x)$. We will add an explanation of the computation in our future revision. --- Rebuttal Comment 1.1: Comment: (I have also added this comment in my updated review.) After the rebuttal phase, the authors have addressed my main concerns. I looked at the other reviews and author responses, and it seems that these concerns have been addressed as well. I have raised my score accordingly.
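The rectangle inequality discussed in the rebuttal can be checked numerically. A minimal sketch, assuming the independence copula $C(u_1,u_2) = u_1 u_2$ (an illustrative choice, not a copula from the paper):

```python
import random

# Check the 2-increasing (rectangle) condition for the independence
# copula C(u1, u2) = u1 * u2 on randomly drawn rectangles
# [u1, v1] x [u2, v2] in the unit square. The quantity below is the
# probability mass the copula assigns to the rectangle, so it must be >= 0.
def C(u1, u2):
    return u1 * u2

random.seed(0)
for _ in range(1000):
    u1, v1 = sorted(random.random() for _ in range(2))
    u2, v2 = sorted(random.random() for _ in range(2))
    volume = C(v1, v2) - C(u1, v2) - C(v1, u2) + C(u1, u2)
    assert volume >= 0.0
```

For this copula the rectangle mass factors as $(v_1-u_1)(v_2-u_2)$, which makes the non-negativity explicit; for a general copula the condition must be imposed as part of the definition.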
Summary: **Summary:** The authors introduce an algorithm that can post-process density estimators to perform survival analysis. Additionally, a relaxed assumption to the typical conditional independence between event times is used, by modelling the joint distribution using copulas. Several generalisations and scenarios involving more than two competing events are analysed. ## update after rebuttal Thanks for your response. You've satisfactorily addressed all my comments. It seems I was also the most negative reviewer, and the other reviewers have somewhat swayed my decision. Given these two points, I increase my score. Claims And Evidence: The claims appear to mostly be supported by evidence; however, I have some confusions. See detailed comments below. Methods And Evaluation Criteria: - Only two datasets are evaluated --- more datasets would make sense for the problem at hand. Theoretical Claims: - Preliminaries. Up until the last sentence, it seems like we are dealing with general K. Then in the last sentence, we seem to be dealing exclusively with K=2 - "however, in this paper, we assume blah takes values from ${1,2}$". Whether the index starts at 0 or 1 is separate to my confusion. Are we dealing with K=2 or general K? - I am confused as to how we can interpret the observation (x, t, \delta) as belonging to a $K$-dimensional space. Doesn't x belong to $\mathcal{X}$, an arbitrary feature space? Can't this be of any dimension? I can understand how (t, \delta) can be interpreted in a $K$-dimensional space. - I am confused about "This implies the existence of a function $g_C$ such that". Don't we actually know exactly what this function is, described by (6-13)? That is, not only does it imply an existence, but also that we have this function. I am assuming this is the function that is actually used in algorithm 1?
The wording seems to hint that somehow this function exists but we might have to find it, whereas I don't think this is actually the case. It is essentially just the definition of the CJD, CIF and copula. - I am confused about equation 15. The symbol $\hat{r}_{b \mid x}$ is never defined or explained. What are you minimising (15) with respect to? What is the reason for the gap between algorithm 1 and the "simplified implementation"? - The abstract and intro and several other places talk about survival analysis being performed as a post-processing of density estimation procedures. However, I am not seeing this in algorithm 1, equation 14 or equation 15. Could the authors be more explicit about exactly how this is a postprocessing of density estimation? It is clear that the claim is true, as the authors later show different density estimation techniques in the experiments. Experimental Designs Or Analyses: - The first sentence of the experiments section talks about evaluating the two-step algorithm. Does this mean you don't use the previously mentioned "simplified implementation"? It is not clear which algorithm you are using. - Only two datasets are evaluated. Supplementary Material: I didn't carefully check the supplementary material. Relation To Broader Scientific Literature: I believe the results are related to the broader scientific literature, as discussed in the introduction. In particular, prior works that use density estimation to perform survival analysis are discussed. Previous works have used stronger assumptions. Essential References Not Discussed: None to the best of my knowledge. Other Strengths And Weaknesses: See the detailed comments in the Theoretical Claims section above. Other Comments Or Suggestions: See above. Questions For Authors: See above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your comments. We hope our answers resolve your concerns regarding the clarity of our paper. > Only two datasets are evaluated --- more datasets would make sense for the problem at hand. As we state in the last paragraph of Section 6, we include additional evaluation results in the appendix. Specifically, we present our experimental results (Fig. 6) on four datasets with $K=2$ in Section G, in addition to the experimental results (Fig. 5) on two datasets with $K=2$ in the main body of our paper. Furthermore, Section G includes experimental results (Fig. 7) on two datasets with $K=3$. In summary, we used eight datasets in total, as summarized in Table 2 (page 27). > Preliminaries. Up until the last sentence, it seems like we are dealing with general K. Then in the last sentence, we seem to be dealing exclusively with K=2 - "however, in this paper, we assume blah takes values from ${1,2}$". Whether the index starts at 0 or 1 is separate to my confusion. Are we dealing with K=2 or general K? Our paper deals with general K. The sentence is meant to note that the start index is different from most of the existing literature (with K=2). We will rewrite this sentence to avoid confusion. > I am confused as to how we can interpret the observation (x, t, \delta) as belonging to a $K$-dimensional space. Doesn't x belong to ${\cal X}$, an arbitrary feature space? Can't this be of any dimension? I can understand how (t, \delta) can be interpreted in a $K$-dimensional space. You are right. We mean that $(t, \delta)$ can be interpreted in a $K$-dimensional space. We will fix the description in our future revision. > I am confused about "This implies the existence of a function $g_C$ such that". Don't we actually know exactly what this function is, described by (6-13)? That is, not only does it imply an existence, but also that we have this function. I am assuming this is the function that is actually used in algorithm 1?
The wording seems to hint that somehow this function exists but we might have to find it, whereas I don't think this is actually the case. It is essentially just the definition of the CJD, CIF and copula. You are right. The function $g_C$ is exactly the combination of (6)-(13). We will fix the description in our future revision. > I am confused about equation 15. The symbol $\hat{r}_{b|x}$ is never defined or explained. What are you minimising (15) with respect to? What is the reason for the gap between algorithm 1 and the "simplified implementation"? We admit that the definition of $\hat{r}_{b|x}$ is missing, and we will fix this in our future revision. The definition of $\hat{r}_{b|x}$ is analogous to the definition of $r_{b|x}$, where each element $r_{b,k|x}$ of $r_{b|x}$ is replaced with $\hat{r}_{b,k|x}$. We minimize (15) with respect to $F_{b|x}$ (and $F_{b-1|x}$) for a given $\hat{r}_{b|x}$, which is equivalent to solving (14) for all $b$ simultaneously (because the value of (15) is equal to zero if the equality (14) holds for all $b$). Whereas we need to implement a specialized algorithm to solve (14) as we explain in Section B.3, it is much easier to implement an algorithm to minimize (15) because we can use a PyTorch library to minimize (15). We will clarify this fact in our future revision. > The abstract and intro and several other places talk about survival analysis being performed as a post-processing of density estimation procedures. However, I am not seeing this in algorithm 1, equation 14 or equation 15. Could the authors be more explicit about exactly how this is a postprocessing of density estimation? It is clear that the claim is true, as the authors later show different density estimation techniques in the experiments. Given an estimate $r_{b,k|x}$ (possibly together with an estimate in the form of $V_{k}(t|x)$) as an output of Step 1, Step 2 just solves equation (14) (or equivalently minimizes (15)) to obtain $F_{b,k|x}$.
In other words, Step 2 is simply a transformation from the representation $r_{b,k|x}$ to another equivalent representation $F_{b,k|x}$. Therefore, we use the term “postprocessing” for Step 2. > The first sentence of the experiments section talks about evaluating the two-step algorithm. Does this mean you don't use the previously mentioned "simplified implementation"? It is not clear which algorithm you are using. We used the two-step algorithm, and we used the simplified implementation in Step 2 of this algorithm. We will clarify this fact in our future revision.
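The "minimize a squared residual instead of solving the equation directly" strategy described in this rebuttal can be illustrated on a toy fixed-point problem. A minimal sketch, where the mapping `g` is a hypothetical stand-in (not the paper's $g_C$ or equations (14)-(15)):

```python
import math

# Instead of solving a fixed-point equation F = g(F) with a specialized
# solver, minimize the squared residual L(F) = (F - g(F))^2 by gradient
# descent -- the same idea as replacing equation (14) with objective (15).
def g(F):
    # hypothetical contraction mapping, chosen only for illustration
    return 0.5 * math.cos(F) + 0.3

def g_prime(F):
    return -0.5 * math.sin(F)

F, lr = 0.0, 0.2
for _ in range(200):
    residual = F - g(F)
    # gradient of (F - g(F))^2 with respect to F
    F -= lr * 2.0 * residual * (1.0 - g_prime(F))

assert abs(F - g(F)) < 1e-9  # F now (approximately) satisfies F = g(F)
```

The appeal, as the rebuttal notes, is that the residual objective can be handed to an off-the-shelf optimizer (e.g. one from PyTorch) rather than requiring a bespoke solver.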
Summary: This paper presents a framework that reframes survival analysis as a density estimation problem. By post-processing density estimates to derive survival functions, the approach enables the use of any density estimation model for survival analysis, including handling competing risks and dependent censoring. Claims And Evidence: The paper highlights several limitations of existing survival analysis frameworks, including reliance on the conditional independence assumption, issues with identifiability, and the absence of strictly proper scoring rules for cases with K > 2. It provides empirical evidence and theoretical arguments to support these claims. Methods And Evaluation Criteria: The proposed method is primarily a two-step process. In the first step, the authors estimate the cumulative incidence function, and in the second step, they compute the cumulative distribution function using the estimates obtained in the first step along with a specified copula. The paper presents the general case of a sequential approach to solving the proposed function and also provides a simplified implementation of the algorithm. Additionally, the method can be used to estimate upper and lower bounds and is flexible enough to incorporate identifiability results. The proposed approach is evaluated on two datasets and across five different models using a standard survival analysis setup. Theoretical Claims: No Experimental Designs Or Analyses: The experimental design appears standard; however, the primary concern lies with the choice and number of datasets used. Supplementary Material: No Relation To Broader Scientific Literature: The paper is primarily related to the survival analysis literature. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: - Model-agnostic framework. Weaknesses: - The experimental evaluation involves only a few datasets, raising concerns about the generalizability of the approach. 
- The proposed method appears to perform poorly with certain classes of density estimators and exhibits large variations across different methods. This issue is not thoroughly discussed in the paper. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > The experimental evaluation involves only a few datasets, raising concerns about the generalizability of the approach. As we state in the last paragraph of Section 6, we include additional evaluation results in the appendix. Specifically, we present our experimental results (Fig. 6) on four datasets with $K=2$ in Section G, in addition to the experimental results (Fig. 5) on two datasets with $K=2$ in the main body of our paper. Furthermore, Section G includes experimental results (Fig. 7) on two datasets with $K=3$. In summary, we used eight datasets in total, as summarized in Table 2 (page 27). > The proposed method appears to perform poorly with certain classes of density estimators and exhibits large variations across different methods. This issue is not thoroughly discussed in the paper. Thank you for your insightful comments. Compared to standard regression analysis, we need to deal with two types of difficulties in survival analysis: (i) estimating a probability distribution and (ii) handling censored data. The key to our approach is to decouple these two difficulties into Step 1 (which handles (i)) and Step 2 (which handles (ii)). The large variations coming from (i) in our experimental results indicate that survival analysis is difficult even if the censored data are absent (i.e., (ii) is removed), and our future work can seek the best density estimator to improve the performance of survival analysis models.
No Free Lunch from Random Feature Ensembles: Scaling Laws and Near-Optimality Conditions
Accept (poster)
Summary: The paper investigates random-feature ridge regression, comparing a single large model against multiple smaller models (ensembles). The authors demonstrate that ensembles can achieve near-optimal performance when the total feature count remains high in the overparameterized regime, while in the underparameterized regime, error scaling laws depend on the relationship between ensemble size and model size. Claims And Evidence: The claims are supported by clear evidence. Methods And Evaluation Criteria: I appreciate the real data experiments via CIFAR10 and MNIST. The authors first performed fully numerical experiments to support their theory, and then conducted real data experiments. Theoretical Claims: I did not check how equation (9) is derived. To my understanding, all the comparisons below come from equation (9); the authors should give a clearer demonstration in Section 3.1, at least giving clear sources for where these equations come from. Experimental Designs Or Analyses: In Sections D.1 and D.2, the authors should clearly point out where the experimental results are. Supplementary Material: I have checked all supplementary materials. Relation To Broader Scientific Literature: The authors missed two important references, which already rigorously gave the risks of a single random feature model and of ensembled random feature models. The authors should discuss these references adequately and compare the results. [1] Mei S, Montanari A. The generalization error of random features regression: Precise asymptotics and the double descent curve. Communications on Pure and Applied Mathematics, 2022, 75(4): 667-766. [2] Meng, X., Yao, J., & Cao, Y. (2024). Multiple descent in the multiple random feature model. Journal of Machine Learning Research, 25(44), 1-49. Essential References Not Discussed: I have some concerns about the essential contribution of this paper. The rigorous excess risk curve has already been investigated by previous papers.
[1] gave the rigorous theoretical values of risks of the random feature model, and [2] extended the single random feature model to the ensembled random feature model. With $y=f_0(x)+\epsilon$, people can also easily get rigorous theoretical values of risks of the ensembled random feature model based on the analysis of [1] and [2]. The authors should discuss more on these two references. [1] Mei S, Montanari A. The generalization error of random features regression: Precise asymptotics and the double descent curve. Communications on Pure and Applied Mathematics, 2022, 75(4): 667-766. [2] Meng, X., Yao, J., & Cao, Y. (2024). Multiple descent in the multiple random feature model. Journal of Machine Learning Research, 25(44), 1-49. Other Strengths And Weaknesses: The paper is generally well written. However, there are two main concerns: (i) the literature-related issues given above (please also see my questions below), and (ii) the need for clearer sources in Section 3.1. Other Comments Or Suggestions: See above. Questions For Authors: Can the authors rely on the results presented in [1] and [2]? In my understanding, the solution for random feature models in [2] directly corresponds to the solution of ensembled random feature models. This follows from the fact that for different vectors $x_1, x_2$, $\min_{x_1, x_2} \left( f(x_1) + g(x_2) \right) = \min_{x_1} f(x_1) + \min_{x_2} g(x_2)$, which implies that $x_1^* = \arg\min f(x_1), \quad x_2^* = \arg\min g(x_2)$. Code Of Conduct: Affirmed. Overall Recommendation: 3
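The separability identity the reviewer invokes can be illustrated numerically. A minimal sketch with arbitrary toy objectives (the rebuttal that follows argues this identity does not apply to the *joint* ensemble loss of ref. [2], whose square term couples the variables):

```python
import itertools

# For an objective that is a sum of terms in disjoint variables,
# joint minimization decouples into separate minimizations:
# min_{x1,x2} [f(x1) + g(x2)] = min_{x1} f(x1) + min_{x2} g(x2).
def f(x1):
    return (x1 - 2.0) ** 2

def g(x2):
    return (x2 + 1.0) ** 2

grid = [i / 10.0 for i in range(-50, 51)]
joint = min(f(x1) + g(x2) for x1, x2 in itertools.product(grid, grid))
decoupled = min(f(x1) for x1 in grid) + min(g(x2) for x2 in grid)
assert abs(joint - decoupled) < 1e-12
```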
Rebuttal 1: Rebuttal: Thank you for your review. We respond to your questions and concerns as follows: > In Sections D.1 and D.2, the authors should clearly point out where the experimental results are. **Response:** Thank you for your suggestion. We will update the text to point this out. Section D.1 on synthetic tasks pertains to the numerical error points in Figures 3 and S2.A-C. Section D.2 on the numerical experiments for binarized MNIST and CIFAR-10 with ReLU features pertains to Figures 1, 2, 4, S1, S2.D-F, S3, S4, S5, S6, and S7. > The authors missed two important references ([1], [2])...compare the results. Thank you for bringing these works to our attention. We were aware of ref. [1] and many other papers deriving the error... These papers focus on the over-fitting and double descent / multiple descent phenomena in their analysis. However, these effects vanish entirely when the ridge parameter is optimized at each sample size. By studying the behavior of the test risk at optimal ridge, we obtain more practical results that will apply even in settings where the ridge parameter is set to its optimal value using cross-validation. > I have some concerns on the essential contribution ... these two references. > Can the authors rely on the results ...$ \arg\min_x f_2(x). $ Thank you for bringing these papers to our attention. In short, no, we cannot rely on either of them to derive the test risk of ensembled RFRR. As for ref. [1], the data model considered here consists of samples drawn uniformly from the unit sphere, whereas the results we use apply to arbitrary (but nicely behaved) data distributions $\mathbf{x} \sim \mu_{\mathbf{x}}$. As for ref. [2], the loss function considered there encodes *joint* training of the parameters $\mathbf{a}\_1 \equiv [a\_1, \dots, a\_{N\_1}]^\top $ and $\mathbf{a}\_2 \equiv [a\_{N\_1 + 1}, \dots, a\_{N\_2}]^\top$.
This can be written as $$ \frac{1}{n} \sum_{i=1}^n \left( y - f_1(\mathbf{a}\_1) - f_2(\mathbf{a}\_2) \right)^2 + \frac{d}{n} \lambda ||\mathbf{a}\_1||^2 + \frac{d}{n} \lambda ||\mathbf{a}\_2||^2 $$ The square term here cannot be factored into a sum of contributions depending separately on $\mathbf{a}\_1$ and $\mathbf{a}\_2$, as you have assumed in your comment. In the notation of [2], the model we consider corresponds to a decoupled loss function which optimizes $\mathbf{a}\_1$ and $\mathbf{a}\_2$ separately: $$\frac{1}{n} \sum_{i=1}^n\left( y - f_1(\mathbf{a}\_1) \right)^2 + \frac{d}{n} \lambda ||\mathbf{a}\_1||^2+ \frac{1}{n} \sum\_{i=1}^n\left( y - f_2(\mathbf{a}\_2) \right)^2 + \frac{d}{n} \lambda ||\mathbf{a}\_2||^2 \,,$$ with additional terms if $K>2$. > Additional response to reviewer comments: Many of your questions and concerns revolve around the derivation of the risk estimate for random feature ensembles, and its presence in prior literature. There are many papers which derive the bias and variance terms for random-features regression using a variety of methods, which we have referenced (Atanasov et al., 2024; Canatar et al., 2021; Simon et al., 2023; Adlam & Pennington, 2020; Rocks & Mehta, 2021; Hastie et al., 2022; Zavatone-Veth et al., 2022), and we are happy to add discussions of [1] and [2]. We do not claim to have contributed the error formula for ensembles -- our contribution is a novel and informative analysis of the test risk formula. The error formulas for RFRR ensembles derived in previous works are not easily interpretable, and studying the implications of these error formulas is an important task unto itself. We believe that our paper has addressed an important gap in our understanding of ensembled random feature models. We have used the (known) bias-variance decomposition of the test risk of RFRR to study the tradeoff between model size and ensemble size.
Specifically, we have shown that: - Ensembling is *never* optimal under a fixed parameter budget at optimal ridge. - Ensembling can achieve near-optimal performance in both the overparameterized and underparameterized regimes, with precise spectral conditions for near-optimal scaling in the underparameterized regime. If accepted, we will clarify these contributions in the abstract and introduction. We may also change the title of the paper to "No Free Lunch from Random Feature Ensembles: Scaling Laws and Near-Optimality Conditions" to highlight all of our main contributions. [1] Mei S, Montanari A. The generalization error of random features regression: Precise asymptotics and the double descent curve J. Communications on Pure and Applied Mathematics, 2022, 75(4): 667-766. [2] Meng, X., Yao, J., & Cao, Y. (2024). Multiple descent in the multiple random feature model. Journal of Machine Learning Research, 25(44), 1-49.
Summary: ## Updates after author discussion Thanks a lot for all the clear discussion. A lot of my issues/confusion with the paper have been addressed in the comments, and I'm convinced that the theory just needs some cleaning up to be fully clear. The paper then tells an interesting -- and, to my knowledge, novel -- story about ensembling. I've increased my score to a 4/5 to vote for an accept. I would just recommend to the authors to do a few careful scrubbing passes over the theoretical sections to make sure notation is being used consistently and all the asymptotics are carefully defined *and* explained (e.g., the fact that $\lambda \to 0$ is (1) done for analytical convenience, but (2) has some justification in the literature is helpful discussion that should definitely be in the paper!) Best, Reviewer Ef22 ## Original review before author response This is a theoretical paper studying kernel ridge regression with random features. Specifically, it asks whether ensembles of multiple models are effective when the optimal ridge parameter is used. The authors show that, in fact, when holding the total number of random features constant across all models (representing a fixed compute budget), the test risk is minimized by not ensembling. However, the authors note that ensembles can achieve *nearly* optimal performance in the overparameterized regime. The authors validate their theory in small-scale synthetic and real experiments, which point to some interesting future directions around when the ridge parameter is not optimally chosen. Claims And Evidence: I think that the theoretical results, as stated, back up the overall premise of the paper. And the empirical results also effectively illustrate the theoretical results. I do have some issues with the theoretical development, which are listed below. The one portion of the paper that I didn't see as offering much evidence was Section 6. 
This felt a little tacked on, and I wasn't sure how it related to the main story of the paper, which is (to my reading) about ensembling not being effective. I think some more discussion of what the motivation / takeaways of this section are would be helpful. Methods And Evaluation Criteria: Yes, the paper is mostly theoretical, so there isn't too much choice of methodology here. The only comments I had were from reading Appendix D (details on experimental setups): 1. Section D.1 states that the $\eta_t$ are normalized so that $\sum_t \eta_t = 1$ and $\sum_t \bar w_t^2 \eta_t = 1$. Why is it that these are both simultaneously satisfiable? 2. In Section D.2, why is the variance of $V_{ij}^k$ chosen to be $2/D$? Is this a result from Lee et al. (2018)? If so, some quick rewording might clarify this. Theoretical Claims: There were a few times where I thought the theoretical development was hard to follow because it seemed to be lacking definition, was not fully rigorous, or seemed to have skipped a few steps in proofs. I've bulleted out these issues below: **Lacking definition** 1. "consider the 'featurization' transformation $g$" -- I think this could really use a precise example to clarify what $g$ is supposed to be. 2. $\mu_v$ was used around the second column of line 62, but I didn't see a definition for it. 3. "As $N \to \infty$, this stochastic kernel converges to the deterministic kernel $H(x,x')$". Does this not require a careful choice of $g$ and $\mu_v$? 4. "$f(x)$ and the true target function $f(x)$" -- I think this is just a typo 5. "where $\mathcal{E}_g^1$ is the 'true' risk" -- How is the definition of $E^1_g$ in equation 9 not the "true" risk? I didn't really understand what $\mathcal{E}$ is supposed to be. 6. Eq 7 seems to define $E_g$ for a fixed $f$. But then Eqs 12-14 decompose $E_g$ using expectations over $Z$ (i.e., over the random features making up $f$). These seem inconsistent with one another. 7.
In Section 5.1, "$\lambda = 1/N = 0$". It doesn't seem possible that $\lambda$ could be both $1/N$ and 0. **Missing Rigor** 1. I think it's fine to not show the derivation of Eq 9. But it's written as an approximate equality without ever saying what the slack in this approximation is. I think this should be stated. 2. Eq 21 seems like a major point of the paper, given that it's the main result showing near optimality of ensembling in the overparameterized regime. But its proof in Appendix C.2 doesn't seem fully rigorous. First, in the proof $1/N$ and $\lambda$ are stated to be "[assumed] to be on the same order of magnitude" without stating what this means. Second, the proof uses a number of approximate equalities (Eqs C.15-C.17) without specifying what the approximation is or how it affects the proof. Overall, I think this result should be wrapped into a lemma / theorem environment with carefully stated assumptions and a complete proof. 3. In Appendix C (derivation of the scaling laws), there were a lot of approximations used. It's not clear how these affect the proof. Overall, my major issue with rigor is that I don't think it's appropriate to use approximate equalities in a formal proof without precisely defining what terms the $\approx$ sign is hiding and verifying that dropping those terms doesn't affect the result. **Missing Steps in Proofs** Appendix B.1 contains the proof of Theorem 4.1, which says that when minimizing the test risk over the ridge parameter $\lambda$, the test risk decreases monotonically with the number of features, number of ensemble elements, and number of training datapoints. It's overall not clear to me how this proof accounts for the fact that we are minimizing over $\lambda$. There's a reference to Eq 11 and the fact that its fixed point can be held constant by scaling up $\lambda$. But we're optimizing over $\lambda$. So I'm not totally sure what this is pointing out.
Also, the proof of monotonicity in $N$ (the number of model features) uses Eq. 9 to derive its result. But Eq 9 is an approximate equality, so I don't think that analyzing it can lead us to rigorous conclusions. Experimental Designs Or Analyses: Yes, the paper is mostly theoretical, so there isn't too much choice of experimental design here. Supplementary Material: Yes, the only portion I did not read is Appendix B.2 (the proof of theorem 5.1); I was a little too confused about some of the notation / previous results to really dig into that proof. Relation To Broader Scientific Literature: Ensembling is an important idea in the machine learning literature; e.g. random forests are hugely successful in practice, and ensembles of deep models have been proposed recently for various purposes such as uncertainty estimation. As far as I know, most work on ensembling does not study the very important tradeoff in terms of number of parameters (representing compute) and statistical accuracy. I think this paper is an interesting step towards understanding that tradeoff. Essential References Not Discussed: Nothing as far as I know! Other Strengths And Weaknesses: I've listed everything in the sections above. Other Comments Or Suggestions: I would just point out that, while there's not a statistical gain in ensembling here (in fact it seems there may be a statistical loss!) there is a computational gain, as the compute required for $M$ features is $O(M^3)$, whereas $M$ features spread over $K$ ensembles require $O(M^3 / K^2)$. I think this would be good to bring up around the discussion of Theorem 5.1. Questions For Authors: My main questions that influenced my review are: 1. Are all the approximate equalities really justified in the proofs / theoretical statements? 2. Can the proof of theorem 5.1 be expanded on or my understanding of it corrected? 3. What is the purpose of Section 6? Code Of Conduct: Affirmed. Overall Recommendation: 4
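A quick back-of-the-envelope sketch of the compute comparison raised in the comment above (a hypothetical cost model in which only the cubic-in-features ridge solve is counted and all constants are dropped):

```python
# Illustrative compute accounting: solving ridge regression with M random
# features costs ~M^3 operations, while an ensemble of K models with M/K
# features each costs ~K * (M/K)^3 = M^3 / K^2 operations.
def single_model_cost(M: int) -> int:
    return M ** 3

def ensemble_cost(M: int, K: int) -> int:
    assert M % K == 0, "assume features divide evenly across ensemble members"
    return K * (M // K) ** 3

M, K = 1024, 8
ratio = single_model_cost(M) / ensemble_cost(M, K)
print(ratio)  # → 64.0, i.e. K^2: the ensemble's solves need K^2x less compute
```

The ensemble's $K$ solves are also independent, so on top of the $K^2$ reduction in total operations they can run in parallel.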
Rebuttal 1: Rebuttal: Thank you for your detailed review. Below, we address your questions and concerns: > "The one portion ... would be helpful." **Response**: Thank you for this suggestion. We will add more justification. While ensembles are never optimal, they allow parallelization and can be *near-optimal*, hence useful in practice. Section 5.1 addresses near-optimality in the overparameterized regime. The scaling laws derived in section 6 address near-optimality in the under-parameterized regime -- necessary for a complete characterization. > 1. **Section D.1** states ... satisfiable? **Response:** We will clarify that both the $\eta_t$ and $\bar{w}_t$ are normalized to satisfy these constraints. > 2. In **Section D.2**, why ... clarify this. **Response:** Yes, we choose $\mathbf{V}^k_{ij}\sim \mathcal{N}(0, 2/D)$ to converge to the NNGP kernel in Lee et al. (2018). We will clarify this in the text. >1. **"consider the *featurization* transformation $g$"**. I ... supposed to be. >2. **$\mu_\nu$** was ... for it. **Response**: We have revised the text: "Define the random features $\mathbf{\psi}(\mathbf{x}) \in \mathbb{R}^N$ by $\left[ \mathbf{\psi}(\mathbf{x})\right]\_n = g(\mathbf{v}\_n, \mathbf{x})$, where the $\mathbf{v}\_n$ are random parameter vectors sampled independently from measure $\mu\_{\mathbf{v}}$ on $\mathbb{R}^C$. Here, the function $g: \mathbb{R}^C \times \mathbb{R}^{D} \mapsto \mathbb{R}$ is a "featurization transformation," often taking the form $g(\mathbf{v}\_n, \mathbf{x}) = \varphi (\mathbf{v\_n}^\top \mathbf{x})$ for some nonlinear activation function $\varphi(\cdot)$... " >3. **"As $N \to \infty$, this... and $\mu_\nu$? **Response**: A careful choice of $g$ and $\mu\_{\mathbf{v}}$ is required to ensure convergence to *a particular* kernel (see point 2). 
However, the stochastic kernel in general converges to *some* deterministic kernel given by $H(x, x') = \mathbb{E}\_{\mathbf{v} \sim \mu\_{\mathbf{v}}} g(\mathbf{v}, \mathbf{x})g(\mathbf{v}, \mathbf{x'})$. We will clarify. >6. **Equation (7)** seems to ... with one another. **Response**: This is a standard bias-variance decomposition of the error (see [2]). An expectation over $\mathbf{Z}$ is not necessary on the left side of eq. 12 because the test risk **concentrates** over $\mathbf{Z}$. >7. In **Section 5.1**, *"$\lambda = 1/N = 0$"*.... $1/N$ and 0. **Response**: To clarify, we now say "we expand the risk estimate $E_g^K$ (eq. 9) as power series in $1/N \gtrsim 0$ and $\lambda \approx 0$". > Comments regarding the use of eq. 9, and the slack in this estimate ("where $\mathcal{E}\_g^1$ is ... to be."; "I think... should be stated."; "In Appendix C ... result."; "Also, the proof rigorous conclusions."): **Response**: Thank you, we agree that a more thorough explanation would be helpful. We call the "true" risk $\mathcal{E}\_g$ the error for a particular realization of the random parameters $\mathbf{v}\_n$. The key result of Defilippis *et al.* (2024) is to show that the *distribution* over $\mathcal{E}\_g$ values *concentrates* around a *deterministic-equivalent* expression $E\_g$, which depends on the feature count $N$. The difference between the *true* random-features generalization error $\mathcal{E}\_g$ and its deterministic-equivalent **$E\_g$** is controlled by a **multiplicative concentration bound**: $$ |\mathcal{E}\_g - E\_g| \le C\, \bigl(\tfrac{1}{\sqrt{N}} + \tfrac{1}{\sqrt{P}}\bigr)\,E\_g \quad\text{(with high probability)}, $$ for a constant $C$. We will clarify the notion of "deterministic" equivalence between the random quantity $\mathcal{E}\_g$ and the deterministic quantity $E\_g$ at large $P$ and $N$. >Eq 21 seems ... complete proof. 
**Response:** The approximate equalities in Eqs. C.15-C.17 indicate that we are neglecting higher order terms in $\lambda$ and $1/N$. The result is a rigorous first-order approximation in these variables. To clarify, we will replace $\approx$ with $\asymp$ (equal to leading order) in the derivation. > Appendix B.1... pointing out. **Response:** Thanks, we will clarify these steps in the SI. For increasing $N$ we could argue as follows: Denote by $E_g(N, \lambda)$ the test risk with $N$ random features and ridge $\lambda$. Consider $N \to N'$ for $N'>N$. We show that there exists a ridge parameter $\lambda'$ such that $E_g(N', \lambda') \leq E_g(N, \lambda)$. Next, we have that $$ \min_\lambda (E\_g(N', \lambda)) \leq E\_g(N', \lambda') \leq E\_g(N, \lambda)$$ Because this is true for any $\lambda$, we may assign $\lambda = \operatorname{argmin}\_{\lambda} E\_g (N, \lambda)$, completing the proof. Analogous steps can be applied when increasing $P$ or for the joint transformation of $N$ and $K$ with $KN=M$. References: [1] Defilippis et al., https://arxiv.org/abs/2405.15699. [2] Adlam et al., https://arxiv.org/abs/2011.03321. --- Rebuttal Comment 1.1: Comment: (reposting this as a Rebuttal Comment, as I didn't realize that authors can't see Official Comments. Sorry about that!) Thanks for all the replies! I have a few follow-up points listed below: 1. On section 6: these updates sound great. In addition to the computational complexity being lowered from $O(M^3)$ to $O(M^3 / K^2)$, I totally agree with the point that ensembling allows parallelization. This is a really interesting statistical-computation tradeoff. 2. On Eq (7): To clarify, I'm confused because Eq 7 seems to be a function of Z. In particular, a random $Z$ defines a given $f(x)$. And Eq (7) refers to a fixed $f(x)$, that is, a fixed $Z$. On the other hand, the right hand side of Eq. (12) does not depend on $Z$, as it takes expectations over $Z$. I would believe that Eq. 
(7) will concentrate in $Z$ for large numbers of features. But then there needs to be some kind of limit taken here for Eq (7) and Eq (12) to both hold. But maybe I'm misunderstanding the point about concentration in $Z$. 3. On the high probability bound between $\mathcal{E_g}$ and $E_g$ -- are the commas here typos? Should the bound be $C \left( \frac{1}{\sqrt{N}} + \frac{1}{P} \right) E_g$? Just want to make sure I'm following here! Also, is there an exact reference from Defilippis et al. that has this result? 4. On the approximate equalities: is it correct to say that every approximate equality in the paper actually means "this is an equality when dropping terms that are of size $\lambda^2$ or $1/N^2$ or smaller?" If so, doesn't this assume we're working under an asymptotic model that has $\lambda \to 0$? Why should we expect this? 5. One final thing from the review that wasn't addressed: what does it mean that $1/N$ and $\lambda$ are "[assumed] to be on the same order of magnitude"? This sounds like the imagined asymptotic model not only has $\lambda \to 0$, but also has $N \to \infty$ *and* they're going at about the same rate. Can the authors give some more clarification here? Thank you! --- Reply to Comment 1.1.1: Comment: Thank you for your constructive feedback, which has helped us improve the clarity of our results. We respond to your comments below: > On section 6: ... statistical-computation tradeoff. **Response**: We are glad that you agree that this is an interesting question, and will emphasize these points about parallelization and improved computational complexity in ensembles in the final version if accepted. > On Eq (7): To clarify, I'm confused because Eq 7 seems to be a function of Z... **Response**: Thank you for your comment, which we did not have space to fully address in our last reply. We will make some edits to section 2 to clarify the meaning of the error formulas, and to make sure that the meaning of $E_g^K$ is consistent throughout. 
First, we will replace the $E_g^1$ in equation 7 with the "true" error symbol $\mathcal{E}_g^1$ to indicate that this depends on a particular realization of $\mathbf{Z}$: $$ \mathcal{E}\_g^1=\mathbb{E}\_{\boldsymbol{x} \sim \mu\_{\boldsymbol{x}}}\left[\left(f(\boldsymbol{x})-f\_*(\boldsymbol{x})\right)^2\right]+\sigma\_\epsilon^2 \qquad (7)$$ We will then introduce the *deterministic equivalent* error $E_g^1$, which is the deterministic quantity about which the "true" error $\mathcal{E}_g^1$ concentrates. This satisfies $$ |\mathcal{E}\_g^1 - E\_g^1| \le C \bigl(\tfrac{1}{\sqrt{N}} + \tfrac{1}{\sqrt{P}}\bigr) E\_g^1 \quad\text{(with high probability)}, $$ (commas in last reply were a formatting error). As a shorthand, we may write $\mathcal{E}_g \simeq E_g$, with $\simeq$ indicating deterministic equivalence with a multiplicative error bound of this type. We will update equation 9 so that it reads as: $$ \mathcal{E}\_g^1 \simeq E\_g^1 = \frac{1}{1-\gamma\_1}\left[-\rho \kappa_2^2 \mathrm{tf}\_1^{\prime}\left(\kappa\_2\right)+(1-\rho) \kappa\_2 \mathrm{tf}\_1\left(\kappa\_2\right)+\sigma\_\epsilon^2\right] \qquad (9)$$ So, the error formula on the right of eq. 9 is exactly equal to $E_g^1$. $E_g^1$ is, in turn, a deterministic equivalent approximation of the "true" error $\mathcal{E}_g^1$. We will similarly have $\mathcal{E}_g^K \simeq E_g^K$ for $K>1$. >On ... from Defilippis et al. that has this result? **Response**: Defilippis et al. is indeed the reference with this exact result. See equation 3 in their paper. >On the approximate equalities: ... we expect this? **Response**: No, it would not be correct to say that every approximate equality in our paper corresponds to dropping higher order terms in $\lambda$ and $1/N$. There are two different types of approximation we are making. The first is to replace the random quantity $\mathcal{E}_g^K$ with its deterministic equivalent $E_g^K$. 
All of our results rest on this simplification, which is justified due to the multiplicative concentration bounds proven by Defilippis et al., and by our numerical simulations. Theorems 4.1, 5.1, and corollary 5.5 apply exactly to $E_g^K$ with no further approximations, and with no assumptions about the ridge. Additional approximations are used in Sections 5.1 and 6 to study the behavior of $E_g^K$ in limits corresponding to the overparameterized and underparameterized regimes, respectively. In section 5.1, we expand $E_g^K$ in the limit $N \gg 1$, keeping $P \sim \mathcal{O}(1)$, so that $N \gg P$ (overparameterized). Equation 21 is the **only** equation in the paper that assumes an asymptotically small ridge parameter $\lambda \approx 0$, and this is indicated in the correction term $\mathcal{O}(\lambda^2, \lambda/N, 1/N^2)$. Here, we have assumed small ridge for analytical convenience, but the result (eq. 21) provides a good explanation of our empirical results in Figs. S2.C and S2.F at optimal ridge as well. Specifically, the overlap between the green curves ($P = 10$) and dashed black lines shows that ensembles achieve *near-optimal* performance in the heavily overparameterized regime. Simon et al. (2023) also provide an argument for why optimal ridge is always small in the overparameterized regime (see theorem 2 there). We will include discussion of this in the final version of the paper. > One final thing ... "[assumed] to be on the same order of magnitude"? ... more clarification here? **Response**: To clarify this point, we will remove the statement that we are assuming that $\lambda$ and $1/N$ are "on the same order of magnitude," and instead use the terminology "we expand as power series in $1/N \gtrsim 0$ and $\lambda \approx 0$." Again, this only applies to the derivation of eq. 21, and higher-order contributions are accounted for in the additive correction term there ($\dots + \mathcal{O}(\lambda^2, \lambda/N, 1/N^2)$). 
> To conclude Since this is our last opportunity to reply, we want to again thank the reviewer for giving such a careful reading of our paper, and for helping us improve the clarity of our presentation.
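For readers following this thread, the single-model-versus-ensemble comparison under a fixed feature budget can also be simulated directly. Below is a minimal NumPy sketch, assuming synthetic Gaussian data, ReLU random features, and arbitrary illustrative hyperparameters; it is not the paper's experimental setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression task: noisy linear target in D dimensions.
D, P, P_test = 20, 100, 2000
w_star = rng.standard_normal(D) / np.sqrt(D)
X = rng.standard_normal((P, D))
X_test = rng.standard_normal((P_test, D))
y = X @ w_star + 0.1 * rng.standard_normal(P)
y_test = X_test @ w_star

def rf_ridge_predict(X_tr, y_tr, X_te, N, lam, rng):
    """Fit ReLU random-feature ridge regression with N features; return test predictions."""
    V = rng.standard_normal((X_tr.shape[1], N)) / np.sqrt(X_tr.shape[1])
    Phi_tr = np.maximum(X_tr @ V, 0.0)
    Phi_te = np.maximum(X_te @ V, 0.0)
    w = np.linalg.solve(Phi_tr.T @ Phi_tr + lam * np.eye(N), Phi_tr.T @ y_tr)
    return Phi_te @ w

M, K, lam = 512, 8, 1e-3
# One model with all M features vs. an average of K models with M/K features each.
err_single = np.mean((rf_ridge_predict(X, y, X_test, M, lam, rng) - y_test) ** 2)
preds = [rf_ridge_predict(X, y, X_test, M // K, lam, rng) for _ in range(K)]
err_ensemble = np.mean((np.mean(preds, axis=0) - y_test) ** 2)
print(f"single: {err_single:.4f}  ensemble: {err_ensemble:.4f}")
```

Sweeping `lam` for each configuration would reproduce the comparison at (approximately) optimal ridge; with a fixed, possibly suboptimal ridge, either configuration can win, which is consistent with the robustness point discussed in the thread.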
Summary: In the context of random feature high-dimensional ridge regression, this paper investigates the problem of training an ensemble of independent models and the trade-off between ensemble size and model size for a fixed total number of features. The authors prove a 'no free lunch' theorem, showing that increasing the ensemble size while keeping the total number of features fixed leads to higher test risk, making a single large model the optimal choice, provided the ridge parameter is fine-tuned. However, in the overparameterized regime, small ensembles can achieve near-optimal performance over a wider range of ridge parameter values, making them more robust than single models. The authors derive scaling laws showing that while optimal error scaling is always achieved by increasing model size with a fixed ensemble size, near-optimal scaling can be achieved under certain conditions on the kernel and task eigenstructure. These findings are validated through numerical simulations on synthetic data and real-world datasets. Claims And Evidence: All claims are supported by clear and convincing evidence. Methods And Evaluation Criteria: The proposed methods are well-suited for the problem considered. Theoretical Claims: I checked the correctness of the proofs and I have no issues to discuss. Experimental Designs Or Analyses: I have no issues to discuss. Supplementary Material: I have reviewed the supplementary material in its entirety. Relation To Broader Scientific Literature: This paper builds on prior works showing that single large models can outperform ensembles when optimally tuned and extends the theoretical understanding of random feature generalization to the ensemble setting. The authors also contribute to the literature on scaling laws by deriving how test risk scales with ensemble and model size, providing insights into the trade-offs between the two quantities. 
Essential References Not Discussed: I am not aware of any relevant references that have been omitted. Other Strengths And Weaknesses: This work improves the current understanding of generalization and scaling laws for RFRR, extending previous findings by tackling the problem of ensemble learning, which is closely related to practical issues and applications. The presentation is clear and provides valuable insights that could apply to other settings, such as explaining the apparent violation of the "no free lunch from ensembles" principle. Moreover, all claims are supported by extensive numerical simulations. I have not identified any major weaknesses. Other Comments Or Suggestions: Typo on line 121, second column: "et al." is repeated twice Questions For Authors: I have no important questions. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your review!
Summary: The paper investigates the performance of random feature ensembles and discusses whether ensemble models outperform a single model when the total number of parameters is fixed. The theoretical analysis is based on random feature ridge regression, while the empirical studies are performed on a binarized CIFAR10 task. The paper provides two main results: the first shows that RF ensemble models always benefit from larger ensemble members and more ensembles, while the second shows that for a fixed total parameter count, increasing the ensemble size $K$ degrades performance at the optimal ridge parameter. Moreover, the paper also derives a scaling law for underparameterized ensembles. Claims And Evidence: The paper provides sufficient theoretical analysis to support the claims, and the experiments based on random feature ridge regression on the binarized CIFAR10 task also show convincing results. Methods And Evaluation Criteria: The theorems and results provided in this paper are easy to understand, and several works referenced in this paper already give a similar analysis showing that decreasing the number of random features $N$ leads to poor performance. The proposed results are clear and make sense for ensemble RF models. Theoretical Claims: The paper presents theoretical analysis to prove the theorems, and the claims are proved based on prior works and some straightforward inequalities. Experimental Designs Or Analyses: The experimental studies are sufficient and verify the claims in this paper. Figure 1 shows that increasing the number of samples $P$ and the ensemble member size $N$ reduces the predictor error, while Figure 2 shows that increasing the number of ensemble members $K$ while fixing the total number of random features $M$ degrades the performance of the ensemble model. Supplementary Material: The supplementary material contains a sufficiently detailed account of the proofs. 
Relation To Broader Scientific Literature: The results presented in this paper are clear and evident; the theoretical analysis is mainly based on prior works, and the results mainly focus on the random feature ridge regression model. Whether the results are useful for other machine learning models or deep neural networks is unclear. Essential References Not Discussed: No additional references need to be discussed here. The authors have covered all the essential studies in the experiments or the related work. Other Strengths And Weaknesses: Weaknesses: 1. The contribution of this paper is limited; the results are evident and can be easily derived from prior works. 2. The results are mainly based on the random feature ridge regression model; whether they apply to deep learning models is not discussed. 3. The layout of the figures in this paper is irregular, as are the fonts in these figures. Other Comments Or Suggestions: This is a paper with limited contributions; the authors should provide more interesting findings, such as extending the results to the deep learning area or to other ensemble models. Questions For Authors: Most of the experiments are based on the random feature (RF) model. Do the same results hold for larger ensemble models? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: # Rebuttal to Reviewer 2SLL (Complete) Thank you for your review and questions. It appears that you are convinced of the correctness of our results, but have concerns about the significance of the contribution. We will respond to your concerns and questions individually below: >1. The contribution of this paper is limited; the results are somewhat evident and can be derived from prior works. **Response:** The error formulas for RFRR ensembles derived in previous works are not easily interpretable, and studying the implications of these deterministic equivalent errors is an important task unto itself. We believe that our paper has addressed an important gap in our understanding of ensembled random feature models. We have used the (known) bias-variance decomposition of the test risk of RFRR to study the tradeoff between model size and ensemble size. Specifically, we have shown that: - Ensembling is *never* optimal under a fixed parameter budget at optimal ridge. - Ensembling can achieve near-optimal performance in both the overparameterized and underparameterized regimes, with precise spectral conditions for near-optimal scaling in the underparameterized regime. If accepted, we will clarify these contributions in the abstract and introduction. We may also change the title of the paper to "No Free Lunch from Random Feature Ensembles: Scaling Laws and Near-Optimality Conditions" to highlight our contributions beyond Theorem 4.1. >2. The results mainly pertain to the random feature ridge regression model; whether they apply to deep learning models remains unaddressed. **Response:** While we are ultimately interested in understanding the utility of ensemble learning using state-of-the-art machine learning models, random-features regression provides an important "ground-floor" to investigate the utility of ensemble learning in a tractable setting. 
Furthermore, because of the approximate correspondence between RFRR and deep networks trained in the lazy learning regime [3], our results already bear some relevance to deep ensembles. Understanding the properties of deep ensembles trained in the rich regime is a critical research direction for our future work, but will be better presented with reference to these baseline results in linear models, which have already saturated the page limit for this venue. Our results also add to a long history of research on random-feature models in analogy to deep networks. Another example of this fruitful correspondence is work on fine-grained bias-variance decompositions [2]. We have also performed numerical experiments for deep ensembles. In these experiments, we train ensembles of deep convolutional neural networks on a computer vision task (CIFAR10 image classification) using $\mu P$ parameterization, which keeps training dynamics consistent across widths [1]. In addition to the weight decay, there is a "richness" parameter $\gamma$ which controls the amount of feature-learning in the network. Our simulations show that a large network outperforms any ensemble of smaller networks with the same total size when both the weight decay and "richness" parameter are tuned to their optimal values. If it will have a positive impact on your evaluation, we are willing to add these numerical results to the supplemental material. This result is available at this anonymized github repo: https://anonymous.4open.science/r/NoFreeLunchRandomFeatureEnsembles/README.md >3. The layout of the figures is irregular, and the fonts in these figures could be improved. **Response:** Thank you for your feedback; we are open to any and all suggestions on how to improve the clarity and appearance of our figures for the final version of this paper if accepted! > "Most of the experiments are based on the random feature (RF) model. Would the same results hold for larger ensemble models?" 
**Response:** See our response to 2. above. [1] Nikhil Vyas, Alexander Atanasov, Blake Bordelon, Depen Morwani, Sabarish Sainathan, and Cengiz Pehlevan. Feature-learning networks are consistent across widths at realistic scales, 2023. URL https://arxiv.org/abs/2305.18411. [2] Ben Adlam and Jeffrey Pennington. Understanding double descent requires a fine-grained bias-variance decomposition, 2020. URL https://arxiv.org/abs/2011.03321. [3] Chizat, L., Oyallon, E., & Bach, F. (2020). On lazy training in differentiable programming (arXiv:1812.07956v5). Retrieved from https://arxiv.org/abs/1812.07956
SparseLoRA: Accelerating LLM Fine-Tuning with Contextual Sparsity
Accept (poster)
Summary: The paper proposes to accelerate LoRA fine-tuning with contextual sparsity. Tailored for fine-tuning, they propose a lightweight, training-free SVD sparsity estimator to reduce computation overhead. Experimental results show that they can speed up LoRA fine-tuning by 1.4x. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Not applicable. Experimental Designs Or Analyses: Yes Supplementary Material: Not applicable. Relation To Broader Scientific Literature: The motivation is related to contextual sparsity but the key contributions are how to apply the idea to fine-tuning. Essential References Not Discussed: "S2FT: Efficient, Scalable and Generalizable LLM Fine-tuning by Structured Sparsity" is a recent paper published on arXiv in December. However, it was officially published at ICLR 25 after the submission deadline of ICML. Other Strengths And Weaknesses: Strengths - I think it is an interesting and meaningful observation that output tokens are more sensitive to pruning. Weaknesses - I cannot see why this method needs to be used together with LoRA. It would be great to use it independently and compare it with FFT. Other Comments Or Suggestions: April 12: I will keep my score. Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer’s thoughtful feedback and recognition of our approach. Below, we address each of the concerns in detail: > "S2FT: Efficient, Scalable and Generalizable LLM Fine-tuning by Structured Sparsity" is a recent paper published in Arxiv in December. However, it is officially published in ICLR 25 after the submission deadline of ICML. We would like to clarify that our paper already cites the S2FT work. S2FT introduces a structured pruning approach aimed at memory-efficient fine-tuning. According to Figure 5 in their paper, S2FT achieves a speedup of 1.1× over LoRA on LLaMA2-7B for the Commonsense 170K dataset. In contrast, our method achieves a 1.32× speedup on the same setting. While S2FT prioritizes memory efficiency and accuracy, SparseLoRA addresses a complementary direction: computational efficiency with lossless accuracy. We will include a more detailed discussion of S2FT in the final version of the paper. > I can not see why this method needs to be used together with LoRA. It would be great to use it independently and compare it with FFT. We agree that extending our method beyond LoRA fine-tuning is an exciting direction. However, there are some practical considerations tied to the characteristics of LoRA. A key component of our method is the dynamic prediction of activation sparsity using an SVD-based predictor. This predictor relies on the static nature of the base layer weights—which, in LoRA, remain frozen during fine-tuning. In contrast, in full fine-tuning settings, the base weights are updated throughout training. This undermines the validity of the SVD-based predictor, potentially leading to inaccurate sparsity estimates. At this stage, our method leverages the frozen base layers in LoRA to reliably identify and exploit structured sparsity for speedup. Exploring how to adapt the predictor dynamically for full fine-tuning is an interesting direction we plan to pursue in future work.
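To make the discussion of the predictor concrete, here is a minimal sketch of how a low-rank proxy of a frozen weight matrix can cheaply score output channels before computing them. All names, dimensions, and the keep ratio are hypothetical illustrations, not the authors' implementation; the key point is that the SVD is computed once offline, which is only valid because the base weight stays frozen under LoRA:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank = 256, 512, 8          # rank 8 mirrors the SVD rank quoted in the rebuttal
W = rng.standard_normal((d_out, d_in)) / np.sqrt(d_in)   # stand-in for a frozen base weight

# Offline, one-time step: low-rank factorization of the frozen weight.
# This is only valid because W never changes during LoRA fine-tuning.
U, S, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :rank] * S[:rank]   # (d_out, rank)
B = Vt[:rank]                # (rank, d_in)

def predict_active_channels(x, keep_ratio=0.6):
    """Cheaply score output channels via the low-rank proxy and keep the top ones.

    Cost is O(rank * (d_in + d_out)) per token instead of O(d_in * d_out)."""
    scores = np.abs(A @ (B @ x))
    k = int(keep_ratio * d_out)
    return np.argsort(scores)[-k:]

x = rng.standard_normal(d_in)
idx = predict_active_channels(x)
sparse_out = W[idx] @ x      # compute only the rows predicted to matter
```

If the base weights were updated, as in full fine-tuning, `A` and `B` would go stale and the predicted channel set would drift from the true activations, which is exactly the limitation discussed in the response above.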
Summary: The paper introduces a method to accelerate fine-tuning of large language models (LLMs) by leveraging contextual sparsity. Unlike existing parameter-efficient fine-tuning (PEFT) methods such as LoRA and DoRA, which reduce memory usage but not computational cost, SparseLoRA optimizes both memory and computation. The key contributions of the paper include: a training-free SVD-based sparsity estimator that selects a subset of weights for loss and gradient computation, context-aware sparsity application (non-uniform sparsity across layers based on sensitivity analysis, selective sparsity for context tokens while keeping output tokens dense, progressive sparsity). Empirical validation on commonsense and arithmetic reasoning benchmarks is provided. Claims And Evidence: * While the computation speedup is clear, the memory analysis and comparison to the baselines (both those presented by the authors and those not presented; please see below) are missing. * The comparison of the proposed method was done against a narrow set of models: LoRA and DoRA, while there are more baselines that could be compared (please see my points below). The claims regarding training speed-up are supported by empirical results. Methods And Evaluation Criteria: The choice of benchmark datasets looks reasonable to me. However, I would like to see results for the commonly used GLUE benchmark and not only commonsense and arithmetic reasoning. Theoretical Claims: There are no theoretical claims. Experimental Designs Or Analyses: There are plenty of works in the LLM PEFT domain. I think the most important missing baselines are the APT [1] and GaLore [2] methods (or their variants). I also found [3], which looks similar to the proposed method. The authors presented only 2 baselines. Despite the lack of additional baselines, I think the experimental design in terms of datasets and metrics is reasonable. [1] Zhao, Bowen, Hannaneh Hajishirzi, and Qingqing Cao. 
"APT: adaptive pruning and tuning pretrained language models for efficient training and inference." Proceedings of the 41st International Conference on Machine Learning. 2024. [2] Zhao, Jiawei, et al. "GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection." International Conference on Machine Learning. PMLR, 2024. [3] Huang, Weizhong, et al. "Dynamic Low-Rank Sparse Adaptation for Large Language Models." The Thirteenth International Conference on Learning Representations. Supplementary Material: Yes; there is a single section with an analysis of pruning at the attention-head level. Relation To Broader Scientific Literature: The method builds upon and extends: * LoRA (Hu et al., 2022) and DoRA (Liu et al., 2024b) for parameter-efficient fine-tuning. * Contextual sparsity approaches in LLM inference (Liu et al., 2023). The key novelty is applying structured contextual sparsity to fine-tuning, whereas previous methods focused on inference-time acceleration. The work is highly relevant to ongoing research in: * Efficient LLM training (Thangarasa et al., 2023, Mozaffari et al., 2024) * Sparse computing for neural networks (Han et al., 2015, 2016) The paper appropriately cites relevant work but does not discuss: * Alternative structured sparsity methods (e.g., block sparsity, hardware-aware sparsity). * Recent advances in mixed low-rank and sparse fine-tuning methods (e.g., WeLore, SLoPe). Essential References Not Discussed: The paper cites enough related papers. Other Strengths And Weaknesses: **Strengths**: * The proposed method reduces computation time by sparsifying the base model weights during fine-tuning, in addition to training the low-rank adapters. * The method is evaluated on commonsense and arithmetic reasoning. * The method reduces the number of FLOPs by 30-40%. * The runtime breakdown of LLM fine-tuning and the sensitivity analysis on layer-wise sparsity are interesting and important. 
**Weaknesses**: * The proposed method doesn't improve the fine-tuned model accuracy but only reduces the training time. * The GPU memory increase/decrease is not discussed. * A comparison to optimizer-based fine-tuning methods such as GaLore or other recent methods in terms of training time and GPU memory is missing. * More baselines for sparse fine-tuning of LLMs are missing. Other Comments Or Suggestions: * I think the format of the paper does not match the ICML 2025 template (e.g., table captions should be above the tables); please fix it. Questions For Authors: * I cannot find the values of $k$ used in the SVD decomposition in your experiments, or the sparsity levels in Tables 1 + 2. * It is not clear from the text what your **inference** setup is and how the inference times compare across baselines and your method; I see only the training time measurements. * Why are some experimental results missing for the DoRA method? (E.g., with base model LLaMA2-13B) Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal:

> While the computation speedup is clear, the memory analysis and comparison to the baselines are missing

Our approach uses LoRA for fine-tuning, so the memory profile remains the same as LoRA. Sparsifying the main branch does not affect memory usage.

> Include comparisons with additional PEFT methods like APT and LoSA

APT and LoSA are designed to optimize inference-time sparsity rather than fine-tuning speed. Their methods typically slow down fine-tuning significantly to produce a sparse model with minimal accuracy loss at inference. For instance, under the Wanda setting LoSA requires 45.34 minutes compared to 13.78 minutes for LoRA, and under SparseGPT it takes 73.91 minutes versus 21.40 minutes for LoRA, as shown in Table 8 of the original paper.

> Include comparisons with additional PEFT methods like GaLore. Missing comparison to GaLore in terms of training time and GPU memory

We compare LoRA, GaLore, and SparseLoRA on Commonsense 170K and Math 10K. GaLore training requires A100 GPUs due to VRAM limitations of the A6000 under DDP. Runtime is normalized to LoRA, as in our main submission. GaLore incurs a 1.58x training overhead but achieves similar performance to LoRA. The amortized time of GaLore, accounting for projection updates and online SVD, slows fine-tuning by 13.72x. While GaLore focuses on memory-efficient fine-tuning at the cost of computational efficiency, SparseLoRA accelerates fine-tuning with near-lossless performance.
### LLaMA3-8B Commonsense 170K [on A100s]

|Model|Runtime|Mean|BoolQ|PIQA|Social-IQA|HellaSwag|Winogrande|ARC-Easy|ARC-Challenge|OpenBookQA|
|-----|-------|----|-----|-----|----------|---------|----------|--------|--------------|----------|
|LoRA|1.00|87.1|74.5|89.6|82.8|95.3|88.4|93.1|84.4|88.8|
|GaLore|1.58[13.72]|84.1|71.2|87.1|79.6|92.0|85.0|89.4|80.5|87.8|
|SparseLoRA|0.78|87.0|74.7|89.5|82.8|95.3|88.8|92.9|83.6|88.3|

### LLaMA3-8B Math 10K [on A100s]

|Model|Runtime|Mean|gsm8k|svamp|mawps|
|-----|-------|----|-----|-----|----------|
|LoRA|1.00|80.0|71.1|79.5|89.5|
|GaLore|1.58[13.72]|78.7|68.1|77.9|90.2|
|SparseLoRA|0.82|80.0|70.9|79.4|89.9|

> Include results on GLUE benchmark

Following the reviewer's recommendation, we evaluate LoRA and SparseLoRA using Llama3-8B on the GLUE benchmark. SparseLoRA maintains competitive performance on sequence classification across various subsets of the GLUE benchmark.

### LLaMA3-8B GLUE Benchmark

|Model|Mean|MRPC|SST2|QNLI|RTE|QQP|COLA|
|-----|----|----|----|----|---|---|-----|
|LoRA|88.6|92.1|96.2|95.2|88.8|91.8|67.7|
|SparseLoRA|88.6|92.3|96.4|95.5|88.7|91.9|66.7|

> The proposed method doesn’t improve the finetuned model accuracy but only reduces the training time

Our method is designed to accelerate fine-tuning while preserving accuracy—not to improve accuracy over standard fine-tuning methods.

> The GPU memory increase/decrease is not discussed

SparseLoRA improves computational efficiency during fine-tuning without altering the memory usage compared to baseline LoRA.

> Specify SVD rank (k) values and sparsity levels used in experiments

We use an SVD rank of 8 across all models and datasets, resulting in minimal runtime overhead (Table 3). Layer sparsity ratios are based on sensitivity analysis per model, with Llama2-7B's layer-wise analysis shown in Figure 7.
The specific sparsity ratios employed are:

### Model Sparsity Information

|Model|Dataset|FFNSparsity|FFNSparseLayers|QKVOSparsity|QKVOSparseLayers|
|-----|-------|-----------|---------------|-------------|----------------|
|Llama2-7B|Commonsense170K|90|L13-L29|60|L17-L29/L20,L24|
|Llama2-7B|Math10k|90|L13-L29|60|L13-L29/L20,L24|
|Llama2-13B|Commonsense170K|90|L17-L37|60|L17-L37|
|Llama2-13B|Math10k|90|L13-L37|60|L13-L37|
|Llama3-8B|Commonsense170K|90|L13-L29|60|L15-L29|
|Llama3-8B|Math10k|90|L9-L29|60|L9-L29|

The selected layers and sparsity rates remain mostly consistent across models and datasets. Sensitivity analysis identifies layers unsuitable for sparsity (e.g., layers 20 and 24 in Llama2-7B). On Math 10k, layer ranges were increased to account for token splitting overhead. The sparsity assignment is latency-driven and informed by sensitivity analysis, ensuring no added overhead in SparseLoRA.

> Inference Setup

SparseLoRA targets fine-tuning acceleration and only applies sparsity during fine-tuning; inference remains unchanged compared to baseline LoRA.

> Missing DoRA Results

Results for the DoRA method with LLaMA2-13B are missing due to OOM on A6000 GPUs without gradient checkpointing, as DoRA requires significantly more memory.

> Template Issues

We will correct table captions and formatting in the revised version.

---

Rebuttal Comment 1.1: Comment: After reviewing the authors' rebuttal and the additional results, I would like to revise my evaluation of the paper positively. Furthermore, I believe that releasing the code is essential for the research community.

---

Reply to Comment 1.1.1: Comment: Thank you for the positive feedback and updated evaluation! We appreciate your thoughtful suggestions—they were instrumental in improving the clarity and completeness of our work. We will incorporate the additional results and feedback into the revised manuscript.
We also fully agree that releasing the code is essential, and upon acceptance, we are committed to open-sourcing clean, well-documented code to support adoption and further research on SparseLoRA.
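The mechanism discussed in this thread, a sparsified frozen base branch combined with a dense trainable LoRA branch (which is why memory matches LoRA), can be sketched in a few lines. This is a minimal numpy illustration, not the authors' implementation: the function name, shapes, and `keep_ratio` parameter are all hypothetical.

```python
import numpy as np

def sparse_lora_forward(x, W, A, B, keep_ratio=0.5):
    """Sketch of a SparseLoRA-style linear layer during fine-tuning:
    the frozen base weight W is applied only on the highest-norm input
    channels, while the trainable LoRA branch (A, B) stays dense."""
    # Score input channels by their L2 norm over all tokens in the batch.
    scores = np.linalg.norm(x, axis=0)            # (d_in,)
    k = max(1, int(keep_ratio * x.shape[1]))
    idx = np.argsort(scores)[-k:]                 # indices of kept channels
    # Sparse base branch: slice both the inputs and the weight rows.
    base = x[:, idx] @ W[idx, :]                  # (n_tokens, d_out)
    # Dense LoRA branch, unchanged from standard LoRA.
    lora = (x @ A) @ B                            # (n_tokens, d_out)
    return base + lora
```

With `keep_ratio=1.0` this reduces exactly to a dense LoRA layer, which is consistent with the rebuttal's point that inference can remain unchanged.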
Summary: Previous parameter-efficient fine-tuning (PEFT) methods, such as LoRA and its variants, have primarily focused on memory efficiency and lightweight storage. However, these approaches do not necessarily lead to faster fine-tuning. This paper introduces SparseLoRA, a novel technique that accelerates fine-tuning by selecting a sparse subset of the base model’s weights, enabling more efficient loss and gradient computation while fully preserving LoRA’s structure. SparseLoRA achieves this by decomposing the original weight matrix and selectively activating channels based on a Singular Value Decomposition (SVD) sparsity estimator. This estimator adaptively determines sparsity using certain norms of the batched input, allowing the method to remain dynamic and data-aware. Unlike prior works that apply sparsity only during inference (typically with a batch size of 1), SparseLoRA incorporates sparsity directly into the training process. During training, LoRA initially operates in its standard dense form for the first few iterations. Sparse fine-tuning is then gradually introduced, reducing the computational overhead while maintaining performance. Experimental results demonstrate that SparseLoRA achieves faster training times with minimal accuracy degradation across multiple benchmark tasks, making it a promising alternative for efficient fine-tuning.

Claims And Evidence: While the claims in the paper are generally well-supported, some weaknesses remain that make the evidence less convincing. Below, I outline key concerns that, if addressed, would strengthen the paper’s experimental robustness:

1. The experiments use a fixed learning rate, but it is standard practice to perform a hyperparameter sweep and report the best-performing configuration. To ensure fair comparisons, Tables 1 and 2 should reflect results from a learning rate sweep, rather than relying on a single fixed value. Otherwise, there is a risk that SparseLoRA benefits simply from better tuning rather than intrinsic efficiency.
2. The paper evaluates SparseLoRA only on QKVO projections. However, LoRA can be applied to different subsets of projections, and it is unclear whether SparseLoRA remains effective across different configurations. A more thorough evaluation should test SparseLoRA on different subsets of trainable LoRA projections to confirm that its benefits generalize beyond QKVO. 3. A crucial missing experiment is an Iso-FLOP comparison—i.e., comparing models trained with the same computational budget. In real-world applications, practitioners often have a fixed FLOP budget rather than a fixed number of iterations. Therefore, it is important to test whether LoRA trained with the same FLOP budget as SparseLoRA produces weaker models. In Table 1 (LLaMA3-8B results), SparseLoRA trains at 0.62x FLOPs of full LoRA. The paper should compare whether training LoRA for the same 0.62x FLOPs produces weaker performance than SparseLoRA. The paper should include a graph of benchmark accuracy vs. wall-clock time (or training FLOPs). This would help practitioners determine when SparseLoRA is beneficial and when standard LoRA suffices. While SparseLoRA presents a promising approach for efficient fine-tuning, the paper lacks critical experimental validations. A more rigorous study should incorporate learning rate sweeps to avoid selection bias, evaluations across different LoRA projection sets, and, most importantly, Iso-FLOP comparisons and efficiency trade-offs, as practitioners need to know whether SparseLoRA is truly advantageous under fixed compute constraints. Without these, the claims of SparseLoRA’s efficiency remain incomplete and may not fully guide real-world adoption. Methods And Evaluation Criteria: Yes. Theoretical Claims: N/A. Experimental Designs Or Analyses: The experiments seem sound. However, given the empirical nature of the paper, necessary experimental details are missing, especially those one would usually find in the supplementary material. 
For instance, I cannot find more information on Table 1 when I want to check how many shots were used for each task, or the absolute runtime in terms of wall-clock time, or whether FLOPs includes SVD or not. Supplementary Material: Yes, but there is a lack of experimental detail. Relation To Broader Scientific Literature: - Computationally efficient finetuning methods are not well developed. This work attempts to achieve this without sacrificing accuracy. The motivation is extremely practical and the sparse selection mechanism is sensible. However, I am not particularly well-versed in this sub-area, so I am not entirely sure how it fares to prior work or if this is truly the first work to consider computationally efficient PEFT methods. - The SVD estimator based on two kinds of norm-based criteria can be used elsewhere that require input-adaptive sparsity. Essential References Not Discussed: Not that I know of. Other Strengths And Weaknesses: **Strength:** - Computationally efficient finetuning methods are not well developed. This work attempts to achieve this without sacrificing accuracy. The motivation is extremely practical and the sparse selection mechanism is sensible. However, I am not particularly well-versed in this sub-area, so I am not entirely sure how it fares to prior work or if this is truly the first work to consider computationally efficient PEFT methods. - Adaptive sparsity that takes into account training-time input batches probably requires more care than dealing with just one input as in inference-time. Weaknesses listed "Claims And Evidence" and "Questions For Authors". Other Comments Or Suggestions: - It would be more helpful if Figure 1 is more descriptive, e.g., which models, tasks, and datasets were used for the particular experiment. - In lines 43-44, when claiming "adding less than 0.5% overhead to finetuning", does this mean computationally (if so, in terms of FLOPs or wall-clock time) or memory (VRAM)? 
- In general, the paper is lacking in detail, e.g., experimental settings, captions in figures. The paper will be stronger with these details.

Questions For Authors: Listed in order of decreasing importance.

1. **Details on SVD.** I am curious to know the details of the SVD computation that is done offline, such as time, memory, etc. Furthermore, when fine-tuning with SparseLoRA, how much memory overhead is incurred by loading the SVD decomposition alongside the model weights? Including details about GPU memory in Table 1 seems important even if the overhead is minimal.

2. **Motivation for computationally efficient PEFT.** Though PEFT methods are not computationally efficient, this is often not much of a concern since tasks that require fine-tuning do not require massive datasets and thus do not consume long GPU hours. While potentially useful, I have some reservations about whether such a method is necessary. Can the authors provide examples in which PEFT would take more than hundreds of GPU hours, making computationally efficient PEFT methods necessary? In my practical experience, I'd rather train hours to a day more if that means I do not sacrifice accuracy for fine-tuning tasks.

3. **DoRA Settings.** DoRA was first published as an improvement to LoRA, albeit with additional overhead in computation. I assume there must be settings in which DoRA is better than LoRA, such as the settings described in the DoRA paper. Could the authors comment on whether such a setting exists, and if SparseLoRA's performance still holds in that particular setting as well?

With the weaknesses and questions addressed appropriately, I am willing to change my evaluation of the paper. Thank you.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewer’s comments:

> Fixed learning rate instead of hyperparameter sweeps.

For concision we only include mean performance. Here V1 refers to the results in Tables 1-2 of the paper and V2 is the new "conservative" config in our response to Reviewer as16.

### LLaMA3-8B LR Sweep (Math10K)

|LearningRate|LoRA|SparseLoRA-V1|SparseLoRA-V2|
|-|-|-|-|
|3.00E-05|78.3|77.7|78.8|
|5.00E-05|78.6|78.6|79.3|
|9.49E-05|79.6|79.2|79.8|
|3.00E-04|80.0|**79.8**|**80.0**|
|5.00E-04|**80.2**|79.3|79.6|
|9.49E-04|78.1|77.1|77.3|

### LLaMA3-8B LR Sweep (Commonsense 170K)

|LearningRate|LoRA|SparseLoRA-V1|SparseLoRA-V2|
|-|-|-|-|
|3.00E-05|85.7|84.8|85.6|
|5.00E-05|86.7|85.9|86.5|
|9.49E-05|**87.7**|**86.8**|**87.4**|
|3.00E-04|87.1|86.3|87.1|

We conduct two learning rate sweeps with Llama3-8B on Math10K and CSR170K for LoRA and SparseLoRA. SparseLoRA shows minimal performance degradation compared to LoRA, confirming our performance claims. In the conservative setting, the performance gap between the optimal LoRA and SparseLoRA is 0.2% on Math10K and 0.3% on CSR170K.

> SparseLoRA is only evaluated on QKVO projections, but LoRA can be applied to different subsets of projections.

We focus on applying sparsity to speed up the main branch of the model, leaving the LoRA branches unchanged. However, we also provide additional experiments applying LoRA to Q, K, V, up, and down projections, following DoRA, in addition to the settings in our paper.

### LLaMA3-8B Performance on Math10K for different projections

|Method|Config|Mean|
|-|-|-|
|LoRA|QKVO|79.9|
|SparseLoRA|QKVO|80.0|
|LoRA|QKVUD|80.3|
|SparseLoRA|QKVUD|80.9|
|LoRA|QKVOGUD|80.5|
|SparseLoRA|QKVOGUD|80.7|

> Provide iso-FLOP comparisons showing performance vs. computational budget

Following the reviewer’s recommendation, we conduct an iso-FLOP comparison on Llama3-8B using Math10K and CSR170K. SparseLoRA outperforms LoRA, with a 3% improvement at a 5% FLOP budget on Math10K.
Experiments are run using 1 epoch as 100% FLOPs.

### Iso-FLOP Comparison on Math10K with LLaMA3-8B

|Setting|FLOP(%)|Mean|Diff|
|-|-|-|-|
|**LoRA (Baseline)**|100|79.6|--|
|**LoRA**|63|78.7|--|
|**SparseLoRA**|63|79.2|+0.55|
|**LoRA**|30|76.7|--|
|**SparseLoRA**|30|78.3|+1.55|
|**LoRA**|10|74.8|--|
|**SparseLoRA**|10|76.2|+1.63|
|**LoRA**|5|72.5|--|
|**SparseLoRA**|5|75.8|+3.30|

### Iso-FLOP Comparison on Commonsense 170K with LLaMA3-8B

|Setting|FLOP(%)|Mean|Diff|
|-|-|-|-|
|**LoRA (Baseline)**|100|87.1|--|
|**LoRA**|64|86.8|--|
|**SparseLoRA**|64|87.0|+0.28|
|**LoRA**|30|86.0|--|
|**SparseLoRA**|30|86.2|+0.19|
|**LoRA**|10|83.9|--|
|**SparseLoRA**|10|84.7|+0.83|
|**LoRA**|5|82.0|--|
|**SparseLoRA**|5|83.5|+1.47|

> Lack of specifics like number of shots per task etc.

Tasks are evaluated with the mean over 5 shots. The SVD sparsity estimator is used during training to skip parts of the base-branch computation. The SVD computation is offline, but during training, marginal FLOPs are spent on the sparsity metric, accounting for 0.0534% of LoRA FLOPs and 0.8% runtime overhead, as shown in Table 3.

> Details on SVD Predictor

We compute the SVD of weight matrices to create our sparsity estimator (Algorithm 1, lines 251-254) once per model. For the self-attention block, we SVD q, k, and v, and for the FFN, we compute up and gate projections, as described in Section 3.1. SVD is performed with rank 8 for all models. The predictors incur minimal runtime during training (Table 3) and negligible memory overhead (in MB, see below):

- Llama2-7B: 26.8
- Llama2-13B: 36.9
- Llama3-8B: 30.0

> Motivation for computationally efficient PEFT

SparseLoRA achieves near-lossless performance while reducing computation. Even if PEFT tasks are fast, practitioners favor any free acceleration. For example, many opt for BF16 training over FP32 because, despite its slightly lower precision, the efficiency gains are substantial and the performance drop is negligible.
Faster training means more rapid experimentation and the ability to scale to larger models or more complex tasks—all of which are significant benefits in practice. SparseLoRA is a step toward this goal and can inspire future improvements in computationally efficient, lossless PEFT methods.

> DoRA Settings

In this work, we use a rank of 32 for LoRA fine-tuning; DoRA has shown a smaller performance gap with this setting (Figure 5 in their work). S2FT reproduced similar results on Llama3-8B (arithmetic). Unlike DoRA—which prioritizes stability and performance at the expense of runtime—SparseLoRA enhances efficiency, complementing DoRA for practitioners.

> Figure 1

We visualize the results directly from Table 1, specifically highlighting the normalized runtime and average performance on CSR170K for Llama2-7B.

> "less than 0.5% overhead to finetuning" by SVD

We meant "0.8%" runtime overhead introduced by the SVD estimator. Additional metrics are in Table 3.

---

Rebuttal Comment 1.1: Comment: Thank you for the detailed rebuttal. I now have more confidence in the experiments, especially due to the Iso-FLOPs experiments. I hope the authors release clean reusable code (if accepted) for wider adoption and examination of their method. I have adjusted my score accordingly (2->3).

---

Reply to Comment 1.1.1: Comment: Thank you for the positive feedback and score adjustment! We’re glad the experiments clarified our contributions. We’ll incorporate these results and other suggestions into the revised manuscript. Upon acceptance, we’re committed to releasing clean, well-documented code to support broad adoption and further exploration of SparseLoRA.
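As a rough sanity check on the predictor memory numbers quoted in the rebuttal above (tens of MB at rank 8), note that a rank-k SVD predictor for one weight matrix stores two small factors, so its cost is k·(d_in + d_out) values. The sketch below uses standard Llama2-7B dimensions (hidden 4096, FFN intermediate 11008, 32 layers) and assumes fp16 storage; the exact total depends on which projections carry predictors, so treat it as a back-of-envelope estimate rather than the authors' accounting.

```python
def svd_predictor_params(d_in, d_out, k=8):
    """Values stored by a rank-k SVD predictor for one weight matrix:
    U_k has shape (d_out, k) and S_k V_k^T has shape (k, d_in)."""
    return k * (d_in + d_out)

# Per layer: predictors on q, k, v (4096x4096) plus up and gate (4096x11008).
attn = 3 * svd_predictor_params(4096, 4096)
ffn = 2 * svd_predictor_params(4096, 11008)

# 32 layers, 2 bytes per fp16 value.
total_mib = 32 * (attn + ffn) * 2 / 2**20
print(round(total_mib, 1))  # prints 26.8
```

This lands on ~26.8 MiB, matching the Llama2-7B figure reported above, which suggests the rebuttal's overhead numbers are internally consistent.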
Summary: The paper proposes a framework for accelerating the fine-tuning of large language models by structured pruning of pretrained weight matrices, while using dense and trainable LoRA adapters. The core proposed idea is to estimate the importance of each channel in a pretrained weight matrix, prune the unimportant channels by slicing the weights of that channel, and then use the sliced weights for computation. The LoRA computation is done following the original LoRA paper, with dense matrices. The proposed method uses different pruning strategies for different components:

- FFN: Uses the L2 norm of the activations of the Gate projection after SiLU activation along the batch and sequence dimensions, and retains the columns in the Gate and Up weight matrices with the highest L2 norm (assuming columns represent the weights of a single channel). The corresponding rows of the Down weight matrix are also pruned.
- Value and Output Projection in the attention layer: Uses the L2 norm of the activations of the attention heads after the Value projection along the batch and sequence dimensions, and retains the columns in the Value matrix with the highest L2 norm. The corresponding rows of the Output matrix are also pruned.
- Query and Key: Calculates the L2 norm of the Key and Value projections along the batch and sequence dimensions, and uses their dot product as the pruning criterion. The columns in the Key and Query matrices with the highest dot product are retained.

The paper proposes several tricks to reduce computational cost and improve performance:

- T1: The paper also proposes an SVD-based estimator for the sparsity. Instead of using the projections from full pretrained weight matrices to calculate the L2 norms (the sparsity metric), the operations are performed using the first $k$ ranks of their respective SVD projections, which results in reduced computational cost for the calculation of the sparsity metric.
- T2: The sparsity ratio is selected based on layer sensitivity analysis, with higher sparsity for deeper layers. To determine the sparsity ratio, the authors perform a sensitivity analysis by greedily increasing the sparsity ratio of each layer while keeping the other layers dense, and measuring the performance on a subset of the Commonsense Reasoning tasks.
- T3: The pruning (sparsity) is only applied to the context tokens, and dense pretrained weight matrices are retained for the output tokens.
- T4: Early iterations in fine-tuning are run in dense mode (up to 20% of the total fine-tuning iterations) and sparsity is imposed in the later iterations.
- T5: Uses sequence averaging in the SVD estimator for the FFN block to reduce the computational cost. *However, the authors do not define what they mean by the term, and do not provide the exact implementation details. See Point 5 in the Strengths and Weaknesses section.*

## Update after rebuttal

I have gone through the authors' rebuttal and their response to other reviews. I do not mind raising my score to 3, provided the writing of the paper is improved based on my comments in the Other Strengths And Weaknesses section.

Claims And Evidence:

1. **Sparsity metric**: The use of the L2 norm is well justified, but I would like to mention some nuances in the results of the ablation studies in Table 6, which compares the L2 norm with Random pruning and Wanda: In the case of the Self-Attention block, the L2 norm does not show a significant improvement over Random pruning. However, in the FFN block, the L2 norm shows an improvement over Random pruning and maintains simplicity. The authors should mention this in the paper.

2. **Channel pruning vs. Head pruning**: Section 3.1.2 mentions that a head-level pruning strategy could be problematic. However, Table 6 shows that the accuracy with Head pruning is 80.1 and the accuracy due to channel pruning is 80.2, which is not a significant difference.
Hence, the choice of channel pruning over head pruning proposed in the paper is not very well justified.

Methods And Evaluation Criteria: The paper evaluates the proposed SparseLoRA on the Commonsense170k and Math10k benchmarks, which have been used in prior works [Hu et al. (2023), Liu et al. (2024)]. The performance on a task is measured using the accuracy of the final answer, which is standard in prior PEFT works. The computational efficiency is measured using the wall-clock time and the number of FLOPs relative to LoRA fine-tuning, which reflects the goal of the paper.

---

## References

[1] Zhiqiang Hu, Lei Wang, Yihuai Lan, Wanyu Xu, Ee-Peng Lim, Lidong Bing, Xing Xu, Soujanya Poria, Roy Lee, "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models", EMNLP 23

[2] Shih-Yang Liu, Chien-Yi Wang, Hongxu Yin, Pavlo Molchanov, Yu-Chiang Frank Wang, Kwang-Ting Cheng, Min-Hung Chen, "DoRA: Weight-Decomposed Low-Rank Adaptation", ICML 24

Theoretical Claims: N/A

Experimental Designs Or Analyses: The design of the experiments and results reflect the goal of the paper. However, the ablation studies need to be described in more detail. Specifically, when checking the effect of a particular setting, what are the other settings that are kept constant? For example, Table 5 shows the effect of uniform vs. layer-specific sparsity. What are the other settings that are kept constant? The paper should provide a detailed description of the ablation studies.

Supplementary Material: Reviewed in full.

Relation To Broader Scientific Literature:

1. Using sparsity and structured pruning during fine-tuning has been explored before by Ma et al. (2024), which uses structured pruning along with full fine-tuning of unpruned model parameters. Ma et al. (2024) have also experimented with the L2 norm as a sparsity metric, along with other metrics. Their findings show that the L2 norm performs slightly worse than other metrics.

2.
This paper extends the idea of pruning + fine-tuning to use adapters (LoRA) instead of full fine-tuning, and uses the SVD decomposition of the pretrained weights to obtain the sparsity metrics efficiently. This paper also proposes channel pruning instead of head pruning in the self-attention block. However, as mentioned in the Claims and Evidence section (Point 2), the contribution of channel pruning over head pruning is not very significant.

3. In terms of the efficiency metrics, while the proposed method can achieve a reduction in FLOPs to ~60-80% of that of LoRA, the accuracy is not guaranteed to stay within a small range, suggesting that there could be limitations to the practical applicability of the proposed method. For example, on the Math10k benchmark (Table 2), with LLaMA2-7B, the performance drop on SVAMP is around 5% compared to LoRA. With Llama2-13B, the performance drop on GSM8k and SVAMP is around 2%. With Llama3-8B, the drop on GSM8k is ~2%. This drop in performance is not very significant, but it is not negligible either. LoRA, on the other hand, consistently achieves higher accuracy, while being very practical in terms of real-world applications.

Overall, the contribution of the paper is fairly incremental and, given the comparison with LoRA mentioned above, the practical applicability of the proposed method could be limited.

---

## References

[1] Da Ma, Lu Chen, Pengyu Wang, Hongshen Xu, Hanqi Li, Liangtai Sun, Su Zhu, Shuai Fan, Kai Yu, "Sparsity-Accelerated Training for Large Language Models", ACL Findings 2024

Essential References Not Discussed: N/A

Other Strengths And Weaknesses:

## Strengths

- Achieves a 1.7x reduction in computational cost and a 1.4x reduction in wall-clock time over LoRA, while achieving almost the same accuracy in most cases.
- The paper proposes a novel method for estimating the sparsity metric using the SVD decomposition of the pretrained weights, which reduces the computational cost of the sparsity metric calculation.
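The second strength, using an offline SVD to cheapen the sparsity-metric computation, can be illustrated with a minimal numpy sketch. The idea is that the per-step cost of scoring output channels of h = x @ W drops from O(n·d_in·d_out) to O(n·k·(d_in + d_out)) when the projection is routed through rank-k factors. All shapes here are hypothetical, and the paper's additional details (activation functions, sequence averaging) are omitted.

```python
import numpy as np

def channel_scores_lowrank(x, Uk, Sk, Vtk):
    """Approximate per-output-channel L2 scores of h = x @ W using a
    rank-k factorization W ~= Uk @ diag(Sk) @ Vtk.  The rank-k path
    avoids ever forming the full d_in x d_out product."""
    h_approx = (x @ (Uk * Sk)) @ Vtk         # (n, d_out) via rank-k factors
    return np.linalg.norm(h_approx, axis=0)  # one score per output channel

# Offline step, done once per weight matrix: truncate its SVD to rank k.
rng = np.random.default_rng(0)
W = rng.normal(size=(64, 32))
U, S, Vt = np.linalg.svd(W, full_matrices=False)
k = 8
Uk, Sk, Vtk = U[:, :k], S[:k], Vt[:k, :]
```

At full rank the scores are exact, and at small k they preserve the ranking of dominant channels well enough to pick which ones to keep, which is the estimator's only job.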
## Missing hyperparameter details

1. The experimental details for the SVD rank and the exact sparsity ratios per layer are missing.

## Writing and Clarity

The writing is unclear in many places:

2. In Section 4.3 (Analysis), none of the experiments and tables mention the model being used, or how the accuracy on Math10k is calculated (e.g., is it the mean of the accuracies of the individual datasets?). Furthermore, the accuracies on Math10k in Tables 3, 4, 5, and 6 are much higher than the accuracies reported in Table 2. What is the reason for this discrepancy?

3. Figure 8 seems to contradict the text on line 245, which claims that sparsity is only applied to the context tokens, and output tokens are processed in a dense manner. However, the figure shows that only a small portion of the output tokens use dense weights.

4. In Table 5, what do the columns 1.4x Speedup and 1.6x Speedup denote, and how do they relate to the main results in Table 2? Table 5 also does not mention what benchmarks the experiments are performed with (from the accuracy scores, it looks to be Math10k).

5. The paper is unclear about the definition and implementation of the sequence averaging in the SVD estimator.

6. The paper is very fragmented in terms of what optimizations lead to the reported FLOPs and runtime decrease. Line 379, Section 4.3, mentions that sequence averaging leads to significant speedup, yet it has not been defined or explained in the paper. A coherent explanation of exactly which optimizations lead to the decrease in FLOPs and runtime reported in Tables 1 and 2 should be provided.

Other Comments Or Suggestions: N/A

Questions For Authors:

1. To determine the sparsity ratio for every layer, the authors perform a sensitivity analysis by greedily increasing the sparsity ratio of each layer while keeping the other layers dense. Is the model trained to convergence for every setting?

2.
Related to Q1, if the model is trained for every combination, it will add to the computational cost. Section 3.3 studies this with Llama2-7B and a subset of Commonsense Reasoning. Is the same sparsity ratio used across all the models and tasks? If the same ratio is applicable to all models and tasks, then the sensitivity analysis could be done once and the same ratio could be used across all models and tasks. If not, then the sensitivity analysis needs to be done for each model and task, which could be computationally expensive, adding to the overall computational cost of the method.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
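The FFN pruning rule summarized in this review (rank intermediate channels by the L2 norm of the post-SiLU gate activations, then slice the matching columns of Gate/Up and rows of Down) can be sketched as follows. This is a minimal numpy version of a LLaMA-style gated FFN; the dimensions and `keep_ratio` are illustrative, and the paper's per-layer ratios and context/output token handling are not modeled.

```python
import numpy as np

def silu(z):
    return z / (1.0 + np.exp(-z))

def prune_ffn_channels(x, Wg, Wu, Wd, keep_ratio=0.5):
    """Sketch of the FFN pruning rule: score intermediate channels by the
    L2 norm of SiLU(x @ Wg) over all tokens, keep the top ones, and slice
    the matching columns of Wg, Wu and rows of Wd."""
    gate = silu(x @ Wg)                          # (n_tokens, d_ff)
    scores = np.linalg.norm(gate, axis=0)        # (d_ff,)
    k = max(1, int(keep_ratio * Wg.shape[1]))
    idx = np.sort(np.argsort(scores)[-k:])       # kept intermediate channels
    # Sparse gated-FFN forward using only the kept channels.
    h = silu(x @ Wg[:, idx]) * (x @ Wu[:, idx])  # (n_tokens, k)
    return h @ Wd[idx, :]                        # (n_tokens, d_model)
```

Because the same index set slices the Gate/Up columns and the Down rows, the pruned forward pass stays shape-consistent and, at `keep_ratio=1.0`, reduces exactly to the dense FFN.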
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's comments.

> L2 norm vs. Random pruning in Self-Attention blocks

The L2 norm shows clear benefits for FFN blocks, while its gains over Random pruning in Self-Attention blocks are modest. We use a unified L2-based criterion to avoid over-engineering and ensure broad applicability.

> Channel vs. Head Pruning

Although the performance gap is small (80.2 vs. 80.1), channel pruning offers greater flexibility. Unlike head pruning—which enforces equal channel counts—channel pruning enables nuanced selection of important channels, allowing precise control over computational cost.

> SVD Rank and Per-Layer Sparsity Ratios

For all models, we use an SVD rank of 8, which incurs marginal overhead (see Table 3). Our per-layer sparsity ratios are determined via sensitivity analysis per model (see Figure 7). For the detailed configurations, please check the "Model Sparsity Information" table in our response to Reviewer pTrd. Similar settings, with slight adjustments for latency, are applied across models.

> Model Specification and Math10K Accuracies

All experiments in Section 4.3 use LLaMA3-8B. The Math10K results for LLaMA3-8B in Table 2 have 79.8% accuracy on average. This matches the result in line 2 of Table 3, line 2 of Table 4, and the “Non-uniform 1.4x” configuration in Table 5. In Table 6, higher accuracies result from applying sparsity only to specific modules rather than across the entire model.

> Context-Output Aware Sparsity

Lines 263–264 describe our strategy as “selectively preserving dense computation for output tokens while applying sparsity to the context,” meaning only a portion of output tokens are processed densely, as reflected in Figure 8. We will revise the text to avoid potential ambiguity.

> Table 5 Speedup Columns

The “1.4x Speedup” and “1.6x Speedup” columns indicate runtime targets achieved by adjusting sparsity ratios.
Both uniform and non-uniform configurations are calibrated to these targets for fair comparison. The 1.4x (1/0.71) non-uniform setting, with 79.8% accuracy on Math10K, directly corresponds to the SparseLoRA LLaMA3-8B result in Table 2; the 1.6x setting explores a more aggressive sparsity trade-off.

> "Sequence averaging" in the SVD estimator

“Sequence averaging” reduces the sparsity estimation cost by averaging activations over the sequence dimension for SVD inputs to compute a single channel score. This is particularly effective for FFN layers, where token-level differences are less critical, and it minimizes overhead while maintaining performance (see Table 3).

> FLOPs and Runtime Optimizations in Tables 1-2

The reductions reported in Tables 1 and 2 stem from applying contextual channel sparsity, guided by our SVD estimator, to expensive linear layers. Sequence averaging further minimizes the overhead of sparsity estimation, making the cost of identifying sparse channels negligible.

> Trained to Convergence and Uniformity of Sparsity Ratios

No additional training was performed during the sensitivity analysis. As shown in Figure 7, we apply sparsity to the pretrained LLaMA2-7B model at individual layers (with others kept dense) and evaluate its performance on CSR170K. This analysis identifies robust sparsity patterns that are then uniformly applied across all models and tasks.

> Comparison with Ma et al. (2024)

Ma et al.’s method differs from ours in many ways: it uses structured pruning with full fine-tuning, resulting in full-model memory usage. In contrast, SparseLoRA integrates structured pruning with LoRA adapters, reducing both computation and memory. Moreover, their importance metrics (Wanda, MaxiP) involve element-wise operations that incur high overhead (e.g., on A6000 GPUs), nearly canceling the speedup gains when we experimented with Wanda for Table 6.
Our SVD-based sparsity estimator relies on efficient matrix multiplications with low-rank weights, ensuring minimal overhead across GPUs.

> Performance drop not negligible

SparseLoRA offers a flexible speed–accuracy trade-off. For applications where maximum accuracy is crucial, LoRA remains viable; for efficiency-critical scenarios, SparseLoRA delivers substantial speedups with minimal accuracy loss. Here we provide an updated, more conservative sparsity configuration for LLaMA2-13B and LLaMA3-8B that slightly increases FLOPs (0.59 → 0.62 for LLaMA2-13B and 0.60 → 0.63 for LLaMA3-8B) but yields nearly identical runtime (0.74 → 0.76 for LLaMA2-13B and 0.71 for LLaMA3-8B) and matches or improves accuracy:

| |#FLOPs|Runtime|Avg|GSM8K|SVAMP|MAWPS|
|---|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|
|LLaMA2-13B+LoRA|1|1|63.3%|50.7%|59.0%|80.4%|
|LLaMA2-13B+SparseLoRA|0.62|0.76|63.5%|49.3%|58.7%|82.6%|
|LLaMA3-8B+LoRA|1.00|1.00|80.0%|71.1%|79.5%|89.5%|
|LLaMA3-8B+SparseLoRA|0.63|0.71|80.0%|70.9%|79.4%|89.9%|

These results show that, with proper configuration, SparseLoRA can achieve comparable or better accuracy than LoRA while delivering strong runtime benefits.

---

Rebuttal Comment 1.1:
Comment: I thank the authors for their rebuttal. While some of my concerns have been addressed, my main concern still remains:

> Practical application

I am still unconvinced about the practicality of the method. The method involves a number of overheads over LoRA. Arguably, LoRA is much simpler and very practical to run. With efficiency tricks like gradient checkpointing and quantization (e.g., QLoRA, LoftQ), it is accessible even on consumer-grade GPUs. Moreover, if the sparsity ratios need to be adjusted to obtain equivalent performance, SparseLoRA needs to be ablated over a range of sparsity ratios for tasks from different domains.
This is important because if the starting point for completely novel tasks needs to be a low sparsity, using LoRA would already yield performance close to the maximum achievable performance.

---

Reply to Comment 1.1.1:
Comment: We thank the reviewer for the thoughtful comments and for raising important points regarding the practicality of our method. We emphasize that SparseLoRA is not intended as a replacement for LoRA but rather as a complementary enhancement. Techniques like gradient checkpointing and quantization (e.g., QLoRA, LoftQ) primarily target memory savings but often come at the cost of increased runtime (as illustrated in Figure 1 of our paper). These methods are therefore orthogonal to our approach. In fact, SparseLoRA can be combined with techniques like QLoRA to simultaneously benefit from reduced memory consumption and improved runtime efficiency — a direction we believe could be highly valuable in practice — as shown below:

### LLaMA3-8B Commonsense170K

| Method | Runtime | Mean | BoolQ | PIQA | Social-IQA | HellaSwag | Winogrande | ARC-Easy | ARC-Challenge | OpenBookQA |
|--------|---------|------|-------|------|------------|-----------|------------|----------|---------------|------------|
| QLoRA | 1.00 | 87.2% | 74.4% | 89.4% | 83.3% | 95.4% | 89.2% | 93.3% | 84.2% | 88.7% |
| +SparseLoRA | 0.76 | 87.1% | 74.8% | 89.6% | 83.1% | 95.3% | 88.6% | 93.2% | 83.7% | 89.0% |

### LLaMA3-8B Math10K

| Method | Runtime | Mean | GSM8K | SVAMP | MAWPS |
|--------|---------|------|-------|-------|-------|
| QLoRA | 1.00 | 80.8% | 71.4% | 80.2% | 90.3% |
| +SparseLoRA | 0.74 | 80.5% | 71.3% | 79.9% | 90.8% |

Regarding sparsity configurations and their generalizability, we clarify that **our sparsity settings are primarily model-dependent rather than task-dependent**.
As discussed in our response to Reviewer pTrd, the sparsity ratios remain largely consistent across tasks, with minor adjustments primarily made to achieve specific runtime targets rather than to maintain accuracy. We further validated this generality by applying the same near-lossless sparsity configurations on LLaMA-3 across diverse benchmarks—including Commonsense Reasoning, Math10K, and GLUE (also provided in our response to Reviewer pTrd)—achieving consistently strong performance without extensive per-task tuning, addressing the reviewer's concern about practical applicability in new settings.

Lastly, we note that SparseLoRA provides a comparable level of **simplicity and plug-and-play applicability** to methods like QLoRA and LoftQ. Both of these existing methods leverage quantization, with LoftQ additionally requiring mixed-precision setups to achieve optimal trade-offs—indicating that efficient PEFT approaches naturally carry some complexity. SparseLoRA integrates easily within this framework, making it equally practical for real-world adoption.

Our initial QLoRA integration results above, obtained without extensive tuning (a "speed run"), already demonstrate promising efficiency gains and near-lossless accuracy. Further optimization or targeted tuning would likely yield even stronger results in practice. Thus, SparseLoRA presents a valuable and practical enhancement for existing PEFT methods, aligning closely with the community's ongoing efforts toward efficient yet accurate model adaptation.
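As a rough illustration of the mechanism discussed in this thread (average activations over the sequence dimension, score channels through a low-rank approximation of the weight, keep the top channels), here is a minimal NumPy sketch. All names, shapes, and the 50% sparsity value are illustrative assumptions, not the SparseLoRA implementation:

```python
import numpy as np

def svd_channel_scores(W, x_bar, rank=8):
    """Score FFN channels through a rank-`rank` approximation of W.

    W:     (d_in, d_ff) weight of an FFN projection.
    x_bar: (d_in,) activations averaged over the sequence dimension,
           i.e. "sequence averaging" yields a single score per channel.
    """
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * S[:rank]      # (d_in, rank) low-rank factor
    B = Vt[:rank, :]                # (rank, d_ff) low-rank factor
    # Two small matmuls instead of one dense d_in x d_ff product.
    return np.abs((x_bar @ A) @ B)  # (d_ff,) predicted channel magnitudes

def select_channels(scores, sparsity=0.5):
    """Keep the top (1 - sparsity) fraction of channels by score."""
    k = max(1, int(round(len(scores) * (1.0 - sparsity))))
    return np.argsort(scores)[-k:]

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 256))   # toy FFN weight
X = rng.standard_normal((128, 64))   # (seq_len, d_in) activations
x_bar = X.mean(axis=0)               # sequence averaging
scores = svd_channel_scores(W, x_bar, rank=8)
kept = select_channels(scores, sparsity=0.5)
```

The point of the low-rank factorization is cost: scoring runs through two small (d_in × rank) and (rank × d_ff) products rather than the full dense layer, which matches the rebuttal's claim that the estimation overhead is negligible.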
CtrlSynth: Controllable Image Text Synthesis for Data-Efficient Multimodal Learning
Accept (poster)
Summary: This paper proposes CtrlSynth, an image-text synthesis pipeline designed for efficient and robust multimodal learning. Specifically, CtrlSynth decomposes an image's visual semantics into basic elements and recomposes them to generate images or texts. With these synthetic data, the performance of CLIP-based models improves on zero-shot classification, image-text retrieval, and compositional reasoning.

Claims And Evidence: Yes.

Methods And Evaluation Criteria: Yes.

Theoretical Claims: There are no theoretical proofs in this paper.

Experimental Designs Or Analyses: Yes, the experimental designs and analyses are reasonable.

Supplementary Material: I have checked the whole supplementary material.

Relation To Broader Scientific Literature: The key contribution of this paper is a new pipeline to synthesize high-quality pre-training data for multimodal learning, which is related to the topic of data augmentation.

Essential References Not Discussed: No.

Other Strengths And Weaknesses:
1. The experiments are comprehensive; however, did the authors try using both CtrlSynth-mix and the original image-text pairs? In this way, CtrlSynth-mix serves as a data augmentation of the image-text pairs. There are two ways: (1) mix CtrlSynth-mix and the original image-text pairs for training; (2) pre-train on noisy image-text pairs and fine-tune on high-quality synthetic data.
2. It is somewhat unclear how the image/text controllers affect the quality of the synthetic data. Are they useful? There may be a lack of an ablation experiment with/without the image/text controllers.

Other Comments Or Suggestions: In order to obtain more confident conclusions, I suggest also including top-3 and top-5 Accuracy and Recall@3, Recall@5 in the tables. If there is not enough space, at least put them in the appendix.

Questions For Authors:
1. Has the author tried to use the original image as a constraint when synthesizing the image?
This will keep the synthesized image from deviating too much from the original image and make the synthesized image more realistic.
2. Did the author try to get more accurate tags from Florence-large + Qwen2-7B-Instruct? For example, for tag1, ask the MLLM to assess whether this tag exists in the image. In this way, more accurate tags can be obtained. I am not sure if more accurate tags would help in generating higher-quality synthetic data.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:
Rebuttal: Thank you for highlighting that our experiments are comprehensive. We have added a detailed explanation below:

> Did the author try using both CtrlSynth-mix and original image-text pairs?

**Response 1**: Yes, all our reported results for CtrlSynth-mix include both synthetic and original image-text pairs during training. This combined approach consistently yielded the best performance across our experiments. We did not explore pre-training exclusively on synthetic data before fine-tuning on original data, as our focus was evaluating the direct impact of our synthetic data when integrated with standard training procedures. We agree that exploring more sophisticated training strategies (such as curriculum learning with different mixing ratios at various training stages) represents a promising direction for future work that could potentially further enhance the benefits of our synthetic data.

> The image/text controller is somehow unclear how it affect the quality of the synthetic data. Are them useful? There may be a lack of an ablation experiment w/wo image/text controller.

**Response 2**: We provide an ablation study in Table 6 that directly addresses this question by evaluating different controller configurations. The results clearly demonstrate that both controllers significantly contribute to data quality and downstream performance. Specifically, CtrlSynth-cap (which lacks image control) and CtrlSynth-image (which lacks text control) both underperform compared to the full CtrlSynth model with both controllers enabled.

> Has the author tried to use the original image as a constraint when synthesizing the image? This will keep the synthesized image from deviating too much from the original image and make the synthesized image more realistic.

**Response 3**: This is an excellent suggestion that aligns well with CtrlSynth's modular design philosophy.
While we did not implement this specific constraint in the current work, our framework is explicitly designed to accommodate such extensions. Using original images as additional conditioning signals could indeed help preserve certain visual characteristics while introducing targeted variations. Our current implementation demonstrates four distinct synthesis paths to showcase the framework's versatility, but the architecture readily supports incorporating image-anchored generation as suggested.

> Did the author try to get more accurate tags obtained from Florence-large + Qwen2-7B-Instruct. For example, for tag1, ask MLLM to assess whether this tag exists in the image. In this way, more accurate tags can be obtained. I am not sure if more accurate tags will help in generating higher quality synthetic data.

**Response 4**: While we did not implement this specific verification loop in our current pipeline, it represents a valuable extension that aligns with our modular design. Our experiments indicate that the current tagging approach achieves sufficient accuracy to significantly improve downstream task performance. Moreover, CtrlSynth's strength lies partly in its ability to generate diverse variations even from imperfect tags. The controllers also apply filtering policies that remove low-confidence tags. Our framework is designed to be component-agnostic, allowing straightforward integration of improved tagging models or verification mechanisms as they become available, without requiring architectural changes to the overall pipeline.
Summary: The paper introduces CtrlSynth, a controllable image-text synthesis framework designed to enhance data efficiency and address challenges in training robust vision-language models. By decomposing visual semantics into modular elements (objects, attributes, relations) and enabling fine-grained control over synthetic data generation, CtrlSynth generates high-quality, diverse multimodal samples. It outperforms baselines across 31 datasets, showing significant improvements in zero-shot classification, compositional reasoning, and long-tail task performance.

- **Fine-Grained Control via Modular Visual Tags**
Breaks down visual semantics into objects, attributes, and relations, allowing precise manipulation of synthetic data (e.g., augmenting underrepresented classes or mitigating biases). Combines hybrid visual tag extraction (captioning + multi-label classification) to improve robustness, unlike prior domain-specific methods.
- **Closed-Loop Synthesis Without Additional Training**
Leverages pre-trained models (e.g., Mistral-NeMo for text, SDXL for images) in a plug-and-play pipeline, avoiding costly retraining. Filters low-quality outputs automatically, ensuring data quality.
- **Data Efficiency and Versatility**
Achieves comparable performance with 40% fewer training iterations than baselines (Table 2, Figure 5). Outperforms on long-tail and robustness benchmarks (ImageNet-R/A/O) and compositional tasks (SugarCrepe).

Claims And Evidence:
**Limitations**
- **Adaptability of Preset Label Thresholds**
The paper does not clarify whether the “label existence ratio threshold” (used for filtering visual tags) generalizes across datasets. Experiments focus on common benchmarks (e.g., ImageNet, COCO), but domain-specific tasks might require manual threshold adjustments.
- **High Resource Consumption**
The pipeline relies on multiple heavy pre-trained models (e.g., LLMs, diffusion models).
For example: Training with SDXL (3.5B parameters) and Mistral-NeMo demands significant GPU resources (e.g., 8–32 A100 GPUs, Table 8). Repeated generation of images/texts through sequential LLM and diffusion steps scales compute costs, though no direct comparison to alternative methods is provided.

Methods And Evaluation Criteria: SEE Claims And Evidence

Theoretical Claims: SEE Claims And Evidence

Experimental Designs Or Analyses: SEE Claims And Evidence

Supplementary Material: ALL

Relation To Broader Scientific Literature: SEE Summary

Essential References Not Discussed: NO

Other Strengths And Weaknesses: SEE Claims And Evidence

Other Comments Or Suggestions: SEE Claims And Evidence

Questions For Authors: SEE Claims And Evidence

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:
Rebuttal: Thank you for your review. We clarify the filtering threshold and computation costs below:

> The paper does not clarify whether the “label existence ratio threshold” (used for filtering visual tags) generalizes across datasets. Experiments focus on common benchmarks (e.g., ImageNet, COCO), but domain-specific tasks might require manual threshold adjustments.

**Response 1**: We empirically validate the generalizability of our filtering threshold across datasets in Figure 6 (Appendix A.5). Our ablation study demonstrates that a consistent threshold value (20%) works effectively across all tested datasets without requiring domain-specific adjustments. While fine-tuning thresholds for specific domains might yield marginal improvements, our experiments show that values between 10-20% provide robust performance across diverse visual domains with minimal manual intervention.

> For example: Training with SDXL (3.5B parameters) and Mistral-NeMo demands significant GPU resources (e.g., 8–32 A100 GPUs, Table 8). Repeated generation of images/texts through sequential LLM and diffusion steps scales compute costs, though no direct comparison to alternative methods is provided.

**Response 2**: We do not train the SDXL and Mistral-NeMo models. Our method is training-free. The computational cost primarily scales with the number of synthetic samples needed. This is particularly efficient for long-tail tasks, where generating targeted synthetic samples for underrepresented classes yields substantial performance gains with minimal computational overhead compared to collecting and annotating real examples.
Summary: The paper introduces CtrlSynth, a closed-loop framework to generate synthetic data in both text and images. The core idea of the work is to decompose an image into granular components (objects and relationships) and re-compose them based on user-specified controls. This is facilitated through the use of foundational models such as an image tagging model and image and text generation model(s). Through this setup, CtrlSynth is able to create synthetic data with diverse "synthesis" paths, which enables the creation of various forms of multi-modal data. With extensive experiments on different vision and vision-language tasks, CtrlSynth substantially improves zero-shot classification, image-text retrieval, and compositional reasoning performance of CLIP models.

Claims And Evidence:
1. I am convinced that this method works in generating synthetic (image-text) pairs for CLIP-like models. The authors perform comprehensive experiments and ablations to support this claim across multiple datasets and tasks.
2. I have one major concern: (i) the paper performs all experiments on classification and retrieval tasks, by fine-tuning CLIP on their data and comparing it to baseline models. However, the method lacks any comparison on other relevant tasks such as text-to-image generation or text-based image editing. Firstly, since CtrlSynth can generate both images and text as part of its pipeline, a comparison of {real} images/captions vs. {synthetic} images/captions should be made. Furthermore, since the paper claims to be able to perform user-based edits to images, there is no quantitative evidence that supports this claim. This makes me believe that such a set-up only works on CLIP-like models, and might not be scalable to other models such as T2I models and VLMs.
3. The paper could be improved a lot with an error analysis, which could help explain the importance of the individual foundation models used in the paper.
Since there are 3 foundational models used in this work, and they are essentially treated as black boxes, they could a) each have their own failure modes and b) since their outputs depend on each other, there is the trivial case of compounding errors. Any analysis on the above would help back up the paper's claims.

Methods And Evaluation Criteria: For the domain of CLIP-like models, the authors perform comprehensive experiments across multiple benchmarks.

Theoretical Claims: N/A.

Experimental Designs Or Analyses: I do not have concerns with the experiment(s) performed in the paper. The design and the ablations provided are sound and just.

Supplementary Material: I have read the entire supplementary material.

Relation To Broader Scientific Literature: Synthetic data for CLIP has been largely studied in the last couple of years. The core contribution of this work is developing synthetic data using fine-grained concepts. However, since there is no analysis on the correctness of these fine-grained concepts (i.e., how good the vision tagging model is), it is hard to pinpoint the exact gains achieved because of it.

Essential References Not Discussed: There are some other works that the authors failed to mention/discuss:
1. https://arxiv.org/pdf/2407.05600
2. https://arxiv.org/pdf/2402.01832
3. https://arxiv.org/pdf/2410.08211

Other Strengths And Weaknesses: N/A

Other Comments Or Suggestions: N/A

Questions For Authors: Please check above.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:
Rebuttal: Thank you for acknowledging the effectiveness of our method in the current setting.

> The method lacks any comparison to other relevant tasks, such as text to image generation or text-based image editing. Firstly, since CtrlSynth can generate both images and text as part of its pipeline, comparing {real} images/captions vs {synthetic} images/captions should be made. Furthermore, since the paper claims to be able to perform user-based edits to images, there is no quantitative evidence that supports this claim. This makes me believe that such a set-up only works on CLIP-like models and might not be scalable to other models such as T2I and VLMs.

**Response 1**: We appreciate this thoughtful feedback. To clarify, CtrlSynth's primary contribution is not developing a better text-to-image model, but rather leveraging existing text-to-image models to generate diverse, controllable training data. The user-based control we describe refers to specifying desired attributes for synthetic data generation, not proposing novel image-editing techniques. Our pipeline is intentionally designed to be modular, allowing easy integration of any advanced text-to-image or image-editing methods as they become available. This flexibility ensures users maintain fine-grained control over synthetic sample characteristics while benefiting from improvements in generative technology.

Regarding scalability beyond CLIP models, while our current evaluations focus on vision-language representation learning, the synthetic data generated by CtrlSynth is model-agnostic. We have preliminary explorations suggesting potential benefits for T2I and VLMs, though comprehensive evaluation across these model families would require significant additional resources and falls outside our current scope. We will add this limitation and future direction to the discussion section.
> Since there are 3 foundational models used in this work, and are essentially treated as black-box, they could a) each have their own modes of failures and b) since their outputs depend on each other, there is the trivial case of compounding errors. Any analysis on the above will help back-up the papers' claim. There is no analysis on the correctness of these fine-grained concepts (i.e., how good is the vision tagging model)

**Response 2**: This is an excellent point about potential error propagation. Our methodology deliberately employs a redundancy-based approach where imperfections in individual components don't critically impact the overall system performance. In practice, we found that even when specific visual tags are missed or text is occasionally hallucinated, the aggregate diversity and quality of the synthetic data remains beneficial for downstream tasks. We conducted additional quality assessments of our Visual Tagging Model, finding 92% precision on a manually annotated subset of 50 images. More importantly, our ablation studies in Section 4.4 empirically demonstrate that the end-to-end system produces data that significantly improves model performance, suggesting that any noise introduced by component imperfections is outweighed by the benefits of the diverse synthetic samples. We will add a brief error analysis section to address these concerns directly.

> There are some other works that the authors failed to mention/discuss.

**Response 3**: Thank you for identifying these gaps in our literature review. We will expand our related work section to include the additional papers you referenced and discuss how our approach relates to and differs from these contributions.

---

Rebuttal Comment 1.1:
Comment: Thanks for your rebuttal. I will keep my score. The reasons are:
1. I believe results on T2I and VLMs are crucial to fully gauge the quality of this synthetic data. Effectively, CLIP is not a generative model, whereas T2I and VLMs are.
Therefore, having supporting evidence on any generative model would help the quality of the paper.
2. I would really like to see some concrete numbers for the individual components and how these numbers individually affect the final output.
Summary: This paper proposes CtrlSynth to build a closed-loop data generation pipeline. Building upon powerful foundation models, this approach generates diverse synthetic data samples depending on the text or image. It first breaks down the visual elements into visual tags, and exploits them with user control to synthesize new ones. Also, there are several pathways to build diverse types of data, which can provide flexibility to this method. Experimental results demonstrate that the generated samples from CtrlSynth are effective at improving pretraining performance on several zero-shot benchmarks.

Claims And Evidence: It is unclear why the re-synthesized data from existing images helps address the long-tail problem. Additionally, rather than improving a well-trained model (e.g., fine-tuning or parameter-efficient tuning), the approach involves pretraining from scratch to demonstrate the dataset's effectiveness. I don't find this to be a practical solution or reason to use a synthesized dataset.

Methods And Evaluation Criteria: Clear instructions on how to use the image controller are needed.

Theoretical Claims: I cannot find theoretical claims in this paper.

Experimental Designs Or Analyses: The tasks are overly focused on discrimination-based learning. Given that such LLMs and large VL models were used to generate the data, I believe it is important to also evaluate generative models, such as image generation and long-text captioning, among others.

Supplementary Material: The appendix provides the instruction prompts, training and inference details, more ablation studies, and comparisons with VeCLIP and LaCLIP.

Relation To Broader Scientific Literature: One of the key reasons for generating new or additional data is to address the scale-up challenge. However, with a model size as small as ViT-B/16 in CLIP, it is necessary to verify whether this dataset can effectively solve such practical issues in other multimodal models.
Essential References Not Discussed: n/a

Other Strengths And Weaknesses: This paper presents a dataset generation framework with various pathways for generating both images and text. Additionally, the paper claims that decomposing visual tags is a main contribution. However, a comparison is needed to demonstrate the advantages of using fine-grained tags rather than sentence forms.

Other Comments Or Suggestions: n/a

Questions For Authors: My main concern is that the primary experiments were conducted only with training CLIP from scratch at the ViT-B size. I am curious about the effects of scaling up, fine-tuning, and how this approach performs on generation tasks.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:
Rebuttal: We appreciate your feedback and have provided additional clarification below.

> It is unclear why the re-synthesized data from existing images helps address the long-tail problem.

**Response 1**: Our visual tagging model (VTM) identifies and extracts fine-grained, long-tail concepts from existing images that traditional approaches might miss. These concepts are then semantically enriched by pretrained LLMs to expand their coverage and diversity. When text-to-image models generate new images using these enhanced long-tail concepts, they create targeted examples for underrepresented categories, effectively rebalancing the distribution. Our quantitative results in Section 4.3 confirm that this approach significantly improves performance on long-tail recognition benchmarks.

> Rather than improving a well-trained model (e.g., fine-tuning or parameter efficient tuning), the approach involves pretraining from scratch to demonstrate the dataset's effectiveness. I don't find this to be a practical solution or reason to use a synthesized dataset

**Response 2**: We want to point out that the evaluation in Section 4.3 shows the effects of only fine-tuning the classifier head of the pretrained models for long-tail tasks. We show the effectiveness for both pretraining and fine-tuning, demonstrating CtrlSynth's flexibility across different practical deployment scenarios regardless of whether users prefer full pretraining or efficient adaptation of existing models.

> However, a comparison is needed to demonstrate the advantages of using fine-grained tags rather than sentence forms.

**Response 3**: Prior works like VeCLIP and LaCLIP use sentence-level captions; we show in A.7 a detailed comparison with them, and our CtrlSynth outperforms the prior works.
Our ablation studies further demonstrate that fine-grained tagging enables more precise control over specific visual attributes and concepts that might even be omitted in the accompanying natural language sentences, particularly for long-tail categories.

> The primary experiments were conducted only with training on CLIP from scratch and the ViT-B size.

**Response 4**: We study small and large ViT backbones (ViT-H and ViT-L) in Table 9 in Appendix A.5. We show that CtrlSynth consistently improves baselines across different backbone scales, confirming that our approach complements architectural scaling and remains effective regardless of model capacity.

> I am curious about the effects of scaling up, fine-tuning, and how this approach performs on generation tasks.

**Response 5**:
- For model architecture scaling, we show the effectiveness of CtrlSynth for small and large backbones (see Response 4); for data scaling, we show CtrlSynth is effective across different sample sizes from 3M and 12M to 200M (Table 10 on page 19) and 400M (Table 11 on page 19) and outperforms prior works.
- We have demonstrated the effectiveness of CtrlSynth in the fine-tuning setting as well. Please see Response 2 above.
- Extending CtrlSynth to generation tasks such as image generation is an important future direction. The main goal of CtrlSynth is to demonstrate the effectiveness and controllability of diverse text-image synthesis across different settings, including image-text datasets and vision long-tail datasets. That said, while we believe that discriminative tasks are an important domain in themselves, our data synthesis approach is not limited to these task types. Users can use the synthetic data for both understanding and generation tasks. Due to resource budget limitations and the scope of this work, we leave exploring CtrlSynth data for training LLMs or generative multimodal models for future work.
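The long-tail rebalancing idea in Response 1 (synthesize targeted examples until underrepresented classes catch up) can be sketched with a simple generation-budget helper; the function name, class labels, and counts below are illustrative assumptions, not taken from the paper:

```python
from collections import Counter

def tail_generation_budget(labels, target=None):
    """How many synthetic samples to request per class so that every
    class reaches the size of the largest (or a given target) class.
    """
    counts = Counter(labels)
    target = target or max(counts.values())
    # Only underrepresented classes get a nonzero generation budget.
    return {cls: target - c for cls, c in counts.items() if c < target}

# Toy label distribution with two severely underrepresented classes.
labels = ["cat"] * 50 + ["dog"] * 45 + ["axolotl"] * 3 + ["okapi"] * 2
budget = tail_generation_budget(labels)
```

In a pipeline like the one described, each entry of `budget` would drive how many prompts are built from that tail class's (LLM-enriched) visual tags and sent to the text-to-image backend.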
---

Rebuttal Comment 1.1:
Comment: Thank you for your sincere answers to my questions. Most of my questions have been resolved, and I am raising my rating.
Thinking LLMs: General Instruction Following with Thought Generation
Accept (poster)
Summary: This paper introduces Thinking LLMs, a novel approach aimed at improving general instruction following in large language models (LLMs) by explicitly incorporating internal thought processes before generating responses. Traditional LLMs respond directly to user instructions without intermediate reasoning steps, which can be inefficient for complex queries. The proposed method, Thought Preference Optimization (TPO), enables LLMs to generate and refine internal thoughts in an unsupervised manner without requiring additional human-annotated data.

The key idea behind TPO is to:
1. Generate multiple candidate thought-response pairs for a given instruction using an instruction-tuned LLM.
2. Evaluate the responses using a reward model that only assesses the response quality, not the thought process itself.
3. Optimize the thought generation through Direct Preference Optimization (DPO) by selecting the thought-response pairs that lead to the highest-rated responses.
4. Iteratively refine the model, ensuring that the generated thoughts consistently contribute to better response quality.

The paper provides empirical validation across multiple instruction-following benchmarks, demonstrating that TPO-trained models:
- Outperform standard LLMs on AlpacaEval (+4.1% win rate) and Arena-Hard (+4.3% win rate).
- Improve response quality not just in reasoning tasks but also in non-reasoning domains like marketing, health, and general knowledge.
- Reduce reliance on manual Chain-of-Thought (CoT) prompting by allowing LLMs to learn when and how to think.

Claims And Evidence: The paper presents several claims that are supported by empirical evidence.

Claim 1: Thinking LLMs improve instruction-following accuracy across various tasks. The experiments on AlpacaEval and Arena-Hard demonstrate consistent gains over direct-response baselines, with TPO achieving a 52.5% win rate on AlpacaEval and 37.3% on Arena-Hard.
A category-wise analysis indicates that TPO benefits both reasoning and non-reasoning tasks, unlike prior CoT-based methods that primarily enhance mathematical and logical reasoning.

Claim 2: Thought generation can be optimized without direct human supervision. The proposed reward-based optimization strategy allows the model to learn effective thought processes without requiring labeled thought data. Through Direct Preference Optimization (DPO), the system refines thought generation iteratively, producing better responses over time. Ablation studies confirm that models trained without TPO fail to achieve similar improvements, reinforcing that the optimization of thought generation is a crucial component.

Claim 3: Thinking helps general instruction-following beyond traditional reasoning tasks. The fine-grained analysis (Figure 4) demonstrates improvements in domains such as marketing, translation, and content writing, areas where CoT prompting was previously considered ineffective. This contrasts with prior research (e.g., Sprague et al., 2024), which suggested that CoT primarily benefits logic-based tasks.

Methods And Evaluation Criteria: The paper follows a rigorous evaluation framework:
- Uses AlpacaEval and Arena-Hard, widely accepted benchmarks for instruction-following models.
- Compares against strong baselines, including direct-response models and those trained with generic thought prompting.
- Provides a detailed breakdown of performance across 20 instruction categories, illustrating the types of tasks that benefit most from TPO.
- Conducts ablation studies to evaluate the impact of: different thought prompting strategies (generic vs. specific), the effect of different reward models (ArmoRM vs. STE), and the impact of iterative training on thought refinement.

Potential improvements:
- Human evaluation of response quality would provide stronger validation of TPO’s effectiveness.
Additional experiments on real-world applications (e.g., dialogue systems, tutoring systems) would further demonstrate the practical benefits of TPO.

Theoretical Claims: The paper does not focus on formal theoretical contributions but provides strong empirical justification for the proposed approach. The method is based on Reinforcement Learning from AI Feedback (RLAIF) and Direct Preference Optimization (DPO), but lacks a formal theoretical analysis of thought generation dynamics. A deeper exploration of whether TPO converges to optimal thought processes or if certain types of errors persist over training iterations would strengthen the theoretical foundation.

Experimental Designs Or Analyses: The experimental setup is well-structured and controlled:
* Uses standardized prompts, models, and evaluation protocols.
* Clearly demonstrates win rate improvements over baselines.
* Implements a length-control mechanism to ensure response quality does not simply improve due to verbosity.

Areas that could be expanded:
* Failure case analysis is limited. The paper does not discuss scenarios where TPO might fail, such as backtracking-heavy tasks or reasoning tasks requiring multiple revisions.
* Automated evaluation reliance: the paper depends on GPT-4-based judges, which may introduce evaluation biases that are not addressed.

Supplementary Material: The supplementary material includes:
* Expanded experimental results, including category-wise performance comparisons.
* Additional ablations on different prompt types and training setups.
* Examples of thought-response pairs, illustrating qualitative benefits of TPO.

A stronger discussion of failure cases and potential limitations would further enhance the supplementary content.

Relation To Broader Scientific Literature:
* Chain of Thought (CoT) (Wei et al., 2022) – Demonstrated the benefits of explicit reasoning but was mostly effective for math and logic tasks.
* Reinforcement Learning from AI Feedback (RLAIF) (Bai et al., 2022) – Introduced reward-based optimization for LLMs, but TPO adapts this for thought generation.
* Quiet-STaR (Zelikman et al., 2024) – Explored unsupervised thought generation but was primarily focused on structured reasoning tasks.
* DeepSeek-R1 (Guo et al., 2025) – Used reinforcement learning to train structured reasoning templates, whereas TPO provides a more flexible, data-driven approach.

Essential References Not Discussed: Algorithm of Thoughts: Enhancing Exploration of Ideas in Large Language Models

Other Strengths And Weaknesses:

Strengths:
* The paper presents an original approach to optimizing thought generation in LLMs, extending beyond traditional Chain-of-Thought (CoT) techniques.
* The proposed Thought Preference Optimization (TPO) method is well-motivated and effectively eliminates the need for manually annotated thought supervision, making it scalable and adaptable across different tasks.
* The empirical evaluation is comprehensive, covering AlpacaEval, Arena-Hard, and category-wise breakdowns, providing strong evidence that TPO improves instruction-following models in both reasoning and non-reasoning tasks.
* The paper is well-written and structured, making it easy to follow, with clear explanations of methodology, evaluation, and experimental findings.
* The ablation studies effectively demonstrate the contribution of different components of TPO, particularly the role of DPO in refining thought generation.

Weaknesses:
* The paper does not explore failure cases in depth. It would be helpful to provide more insights into when and why TPO fails, particularly in tasks that require iterative backtracking or fine-grained numerical precision.
* The evaluation relies entirely on GPT-4-based reward models, which introduces potential biases. A complementary human evaluation would help confirm the effectiveness of TPO in real-world scenarios.
* The work primarily focuses on GPT-based models.
It remains unclear how well TPO generalizes to open-source models such as LLaMA, Mistral, or Claude. While the method is technically sound, there is no theoretical analysis of whether TPO converges to an optimal thought process or if certain errors persist over time.

Other Comments Or Suggestions:
* Providing examples of failure cases in the supplementary material would help clarify the limitations of TPO.
* A discussion of the computational efficiency of TPO relative to existing approaches would be useful. Does training with TPO introduce significant additional overhead compared to standard instruction tuning?
* The paper could benefit from more real-world application demonstrations, such as deploying TPO-trained models in interactive agents or dialogue systems.

Questions For Authors:
* How does TPO handle cases where the initial thought generation is incorrect? Does the model attempt self-correction, or does it remain committed to the initial flawed reasoning path?
* What are the primary failure modes of TPO? Are there specific task types where the thought-generation process degrades performance instead of improving it?
* Can TPO be integrated with retrieval-augmented generation (RAG) systems? Would the model's internal thought process benefit from external knowledge retrieval, and how would that affect optimization?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
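For concreteness, the TPO data-collection step summarized in this review — sample several thought+response candidates per prompt, score only the responses with a reward model, and keep the best/worst as a DPO preference pair — could be sketched roughly as follows. This is a minimal sketch, not the paper's implementation; `generate`, `score_response`, and `k` are hypothetical stand-ins for the thought-prompted LLM, the response-only reward model, and the number of samples.

```python
def tpo_collect_pairs(prompts, generate, score_response, k=4):
    """One TPO data-collection round (hedged sketch).

    `generate(prompt)` is assumed to return a (thought, response) tuple;
    `score_response(prompt, response)` scores only the response part.
    For each prompt, the full thought+response sample whose *response*
    scored highest becomes the chosen sequence, the lowest-scoring one
    the rejected sequence, yielding DPO preference pairs.
    """
    pairs = []
    for prompt in prompts:
        # Sample k candidate (thought, response) pairs for this prompt
        samples = [generate(prompt) for _ in range(k)]
        # Rank by reward on the response only (index 1); thought is unjudged
        scored = sorted(samples, key=lambda s: score_response(prompt, s[1]))
        chosen, rejected = scored[-1], scored[0]
        pairs.append({"prompt": prompt, "chosen": chosen, "rejected": rejected})
    return pairs
```

The resulting pairs would then feed a standard DPO update over the entire thought+response sequences, which is what lets thought quality be optimized indirectly through response rewards.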
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback. Below, we address your concerns and propose our revisions. --- > Failure case analysis is limited. The paper does not discuss scenarios where TPO might fail, such as backtracking-heavy tasks or reasoning tasks requiring multiple revisions. Automated evaluation reliance—the paper depends on GPT-4-based judges, which may introduce evaluation biases that are not addressed. Our results on GSM8K, where both TPO and standard DPO degraded performance, illustrate a limitation. We attribute this primarily to the reward model's limitations in accurately judging complex, multi-step reasoning, highlighting that reward model quality is critical, especially for such tasks. We also acknowledge the potential for inherent biases in automated evaluations using LLM-based judges like GPT-4. To mitigate this, we implemented controls where feasible; for instance, output length was controlled across compared methods to prevent models from exploiting potential length bias in the automated evaluator. > Essential References Not Discussed We agree these references are relevant and appreciate the reviewer pointing them out. We'll discuss these papers in our final draft. > The paper does not explore failure cases in depth. It would be helpful to provide more insights into when and why TPO fails, particularly in tasks that require iterative backtracking or fine-grained numerical precision. The evaluation relies entirely on GPT-4-based reward models, which introduces potential biases. A complementary human evaluation would help confirm the effectiveness of TPO in real-world scenarios. We agree that a deeper analysis of failure cases would be valuable. Although the patterns TPO exhibits are generally helpful, such as drafting a reminder list for the answer (Figures 6 & 15), refining the response (Figures 5 & 16), and reflecting on it, we also observed cases where TPO failed.
Notably, in Figure 18, we found the model exhibits non-terminating behavior by iteratively criticizing its own answer without being able to produce a fix. This indicates the model might have a tendency to deviate from the thought-answer structure, despite being specifically trained to follow the structure. > The work primarily focuses on GPT-based models. It remains unclear how well TPO generalizes to open-source models such as LLaMA, Mistral, or Claude. While the method is technically sound, there is no theoretical analysis of whether TPO converges to an optimal thought process or if certain errors persist over time. Regarding the request for theoretical analysis, we are not aware of any theoretical analysis of the optimality of TPO. However, our intuition for why TPO works is that it provides more freedom for the LLM to explore during the RL process by placing no constraint on the thought part (since the judge does not see what is inside the thought), though some errors might persist, e.g., not following the format, non-stopping behaviors, etc. > Does training with TPO introduce significant additional overhead compared to standard instruction tuning? TPO training does not add implementation complexity compared with DPO, but potentially generates slightly more tokens than standard DPO due to the thought part. > The paper could benefit from more real-world application demonstrations, such as deploying TPO-trained models in interactive agents or dialogue systems. These are interesting ideas to explore in our future work. > How does TPO handle cases where the initial thought generation is incorrect? Does the model attempt self-correction, or does it remain committed to the initial flawed reasoning path? Figure 18 provides an example where the model recognized an error but failed to recover or correct it. > What are the primary failure modes of TPO? Are there specific task types where the thought-generation process degrades performance instead of improving it?
One primary failure mode stems from the reward model's limitations. If the reward model struggles to accurately judge the final answers for certain types of tasks (like complex reasoning), it can provide poor guidance, causing TPO to optimize in the wrong direction and potentially degrade performance further. Another observed failure, illustrated in Figure 18, involves process instability, where the model might enter non-stopping loops, such as repeated self-criticism without resolution, or deviate from the intended thought-answer structure. > Can TPO be integrated with retrieval-augmented generation (RAG) systems? Would the model's internal thought process benefit from external knowledge retrieval, and how would that affect optimization? Thank you for the insightful comment; unfortunately, we are unable to provide a direct answer at this time, but it is an interesting topic worth exploring in the future. --- We greatly appreciate the reviewer's constructive feedback, which significantly enhances the quality and clarity of our work.
Summary: Authors propose TPO, a method that finetunes an instruction-tuned LLM to output discrete thought tokens for harder tasks, without any supervision signal. The model undergoes iterative RLAIF preference learning, where the reward model comes from a judge model that judges based on the LLM's final answer. Finally, authors show TPO achieves performance improvement against baselines on AlpacaEval and Arena-Hard.

Claims And Evidence: Yes

Methods And Evaluation Criteria:
* TPO uses a judge that is finetuned from Llama 70B. What if the authors simply prompt the 70B model with the thought prompts from Figure 2, and SFT the smaller Llama 8B with this dataset? How would simply doing this SFT compare with using TPO? An even simpler variant is to just use the 70B to generate without the thought prompts, and SFT on this dataset.
* How would TPO compare with parallel sampling techniques? I.e., sample multiple answers and use the judge model to select the final answer.
* In the related works section, authors mention several prior works on thinking tokens that also do not require any labeled data (STaR, Quiet-STaR); is there a reason why authors didn't compare empirically with those two baselines?

Theoretical Claims: N/A

Experimental Designs Or Analyses: N/A

Supplementary Material: No.

Relation To Broader Scientific Literature: The authors propose TPO, a method for LLMs to learn to generate thinking tokens during inference time. Test-time compute scaling is an important and timely topic for LLM capabilities research.

Essential References Not Discussed: N/A

Other Strengths And Weaknesses:

Strengths:
* Paper writing and figure illustrations are clear and easy to digest.
* Comprehensive evaluation and ablations.
* The method proposed is simple and achieves good results in the evaluation tasks.
* CoT has more commonly been associated with reasoning tasks, so the surprising results showing TPO improves on various instruction-following tasks are interesting to see.
Weaknesses:
* Missing baselines that authors should include (see questions below).
* See more questions below.

Other Comments Or Suggestions: N/A

Questions For Authors:
* Can we also see a variant of Figure 4 with win rate against the seed model? As authors have already mentioned, CoT only helps with math/logic related tasks, so the seed model should be a stronger baseline here for most of the categories shown.
* Authors show TPO does worse than baselines in GSM8K; any ideas why it does better than the baseline in the "math and calculation" subset of UltraFeedback?
* Why is it problematic that "the seed model … uses CoT anyway due to its instruct training"? If it is because the accuracy is too high for further improvement, can authors also evaluate on MATH, which is a harder task than GSM8K?
* Authors mentioned that the poor performance on GSM8K might be "due to our setup not being oriented toward such tasks". Have the authors tried to simply add the training examples in GSM8K as a part of the training data for TPO?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the valuable and insightful feedback. We address your concerns as follows: --- > How would simply doing this SFT compare with using TPO? That's an insightful question regarding potential training alternatives. The Llama 70B model in our study functions strictly as a reward model, similar to the other RM evaluated (ArmoRM), providing reward signals rather than generating target outputs for supervision. Our TPO framework utilizes these reward signals and is agnostic to the specific nature of the reward model (e.g., a traditional RM or an LLM-as-a-Judge). This contrasts fundamentally with the proposed alternatives, which would use the 70B model as a generator to create a dataset for supervised fine-tuning (SFT). Because TPO operates via preference optimization driven by reward signals, while the suggested approaches rely on SFT using generated targets, a direct comparison might not be appropriate. > How would TPO compare with parallel sampling techniques? I.e. sample multiple answers and use the judge model to select the final answer. We appreciate the reviewer raising the comparison with inference-time techniques. TPO, as a training-time optimization method, is orthogonal to inference-time techniques such as sampling multiple responses and selecting the best using a reward model. These approaches are not mutually exclusive; applying inference-time sampling and selection strategies to a model already optimized with TPO is a plausible approach. Investigating the interplay and relative benefits of TPO training versus various inference-time selection methods remains an interesting direction for future work. > authors mention several prior works on thinking tokens that also do not require any labeled data (STaR, Quiet-STaR), is there a reason why authors didn't compare empirically with those two baselines? Thank you for asking about the comparison with STaR and Quiet-STaR.
STaR requires ground truth answers, and Quiet-STaR relies on supervised fine-tuning data. In contrast, TPO requires only unlabeled prompts and preference signals from a reward model. These fundamental differences in data make a direct empirical comparison challenging. Furthermore, STaR and its variants were primarily evaluated on reasoning-intensive tasks, whereas our work focuses on general instruction-following capabilities. > Can we also see a variant of Figure 4 with win rate against the seed model? As authors have already mentioned, COT only helps with math/logic related tasks, so seed model should be a stronger baseline here for most of the categories shown. We understand the reviewer's interest in seeing a direct comparison against the base seed model. The 'seed model' refers to the base pre-trained model prior to any preference optimization. The 'direct baseline' actually presented in Figure 4 is this seed model after it has undergone standard Direct Preference Optimization (DPO) training directly on final answers. This DPO-optimized direct baseline is generally stronger than the original un-optimized seed model, making it the more relevant and challenging point of comparison for evaluating TPO. > Authors show TPO do worse than baselines in GSM8K, any ideas why it does better than the baseline in the "math and calculation" subset of ultrafeedback? This is a keen observation regarding the differing performance on math-related datasets. These two datasets exhibit significant distributional differences in the types of mathematical problems presented. We hypothesize it's because the reward model we used is better at judging questions in the UltraFeedback distribution but worse on the GSM8K distribution. > Why is it problematic that "the seed model … uses CoT anyway due to its instruct training"? If it is because the accuracy is too high for further improvement, can authors also evaluate on MATH, which is a harder task than gsm8k?
We understand the reviewer's query regarding the implications of the seed model's inherent CoT capabilities. The goal of that experiment was to understand the effect of thinking and CoT on GSM8K performance. In particular, we tried to measure the accuracy of the seed model when directly outputting answers without any CoT. However, this was tricky to measure because the model often did CoT before answering, sometimes even when explicitly instructed not to do CoT. > Authors mentioned that the poor performance on GSM8K might be "due to our setup not being oriented toward such tasks". Have the authors tried to simply add the training examples in GSM8K as a part of the training data for TPO? That's a relevant suggestion concerning the training data composition for specific benchmarks. We did not include examples from the GSM8K training set within the dataset used for TPO training in this study. This is an interesting direction which we'll leave for future work. --- We deeply appreciate the reviewer's feedback and hope our responses fully address your concerns. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for answering my questions and providing further clarifications. I will maintain my score.
Summary: The paper proposes a method to enhance LLMs by enabling them to "think" explicitly before generating responses. This is aimed at improving performance on complex tasks requiring reasoning and planning, as well as general instruction-following tasks. The authors introduce the so-called Thought Preference Optimization (TPO), a training method that equips LLMs with the ability to generate internal "thoughts" before producing a response. These thoughts are hidden from the user and serve as an intermediate step to improve response quality. The method does not require additional human-labeled thought data.

Claims And Evidence:
1. Claim: Generated thoughts are meaningful and improve response quality. The paper focuses on optimizing thoughts based on the quality of the final response but does not rigorously evaluate the quality of the thoughts themselves. For example, are the thoughts interpretable and aligned with the optimal or correct human-like reasoning steps?
2. Claim: TPO introduces a novel and significant advancement in LLM training. While the method is well-executed, its core idea—using preference optimization to improve intermediate outputs—is not fundamentally novel. It builds heavily on existing techniques like Direct Preference Optimization (DPO) and Reinforcement Learning from AI Feedback (RLAIF). A clearer comparison with existing methods (e.g., standard DPO, CoT) and a discussion of how TPO uniquely advances the field would strengthen this claim.

Methods And Evaluation Criteria:
1. The proposed method focuses on optimizing thoughts based on response quality but seems to lack a rigorous evaluation of the quality of the thoughts themselves. For example, are the thoughts interpretable and meaningful?
2. The proposed method underperforms on math tasks (e.g., GSM8K), which is surprising given the emphasis on reasoning. This suggests that TPO may not be well-suited for tasks requiring precise, step-by-step reasoning.
The authors attribute this to the small proportion of math-related instructions in the training data, but this explanation feels insufficient.
3. The paper claims that specific thought prompts (e.g., drafting and evaluating responses) perform slightly better than generic prompts, but the difference is marginal. This raises questions about whether the added complexity of specific prompts is justified.

Theoretical Claims: No theoretical claims.

Experimental Designs Or Analyses:
1. The paper claims that specific thought prompts (e.g., drafting and evaluating responses) perform slightly better than generic prompts, but the difference is marginal. This raises questions about whether the added complexity of specific prompts is justified.
2. It would also be helpful to evaluate the computational cost of generating and optimizing thoughts relative to the improvements in response quality. This would help establish the method's practical value.
3. It may also be helpful to investigate why TPO underperforms on math tasks and explore ways to improve its performance in this domain.

Supplementary Material: No.

Relation To Broader Scientific Literature: The key contributions of the paper are related to areas like CoT prompting, thought generation, iterative training, and self-improvement.

Essential References Not Discussed: N/A

Other Strengths And Weaknesses:

Strengths:
1. The paper evaluates the proposed method, Thought Preference Optimization (TPO), on well-established benchmarks like AlpacaEval 2 and Arena-Hard, demonstrating clear improvements over baselines. The use of GPT-4 as an auto-evaluator adds credibility to the results.
2. The fine-grained evaluation on 20 categories (e.g., marketing, health, math) provides valuable insights into where TPO excels, showing gains even in non-reasoning tasks.
3.
The finding that TPO improves performance in non-reasoning tasks (e.g., marketing, health) is interesting and suggests that "thinking" may have broader applications than previously assumed.

Weaknesses:
1. The method builds on existing techniques (e.g., DPO, RLAIF) and does not introduce significant algorithmic novelty.
2. The quality of the generated thoughts is not rigorously evaluated, and their interpretability remains unclear.
3. The practical benefits of the method, such as computational cost and real-world applicability, are not thoroughly analyzed.
4. The paper claims that specific thought prompts (e.g., drafting and evaluating responses) perform slightly better than generic prompts, but the difference is marginal. This raises questions about whether the added complexity of specific prompts is justified.

In summary, while the application of preference optimization to thought generation is interesting, the core idea builds heavily on existing techniques like DPO and RLAIF. The paper does not introduce a fundamentally new paradigm, making the work feel incremental.

Other Comments Or Suggestions:
1. How interpretable are the generated thoughts? Can they be used to debug or explain the model's decision-making process? Are there examples where the thoughts are nonsensical or misleading, despite leading to good responses?
2. It would be better to include a qualitative analysis of the generated thoughts to assess their interpretability, alignment with human reasoning, and potential utility for model transparency.
3. It would be better to evaluate the computational cost of generating and optimizing thoughts relative to the improvements in response quality.
4. It would be better to investigate why TPO underperforms on math tasks and explore ways to improve its performance in this domain.

Questions For Authors: See above.

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the thoughtful and constructive feedback. We address each concern raised and propose revisions: --- > Are the thoughts interpretable and aligned with the optimal or correct human-like reasoning steps? While our methodology does not impose explicit constraints on the structure or content of the thoughts during reinforcement learning, our analysis revealed the emergence of patterns that resemble effective human problem-solving strategies. These include:
- Generating Checklists/Key Points: As seen in Figures 6 and 15, the model often formulates preliminary lists of elements to include or emphasize in the final answer.
- Iterative Refinement: Figures 5 and 16 show the model refining potential answers within the thought process, adding arguments or examples.
- Self-Correction and Evaluation: Figure 18 illustrates the model engaging in self-correction, sometimes even overriding pre-specified formats.
> It's better to include a qualitative analysis of the generated thoughts to assess their interpretability, alignment with human reasoning... We agree that a dedicated qualitative analysis assessing thought interpretability is a valuable suggestion. However, rigorously and explicitly evaluating the intrinsic quality of machine-generated thoughts presents significant challenges. First, benchmarks specifically designed for assessing thought processes are not well-established. Second, defining objective criteria for a "good" thought is inherently complex. For instance, is a thought process containing flawless step-by-step logic but ultimately unhelpful superior to one that suggests a promising overall direction despite containing intermediate errors? Developing methods for more direct evaluation of thought quality itself remains an important direction for future research. > It builds heavily on existing techniques like Direct Preference Optimization (DPO) and Reinforcement Learning from AI Feedback (RLAIF)...
We wish to clarify that the core innovation of TPO is not presented as a fundamentally new type of reinforcement learning or preference optimization algorithm itself. The primary objective of TPO is to demonstrate that substantial performance gains are achievable by affording the model this unjudged "thinking space," without requiring significant alterations to the underlying training algorithm architecture. We provide empirical evidence supporting this claim through direct comparisons with standard DPO and a CoT approach (representing iteration 0, before TPO). As presented in our results, optimizing the hidden thought process via TPO yields significant performance improvements over these methods. > ...specific thought prompts perform slightly better than generic prompts, but the difference is marginal... Specific thought prompts introduce negligible implementation complexity because the TPO training methodology remains identical irrespective of the prompt format. While average performance gains across all tasks may appear modest, specific prompts yield substantial improvements on challenging benchmarks, notably Arena-Hard. > It is better to also evaluate the computational cost... We'll add discussion and plots of the generation cost vs. the final performance in our final draft. > investigate why TPO underperforms on math tasks... We hypothesize this may stem from the deficiency of the reward model we used, which focused on general instruction following rather than specialized mathematical reasoning. For domains requiring strict correctness like mathematics, directly evaluating answer accuracy against reference solutions might be more effective than relying solely on the preference judge. Incorporating such correctness checks represents a promising direction for future work. > the core idea builds heavily on existing techniques like DPO and RLAIF...
TPO's contribution lies not in proposing a new optimization algorithm variant, but in demonstrating the methodology of optimizing a free-form thought process based only on judgments of the final answer. This indirect supervision approach facilitates the emergence of complex reasoning behaviors during training, which are not seen in standard answer-only training. > Are there examples where the thoughts are nonsensical or misleading, despite leading to good responses We did not observe cases where nonsensical thoughts produced high-quality final answers; we hypothesize the KL regularization used during training helps maintain coherent and interpretable thought structures by penalizing significant deviations from the base model's distribution. > It's better to evaluate the computational cost of generating and optimizing thoughts relative to the improvements in response quality. We'll add analysis in the final draft. --- We greatly appreciate the reviewer's constructive feedback, which improves the clarity of our work. We respectfully ask the reviewer to reconsider their rating, given our revisions and clarifications.
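As context for the DPO training and KL regularization discussed in the reviews and rebuttals above, here is a minimal numerical sketch of the standard per-pair DPO objective. The argument names are illustrative (not from the paper); in TPO the log-probabilities would cover the full thought + response sequence, while the chosen/rejected labels come from scoring the response alone.

```python
import math

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """DPO loss for one preference pair (hedged sketch).

    Each argument is the summed log-probability of an entire
    thought + response sequence under the policy (pi_*) or the
    frozen reference model (ref_*).
    """
    # Implicit rewards are log-prob ratios against the reference model;
    # this reference-relative form is what carries the KL regularization.
    margin = beta * ((pi_chosen - ref_chosen) - (pi_rejected - ref_rejected))
    # Negative log-sigmoid of the margin: small when the policy
    # already prefers the chosen sequence over the rejected one.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

With equal policy and reference log-probabilities the margin is zero and the loss is log 2; raising the policy's log-probability on the chosen sequence lowers the loss, which is the gradient signal that indirectly shapes the thoughts.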
Summary: This paper presents a method and studies how to get LLMs to output initial thought traces before a final answer on instruction-following tasks. Their main idea is to prompt LLMs to initially produce these thought traces before a final response, score just the final response with an LLM-as-a-judge, and train over pairs of entire thought + final response sequences rated by just the response parts using DPO. Doing so enables training "thinking LLMs" without the need for any human-sourced thoughts, and leads to improved performance on AlpacaEval and Arena-Hard benchmarks (>50% win rate against direct finetuning on good responses). They finally perform finer-grained analysis to see where the thinking helps across instruction topics, and study properties such as the impact of initial prompt type and thought lengths.

Claims And Evidence: The claim of "investigating the possibility of converting existing LLMs into Thinking LLMs that work across a wide variety of tasks, without any additional data" is supported, albeit modestly.

**Support / positives**
* Experiments demonstrate that this paradigm works through their proposed method
* The authors pick several interesting axes of study / points of comparison, such as (1) prompt for structuring the initial thoughts, (2) different judge models, (3) post-training procedure (DPO vs IRPO)
* They also find some interesting empirical nuggets, such as the initial responses following thoughts being worse than the initial instruction-tuned model's, but then improving via the procedure. I found the example of self-correction on GSM8K also a nice highlight, though it would have been interesting to see how frequently this phenomenon occurred

**Insufficiencies / negatives**
* **Lack of model support**. All the experiments for the model generation are done on 1 model: Llama 3.1 8B Instruct.
While this is a pretty modest model not known for having "reasoning" capabilities (i.e., it's cool to see these thinking traces emerge here), just showing the evaluation on one model seems insufficient re: the scope of a general method, and "converting existing LLMs into Thinking LLMs" at large.
* **Lack of understanding or insight into why thoughts help**. I appreciated the study into the different topics and how thinking could help to varying degrees, as well as the examples of when thinking helped or hurt in the appendix. However, I would have liked to see more (hypotheses + validation or not) on why the thought processes help, especially on non-reasoning instruction-following tasks.
  * For example, what kinds of patterns or additional context emerge from the thoughts that contribute to higher quality responses?
  * Do different patterns emerge for different topics?
  * How robust is this emergence? e.g., the Figure 15 response example doesn't strike me as something that truly benefits from the thoughts. The response also seems like it could have come from the model without thoughts.
* **Justification for parts of the method**. I appreciated the study on length-control and DPO vs IRPO, but some parts of the method came across a bit ad-hoc. e.g., what was the motivation for building the preference pairs as proposed? Could other techniques work?
* Regarding the question of studying whether this is possible at all, I'm curious if this result comes strictly from preference optimization (TPO), or whether we could get Thinking LLMs purely via an outcome-based signal (answering the question or not) and techniques that use only that signal (KTO, SFT after sampling for positive sequences, RLVR) Nit, but the phrasing "we allow the model to independently learn to think" (L079) is a bit misleading, given that we use *additional* (larger) LLMs for prompt generation (Llama 70B) and response scoring (STE, ArmoRM) Methods And Evaluation Criteria: I think AlpacaEval and Arena-Hard are reasonable given the span of instruction categories (e.g., marketing, health and general knowledge). However, insofar as the paper contrasts against the popular "logic-based tasks like math or coding", did the authors consider additionally broadening the tasks or "skills" beyond general instruction-following (and evaluation based on LLM-as-a-judge preferences)? * For example, can TPO and reasoning help with non-logic tasks like summarization or question-answering over (long) contexts? The method comparison is also a bit lacking, where I think the authors should at least compare against STaR. As the authors point out in their related work (L431): > However, these methods rely on supervised training so ground-truth thought data is required. STaR (Zelikman et al., 2022) removes this constraint by generating both thought and answer from a model using few-shot prompting. Given that STaR also enables reasoning thoughts without the need for human thought data, and can be applied to instruction-following tasks (e.g., filtering by using an LLM-as-a-judge on whether the response satisfies the instruction or not), it seems worth comparing to assess the novelty + impact of contribution for TPO. Theoretical Claims: N/A. No theoretical claims made. Experimental Designs Or Analyses: Yes. I checked model comparison, benchmark selection, and ablations.
See issues pointed out in **Claims And Evidence** and **Methods And Evaluation Criteria**. Namely: * Lack of support for Thinking LLMs beyond Llama 3.1 8B Instruct * Evaluation on "only" AlpacaEval and Arena-Hard. Beyond the topic-based granularity breakdown, I think looking into the nature of the skills needed to follow the instructions faithfully (e.g., question-answering, summarization, content generation) and comparing this to the improvement in performance via the thoughts would be more insightful (an example of this is the factoid highlight in Figure 5, though a more systematic analysis would be better). Supplementary Material: Yes. Experimental and implementation details (Evaluation, ELO computation). Additional artifacts (thought examples). Relation To Broader Scientific Literature: The authors show interesting results where thinking can emerge for non logic-based tasks, can be done without needing to do SFT on human thought traces, and can help non-logic based task quality. Essential References Not Discussed: N/A Other Strengths And Weaknesses: See supports / positives in above claims response. Other Comments Or Suggestions: N/A Questions For Authors: 1. Did the authors consider studying the cost-quality trade-off between having to generate more tokens in the hidden thoughts vs direct prompting? Code Of Conduct: Affirmed. Overall Recommendation: 2
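A minimal sketch of the pair-construction step summarized at the top of this review (the helper names `generate`/`judge`, the thought-tag format, and the sampling count are illustrative assumptions, not the paper's exact recipe):

```python
# Hypothetical sketch: sample k thought+response candidates, score ONLY the
# response part with a judge, and keep the best/worst FULL sequences as a
# DPO preference pair. `generate` and `judge` are stand-in callables.
def build_dpo_pair(prompt, generate, judge, k=4):
    scored = []
    for _ in range(k):
        full = generate(prompt)                  # "<thought>...</thought>" + response
        response = full.split("</thought>")[-1]  # the judge never sees the thought
        scored.append((judge(prompt, response), full))
    scored.sort(key=lambda pair: pair[0])
    rejected, chosen = scored[0][1], scored[-1][1]
    return chosen, rejected                      # DPO trains on full thought+response
```

Scoring only the response while optimizing the whole sequence is what lets useful thoughts be reinforced indirectly, without any human thought supervision.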
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the thoughtful and constructive feedback. We address your concerns and propose corresponding revisions: --- > Lack of model support. All the experiments for the model generation are done on 1 model: Llama 3.1 8B Instruct... We acknowledge the reviewer's concern regarding model diversity. While we aim to conduct further evaluations involving larger models such as the latest Llama 3.3 70B and Deepseek R1, our current model selection is primarily constrained by limited training resources. Subject to the availability of time and resources, we intend to expand our experimental scope to include a wider range of models. However, this expansion may be relegated to future work. > Lack of understanding or insight into why thoughts help... For example, what kinds of patterns or additional context emerge from the thoughts that contribute to higher quality responses? We agree that understanding why thoughts contribute to better responses is crucial. Our analysis identified several patterns within the thought processes that correlate with higher-quality final responses: - **Generating Checklists/Key Points:** As illustrated in Figures 6 and 15, the model often drafts a preliminary list of essential elements or topics it determines should be included or emphasized in the final answer, effectively creating a structured plan. - **Iterative Refinement:** Figures 5 and 16 demonstrate instances where the model refines its potential answer. This includes augmenting initial drafts with stronger arguments, incorporating more specific examples, or restructuring the content for clarity. - **Self-Correction and Evaluation:** Figure 18 provides an example of the model engaging in self-correction and evaluation. Notably, this reflective behavior can emerge even when it leads to deviations from pre-specified output formats. 
These observed behaviors are consistent with established strategies known to enhance the quality and reliability of responses generated by large language models. > Do different patterns emerge for different topics? How robust is this emergence? ... We appreciate the reviewer inquiring about the robustness and specificity of these patterns. We did not statistically evaluate and classify these thought patterns because doing so requires manual inspection, and we do not have a way to do it at scale. However, we want to emphasize that some of the behaviors exhibited by our thought model, such as the self-reflection, refinement, and self-reminder behaviors, are almost never seen in a direct-answer model; we believe these are the main reasons why our model can perform better. > I appreciated the study on length-control and DPO vs IRPO, but some parts of the method came across a bit ad-hoc... Regarding length control, our approach incorporates established mechanisms documented in prior work. Extensive research has demonstrated that without effective length constraints, model performance can be significantly impacted, often negatively. We adopted these standard techniques primarily to ensure fair and meaningful comparisons between different methods evaluated in our study, as consistent length controls were applied across all conditions. This also enhances the practical relevance of our findings by preventing models from artificially inflating reward scores through excessive verbosity. > if this is a result strictly from preference optimization (TPO), or could we get Thinking LLMs via purely an outcome-based signal ... Regarding the outcome-based signal, it aligns with methodologies explored in recent literature, such as DeepSeek-R1. Our proposed framework, which explicitly separates the generation process into distinct 'thought' and 'answer' components, is indeed adaptable to such outcome-based evaluation paradigms.
By evaluating only the final 'answer' part, the framework grants the model greater flexibility in the 'thought' generation phase, encouraging more exploration during reinforcement learning, potentially leading to more diverse reasoning strategies and ultimately improving final task performance. Furthermore, alternative optimization algorithms like KTO or RLVR could readily replace DPO within our framework; our initial selection of DPO was based on its established effectiveness and relative simplicity of implementation. > The method comparison is also a bit lacking, where I think the authors should at least compare against STaR. STaR requires ground-truth answers; in contrast, TPO requires only a reward model. These fundamental differences in data requirements make a direct empirical comparison challenging. Furthermore, STaR and its variants were primarily evaluated on reasoning-intensive tasks, whereas our work focuses on general instruction-following capabilities. --- We greatly appreciate the reviewer’s constructive feedback, which enhances the quality and clarity of our work. We respectfully ask the reviewer to reconsider their rating, given our revisions and clarifications.
DRAG: Data Reconstruction Attack using Guided Diffusion
Accept (poster)
Summary: This paper proposes DRAG, a new data reconstruction attack under the guidance of diffusion models. This method utilizes the rich prior knowledge embedded in the latent diffusion model and is the first to reconstruct data from vision foundation models. Experiments have shown the superiority of DRAG to some extent. Claims And Evidence: No, the experiments are insufficient and certain claims need further justification. Please refer to the later parts of this review for details. Methods And Evaluation Criteria: Yes, this method is the first data reconstruction attack that introduces the diffusion model. Theoretical Claims: Yes, the theoretical claims are correct. Experimental Designs Or Analyses: The experimental results in this paper are insufficient and problematic. Detailed comments are listed as follows: - Current evaluations mainly focus on the image fidelity of reconstructed images without fully considering the potential privacy threat to users. More discussion of the metrics is expected. - The compared baselines are not sufficient. It would be better to provide comparisons with more SOTA attacks such as [1-4]. The listed methods are also evaluated in the previous data reconstruction attacks. - The assumption that the attacker has white-box access to the model architecture and parameters is a strong setting, which is usually impractical in real-world scenarios. However, previous works [4-5] have discussed the utility of their methods under black-box scenarios. More discussion of more practical settings is expected. - What about the performance on smaller CNN models like ResNet? More evaluation on the CNN models utilized in previous works is expected for alignment with the baselines. [1] Dario Pasquini, Giuseppe Ateniese, and Massimo Bernaschi. Unleashing the tiger: Inference attacks on split learning. In Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security, pages 2113–2129, 2021.
[2] UnSplit: Data-oblivious model inversion, model stealing, and label inference attacks against split learning. In Proceedings of the 21st Workshop on Privacy in the Electronic Society. [3] Xinben Gao and Lan Zhang. PCAT: Functionality and data stealing from split learning by Pseudo-Client attack. In 32nd USENIX Security Symposium (USENIX Security 23), pages 5271–5288, Anaheim, CA, 2023. USENIX Association. [4] Xu, X., Yang, M., Yi, W., et al. A Stealthy Wrongdoer: Feature-Oriented Reconstruction Attack against Split Learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12130–12139, 2024. [5] Li, Z., Yang, M., Liu, Y., et al. GAN you see me? Enhanced data reconstruction attacks against split inference. In Advances in Neural Information Processing Systems, 36:54554–54566, 2023. Supplementary Material: Yes, the reviewer has carefully checked the appendix. Relation To Broader Scientific Literature: This paper focuses on an important privacy problem. However, it relies on a strong setting of white-box access to the target model. This will limit its potential impact on the broader scientific literature. Essential References Not Discussed: Yes, there are certain new works [1-4] that are not evaluated in this paper. Other Strengths And Weaknesses: Other Strengths: - The proposed method is not time-consuming. Other Weaknesses: - The Peak Signal-to-Noise Ratio (PSNR) metric is also critical for assessing image fidelity. However, this paper does not adopt this metric. Other Comments Or Suggestions: - Minor mistake: GLASS [5] was published in 2023, not 2024. Questions For Authors: - GradViT is introduced in the “Baseline Attacks” section in the Appendix. But where are the experimental results? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your valuable feedback. We address your concerns point by point below. --- > 1. Related to the evaluation metrics The choice of metrics is highly application-dependent, and our selections were guided by prior works in this area. In our study, we focused on MS-SSIM, LPIPS, and DINO because they better capture human perceptual similarity and privacy leakage. PSNR and MSE are sensitive to translation and other low-level distortions. However, we are open to including PSNR to offer a more comprehensive analysis. --- > 2. It would be better to provide comparisons with more SOTA attacks. > 3. The assumption that the attacker has white-box access to the model architecture and parameters is a strong setting, which is usually impractical in real-world scenarios. We agree that FORA [1] and related works are important to discuss in the context of privacy threats in split inference, and we will include a discussion of these methods in the revised paper. These works consider privacy risk under a different configuration from ours by exploring query-free data reconstruction attacks in split learning. In that setup, the attacker cannot directly access $f_c$ but may capture or interfere with the training process to build a surrogate model $\tilde{f}_c \approx f_c$. Once this surrogate model is built, attackers can reconstruct private data using either optimization-based (e.g., DRAG) or learning-based methods. Xu et al. [1] note that combining these two research areas can lead to more powerful reconstruction attacks, as their developments are independent. On the other hand, our work focuses on the privacy risks associated with using foundation models as part of the model parameters in downstream tasks, implying that an attacker can feasibly access $f_c$ directly. Our findings highlight the need to develop privacy-preserving inference techniques, especially as new applications [2, 3] increasingly leverage foundation models. --- > 4.
What about the performance on smaller CNN models like ResNet? More evaluation on the CNN models utilized in previous works is expected for alignment with the baselines.

DRAG is broadly applicable to all models regardless of architecture. To address it, we evaluated our method on CLIP-RN50. Key results are presented in Table 2 of the main paper. Detailed experimental results can be found in Appendix A, specifically Table 6, Fig. 6(c) and Fig. 7(c). We will reorganize the paper to more clearly direct readers to the experimental results and improve overall clarity. The table below, captured from Appendix A, illustrates DRAG's effectiveness compared to other methods in reconstructing data from CLIP-RN50. In this experiment, the feature space distance metric $d_\mathcal{H}$ was implemented using MSELoss. Due to space limitations, we have presented results for model splits at blocks 4 to 5.

| Split Point | Method | MS-SSIM ($\uparrow$) | LPIPS ($\downarrow$) | DINO ($\uparrow$) |
| - | - |:-:|:-:|:-:|
| Block 4 | rMLE | 0.4888 | 0.4198 | 0.7776 |
| | LM | 0.5855 | 0.2576 | 0.9012 |
| | GLASS | 0.4872 | 0.3568 | 0.7315 |
| | DRAG | **0.7896** | **0.0898** | **0.9622** |
| Block 5 | rMLE | 0.3980 | 0.5006 | 0.6739 |
| | LM | 0.4432 | 0.3409 | 0.7614 |
| | GLASS | 0.2917 | 0.4223 | 0.6811 |
| | DRAG | **0.5206** | **0.2231** | **0.9001** |

--- > Minor mistake: the GLASS is published in 2023 instead of 2024. Thanks for pointing out this issue. We have revised the year of this reference. --- > The GradViT is introduced in the “Baseline Attacks” section in the Appendix. But where is the experimental results? We apologize for the confusion regarding the explanation of GradViT. GradViT is a parallel work that focuses on reconstructing training data using the gradients of model parameters. We referenced it because we observed artifacts in rMLE when attacking ViT, and GradViT proposed a regularization to mitigate these artifacts.
We have adapted this regularization to strengthen rMLE and LM (referring to $\lambda_\text{patch}$ in Table 10). However, even with this regularization, reconstruction of deep-layer IR fails, which motivated our proposal of a data-driven image prior for enhancement. --- We appreciate your feedback and remain available to address any additional questions or concerns you may have. ### Reference [1] Xu, X., et al. A stealthy wrongdoer: Feature-oriented reconstruction attack against split learning. In CVPR, 2024. [2] Liu, H., et al. Visual instruction tuning. In NeurIPS, 2023. [3] Chen, J., et al. Minigpt-v2: large language model as a unified interface for vision-language multi-task learning. arXiv preprint arXiv:2310.09478. 2023
Summary: - This paper is about reconstruction attacks in split inference (SI) configurations. Specifically, this paper studies reconstructing a datapoint given the intermediate representation of that datapoint in a deep model - The paper proposes guided diffusion to do this attack (DRAG), where the guidance term is given by a cosine similarity between the reconstruction's intermediate embedding and the target embedding - The authors validate their method on pretrained CLIP models, and on models with defences applied, showing good results compared to existing methods - DRAG++ is also proposed (in a very rushed fashion) which also uses an inverse model to bootstrap the attack Claims And Evidence: Most of the claims are substantiated. See other sections for specific details. I think that generally speaking, the attack seems promising and the results are convincing, particularly when compared to equivalent attacks in the same settings. However, it is unclear what dataset the GAN used in GLASS is trained on, and how that compares to the dataset that the diffusion model in DRAG is trained on; this should be discussed in more detail Methods And Evaluation Criteria: There is very limited explanation of the DRAG++ method. In particular: 1. Can we write out the DRAG++ pseudocode in full? At least in the appendix 2. What are the details of the dataset used to train the reconstruction network? How sensitive is DRAG++ to the choice of public dataset 3. I think DRAG++ should be explained earlier in the paper rather than at the end right before the conclusion. At the moment it seems like it was added ad-hoc Theoretical Claims: There are no theoretical concerns in this paper. Experimental Designs Or Analyses: There are a few concerns: 1. The largest concern is that both the reconstruction model and the attack diffusion model are trained on the same domain. Appendix A.3.
claims that they do this, testing on the UCMerced LandUse data, but because they are using pretrained diffusion models (Stable Diffusion 1.5), it is unclear how much contamination there is in the diffusion model. 2. I am not convinced that fine-tuning the base model on distinct subsets of the dataset ensures separation, as the base model is already pretrained on data which might include both the private and public splits of data. 3. To fix this, I think it would be best to train models from scratch (at least in one section of the paper). In this case the inference model could, for example, be trained on CelebA, and the diffusion model be trained on ImageNet. This would ensure that there is no leakage in the paper. Supplementary Material: Yes. There is not much technical content in the supplementary and it mainly contains more results. Relation To Broader Scientific Literature: This paper proposes another reconstruction attack for the split inference setting. It is not particularly surprising that such an attack works, however this is good validation. I think that using diffusion models in reconstruction attacks is a promising general research direction. Essential References Not Discussed: None that I am aware of. Other Strengths And Weaknesses: Main concerns are in the experimental design/analysis section. Other Comments Or Suggestions: None. Questions For Authors: - For ResNet models, do you still use the same cosine loss function given in equation 10? The authors only specify this for transformer models. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your valuable feedback. We address your concerns point by point below. --- > it is unclear what dataset the GAN used in GLASS is trained on, and how that compared to the dataset that the diffusion model DRAG is trained on, and should be mentioned in more detail. In our evaluation of GLASS, we used two GAN models: StyleGAN2-ADA (trained on FFHQ) and StyleGAN-XL (trained on ImageNet). These models are the most relevant publicly accessible GANs, to the best of our knowledge. We assume that GLASS has prior knowledge of the target image distribution; therefore, if the private image is from FFHQ, the attacker selects StyleGAN2-ADA, and if not, StyleGAN-XL is selected. This setup inherently gives advantages to GLASS in the comparison. On the other hand, DRAG utilizes SDv1.5, pretrained on a subset of LAION-5B, to demonstrate the effectiveness of the diffusion prior with the support of a large dataset. This choice reflects the evolving nature of DMs and their accessibility, potentially reducing the attacker's cost in preparing the model. We will update the paper to include these experimental details for improved clarity. --- > There is very limited explanation of the DRAG++ method. In particular: > 1. Can we write out the DRAG++ pseudocode in full? At least in the appendix > 2. What are the details of the dataset used to train the reconstruction network? How sensitive is DRAG++ to the choice of public dataset > 3. I think DRAG++ should be explained earlier in the paper rather than at the end right before the conclusion. At the moment it seems like it was added ad-hoc We agree that DRAG++ should be introduced earlier in the paper. We will also include the full DRAG++ pseudocode in the appendix to provide complete details. In brief, DRAG++ uses an auxiliary $f_c^{-1}$ to initialize $x_{t}$ and denoises from $t=sT$ (where $s \in [0,1]$), while the core guided diffusion remains unchanged. 
We refer to DRAG++ as an optional enhancement, since $f_c^{-1}$ is an auxiliary component for attackers who have the resources to train such a network. Regarding the training dataset of $f_c^{-1}$, we train it on the ImageNet-1K training split (using 50% of the data, while assuming the other 50% is not accessible to the attacker). To evaluate the sensitivity of DRAG++ to the choice of public dataset, we also train $f_c^{-1}$ on other datasets. For a fair comparison, we randomly sampled 60,000 images from the dataset to serve as the training data. Our results indicate that using a less diverse dataset (e.g., FFHQ) leads to a slight performance drop, whereas training on more complex datasets (e.g., ImageNet or MSCOCO) maintains similar performance.

| Split Point | Dataset | MS-SSIM ($\uparrow$) | LPIPS ($\downarrow$) | DINO ($\uparrow$) |
|--|:-|:-:|:-:|:-:|
| Layer 9 | ImageNet | 0.8062 | 0.0914 | 0.9682 |
| | MSCOCO | 0.8037 | 0.0944 | 0.9655 |
| | FFHQ | 0.7805 | 0.1021 | 0.9632 |
| Layer 12 | ImageNet | 0.6987 | 0.1732 | 0.9412 |
| | MSCOCO | 0.6850 | 0.1867 | 0.9407 |
| | FFHQ | 0.6568 | 0.2092 | 0.9325 |

--- > The largest concern is that both the reconstruction model and the attack DM are trained on the same domain.

To address this concern, we designed an experiment to evaluate the OOD capability of using a DM as an image prior in DRA. In this experiment, we employ the checkpoint "google/ddpm-bedroom-256" from HuggingFace, which we denote as DRAG* to distinguish it from DRAG using SDv1.5. The inference model used is CLIP-ViT-B/16, and the evaluation is performed on the same dataset as in our paper. The bold numbers, shown among rMLE, LM, GLASS, and DRAG*, indicate the best scores. The original score of DRAG is also provided for reference.
| Split Point | Method | MS-SSIM ($\uparrow$) | LPIPS ($\downarrow$) | DINO ($\uparrow$) |
|:-|:-|:-:|:-:|:-:|
| Layer 9 | rMLE | 0.4957 | 0.5131 | 0.7086 |
| | LM | **0.6681** | **0.2138** | **0.9037** |
| | GLASS | 0.3852 | 0.4310 | 0.6648 |
| | DRAG* | 0.5378 | 0.3940 | 0.8147 |
| | DRAG | 0.7974 | 0.0967 | 0.9652 |
| Layer 12 | rMLE | 0.3884 | 0.5900 | 0.6462 |
| | LM | 0.2560 | 0.6024 | 0.4097 |
| | GLASS | 0.2396 | 0.5790 | 0.4578 |
| | DRAG* | **0.3958** | **0.4941** | **0.7240** |
| | DRAG | 0.6735 | 0.1857 | 0.9331 |

These experimental results show that DRAG using an OOD DM outperforms rMLE, LM, and GLASS at Layer 12 according to the LPIPS and DINO metrics, thereby demonstrating the effectiveness of leveraging DMs in DRAs under an OOD configuration. --- > For ResNet models, do you still use the same cosine loss function given in equation 10? The authors only specify this for transformer models. We applied MSELoss as the distance metric $d_\mathcal{H}$ for all DRAs when attacking CLIP-RN50. We will mention this configuration earlier in the revised manuscript. --- We appreciate your feedback and remain available to address any additional questions or concerns you may have.
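The DRAG++ initialization idea described in this rebuttal — start the reverse process from an inverse-network guess diffused to $t = sT$ rather than from pure noise at $t = T$ — can be sketched with a toy forward q-sampling step. The schedule values and all names below are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

# Toy sketch: forward-diffuse a rough reconstruction x0_hat = f_c^{-1}(z)
# to an intermediate timestep t = s*T, then run the (guided) reverse
# process from there instead of from pure noise at t = T.
T = 1000
betas = np.linspace(1e-4, 0.02, T)      # a common linear beta schedule
alpha_bar = np.cumprod(1.0 - betas)     # cumulative product \bar{alpha}_t

def diffuse_to(x0_hat, s, rng):
    """Forward-diffuse x0_hat to timestep t = s*T (q-sampling)."""
    t = int(s * T) - 1                  # starting step for the reverse process
    a = alpha_bar[t]
    noise = rng.standard_normal(x0_hat.shape)
    x_t = np.sqrt(a) * x0_hat + np.sqrt(1.0 - a) * noise
    return x_t, t

rng = np.random.default_rng(0)
x0_hat = np.zeros((4, 4))               # stand-in for the inverse network's output
x_t, t_start = diffuse_to(x0_hat, s=0.5, rng=rng)
```

Smaller `s` trusts the inverse network's guess more (fewer denoising steps, less exploration); `s = 1` recovers plain DRAG starting from noise.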
Summary: This paper proposes a data reconstruction attack in split inference. The proposed method is based on guided diffusion, which leverages the rich prior knowledge embedded in a latent diffusion model (LDM) pre-trained on a large-scale dataset. The proposed method performs iterative reconstruction on the LDM’s learned image prior, effectively generating high-fidelity images resembling the original data from their intermediate representations (IR). Extensive experiments demonstrate that the proposed approach outperforms prior methods. Claims And Evidence: I didn't find claims that are problematic. Methods And Evaluation Criteria: The proposed method makes sense for the problem and application. However, the paper lacks details on the attack framework. The authors only refer to Figure 2 for the attack framework. In Figure 2, why do all images look like noise for timesteps 0 to T? g_t in equation 6 is not clearly shown in the figure. Algorithm 1 is never cited, and the symbols in the algorithm are not explained. In line 225, it is unclear why there is a loop and what the value of k is in the experiments. If the attack requires back propagation for every timestep, the attack should experience very long runtime. If the attack does not require back propagation for every timestep, it is unclear what backprop. is done for the diffusion model. The evaluation criteria are solid. Theoretical Claims: There are no theoretical claims or proofs. Experimental Designs Or Analyses: This paper lacks experimental details. No information is provided for the number of timesteps T or the t in Algorithm 1. Besides the above parameters, the experiments are comprehensive and convincing. Supplementary Material: I briefly reviewed the supplementary material. Relation To Broader Scientific Literature: This work focuses on an important research area. However, this paper lacks comparison with recent methods addressing the same problem. See below.
Essential References Not Discussed: Below is a work solving the same problem using a diffusion model, where the target model is a ViT. I suggest comparing DRAG with this work. Chen, D., Li, S., Zhang, Y., Li, C., Kundu, S. and Beerel, P.A., 2024. DIA: Diffusion based Inverse Network Attack on Collaborative Inference. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 124-130). Other Strengths And Weaknesses: Strengths: This paper provides defense methods against DRA. Weakness: In section 2.2, lines 98-99, the authors mention three types of DRA; however, only one of them is introduced in this section. Other Comments Or Suggestions: I don't have additional comments or suggestions. Questions For Authors: How is back propagation applied to the diffusion model as suggested in Figure 2? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your valuable feedback. We address your concerns point by point below. --- > The proposed method makes sense for the problem and application. However, the paper lacks details on the attack framework. The authors only refer to Figure 2 for the attack framework. In Figure 2, why do all images look like noise for timesteps 0 to $T$? $g_t$ in equation 6 is not clearly shown in the figure. Algorithm 1 is never cited, and the symbols in the algorithm are not explained. We apologize for the lack of clarity regarding the DRAG framework in the draft. In the revised manuscript, we will provide a detailed description of the DRAG approach and address several points. In Figure 2, we currently intend to depict that $x_{t-1}$ to $x_0$ are to be computed, and we will further improve the legend to enhance clarity. Additionally, we will explicitly illustrate and define $g_t$ in Figure 2 to clearly depict its role. We will also properly reference Alg. 1 in the main paper and include thorough explanations for all associated symbols. --- > In line 225, it is unclear why there is a loop and what the value of $k$ is in the experiments. If the attack requires back propagation for every timestep, the attack should experience very long runtime. If the attack does not require back propagation for every timestep, it is unclear what backprop. is done for the diffusion model. > > How is back propagation applied to the diffusion model as suggested in Figure 2? To clarify, DRAG requires back-propagation at each timestep to reconstruct the image, and the number of iterations $k$ in the inner loop controls the reconstruction quality. Figure 11(b) illustrates the trade-off between reconstruction quality and execution time, while Table 9 provides execution times for various DRAs.
Although the algorithm includes an inner loop, our experiments demonstrate that this approach leads to higher reconstruction performance when the split point is deep, thereby highlighting significant privacy threats. The back-propagation computes the gradient for the sample $x_t$, which is then used to adjust the sampling process of $\epsilon_t$ according to Eq. 8 during DDPM denoising. --- > In section 2.2 line 98-99, the author mentions three types of DRA, however, only one of them is introduced in this section. Thanks for pointing out this issue. We will revise this section to mention the other types of DRA for a more comprehensive review. --- > Below is a work solving the same problem by using a diffusion model, and the target model is ViT. I suggest comparing DRAG with DIA. In our framework, DIA can be viewed as an enhancement to the inverse network $f_c^{-1}$ within our reconstruction pipeline (see Fig. 2), as it provides a better initialization for the optimization-based reconstruction process. Notably, the $f_c^{-1}$ component is optional and requires extra data and model training. In contrast, DRAG leverages a publicly available diffusion model and does not require additional training or data, aligning with the prior work we compared against. --- We appreciate your feedback and remain available to address any additional questions or concerns you may have.
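The guidance signal discussed in this rebuttal — a gradient computed by back-propagating a feature-matching loss through the client model $f_c$ — can be illustrated with a toy linear stand-in. This is a minimal sketch assuming an MSE feature loss; the diffusion prior and the Eq. 8 noise adjustment of the actual method are omitted, and all names and values are illustrative:

```python
import numpy as np

# Toy feature-matching guidance: f_c is a stand-in linear "client model",
# so the gradient of ||f_c(x) - z*||^2 is analytic: 2 W^T (W x - z*).
# With a real network this gradient would come from autograd back-prop.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))            # toy client model: f_c(x) = W @ x
x_private = rng.standard_normal(16)
z_target = W @ x_private                    # intermediate representation the attacker observes

x = rng.standard_normal(16)                 # attacker starts from noise
eta = 0.45 / np.linalg.norm(W, 2) ** 2      # step size scaled by the top singular value
res_init = np.linalg.norm(W @ x - z_target)
for _ in range(2000):
    grad = 2 * W.T @ (W @ x - z_target)     # "back-prop" through f_c (analytic here)
    x = x - eta * grad                      # guidance step toward matching the IR
res_final = np.linalg.norm(W @ x - z_target)
```

In the full attack this gradient must be recomputed at every denoising step (and for each inner iteration), which is why runtime scales with the number of timesteps, as the execution-time discussion above notes.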
Summary: The paper introduces a new reconstruction attack method, DRAG (Data Reconstruction Attack using Guided Diffusion), that reconstructs private data from intermediate representations in split inference settings. Unlike previous attacks on small CNNs, DRAG employs Latent Diffusion Models (LDMs) to iteratively improve reconstructions, resulting in high-fidelity image recovery from the deep-layer intermediate representations of CLIP and DINOv2. The experimental results indicate that the proposed method surpasses existing attacks (e.g., rMLE, LM, GLASS) in deep layers and maintains effectiveness against privacy defenses such as DISCO and NoPeek. Furthermore, the enhanced version, DRAG++, utilizes an inverse network for improved initialization, leading to higher attack success rates. The results highlight significant privacy risks in vision foundation models, emphasizing the necessity for stronger defenses within SI frameworks. Claims And Evidence: The paper presents empirical evidence supporting its claims through quantitative experiments on several benchmarks (MSCOCO, FFHQ, ImageNet-1K) and thorough comparisons with previous data reconstruction attacks (rMLE, LM, GLASS). The findings indicate that DRAG offers superior reconstruction quality, especially at deeper layers of CLIP and DINOv2. This reinforces the main assertion that large vision foundation models are susceptible to privacy attacks in split inference (SI) scenarios. Methods And Evaluation Criteria: The methods and evaluation criteria are well-suited for assessing reconstruction quality, utilizing vision foundation models such as CLIP and DINOv2 alongside benchmark datasets like MSCOCO, FFHQ, and ImageNet-1K. Metrics including MS-SSIM, LPIPS, and DINO similarity effectively evaluate both low- and high-level fidelity. Additionally, comparisons with rMLE, LM, and GLASS confirm the improvements made. The inclusion of privacy defenses (DISCO and NoPeek) further strengthens the analysis.
Theoretical Claims: This paper is primarily empirical rather than theoretical, focusing on experimental validation of data reconstruction attacks rather than formal proofs. Experimental Designs Or Analyses: The paper shows a comprehensive experimental design that evaluates prominent vision models (CLIP, DINOv2) and benchmark datasets (MSCOCO, FFHQ, ImageNet-1K). It effectively compares previous attack methods (rMLE, LM, GLASS) and incorporates multiple reconstruction quality metrics (MS-SSIM, LPIPS, DINO similarity) for an in-depth analysis. However, the paper evaluates DRAG against DISCO (2021) and NoPeek (2020), which are relatively older defenses in the evolving landscape of privacy-preserving machine learning. Supplementary Material: Yes, I checked the implementation details for evaluating the fairness of the settings of different methods. Relation To Broader Scientific Literature: The paper builds upon prior work in **data reconstruction attacks**, **diffusion models**, and **split inference** privacy risks, contributing a novel diffusion-guided approach for reconstructing private data from intermediate representations. Essential References Not Discussed: A Stealthy Wrongdoer: Feature-Oriented Reconstruction Attack against Split Learning, CVPR 2024; GAN-based data reconstruction attacks in split learning. Other Strengths And Weaknesses: Weakness: + Figure 2 is not very effective for understanding, even though it is drawn simply. The caption should include additional explanations. + The paper evaluates DRAG against DISCO (2021) and NoPeek (2020), but these defenses are somewhat outdated. + The method is not specifically designed for ViT or deeper-layer models. Its applicability to CNNs and potential performance improvements over other methods remain unclear. + The experimental results lack discussion. The method is more effective at reconstructing data from deep layers compared to shallow layers. What accounts for this difference?
+ The paper should include a discussion of FORA (A Stealthy Wrongdoer: Feature-Oriented Reconstruction Attack against Split Learning, CVPR 2024) and related methods. Other Comments Or Suggestions: Please refer to Other Strengths and Weaknesses. Questions For Authors: Please refer to Other Strengths and Weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your valuable feedback. We address your concerns point by point below. --- > Fig. 2 is not very effective for understanding, even though it is drawn simply. The caption should include additional explanations. We agree that enhancing the caption and associated text will improve clarity. We will revise the figure accordingly and upload the updated version to OpenReview before 4/8. --- > The paper evaluates DRAG against DISCO (2021) and NoPeek (2020), but these defenses are somewhat outdated. Thank you for pointing out this issue. We chose DISCO and NoPeek as target defenses because they are representative of the approaches highlighted in GLASS, and they provide a well-established baseline for comparison. Additionally, we have identified several more recent works [4-6], and we will include evaluations against these newer defenses in the next revision to offer a more comprehensive assessment. --- > The method is not specifically designed for ViT or deeper-layer models. Its applicability to CNNs and potential performance improvements over other methods remain unclear. DRAG is broadly applicable to all models regardless of architecture. To address this, we evaluated our method on CLIP-RN50. Key results are presented in Table 2 of the main paper. Detailed experimental results can be found in Appendix A, specifically Table 6, Fig. 6(c), and Fig. 7(c). We will reorganize the paper to more clearly direct readers to the experimental results and improve overall clarity. The table below, captured from Appendix A, illustrates DRAG's effectiveness compared to other methods in reconstructing data from CLIP-RN50. In this experiment, $d_\mathcal{H}$ was implemented using MSELoss. Due to space limitations, we present results for model splits at blocks 4 to 5.
| Split Point | Method | MS-SSIM ($\uparrow$) | LPIPS ($\downarrow$) | DINO ($\uparrow$) |
| - | - |:-:|:-:|:-:|
| Block 4 | rMLE | 0.4888 | 0.4198 | 0.7776 |
| | LM | 0.5855 | 0.2576 | 0.9012 |
| | GLASS | 0.4872 | 0.3568 | 0.7315 |
| | DRAG | **0.7896** | **0.0898** | **0.9622** |
| Block 5 | rMLE | 0.3980 | 0.5006 | 0.6739 |
| | LM | 0.4432 | 0.3409 | 0.7614 |
| | GLASS | 0.2917 | 0.4223 | 0.6811 |
| | DRAG | **0.5206** | **0.2231** | **0.9001** |

--- > The experimental results lack discussion. The method is more effective at reconstructing data from deep layers compared to shallow layers. What accounts for this difference? Optimization-based DRAs optimize the sample $x$ by minimizing $d_\mathcal{H}$. When the split point is deep, this primarily guides $x$ to align with the high-level features of the target image, without necessarily ensuring pixel-level accuracy. Prior DRAs often fail to reconstruct the image from deep-layer IRs because they do not sufficiently restrict the search space. To improve the DRA, we leverage a diffusion prior to constrain $x$, based on the assumption that the private image is a natural image. --- > The paper should include a discussion of FORA and related methods. We agree that FORA and related works are important to discuss in the context of privacy threats in split inference, and we will include a discussion of these methods in the revised paper. These works consider privacy risk under a different configuration from ours by exploring query-free data reconstruction attacks in split learning. In that setup, the attacker cannot directly access $f_c$ but may capture or interfere with the training process to build a surrogate model $\tilde{f}_c \approx f_c$. Once this surrogate model is built, attackers can reconstruct private data using either optimization-based (e.g., DRAG) or learning-based methods. Xu et al. [1] note that combining these two research areas can lead to more powerful reconstruction attacks, as their developments are independent.
On the other hand, our work focuses on the privacy risks associated with using foundation models as part of the model parameters in downstream tasks, implying that an attacker can feasibly access $f_c$ directly. Our findings highlight the need to develop privacy-preserving inference techniques, especially as new applications [2, 3] increasingly leverage foundation models. --- We appreciate your feedback and remain available to address any additional questions or concerns you may have. ### Reference [1] Xu, X., et al. A stealthy wrongdoer: Feature-oriented reconstruction attack against split learning. In CVPR, 2024. [2] Liu, H., et al. Visual instruction tuning. In NeurIPS, 2023. [3] Chen, J., et al. Minigpt-v2: large language model as a unified interface for vision-language multi-task learning. arXiv preprint arXiv:2310.09478. 2023 [4] Wang, T., et al. Improving robustness to model inversion attacks via mutual information regularization. In AAAI, 2021 [5] Zou, T., et al. Mutual information regularization for vertical federated learning. arXiv preprint arXiv:2301.01142. 2023 [6] Duan, L., et al. Reimagining Mutual Information for Enhanced Defense against Data Leakage in Collaborative Inference. In NeurIPS. 2024
Editable Noise Map Inversion: Encoding Target-image into Noise For High-Fidelity Image Manipulation
Accept (poster)
Summary: This paper proposes a new inversion-based image/video editing method called ENM Inversion. The motivation is to improve text alignment with the target text prompt. The authors propose editable noise refinement, which performs inference-time optimization on the intermediate latents. The proposed method achieves state-of-the-art performance on both image and video editing datasets. ## update after rebuttal As mentioned in the rebuttal response, I increase the score to accept on the condition that the authors add the related details to the final version. The authors haven't confirmed yet, which I take as acknowledgement. Claims And Evidence: Most claims are valid and supported by evidence. But since the paper claims to be a general image manipulation method, the authors should test on more types of manipulations. All the results presented in the paper seem to be minor edits such as texture, color, style, and expression. I'd like to see if it works for more editing tasks: (1) adding an object, e.g., adding a hat. (2) multi-object editing: e.g., you have a blue toy holding a yellow flower, and you want to change it to a yellow toy holding a blue flower. Methods And Evaluation Criteria: The methods are somewhat novel; though there are other works exploring updating the latent space, most of them operate in the reverse process, not the inverse process. The evaluation metrics are good. It would be better to add a human evaluation, since the metrics are sometimes not very reflective of human preference, as noted in many papers such as DreamBooth and RB-Modulation. Question: 1. In the construction of the loss (5)(6), the assumption is that the source prompt can give you a latent that reconstructs the image well. Can you prove that this assumption is correct? Will different source prompts lead to different results? E.g., "a walking tiger", "a tiger is walking on the ground", "a tiger", and "a tiger in the jungle" might all refer to the same source image. Theoretical Claims: The formulations are correct. Experimental Designs Or Analyses: Question: 1.
I think the editability mainly comes from the editing module like PnP, while the proposed method makes the output more consistent with the source image as in Eq. (6). So I would expect that you outperform the other methods mainly on the preservation part rather than the editing part. But it seems from the results that other methods with the same editing module still perform worse (e.g., in the last row of Fig. 4, only the proposed method is able to change the color). Can you explain? 2. Does the method work on the latest flow models like SD3.5/Flux? 3. The recent RF-Inversion has similar ideas behind the scenes, constructing a new vector field to update the intermediate latent at each timestep; it would be good to compare with this model or its variants. Supplementary Material: I reviewed the supplementary materials. Relation To Broader Scientific Literature: Mostly comprehensive; I added one more suggestion in the experiment section above. Essential References Not Discussed: There are many papers on inversion-based editing for rectified-flow models; it would be good to include them for completeness. But they are not very essential, since this paper only compares diffusion models. Other Strengths And Weaknesses: Discussed above. Other Comments Or Suggestions: no. Questions For Authors: included above. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate you taking the time to review our research. Below, we have provided responses to the points raised. **Claims And Evidence:** > I'd like to see if it works for more editing tasks: (1) adding an object, e.g., adding a hat. (2) multi-object editing: e.g., you have a blue toy holding a yellow flower, and you want to change it to a yellow toy holding a blue flower. **Answer:** Thank you for your valuable feedback. We have incorporated qualitative results demonstrating the ability of our method to handle object addition. However, our work does not specifically aim to address multi-object editing, and the dataset we used, PIE-Bench, does not contain multi-object editing tasks. We acknowledge the importance of this direction and consider it a promising avenue for future research. **Methods And Evaluation Criteria Question:** > In the construction of the loss (5)(6), the assumption is that the source prompt can give you a latent that reconstructs the image well. Can you prove that this assumption is correct? Will different source prompts lead to different results? **Answer:** While different source prompts can lead to slight variations in the results, the differences are not significant. As mentioned in NTI, inversion using a null source prompt already provides a latent that reconstructs the image well. When a source prompt is provided, even with the Classifier-Free Guidance scale set to 1, the reconstruction quality remains high. Additionally, NTI demonstrates that optimizing only the text embedding (rather than the latent) is sufficient for accurate image reconstruction. In our approach, we follow the same setup as other comparison methods by setting the scale to 1 during inversion. **Experimental Designs Or Analyses:** > I think the editability mainly comes from the editing module like PnP, while the proposed method makes the output more consistent with the source image as in Eq. (6).
So I would expect that you outperform the other methods mainly on the preservation part rather than the editing part. But it seems from the results that other methods with the same editing module still perform worse (e.g., in the last row of Fig. 4, only the proposed method is able to change the color). Can you explain? **Answer:** Thank you for your insightful review. You raise a valid point regarding the role of the editing module in determining editability. However, editability is influenced not only by the editing module but also by the initial noise. As explained in [1], the choice of latent space plays an important role in generating specific concepts. Our method is designed not only to better preserve the source image but also to find a latent that facilitates the generation of the edited image. Furthermore, Figure 6 demonstrates that our inversion approach allows the attention map to be applied to the edited region more quickly and stably. This indicates that our latent generates images more quickly and consistently with the source image while also enabling stronger enforcement of edits compared to other inversion methods. > Does the method work on the latest flow models like SD3.5/Flux? > The recent RF-Inversion has similar ideas behind the scenes, constructing a new vector field to update the intermediate latent at each timestep; it would be good to compare with this model or its variants. **Answer:** Yes, our method is applicable to Flux and other flow-based models. We have conducted additional experiments specifically on flow-based models [2][3] and integrated our approach with RF-Inversion [3] to evaluate its effectiveness.
| **Method** | **Structure Distance** ↓ | **PSNR** ↑ | **LPIPS** ↓ | **MSE** ↓ | **SSIM** ↑ | **CLIP Similarity (Whole)** ↑ | **CLIP Similarity (Edited)** ↑ |
|---|---|---|---|---|---|---|---|
| SDEdit-Flux [2] | 118.97 | 14.41 | 329.92 | 450.06 | 60.82 | 25.06 | 22.50 |
| RFInv [3] | 60.08 | 18.24 | 232.88 | 210.02 | 64.78 | 24.94 | 22.65 |
| Ours + RFInv | 46.38 | 19.77 | 185.86 | 153.90 | 69.57 | 25.05 | 22.65 |

Table 1. Results of comparing and combining our method with flow-based models. As shown in the table, our method enhances the performance of RF-Inversion, demonstrating its capability to work effectively on models like Flux. We appreciate your insightful comments and suggestions. **Reference** [1] Generating images of rare concepts using pre-trained diffusion models [2] SDEdit: Guided Image Synthesis and Editing with Stochastic Differential Equations [3] Semantic Image Inversion and Editing using Rectified Stochastic Differential Equations --- Rebuttal Comment 1.1: Comment: I appreciate the authors for the response. I will increase the score to accept on the condition that the authors incorporate the following experiments in the final version: (1) adding objects; (2) results to demonstrate the authors' statement "While different source prompts can lead to slight variations in the results, the differences are not significant."; (3) the other results in the authors' response above. Thanks.
Summary: This paper proposes ENM Inversion, a technique for high-quality real image editing. By refining noise maps to align with both the source and target images, ENM Inversion encodes the target image more effectively into the noise maps, allowing for high-quality edits while preserving the source image's details. ## update after rebuttal The authors' rebuttal has addressed my previous concerns. After considering the feedback from the other reviewers and the authors' response, I have decided to maintain my original evaluation of Weak Accept. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: No, as there are no theoretical claims made. Experimental Designs Or Analyses: I reviewed the experimental results presented in Tables 1, 2, 3, and 4, and they appear to be correct. Supplementary Material: The Supplementary Material includes sections on Limitations, Analysis of Noise Map Differences Across Inversion Steps, Hyper-parameter Analysis, and Additional Qualitative Results. Relation To Broader Scientific Literature: This paper contributes to the broader scientific literature by addressing the challenge of high-fidelity image manipulation through the technique of encoding target images into noise for editable manipulation. Essential References Not Discussed: None Other Strengths And Weaknesses: **Paper Strengths:** The paper is well written. The main motivation is clear and easy to understand. **Paper Weaknesses:** 1. What does $Z_t^{s}$ represent in Figure 2? There is no clear definition of this symbol. 2. The authors state that "smaller differences between the reconstructed and edited noise maps are strongly correlated with better editing performance." However, this is not always the case. As shown in Figure 3, the camel, elephant, and giraffe exhibit similar noise map differences, yet their editing performance varies. Have the authors attempted using the same editing prompt, such as "camel," to perform different types of edits? 3.
There are no definitions provided for $f$ in Equations 5 and 6. Please clarify the meaning of these symbols. Other Comments Or Suggestions: None Questions For Authors: See weakness. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for taking the time to evaluate our research. Below are our responses to all the points raised: **Paper Weaknesses:** > What does $Z_t^s$ represent in Figure 2? There is no clear definition of this symbol. **Answer:** We sincerely thank the reviewer for the detailed observations. In Figure 2, $Z_t^s$ represents the reconstructed latent. Methods such as Prompt-to-Prompt, Plug-and-Play, and MasaCtrl reconstruct the original image using this latent representation. We have updated Figure 2 to explicitly define each latent variable used in the illustration. Thank you. > The authors state that "smaller differences between the reconstructed and edited noise maps are strongly correlated with better editing performance." However, this is not always the case. As shown in Figure 3, the camel, elephant, and giraffe exhibit similar noise map differences, yet their editing performance varies. Have the authors attempted using the same editing prompt, such as "camel," to perform different types of edits? **Answer:** We acknowledge your concern regarding the relationship between noise map differences and editing performance. CLIPScore and LPIPS are based on deep learning models, which may introduce variations in perceived similarity even when noise map differences appear similar. We aimed to investigate the trend between noise map differences and editing performance. To do this, we conducted experiments on 20 different editing prompts, and as a result, the correlation coefficient between 'LPIPS / CLIPScore' and the 'difference between the reconstructed and edited noise maps' was found to be 0.8. Thank you for your careful review. > There are no definitions provided for $f$ in Equations 5 and 6. Please clarify the meaning of these symbols. **Answer:** The function $f$ is defined in Section 3.1 (Preliminaries, DDIM Inversion) of the paper as $z_{t-1} \leftarrow f(z_t, t, C)$. It is a function that calculates $z_{t-1}$ from $z_t$.
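For concreteness, the update $f$ can be sketched as a single deterministic DDIM step (an illustrative toy, not the paper's implementation: `eps` stands in for the text-conditioned noise prediction $\epsilon_\theta(z_t, t, C)$, `alpha_bar` for the cumulative noise schedule $\bar\alpha$, and latents are flattened to plain lists):

```python
import math

def ddim_step(z_t, t, eps, alpha_bar):
    # One deterministic DDIM step z_{t-1} <- f(z_t, t, C):
    # predict the clean latent, then re-noise it to level t-1.
    a_t, a_prev = alpha_bar[t], alpha_bar[t - 1]
    z_prev = []
    for z, e in zip(z_t, eps):
        z0_hat = (z - math.sqrt(1 - a_t) * e) / math.sqrt(a_t)  # predicted z_0
        z_prev.append(math.sqrt(a_prev) * z0_hat + math.sqrt(1 - a_prev) * e)
    return z_prev
```

With `eps` fixed to zero and a flat schedule, the step is the identity, which is a quick sanity check on the re-noising arithmetic; inversion runs this map in the opposite direction along the schedule.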
We sincerely appreciate the reviewer for taking the time to review our research. We kindly ask you to consider our responses once more in your review. Thank you.
Summary: The paper introduces Editable Noise Map Inversion (ENM Inversion), a technique that improves both reconstruction quality and editing capabilities in diffusion-based image editing. ENM optimizes noise maps during inversion by minimizing the differences between reconstructed and edited versions, effectively encoding the target image's intended edits directly into the noise representation. Claims And Evidence: Yes. Methods And Evaluation Criteria: The experimental design is sound, employing well-established benchmarks (PIE-Bench and DAVIS) alongside widely accepted evaluation metrics in diffusion-based image editing research (LPIPS, PSNR, SSIM, and CLIP similarity). Theoretical Claims: This paper does not contain explicit theoretical proofs. It primarily contributes algorithmic and empirical insights rather than theoretical claims. Experimental Designs Or Analyses: The authors carefully selected appropriate baseline comparisons and established metrics to objectively measure editing performance and fidelity. Supplementary Material: Yes, Appendix A to Appendix D Relation To Broader Scientific Literature: The key contributions build on existing diffusion inversion techniques, including DDIM, Null-Text Inversion, Negative Prompt Inversion, and Plug-and-Play methods. Essential References Not Discussed: Han, Ligong, Song Wen, Qi Chen, Zhixing Zhang, Kunpeng Song, Mengwei Ren, Ruijiang Gao et al. "Proxedit: Improving tuning-free real image editing with proximal guidance." In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 4291-4301. 2024. Other Strengths And Weaknesses: The discussion of method efficiency is insufficient. 
While competing methods require only one inversion calculation per source image that can be applied across multiple target texts, the proposed method demands a separate inversion process for each target text and image combination—making it computationally expensive when performing multiple edits on a single image. Other Comments Or Suggestions: Additional visual examples showing challenging scenarios or failures of the proposed method could better illustrate practical limitations. Questions For Authors: Have you considered extending or comparing your approach to other generative modeling frameworks, such as flow-based models? If so, how does your method perform relative to such alternatives? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: First of all, we sincerely appreciate your time and effort in reviewing our research. Below, we provide responses to all the points raised. **Other Strengths And Weaknesses:** > The discussion of method efficiency is insufficient. While competing methods require only one inversion calculation per source image that can be applied across multiple target texts, the proposed method demands a separate inversion process for each target text and image combination—making it computationally expensive when performing multiple edits on a single image. **Answer:** Thank you for your insightful comments. It is indeed correct that our ENM Inversion method requires a separate inversion process for each text-image combination. This design choice was made to generate an optimal noise map tailored to each specific editing requirement. To compensate for efficiency while maintaining performance, we have carefully optimized our approach. As shown in Table 2, our method significantly reduces computational costs compared to NTI and StyleD in terms of inference time. This demonstrates that ENM Inversion can be a relatively efficient methodology for various image editing tasks. Once again, we sincerely appreciate your valuable feedback. **Essential References Not Discussed & Questions for Authors:** > Have you considered extending or comparing your approach to other generative modeling frameworks, such as flow-based models? If so, how does your method perform relative to such alternatives? **Answer:** Since our research focused on the inversion of diffusion models, we did not include a comparison with flow-based models. However, based on the reviewer’s suggestion, we conducted additional experiments incorporating ProxEdit and flow-based models [1][2], and we have included the results in the Appendix. Additionally, we integrated RF-Inversion into our method and conducted further experiments. 
| **Method** | **Structure Distance** ↓ | **PSNR** ↑ | **LPIPS** ↓ | **MSE** ↓ | **SSIM** ↑ | **CLIP Similarity (Whole)** ↑ | **CLIP Similarity (Edited)** ↑ |
|---|---|---|---|---|---|---|---|
| ProxEdit | 11.87 | 27.12 | 45.70 | 31.70 | 85.73 | 24.13 | 21.36 |
| SDEdit-Flux [1] | 118.97 | 14.41 | 329.92 | 450.06 | 60.82 | 25.06 | 22.50 |
| RFInv [2] | 60.08 | 18.24 | 232.88 | 210.02 | 64.78 | 24.94 | 22.65 |
| Ours + RFInv | 46.38 | 19.77 | 185.86 | 153.90 | 69.57 | 25.05 | 22.65 |
| Ours | 10.13 | 28.19 | 45.26 | 27.02 | 86.29 | 25.30 | 22.12 |

Table 1. Results of comparing and combining our method with flow-based models. The table above demonstrates that our approach outperforms editing methods using flow-based models in terms of structural distance, background preservation, and editability. Furthermore, integrating RF-Inversion into our method leads to even greater performance improvements. We thank the reviewer for engaging with us in the discussion. **Reference** [1] SDEdit: Guided Image Synthesis and Editing with Stochastic Differential Equations [2] Semantic Image Inversion and Editing using Rectified Stochastic Differential Equations
Unified Screening for Multiple Diseases
Accept (poster)
Summary: The problem of screening for multiple diseases is formalized as an optimization problem, specifically for the case where policies for each disease are predefined and the task is to decide which policies to activate given a vector of prior risks. Under a fixed budget and a few simplifying assumptions like sequential test independence, the optimal decision boundaries are studied analytically and through simulations. Claims And Evidence: The main claims are largely supported. However, the results in this paper are more limited than what is stated in the introduction, and not all of the simplifying assumptions are clearly stated throughout. The referral problem that is ultimately considered has just two diseases with competing risks, policies with deterministic screening schedules, and independent screening tests. I believe that the language used to describe the method is overly complex, obscuring key design choices while taking up unnecessary space. For instance, the main optimization problem presented in Equation 2 relies on an integer mapping for sets of binary decisions that seems to be a minor implementation detail, and greatly increases the complexity of the notation. It was a challenge to parse through details like these and figure out what is really going on in the core problem presented by this paper. Methods And Evaluation Criteria: Unfortunately, the verbosity mentioned above appears to have missed the need to justify key parts of the problem setting and the methodology. Starting from the beginning: * Please clarify the role of the screening target $Y_n$ when it is introduced. * Why can you assume that screening samples taken at different times are independent? Wouldn't nearby samples be more correlated? Or are these errors entirely due to measurement noise rather than an evolving disease? * Why can we assume that we observe the whole $x$ vector? It would be helpful to relate this to the medical examples that are mentioned earlier in the paper. 
* The objective of maximizing the time of the first adverse event that goes undiagnosed needs to be supported. Intuitively, it makes sense, but one could imagine many other objectives that accomplish similar things using expectations rather than minima. Theoretical Claims: The proofs for the lemmas appear to be correct, although I did not verify every detail. Experimental Designs Or Analyses: The experimental settings are fully synthetic and rather simple. Supplementary Material: I reviewed parts of the proofs for the lemmas. Relation To Broader Scientific Literature: My main concern relating to the broader literature is that the analysis of the problem might be artificially complicated by all the details that were introduced, and that the decision boundaries possibly correspond to much simpler results from decision theory, especially after you take into account all of the simplifying assumptions that were made along the way. Essential References Not Discussed: Not to my knowledge. Other Strengths And Weaknesses: The problem under consideration is highly significant. It would have strengthened this paper to present an actual algorithm for computing decision rules in one of the settings considered. All of the empirical results come from simple synthetic experiments. Other Comments Or Suggestions: Line 246: the equation has a typo. The variable $n$ appears to be used for two different purposes at once, $\delta_n$ and also $\exists n$ within the definition. Questions For Authors: Could you give some intuition on how the referral optimization task is not equivalent to a simpler policy learning task, with a cost function that takes on a more specific structure? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the thorough review of our paper and constructive comments. **Limited results:** In (1), we formalize the joint screening and diagnosis problem. This is distinct from (2), which focuses on the referral problem. We choose to solve (2) rather than (1) because our aim is not to propose entirely new screening and diagnosis policies, which can be difficult to justify and implement in clinical practice. Instead, we build upon existing screening policies already endorsed in clinical guidelines. Characterizing the optimal referral policy becomes significantly more complex and far less intuitive when considering more than two diseases. One would expect the number of boundaries of interest (e.g., $5$ as shown in Figure 1(b) for $N=2$) to grow combinatorially with $N$, as one must account for boundaries that separate screening decisions across different subsets of diseases. We adopt periodic screening policies with deterministic schedules because such approaches are widely used in clinical practice, as recommended by numerous guidelines, particularly for the early detection of chronic or progressive diseases. Common examples include annual mammograms for breast cancer screening, biennial colonoscopies for colorectal cancer, and regular HbA1c tests for diabetes monitoring. **Screening target $Y_n$:** It may represent a biomarker value, a clinical metric, or a similar quantity. Conditional independence over time is used in the likelihood ratio and in the proofs characterizing the decision boundaries. While the underlying disease state may evolve over time, this evolution can be modeled as a drift in the expectation of $Y_n$. The randomness in $Y_n$ is then primarily attributed to measurement noise or natural biological fluctuations, which we assume to be independent across time. **Why is $x$ entirely observable?** One can use off-the-shelf risk prediction models to obtain disease risks for particular diseases.
For instance, the Gail model (for breast cancer), QRISK3 (for cardiovascular disease), (normalized) polygenic risk scores, and even AI-based models can be used to provide risk scores. **Why the objective makes sense:** Our objective can be seen as a simplification of the standard Quality-Adjusted Life Years (QALYs) framework. Each year of life is either $1$ (acceptable quality, e.g., healthy) or $0$ (unacceptable quality, e.g., impaired, suffering). If the disease is detected before the adverse event happens, we assume issues related to the disease can be resolved or managed at a level such that the disease will not cause an unacceptable quality of life. However, after the adverse event happens, the disease causes an unacceptable quality of life. **Relation to broader literature:** To the best of our knowledge, there is no existing decision-making problem in the literature that yields the same structure of decision boundaries as those derived in our work. While our screening cost budget constraint bears some resemblance to the constraint in knapsack problems, the underlying objective in our setting is fundamentally different. **About empirical results:** We agree that any simulation experiment may not fully capture the real-world benefits of joint screening. A definitive evaluation of our joint screening protocol versus independent screening (the current standard practice, as indicated by many guidelines) would require a large-scale randomized controlled trial with two arms, an undertaking that is beyond the scope of our current work. Nonetheless, we hope that our theoretical results and in-silico experiments, which demonstrate the potential benefits of joint screening in a simplified yet intuitive and representative setting, will inspire further empirical research in this area. **About an actual algorithm:** Our referral problem (2) is a linear program (LP) and can be solved using any LP solver. Some coefficients in this LP are computed by Monte Carlo sampling.
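As a toy illustration of the search the referral rule performs (not the paper's actual LP formulation, which would be handed to an LP solver: here `benefit` plays the role of the Monte Carlo-estimated objective coefficients and `cost` the per-policy screening costs, with decisions restricted to binary activations):

```python
from itertools import product

def best_referral(benefit, cost, budget):
    # Enumerate binary activations of the N screening policies and keep
    # the feasible one (total cost within budget) with the highest
    # estimated benefit.
    best, best_val = None, float("-inf")
    for delta in product([0, 1], repeat=len(benefit)):
        total_cost = sum(d * c for d, c in zip(delta, cost))
        value = sum(d * b for d, b in zip(delta, benefit))
        if total_cost <= budget and value > best_val:
            best, best_val = delta, value
    return best, best_val
```

Brute force is only viable for small $N$; an LP relaxation or solver replaces the enumeration at scale, which is why casting (2) as a linear program matters.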
**Relation to policy learning:** If the goal were to learn the optimal referral rule directly from a dataset of patient screening trajectories, the problem could indeed be framed as a policy learning task with an appropriately defined cost function. However, our work takes a complementary, orthogonal approach. We begin with a well-defined optimization problem (2) and focus on analytically characterizing the structure of its optimal solution (the decision boundaries described in Figure 1(b) and formalized in Proposition 4.5). Rather than learning from data, our emphasis is on understanding the geometry and properties of the optimal referral rule under a known probabilistic model. Our results suggest a follow-up question: can the characterized decision boundaries be leveraged to develop sample-efficient algorithms for learning optimal referral rules from limited data? We view this as an important direction for future work. We hope that our response has addressed your concerns. Please let us know if you have any other concerns. --- Rebuttal Comment 1.1: Comment: Thank you for addressing many of my points of concern. I trust that the revised manuscript will reflect at least some of these clarifications.
Summary: This paper proposes a framework for unified screening of multiple diseases under budget constraints and competing risks. The authors formulate this as a referral problem where they choose which screening policies to activate based on patient risk profiles. They characterize optimal decision boundaries for the two-disease case and conduct in-silico experiments to compare against independent screening approaches. ## update after rebuttal After carefully considering the authors' rebuttal, my assessment remains unchanged. While the authors addressed some technical concerns, the fundamental issue of limited practical applicability persists. The proposed method still lacks validation with real-world medical data, which is crucial for demonstrating clinical relevance and potential impact. For a method targeting medical applications, empirical validation with representative clinical data is essential to establish both reliability and utility. Claims And Evidence: The core claims are supported by clear and convincing evidence through a combination of rigorous theoretical analysis and comprehensive experimental validation. The limitations are appropriately acknowledged, and the evidence presented aligns well with the scope of claims made. The paper maintains high standards of scientific rigor in both its theoretical and empirical components. Methods And Evaluation Criteria: While the theoretical method is sound, the evaluation would be much stronger with real medical datasets and more realistic screening scenarios that reflect actual clinical practice. The current evaluation framework demonstrates mathematical correctness but not practical applicability. Theoretical Claims: I reviewed some of the theoretical proofs, focusing primarily on Section 4 which contains the key theoretical claims about optimal screening policies. Other results seem to be correct but I have not checked the detailed proofs in the appendix. 
Additional verification by other reviewers would be valuable. Experimental Designs Or Analyses: The experimental design lacks clinical realism and proper statistical rigor is needed to support the practical claims. For example, parameter selection lacks justification (arbitrary choices for budget and screening costs, no explanation for choosing $T_0$ and $\mu_n$ for survival times). For methodology comparison, independent screening baseline uses equal budget split without exploring other allocations. Only minor improvement in survival time (37.70 vs 37.47 years) is observed, which may not be statistically significant, but no statistical analysis provided. Provided that this is on a simulated data, the practical impact of the paper's results appears to be questionable. The paper would be significantly strengthened by additional comparison to any existing clinical screening protocols. Supplementary Material: I reviewed some of the supplementary materials, focusing on appendix C: supplementary experiments. Relation To Broader Scientific Literature: While prior work like Wright et al. (2015) and Peng & Xiang (2021) studied competing risks in single-disease contexts, this paper extends the framework to optimize screening decisions across multiple diseases simultaneously. However, there are some limitations in connecting to literature. - Does not fully explain how their optimal policy compares to existing clinical guidelines. - Limited discussion of how their theoretical results relate to real-world screening scenarios. - No comparison to other optimization approaches in healthcare resource allocation. In summary, while the paper builds on existing ideas in disease screening and risk modeling, it could better contextualize its theoretical advances against practical clinical approaches in the field. Essential References Not Discussed: NA Other Strengths And Weaknesses: Strengths: - The mathematical characterization of optimal policies is rigorous and well-developed. 
Weaknesses: 1. Limited practical applicability: - Only handles 2 diseases - Relies on simplified assumptions about screening schedules - No validation with real-world medical data - Doesn't address clinical implementation challenges 2. Minor improvements: - Small gain in survival times (37.70 vs 37.47 years) may not justify the increased complexity - No cost-benefit analysis of implementation effort versus expected gains 3. Missing context: - Limited discussion of how this would integrate with existing clinical protocols - No consideration of practical constraints like patient preferences or healthcare system limitations - Doesn't address how to handle more realistic scenarios with uncertain disease risks The paper makes solid theoretical contributions but needs stronger connections to clinical practice to demonstrate real-world significance. Other Comments Or Suggestions: NA Questions For Authors: 1. Figure 1 interpretation: - Could you explain the meaning of the curved boundaries in Figure 1(b)? What determines their shape and why do they differ from the straight-line boundaries in 1(a)? - What is the practical interpretation of the different colored regions in terms of clinical decision-making? A clear explanation would help evaluate the practical relevance of your theoretical results. 2. Scaling to more diseases: - What are the theoretical and computational challenges in extending your approach to 3+ diseases? - Do you expect qualitatively different behavior in the decision boundaries with more diseases? - How would the computational complexity scale? 3. Clinical validation: - Have you validated any aspects of your model against real clinical screening data or protocols? - What modifications would be needed to handle real-world factors like uncertain disease risks and variable screening costs? This would help evaluate whether the theoretical gains would translate to practice. 4. 
Statistical significance: - Did you perform statistical analysis on the survival time improvement (37.70 vs 37.47 years)? - How sensitive are these results to your parameter choices and assumptions? Understanding the robustness of the improvements would affect assessment of the paper's impact. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the thorough review of our paper and constructive comments. **Figure 1:** 1(a) shows the decision boundaries for independent screening (current standard). 1(b) shows the boundaries that characterize the optimal policy for our referral problem (2). Our main contribution is to mathematically characterize these boundaries (Lemma 4.1 to Lemma 4.4, Proposition 4.5). Main message: to act optimally, screening for disease 1 should also depend on the risk of disease 2. Boundary shapes also depend on the screening budget, screening costs, adverse event times, and risk distribution (see Appendix C). Practical interpretation: clinics can use 1(b) to decide which diseases a patient with risk vector $(x_1,x_2)$ should be screened for. **Scaling to more diseases:** Handling $>2$ diseases does not impose a significant computational burden. Our referral policy optimization problem is a linear program (LP); hence, SoTA complexity bounds for LP apply to our setting. Characterization of the optimal referral policy will be much more complex and far less intuitive for the case of $>2$ diseases. One would expect the number of boundaries of interest (e.g., $5$ as shown in Figure 1(b) for $N=2$) to grow combinatorially with $N$ as there can be boundaries separating screening of one subset of diseases from another. **Clinical validation:** Any simulation study will not fully capture the real-world benefits of joint screening. A definitive evaluation of our joint screening protocol versus independent screening (current standard) would require a large-scale randomized controlled trial with two arms—an undertaking that is beyond the scope of our current work. Nonetheless, we hope that our theoretical results and in-silico experiments, which demonstrate the potential benefits of joint screening in a simplified yet intuitive and representative setting, will inspire further empirical research in this area. 
**Uncertain disease risks and variable screening costs:** One can use off-the-shelf risk prediction models to obtain disease risks for particular diseases. For instance, the Gail model (for breast cancer), QRISK3 (for cardiovascular disease), (normalized) polygenic risk scores, and even AI-based models can be used to provide risk scores. In our formulation, screening costs for different diseases can be different. In Figure 4, we also show the effects of varying screening costs. **Statistical significance:** Since our experiments are on simulated data, we report exact gains obtained by solving (2). Significance tests are not necessary since we are not making claims based on limited data. **Parameter selection and sensitivity of our results:** The choice of $T_0$ and $\mu_n$ is informed by the clinical literature (first paragraph of Section 5.1). While our choices of screening budget and costs are not borrowed from a real-world study, in Appendix C, we discuss in detail how the behavior and the performance of the optimal referral policy change as we vary each of these parameters. We hope that our detailed appendix resolves the reviewer's concerns about parameters. **Limited practical applicability:** There are many other examples where this methodology could be used to guide policymakers. One example can be found in sexual health services. For many settings, human immunodeficiency virus (HIV) and human papillomavirus (HPV) screening protocols are conducted at different health services and hence are independently considered (e.g., https://www.who.int/publications/i/item/9789240024168, https://pubmed.ncbi.nlm.nih.gov/38297406). Most of these screening protocols are guided by a certain level of risk (e.g., sexual behaviors). These screening measures carry varying costs (HIV antibody testing is not costly, whereas HPV screening carries higher costs associated with invasive procedures carried out by a gynecologist (for cervical neoplasia) or proctologist (for anal neoplasia)). 
Finally, delayed screening has implications on the time to present with severe HIV disease (for HIV) and time to develop CIN2/3 or AIN2/3, that is, cervical or anal pre-cancer (for HPV). This methodology could help define which risk levels would warrant joint screening protocols (to plead for streamlined services) or whether screening protocols can remain separate to minimize these times to undesirable outcomes. **Minor improvements:** Computational complexity is not an issue, as the problem is an LP. For $N=2$, our characterization of the decision boundaries and visualizations (e.g., Fig 1(b)) provide interpretable explanations of who should be screened. The same screening resources can be used more efficiently (no increase in screening costs for the population). We improve over independent screening in all cases (even when disease risks are independent). Reported numbers may vary based on the experimental setup, but the main message remains the same. We hope that our response has addressed your concerns. Please let us know if you have any other concerns. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' response, but I remain concerned about two methodological aspects: 1. **Lack of real data validation**: While a full RCT may be beyond scope, retrospective analysis using existing clinical datasets would significantly strengthen your claims. Simulation studies alone, no matter how well-designed, cannot fully capture the complexities of real-world clinical settings. 2. **Absence of statistical testing**: The statement that significance tests are "not necessary" for simulated data mischaracterizes good methodological practice. The reported 0.23 improvement in average survival time requires statistical validation across multiple simulation runs with different seeds to demonstrate that this difference is reliable and not due to chance variation in your specific simulation instance. 
Without confidence intervals or p-values, it's difficult to interpret the practical significance of this finding. These additions would substantially strengthen the paper's conclusions and enhance its potential impact on clinical practice. --- Reply to Comment 1.1.1: Comment: We appreciate the reviewer’s emphasis on the need for statistical validation and would like to clarify that our results are not based on a single simulation. At each Monte Carlo iteration, we simultaneously run 200 independent simulations for each of 10000 $(x_1, x_2)$ pairs to capture the variability in patient trajectories, diagnoses, and event occurrences. This inner-loop simulation ensures that the survival outcomes reflect a realistic distribution over possible patient outcomes, rather than a single realization. To further validate the robustness of our findings, we conducted an experiment where we saved the outputs of each of the 50 Monte Carlo iterations separately. Using this data, we solved 50 different linear programs to obtain optimal policies and corresponding average survival times for both the unified and independent screening approaches. The results across these runs show that the standard deviation of the average survival time is 0.0089 for unified screening and 0.0117 for independent screening. Given that the average difference in survival time between the two methods is approximately 0.2 (as shown in our work), the confidence intervals defined as mean ± 2 * std do not overlap. This provides strong empirical evidence that the observed improvement is not due to chance variation but reflects a consistent advantage of our unified approach.
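As a self-contained illustration of the interval check just described (the per-run values below are synthetic stand-ins drawn with the quoted means and standard deviations, not the actual simulation outputs):

```python
import random
import statistics

random.seed(1)
# Synthetic per-run average survival times for 50 Monte Carlo iterations,
# using the means and standard deviations quoted in the rebuttal.
unified = [random.gauss(37.70, 0.0089) for _ in range(50)]
independent = [random.gauss(37.47, 0.0117) for _ in range(50)]

def interval(xs):
    """mean +/- 2 * std interval, as used in the rebuttal."""
    m, s = statistics.mean(xs), statistics.stdev(xs)
    return m - 2 * s, m + 2 * s

lo_u, hi_u = interval(unified)
lo_i, hi_i = interval(independent)

# Non-overlapping intervals support the reported ~0.2-year improvement.
print(lo_u > hi_i)  # -> True
```

With a gap of about 0.23 years against interval half-widths of roughly 0.02, the intervals are far from overlapping, matching the rebuttal's conclusion.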
Summary: This article offers a novel optimization framework for the complex task of unified screening for multiple diseases, attempting to balance multiple factors including disease risk, budget, and diagnostic test characteristics. Claims And Evidence: The authors begin with a reasonable summary of the challenges of disease screening and the optimal use of resources in this context, although this could be clarified with respect to why we might not just stack these individual screening programs in all patients (which is implicit but may benefit from being made explicit). It is not entirely clear to me why the screening for one disease depends on the screening for another, outside of contexts where the risks are clearly contingent (e.g. pulmonary hypertension and cardiac disease). Is this approach meant to be limited to such contexts? They further offer an in silico evaluation, attempting to assess the impact of their various policies in a simulated environment. While the in silico analysis is reasonably performed within these boundaries, I worry that the overall approach overstates the benefit of many screening programs implicitly. While the benefit of screening is intuitive, it often does not play out in practice (this excellent recent review should be discussed, as most screening programs have not actually been shown to offer any significant mortality benefit https://pmc.ncbi.nlm.nih.gov/articles/PMC10463170/). This should be softened. The authors claim "For example, treating one condition, such as heart disease, can enhance the effectiveness of screening for another, such as lung cancer" without citation. What is meant by this? Is this referring to improving the accuracy of e.g. a chest x-ray because there is less pulmonary edema? 
Overall, however, despite some of my concerns about the broader direction of the field (and my deeper skepticism of screening as naively applied), I feel that this paper makes a reasonable contribution to its literature field. I do, however, believe that these results must be discussed more cautiously, with the recognition that these toy formulations of screening may not fully align with the complex realities of clinical medicine in this context. Similarly, I feel the authors should provide some further justification of the value of this multi-disease screening approach and the specifics of their contentions regarding the relatedness of these diseases. Methods And Evaluation Criteria: The authors offer a detailed mathematical optimization formula to combine multiple different policies. It is commendable that their methods attempt to incorporate multiple different areas of analysis. I am not able to comment in detail on the accuracy of their formulae, as my background is more in clinical medicine and practical applications of machine learning in clinical contexts. Theoretical Claims: As outlined above, I worry that this work (as with much of the work in this field) overstates the effectiveness of screening in general. Experimental Designs Or Analyses: I am not able to review the mathematical proofs in detail given my background, however their in silico analyses are reasonable overall. Supplementary Material: I appreciate the authors' work to offer a very broad overview of the relevant literature in the supplement. Relation To Broader Scientific Literature: The authors offer an excellent engagement with the relevant literature, with appropriate commentary in the related works section. I also appreciate the excellent Table 1 approach to clearly situating this project within the broader literature. My concerns as outlined above are with a lack of engagement with literature skeptical of screening overall. 
Essential References Not Discussed: As discussed above, further engagement with some of the medical literature skeptical of the broader benefits of screening may be worthwhile here. Other Strengths And Weaknesses: Discussed elsewhere. Other Comments Or Suggestions: One quibble with the introduction is with the statement "For example, screening a patient with high risks for both lung cancer and cardiovascular disease for only one condition might fail to improve their overall health outcomes, whereas a unified screening approach could yield better results". This does not make it clear why one would not just screen for both in this patient. I believe it is substantiated elsewhere, but this should be more clearly explained in the introduction. Questions For Authors: See above - I have several questions regarding the underlying theoretical assertion of the connection between the multiple diseases being screened. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the thorough review of our paper and the constructive comments. **Why screening of one disease depends on the screening of another:** In the example of pulmonary hypertension and cardiac disease, risks are clearly contingent. Our approach is not limited to such examples, as we offer a non-trivial solution even when disease risks are independent. Due to the nature of our objective function (related to survival time) and our constraint function (screening cost budget), the optimal threshold for activating the screening of one disease depends on the risk of the other disease. This coupling distinguishes our model and solution from independent screening, which yields a suboptimal policy for our referral problem. We will further clarify this in the revised paper. **Regarding the benefit of screening:** Thanks for pointing out this important review paper. We agree that screening does not always offer benefits and sometimes can even be harmful, as in the case of overdiagnosis. We will mention this review paper in the revised version and explicitly discuss the limitations of screening in line with this work. In our work, we focus on the case when screening does not hurt. In our case, the only detrimental effect of screening is a false positive rate, which is expressed as a constraint in the optimization problem. We cannot screen everyone for every disease since screening is costly and screening resources are limited. We characterize exactly how limited screening resources should be distributed over the population so that the expected benefit is maximized. **Clarification of the screening example:** Consider a patient who (potentially) has both heart disease and cancer but is only screened for cancer, where the adverse event from cancer is expected to occur after the adverse event from heart disease. 
If we only screen for cancer, but the patient unexpectedly dies from heart disease, then the cancer screening offers no benefit to the patient in terms of lifetime gain. If the patient is screened for both diseases, heart disease will be identified earlier, the adverse event due to heart disease will be prevented, and cancer screening will result in lifetime gain by detecting cancer before the adverse event associated with it happens. We hope that our response has addressed your concerns. Please let us know if you have any other concerns.
Self-Improving Language Models for Evolutionary Program Synthesis: A Case Study on ARC-AGI
Accept (poster)
Summary: This paper proposes SOAR, a framework for program synthesis that enhances language models through a self-improving evolutionary loop. Specifically, SOAR alternates between using an LLM for evolutionary search and applying hindsight learning to fine-tune its generation and refinement capabilities. This process enables continuous improvement of LLMs without human-engineered data, overcoming performance plateaus in conventional search-based methods. Extensive experimental results show that SOAR achieves state-of-the-art results among open-source LLMs on the ARC-AGI benchmark and provide insights into how AI systems can bootstrap their own improvement. This work opens new possibilities for advancing complex reasoning in program synthesis. ## update after rebuttal I am satisfied with the authors’ rebuttal and maintain my positive rating of 4. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: N/a. Experimental Designs Or Analyses: Yes, the experiment is well designed and offers insightful analysis. Supplementary Material: Yes. The appendix presents a comparison with prior work and detailed prompts used for generation and refinement. Relation To Broader Scientific Literature: Yes. This work opens new possibilities for advancing complex reasoning and problem-solving in program synthesis. Essential References Not Discussed: Yes. The generation and refinement loop for LLM-based program synthesis has been partially explored in CodeRL's critic sampling. Specifically, Table 4 of its paper (https://arxiv.org/pdf/2207.01780) demonstrates the program synthesis performance on APPS with different rounds of program repair. This work should be discussed. Other Strengths And Weaknesses: ## Pros - The proposed framework achieves state-of-the-art results for program synthesis among open-source LLMs on the challenging ARC benchmark. 
It offers valuable insights into how program synthesis systems can transcend the limitations of their base models through self-improvement, a fundamental challenge in LLMs, opening new possibilities for advancing reasoning and problem-solving in program synthesis. - All claims in this paper are well-supported by comprehensive experimental results. The presentation is clear, with strong writing and effective visual illustrations. Readers will find the paper both engaging and informative. ## Cons As noted in the Discussion section, this work has two key limitations: - The impressive results rely on substantial computational resources (e.g., 6k synthesis attempts per task per self-improvement cycle), making real-world applicability challenging. - This paper only demonstrates its effectiveness on the ARC benchmark. Its generalization to other domains such as programming contest or mathematical reasoning is unclear. Other Comments Or Suggestions: N/a. Questions For Authors: N/a. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewers for their thoughtful comments. Reviewer cz1F noted that SOAR achieved state-of-the-art results among open-source inductive approaches on the ARC benchmark by transcending the limitations of based models through iterative self-improvement. They also noted the quality of our experimental paradigm, how it supports our claims, while praising both writing and visual illustrations. This said, the reviewer expressed three concerns that we address below: **Discussion of CodeRL:** We’d like to thank the reviewer for pointing out this work. SOAR and CodeRL both aim to improve program synthesis by improving pretrained language models with feedback from execution, but do so using different approaches: - **Learning signal:** SOAR uses hindsight learning to learn from both successes and failures (rich signal) while CodeRL uses the fraction of unit tests passed (weaker signal) - **Exploration:** SOAR uses a state-of-the-art method based on genetic algorithms to search the space of programs (REX), which enables a better exploration of program space than simply sampling from the current RL policy, as done in CodeRL. - **Learning algorithm:** CodeRL leverages a complex actor-critic architecture while SOAR relies on a straightforward supervised finetuning procedure. Finetuning is conducted on a small fraction of the generated data (50 programs out of 6000 generated per task), making the approach computationally cheaper (RL trains on all generated programs). - **Refinement:** CodeRL trains a separate policy and critic to perform refinements (code repairs) from code execution feedback, while SOAR uses a single model for generation and refinement. Our paper further demonstrates positive transfer between these two tasks (training on each makes the other better). 
- **Complexity:** SOAR implements a simple algorithm: search, relabel, finetune, while CodeRL relies on several specialized modules including: pretraining of the policy with collected code data, warm up of the policy with ground truth programs, freezing of the policy to pre-train the critic on ground truth data, reward shaping, and others. Overall, SOAR proposes a cleaner, simpler approach that’s easier to scale based on several key ideas: iterative improvement, finetuning of generation and refinement (positive transfer), and hindsight relabelling. The revised version of the paper now discusses this relevant work and its relation to ours. **Extensive computational resources (6k programs per task):** SOAR is only trained on 50 programs per task at each generation and does not fundamentally require 6k programs per task during the search phase. If we’re sampling so many attempts, it’s because ARC is difficult, and base models are unlikely to generate interesting programs in only a few trials. This number is comparable (and even lower) to the ones used in related approaches (e.g. 20k in Li et al., 2024; 8k in Greenblatt, 2024). An interesting future work could be to look at the optimal budget allocation between running longer searches or running more refinement iterations. The optimal search budget might vary across iterations too, as finetuning might itself accelerate search by increasing the ability of the generation and refinement model to encounter successful programs earlier on (see answer to Reviewer ifzp). We note that SOAR is only useful to improve upon program synthesis domains that could not be solved by search alone, which is why it requires substantial computational resources. We added a short discussion about the significant search budget and the necessity to adapt this parameter to the domain at hand in the revised version of the manuscript. 
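As a toy, self-contained sketch of the search, relabel, finetune loop described above (all helper names are hypothetical; programs here are simple Python expressions over an integer input, standing in for ARC grid transformations, and the finetuning step itself is not shown):

```python
# Toy sketch of SOAR's hindsight-relabeling step (hypothetical helpers).

def search(n_attempts):
    """Stand-in for LLM-guided evolutionary search over candidate programs."""
    return ["x + 1", "x * 2", "x -"][:n_attempts]  # the last one is invalid

def hindsight_relabel(programs, task_input):
    """Every executable program solves *some* task: pair each one with the
    output it actually produced to obtain a correct training example."""
    data = []
    for p in programs:
        try:
            y = eval(p, {"x": task_input})
        except Exception:
            continue  # discard programs that fail to execute
        data.append({"input": task_input, "output": y, "program": p})
    return data

dataset = hindsight_relabel(search(n_attempts=3), task_input=5)
# `dataset` would next be used to finetune the generation/refinement model.
print(len(dataset))  # -> 2 (the invalid program is dropped)
```

The point of the sketch is the relabeling idea: failed attempts still yield correct (input, output, program) triples for the task they happened to solve, which is the rich learning signal contrasted with CodeRL's unit-test fraction above.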
**Generalization to other domains:** We picked the ARC benchmark because it was designed for the purpose of evaluating general program synthesis algorithms: it is hard, diverse, relatively out of distribution for LLMs, and explicitly designed to test core reasoning abilities beyond pattern matching. Given our limited computing budget, we decided to focus our resources on a deep study of the SOAR approach on ARC, as opposed to a shallower study on several benchmarks. This allowed us to make the space of design decisions explicit and study which worked best using carefully controlled experiments as several reviewers have noted. This said, SOAR is a general framework that doesn’t rely on any ARC-specific assumption and can directly be transposed to any programming-by-example domain including code synthesis like APPS. Our paper describes the methodology clearly and how to navigate the space of design decisions with careful experimentation. We thus expect SOAR to generalize across domains, but, as we acknowledge in the paper, leave the empirical verification of this claim to future work. Please let us know if the above discussion answers your concerns and consider raising your score if it does. If it doesn’t, let us know which concerns remain so we can try to address them. Thank you!
Summary: The paper introduces SOAR, a method for program synthesis that extends existing LLM-based methods by introducing an iterative fine-tuning approach. Recent LLM-based program synthesis work has relied on two methods: (1) directly querying the LLM in-context by expressing the task as language (possibly after fine-tuning the LLM on coding tasks) or (2) combining the LLM with classical search methods (for example, using the LLM prompt to generate code samples, evaluating those samples by their performance, selecting the best, and feeding them back to the prompt to get even better ones). This paper introduces an alternation between LLM-based search and fine-tuning of the LLM. This way, the search outputs from the LLM's prompt are used to improve the LLM weights, and these weights can then be used to generate better search outputs, and so on, thus iteratively improving the results. The paper benchmarks their method against baselines on the ARC-AGI dataset, a toy-yet-difficult task of transformations on colored grids, which humans handle well but ML systems do not. On this task, the paper claims state of the art performance. Further, the paper does a series of thoughtful ablations and analysis to demonstrate that the different components of their method provide an advantage. This includes not only the fine-tuning, but also innovations that they introduced into the search phase. Update after rebuttal: I raised my score to 4 after the rebuttal discussion. Claims And Evidence: The claim that this method performs better than previous methods tested on ARC-AGI seems to be well supported by the evidence, with a caveat about the measurement of cost and about the use of the test set (see my comments under "Experimental Designs Or Analyses"). 
I have concerns, however, that previous methods may not have been tested on ARC-AGI, and that, as a result, this paper does not compare against those methods (see my comments under "Methods And Evaluation Criteria"), but I could be wrong. Methods And Evaluation Criteria: The ARC-AGI benchmark is a little bit niche, so the question remains of whether the results generalize to other program synthesis tasks. However, the paper is clear about this limitation (even saying it in the title) and it does a good job about not over-claiming. Further, showing improvements on the ARC-AGI dataset is an achievement on its own, as the benchmark is far from being saturated. Trying the method on a diversity of tasks would be a great next step for a follow-up paper. Regarding the baselines (Table 1): could the paper have also used as baselines methods like "FunSearch" (Romera-Paredes et al. 2023) or "Evolution of Heuristics" (Liu et al. 2024)? In particular, the first of these was an impactful Nature paper, so it would be a natural choice. Both of these papers benchmark on a bin-packing problem that may make sense for this method too, as both papers solve the problem by generating code. Theoretical Claims: Not applicable. Experimental Designs Or Analyses: It was informative to see the successive addition of components to the method and how they affect performance on the training set. I appreciated how this even included different variants of the components. On the other hand, I have a concern about the measurement of compute cost. The central claim in this paper is that SOAR achieves state-of-the-art performance on ARC-AGI when compared to baselines. However, both this method and the search-based baselines can be iterated for an indefinite amount of time (while the returns are diminishing, generally it is better to run for longer). A fair comparison, therefore, would require matching the compute cost of the methods in question. Was this done here? 
In particular, the baselines do either a 1-shot in-context query or an in-context search process while SOAR also requires multiple rounds of potentially costly fine-tuning. Was the cost of this fine-tuning taken into account? For example, how would the results change when the baselines are allowed more iterations so that their total cost (e.g. in inference FLOPs) matches the total cost of SOAR (inference and repeated fine-tuning FLOPs)? Additionally, I suspect, but I am not certain, that unsupervised test data may have leaked from one test example to another. SOAR seems to have done fine-tuning iterations on the test data but highlights that the labels were excluded. Still, I can imagine that information about the training examples of a test task can be incorporated by the fine-tuning and used in the next test task. This could give the method an unfair advantage over the baselines. A way to mitigate this, while remaining flexible, could be to allow SOAR to do anything it wants with the training data of a given test task but, before going to the next test task, reset the LLM back to its state just after training. This way, no information, even unsupervised, can leak from one test task to the next. On the other hand, I may be ignorant of common practices used when benchmarking on ARC-AGI, so I would be very curious to hear the opinion of other reviewers and of the authors on this. Regardless, given the intermediate results shown on the training set, I would guess that this method remains the state of the art when the test-set fine-tuning is removed. Supplementary Material: I did not review it. Relation To Broader Scientific Literature: The paper is clearly related to current scientific literature. The problem of including fine-tuning into the code discovery process is well motivated. The problem of code discovery is relevant in the modern machine learning literature. Please also see my "Summary". 
Essential References Not Discussed: * Around line 327, the paper points out that "fine-tuning generation capabilities on successful synthesis attempts [...] is an implementation of the STaR algorithm". This should not be buried here. Instead, if the STaR algorithm was used, it should be cited at the first moment the method is described. That is, it should be mentioned prominently in section 3. * Impactful state-of-the-art methods that used LLMs in combination with evolutionary search for the purposes of code discovery, such as "FunSearch" (Romera-Paredes 2023) or "Evolution of Heuristics" (Liu 2024), are not mentioned. Other Strengths And Weaknesses: Listed throughout the other sections of this review. Other Comments Or Suggestions: * Typo/grammar in "to scaffold search" (line 42, right column). * Typo in "to challenges" (line 48, right column). * Line 86 should say "state of the art" (noun), not "state-of-the-art" (adjective). * Line 86 should be less general. Instead of saying that the paper establishes a new SOTA on program synthesis, it would be more accurate to say that it establishes a new SOTA on the ARC-AGI program synthesis benchmark. More work is required to establish SOAR as SOTA on program synthesis as a whole. * Grammar/typo in line 100, right column. * It is hard to tell whether the numbers in table 2 are significantly different from each other. Confidence intervals would help. * Line 430 (left column): I don't understand the sentence "Rather than pushing against existing performance, SOAR finds paths to bypass them entirely". Questions For Authors: My recommendation could be changed to acceptance if the following questions are addressed, especially the first one: * I believe that the cost of fine-tuning was not taken into account in the comparison against the baselines, which could potentially affect the conclusion. Please see my question about this under "Experimental Designs Or Analyses".
* Why not benchmark against state of the art program synthesis methods like FunSearch or Evolution of Heuristics? In particular, why not compare on the same benchmark as those papers? Please see my comments under "Methods And Evaluation Criteria". * Is it possible that unsupervised test data was leaked from one test example to another? Please see my comment under "Experimental Designs Or Analyses". Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their helpful feedback. The reviewer noted the importance of the problem we tackle and the quality and pedagogy of our experimental section but raised several concerns which we address below. **Controlling for compute costs:** We thank the reviewer for raising this point. In the revised manuscript, we now address it directly with a new figure: https://anonymous.4open.science/r/arc_example-EBB9/compute_match_14b.pdf. Performance improves with both model size and search budget, each following separate scaling laws. SOAR enables us to break through these plateaus and achieve higher performance levels. As shown in Figure 4, adding search and then iterative improvements via SOAR surpasses the performance ceiling reached by merely scaling model size (e.g. Claude-level performance). The new figure shows a similar pattern for the search budget. Performance with the base model (generation 0) plateaus after about 8k search attempts. In contrast, SOAR at generation 1 (after 6k search attempts, followed by finetuning, and then another 6k attempts) outperforms the base model even after 12.6k attempts (+7.5%). This 12.6k budget matches the one used by SOAR gen 1 (6k + 6k search + \~5% compute for finetuning, cf. explanation below). After generation 1, search seems to plateau even earlier (~5k), but SOAR still achieves significant performance gains across generations. This experiment answers the reviewer's question: Would using the same compute budget to generate more solutions with the base model yield similar performance gains as SOAR? The answer is no. The base model stagnates at ~8k attempts, with performance at 12.6k attempts remaining on par with its 6k result. Thus, SOAR achieves superior results within the same compute constraints. The revised manuscript includes this new figure, reinforcing that SOAR breaks through the performance plateaus of both model-size and search-budget scaling.
**Finetuning costs:** Finetuning is inexpensive compared to the search phase. Finetuning FLOPs per iteration are $6N \times (100 \cdot T \cdot n)$, where $N$ is the number of LLM parameters, $T$ is tokens per completion, and $n$ is the number of tasks. With ≤100 datapoints per task, sampling FLOPs are $2N \times (6000 \cdot T \cdot n)$, making finetuning ~5% of total FLOPs, nearly negligible. Additionally, autoregressive generation is slower (token-by-token forward passes), while finetuning processes sequences in one forward and backward pass, per Austin et al. ("How to Scale Your Model", 2025). **Possible leaking across test tasks:** In the official ARC-AGI Kaggle competition and related literature, there are no strict constraints on the order of task processing or the use of unsupervised data across tasks. This flexibility allows methods to leverage examples from other tasks in an unsupervised fashion, though it's unclear whether this provides a significant advantage over refining strong candidate solutions within a single task. Several prior works (e.g., Akyürek et al., 2024) adopt similar practices. **Relation to the STaR algorithm:** The confusing comment has been clarified. SOAR isn't an implementation of STaR but shares its spirit, enhancing LLM reasoning by bootstrapping from self-generated data. STaR applies this to reasoning tasks with chain-of-thought text, using binary signals to identify correct reasoning for finetuning. SOAR adopts a similar approach, finetuning on self-generated programs from a search process, guided by hindsight relabeling of input-output pairs, training both generation and refinement. This is now clearer in the paper. **Comparison with FunSearch and Evolution of Heuristics:** SOAR enhances search algorithms that use LLMs for generation and mutation. FunSearch, like REX, is one such algorithm. While SOAR could be compared to FunSearch, it's not a direct competitor (REX is): SOAR can be used to improve upon FunSearch by training its generation and refinement capabilities.
In particular, FunSearch also uses a "crossover" operator to generate candidate programs from two seed programs. These could also be trained with SOAR using the exact same iterative finetuning paradigm. The same would apply to the evolution-of-thoughts operators of EoH. Studying SOAR's impact on FunSearch, especially training crossover for performance gains, is left for future work due to limited resources. We address these points in our related work section. **Generalization to other domains (e.g. bin packing):** Please refer to our answer to Reviewer cz1F, who also commented on this point (last point in our answer). **Minor points:** - We corrected the typos pointed out by the reviewer. Please let us know if the above discussion answers your concerns and consider raising your score if it does. If it doesn't, let us know which concerns remain so we can try to address them. Thank you! --- Rebuttal Comment 1.1: Comment: Thank you for your rebuttal. Your answer to my question on compute cost makes sense to me. The issue of how test data can/should be used in a benchmark remains a slight concern for me, but it is no longer a concern specific to this paper. It doesn't really affect the conclusion of this paper either, so from my point of view, this is fine. Regarding the section in your rebuttal "Comparison with FunSearch and Evolution of Heuristics": I believe that, simply put, you are saying that given an LLM-based search method "X", it makes sense to compare X against SOAR on X, or to compare SOAR on X against SOAR's competitor on X. You are also saying that it does not make much sense to compare SOAR on X against another LLM-based search method Y. I agree. My suggestion of comparing against FunSearch makes sense only if you can also run SOAR on FunSearch, but this must remain future work because of resource constraints. Sounds perfectly reasonable. Overall, I think my questions have been answered very well. Based on this, I will switch my recommendation to acceptance.
--- Reply to Comment 1.1.1: Comment: We sincerely appreciate the time and effort you invested in reviewing our paper and responses. Thank you for the improved score, we are truly grateful!
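[Editor's note: the ~5% finetuning-cost estimate in the rebuttal above can be reproduced with a quick back-of-the-envelope calculation. The sketch below uses the rebuttal's per-token FLOP rules (roughly 6 FLOPs/parameter/token for training, 2 for inference) and its per-task counts (100 finetuning datapoints vs. 6000 sampled completions); the function name is ours, and model size and completion length cancel out of the ratio.]

```python
# Back-of-the-envelope check of the rebuttal's ~5% finetuning-cost claim.
# Finetuning: ~6 FLOPs per parameter per token (forward + backward pass).
# Sampling:   ~2 FLOPs per parameter per token (forward pass only).
def finetune_fraction(n_params: float, tokens_per_completion: float,
                      train_examples: int = 100, search_attempts: int = 6000) -> float:
    finetune_flops = 6 * n_params * train_examples * tokens_per_completion
    sample_flops = 2 * n_params * search_attempts * tokens_per_completion
    return finetune_flops / (finetune_flops + sample_flops)

# Model size and completion length cancel, so any values give the same ratio.
frac = finetune_fraction(n_params=14e9, tokens_per_completion=1000)
print(f"finetuning share of total FLOPs: {frac:.1%}")
```

With these counts the finetuning share comes out just under 5% of total FLOPs, consistent with the rebuttal's "nearly negligible" estimate.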
Summary: The paper introduces a novel framework for program synthesis that integrates large language models (LLMs) into a self-improving evolutionary loop. The framework alternates between two phases: (1) an evolutionary search phase using an LLM to generate and refine candidate programs for a given task, and (2) a learning phase where search traces (both successful and failed attempts) are used to fine-tune the LLM's generation and refinement capabilities. It is claimed that this creates a virtuous cycle where improved models lead to more effective search, generating richer training data for further model improvement. The paper evaluates the method on the challenging ARC-AGI benchmark. Claims And Evidence: * Claim: SOAR achieves significant performance gains on ARC-AGI compared to baseline search methods and single-shot LLMs + The quantitative results in Table 1 demonstrate that SOAR significantly outperforms single-shot approaches and even larger closed-source models using basic search. The iterative improvement shown in Figures 2 and 3 provides evidence for the self-improvement claim * Claim: SOAR establishes state-of-the-art results for program synthesis on ARC-AGI among open-source LLMs + Table 5 compares SOAR's performance to various prior inductive approaches on ARC. SOAR's performance of 41.25% on ARC-test with the pooled 32B model is shown to be competitive with or exceeding previous open-source and even some closed-source approaches (when considering budget and data usage). 
(There have also been further contemporaneous SOTA improvements) * Claim: SOAR leverages positive transfer between generation and refinement finetuning + The experiments suggest a positive interaction and transfer of learned capabilities between generation and refinement * Claim: SOAR enables test-time adaptation and continuous improvement on target problems + Figure 3 and Section 4.4 demonstrate performance improvements during test-time training iterations on ARC-test * Claim: SOAR breaks through performance plateaus of search-based methods + The 'scaling plateaus' shown in Figure 4 don't seem to account for scaling up of commercial model sampling (e.g. Greenblatt 2024), which makes this claim somewhat suspect. Methods And Evaluation Criteria: * Evolutionary Search with LLMs: Using LLMs for both program generation and refinement leverages the generative power of LLMs within a structured search framework, and evolutionary search is appropriate for exploring the large and complex program space in ARC-AGI. The use of REX (Thompson sampling with exploration bonus) for refinement seems like a reasonable choice to manage the search budget effectively. * Self-Improving Loop: The core idea, alternating between search and learning, is novel and makes intuitive sense for overcoming the limitations of fixed-capability models. Finetuning on search traces (both successes and failures) is a practical way to learn from experience. * Hindsight relabeling to augment training data is a clever technique to increase the training data quantity and quality.
* Test-time Training: Adapting the self-improvement loop to test-time training appears to have been an 'add-on' technique rather than the core thrust of the initial research * Majority Voting: Ensembling with majority voting is a standard and effective technique - though it seems to be admitting defeat as a final step amidst the other innovations here The use of ARC-AGI and the detailed experimental analysis make the evaluation strong and convincing. Theoretical Claims: There are no explicit theoretical claims that require proof checking. Experimental Designs Or Analyses: Unmentioned (though it seems clear from the experimental design) are the practical considerations of the Kaggle environment for the ARC-Prize. Given those constraints, many of the choices (LoRA, unsloth, model size, number of generations, etc.) make a lot of sense - it would be nice for the over-arching explanation to be given, though. This detail would also explain the flow of experiments, from the 'grand design' through all the ablations, continually moving forwards. The State-of-the-art Comparison (Table 5) for ARC-AGI, including CodeIt, BARC-induction, Icecuber, and Greenblatt (2024), provided clear context for SOAR's performance and establishes its state-of-the-art status among open-source LLM methods (modulo concurrent submissions). Supplementary Material: Yes - these were clearly useful. Relation To Broader Scientific Literature: * Program Synthesis: The paper builds upon a long history of program synthesis research, referencing traditional approaches like Genetic Programming (Koza, 1994). It acknowledges the shift towards using deep learning for program synthesis (Balog et al., 2016; Ellis et al., 2021) and highlights the recent impact of LLMs (Roziere et al., 2023; Guo et al., 2024) * Evolutionary Algorithms: SOAR leverages evolutionary search principles, drawing inspiration from mutation and crossover operations in genetic algorithms.
It cites work using LLMs as operators in evolutionary search (Lehman et al., 2023; Meyerson et al., 2024). SOAR extends this by making the evolutionary operators (LLMs) learn and improve through experience, a novel aspect compared to traditional evolutionary methods with fixed operators * ARC-AGI Benchmark: The paper directly addresses the ARC-AGI benchmark (Chollet, 2019) - a valuable benchmark, which is less easily gamed than many other benchmarks commonly used in research Essential References Not Discussed: * More foundational work on evolutionary computation and genetic programming: While Koza (1994) is cited, including other foundational texts or surveys on evolutionary computation or genetic algorithms might be helpful to provide a broader context for the evolutionary search component of SOAR. For example, work by Holland (1975) or Goldberg (1989) on genetic algorithms could be considered. Other Strengths And Weaknesses: Strengths * Originality and Novelty: The core ideas include those of using a self-improving evolutionary loop for program synthesis, where the search operators (LLMs) learn from search experience. Hindsight relabeling in this context was also a nice touch! * Empirical Validation: The paper provides strong empirical evidence to support its claims through comprehensive experiments, ablation studies, and comparisons to baselines and state-of-the-art methods Weaknesses * While the paper's main contribution SOAR is effectively shown, there are also a lot of additional 'bells and whistles' that are also added (and ablated for) that somewhat muddy the picture.
It seems clear that this was also partly the result of reporting everything that contributed to the final results at the end of a Kaggle 'mad scramble', rather than a pure research endeavour * Limited Qualitative Analysis: While the paper provides examples of generated programs, a more detailed qualitative analysis of the types of programs SOAR learns to generate and refine, and how the quality of programs evolves across iterations, could be beneficial. In the Discussion (Section 5), it was stated that "smaller models (7B) demonstrated steeper learning curves and seemed to discover qualitatively different solutions" - it would be great to know more about this Other Comments Or Suggestions: * Including an error analysis of the tasks that SOAR still fails to solve could provide further insights into the limitations of the approach and guide future research directions. Understanding common failure modes would be valuable. Typos * L82 : "SOAR learns to solves an extra" -> "SOAR learns to solve an extra" * L100 : "preventing them to improve from experience" -> "preventing them from improving through experience" * L143 : "(see proof in Section 4.1)" -> "(pure LLM results given in Section 4.1)" * L283 : Table 2: Generation acc should be all 2 d.p. Questions For Authors: Qualitative Evolution of Synthesized Programs: Could you elaborate on how the quality and characteristics of the programs synthesized by SOAR evolve across iterations of self-improvement? Are there observable trends in terms of program complexity, algorithmic sophistication, or reasoning capabilities as the model is iteratively finetuned? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank Reviewer FfTX for their helpful feedback. The reviewer noted that the approach is reasonable, intuitive and novel. They commented on the strength and extensive details of our experimental studies, acknowledging that they supported our claims. This said, they raised several concerns that we address here: **Too many components in the method:** This kind of feedback is particularly helpful for simplifying the presentation of the method. We argue SOAR is simple: 1) use some kind of search algorithm to generate candidate programs (REX for us); 2) select interesting programs and apply hindsight relabelling; 3) finetune; 4) repeat. Section 4.2 explores step 2's design space for ARC-AGI, not a "mad scramble" but a guide for adaptation, and we think that it will help others adapt SOAR to their use case. The revised manuscript better separates the high-level idea from this exploration. **Need for qualitative analyses (features across generations and models, examples and error analysis):** We agree that qualitative analyses are important here. Our analyses found the following trends across generations (true of all model sizes, numbers for Qwen 14B): - The proportion of error-free programs rises on average from 0.92 (gen 0) to 0.98 (gen 4). - Complexity increases, as shown by the rise in lines of code from 16.6 to 24.5, in the number of control structures (5.2 to 9.5), and in the maximum depth of control structures in the AST (3.4 to 4.7). Interestingly, the number of helper functions remains stable (1.5 -> 1.4). To discuss qualitative results, we created an anonymous repository containing interesting examples at https://anonymous.4open.science/r/arc_example-EBB9/. We chose one example for each of the following categories, but feel free to explore other examples: - **Examples of tasks solved only by smaller models:** e.g. example in solved_by_smaller_models_only - **Examples of tasks only solved by later generations:** e.g.
example in solved_in_later_generations - **Examples of failures and successes** e.g. examples in other folders The revised version of the manuscript will include graphs of the trends across models and generations in the Appendix, a link towards the codebase and the repository of examples. This will open the opportunity for others to study generated programs in more detail and discover new insights. One way could be to use LLMs to label each program along various dimensions: e.g., does it use recursion? Does it identify objects? Symmetry? And ask LLMs to describe the overall solving strategy, then use these features to analyze how their distribution shifts across generations and models, whether the diversity of strategies used increases or decreases with finetuning, or when a solution is found. These analyses are left for future work, either by us or by anyone else using the dataset of generated programs we will release with the paper. **Practical constraints of the ARC-AGI benchmark:** This work did not run in the Kaggle competition but was constrained by limited compute resources. Experiments for the 14B model cost an estimated USD 4000 (excluding method development costs). This explains why we focused on a single domain: we preferred conducting careful experiments and a thorough exploration of the design space (Section 4.2) rather than superficially reporting final success rates on several benchmarks. These computational constraints forced us to use data-efficient finetuning methods (LoRA, unsloth) and to cap the number of examples per task we could train on (50 of the 6000 generated programs). The revised manuscript makes these constraints more explicit and discloses a more detailed cost of our experiments to clarify some of the design choices. **Does SOAR break through scaling plateaus?** In Figure 4, dashed lines represent one-shot closed-source large LLMs, not scaling plateaus.
Here, "scaling plateaus" refer to the leveling-off of each curve as model size increases. Each enhancement shifts to a higher scaling law, breaking these plateaus. Our new figure (compute_match_14b.pdf in the repo) shows that scaling search also hits plateaus, which self-improvement surpasses. The Claude+Search result (Greenblatt, 2024; 42%) is absent from Figure 4. We believe SOAR could break this plateau too, though applying it to Claude is infeasible (closed model, high cost). Updated Figure 5 and the new figure clarify these points. **Minor concerns and suggestions:** - **GA references:** We added the GA references suggested and complemented our EC related work. - **Majority voting:** Majority voting is necessary here to decide which final solution to submit to a given task given a whole search trajectory, and it is therefore used in all search approaches. - **We corrected the typos.** Please let us know if the above discussion answers your concerns and consider raising your score if it does. If it doesn't, let us know which concerns remain so we can try to address them. Thank you! --- Rebuttal Comment 1.1: Comment: ### *Re: "Need for qualitative analyses (features across generations and models, examples and error analysis)"* Yes: Adding the graphs of the trends across models and generations would be very helpful. Revealing concrete examples of SOAR's strengths/weaknesses in your new repo is an excellent additional contribution! --- I've increased my score to "4: Accept". --- Reply to Comment 1.1.1: Comment: Thank you for your feedback, which helps us improve the clarity of our paper. We are grateful that you raised your score!
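[Editor's note: the majority-voting point in the rebuttal above - that one final answer must be picked from a whole search trajectory - can be illustrated with a minimal sketch. The names are ours, not the authors' implementation; candidate outputs are assumed to be hashable serializations of grids (e.g. tuples of tuples).]

```python
from collections import Counter

def majority_vote(candidate_outputs):
    """Pick the most common predicted test output among candidate programs.

    `candidate_outputs` holds one predicted output per surviving candidate,
    serialized to a hashable form so identical predictions can be counted.
    """
    counts = Counter(candidate_outputs)
    winner, _ = counts.most_common(1)[0]
    return winner

# Three candidates agree, one dissents: the majority answer is submitted.
preds = [((1, 0), (0, 1)), ((1, 0), (0, 1)), ((0, 0), (0, 0)), ((1, 0), (0, 1))]
assert majority_vote(preds) == ((1, 0), (0, 1))
```

The vote is over predicted outputs rather than programs, so syntactically different programs that compute the same transformation reinforce each other.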
Summary: This paper introduces SOAR, a framework for self-improving program synthesis tested exclusively on the Abstraction and Reasoning Corpus (ARC). SOAR operates in two phases: a program search phase and a learning phase. In the program search phase, it generates many candidate Python programs and selectively refines them. In the learning phase, it uses the programs from the search phase to finetune itself. It not only uses the successful programs but also uses a hindsight replay technique similar to Codeit from Butt et al. to obtain more program samples with input-output pairs. By pooling samples across different model sizes (7B, 14B, and 32B parameters), it can boost performance and achieve 41.25% accuracy on ARC-test. This pooling approach outperforms any individual model, suggesting that different model sizes solve problems in complementary ways. Claims And Evidence: The paper claims that LLMs can learn to solve ARC-AGI tasks by self-improving - interleaving program search and learning. They empirically show that this is possible by presenting the experiments on the ARC-AGI dataset. I would like the authors to clarify whether some claims are controlled for the same compute budget. For example, in Table 4, they claim that the results demonstrate the importance of learning both generation and refinement. I wonder whether, when comparing this to learning generation only and refinement only, all variants are finetuned on the same compute budget. Also, section 4.3's suggestion that different model sizes may solve problems in complementary ways needs more evidence, as the pooling results do not seem to improve very dramatically. Methods And Evaluation Criteria: The evaluation criterion is the ARC-AGI dataset. One specific assumption is that the model is solving the full test set of ARC-AGI all together instead of solving it one by one.
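[Editor's note: the hindsight-replay idea summarized in this review can be sketched as follows. This is a minimal illustration under our own naming, not Codeit's or SOAR's actual implementation: a program that fails its target task is re-paired with the outputs it actually produces, yielding a training example that is correct by construction.]

```python
from typing import Callable, List

Grid = List[List[int]]

def hindsight_relabel(program: Callable[[Grid], Grid],
                      inputs: List[Grid],
                      targets: List[Grid]):
    """Turn a failed synthesis attempt into a valid training example.

    If `program` does not solve the original task, relabel the task with
    whatever outputs the program actually produces: the resulting
    (inputs, actual_outputs) pairs form a task that `program` solves
    by construction, and can be used as finetuning data.
    """
    actual = [program(grid) for grid in inputs]
    if actual == targets:
        return ("solved", list(zip(inputs, targets)), program)
    return ("relabeled", list(zip(inputs, actual)), program)

# Example: a transpose program attempted on a task whose target is a flip.
transpose = lambda g: [list(row) for row in zip(*g)]
ins = [[[1, 2], [3, 4]]]
outs = [[[3, 4], [1, 2]]]  # target is a vertical flip, so transpose fails
status, examples, prog = hindsight_relabel(transpose, ins, outs)
```

In the failing case the returned examples pair each input with the transpose it actually produced, so the finetuning set grows even when search does not solve the original task.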
Theoretical Claims: N/A: The paper is an empirical case study of LLM code generation on the ARC-AGI benchmark Experimental Designs Or Analyses: The experimental design is sound, as the authors use the ARC-AGI training set to perform different ablation studies (e.g., of synthetic-data selection methods) to see which choices achieve better results. Supplementary Material: The appendix includes tables comparing results to previous works. The prompts and program samples are also provided, which could be helpful for reproducibility. Relation To Broader Scientific Literature: It connects various previous ideas of LLM code generation work. Please see the below comments. Essential References Not Discussed: Not that I'm aware of. Other Strengths And Weaknesses: Strength: The paper presents a very thorough experimental study on the ARC-AGI training set demonstrating how some of the design decisions are made. It serves as a very good report on how to approach ARC with LLMs. Weakness: The work presents a very thorough case study of ARC-AGI tasks and how to approach the task from a PBE perspective with LLMs via program search and learning. However, the key ideas closely resemble previous related work. The search-and-finetune loop is similar to the Codeit ARC-AGI work from Butt et al. - the key idea of using hindsight replay to generate more data for finetuning - but with better LLMs and Python code instead of a custom DSL. Using Python code instead of a custom DSL, as well as refinement for ARC-AGI tasks, has already been presented in previous works. The work integrates all these ideas and shows performance gains; however, its results are also in the ballpark of previous works for solving ARC-AGI with code. In terms of accuracy with open models, it also lags behind approaches that directly predict the output, such as various Kaggle competition entries and Akyürek et al.'s test-time training work, which achieves 47.1%.
Other Comments Or Suggestions: The terminology of generation accuracy and search accuracy is a bit confusing, as generation also involves sampling 3k candidates and checking them, which is itself a kind of search. Questions For Authors: How does the base model affect the performance of SOAR? Would it be possible to achieve similar performance with other, less powerful models than the Qwen series? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank Reviewer 5bhq for their time and constructive feedback. The reviewer understood our work and noted the strength of our experimental study and how its results support our claims. This said, the reviewer raised several concerns that we have addressed below. **Are results controlled for compute budget?** Thank you for raising this question. Section 4.2 aims at answering three main questions: - Does finetuning for generation help generation? → Yes. We compare with the base model and thus do not control for finetuning costs (one model is not trained; see our response to Reviewer ifzp for a detailed analysis of finetuning vs. inference costs), but we control for the search budget (3k each) - Does finetuning for refinement help refinement? → Yes. Same search budget of 6k on each side, not controlled for finetuning cost (one is not trained) - Does finetuning the model for both work better than finetuning two models, one for each task? → Yes. We control for search (6k) and finetuning costs: specialized models are finetuned on 50 examples per task each, while the combined model is trained on the 100 pooled examples once. **Do models of different sizes solve problems in different ways?** Combining results from models sized 7B, 14B, and 32B yields up to a +4.75% improvement on the training set - a notable gain for the challenging ARC task. This suggests some complementarity: smaller models (e.g., 7B) still contribute value when paired with the largest (32B). Future work will explore why, but we conducted some simple analyses to start answering this question: some models give plausible solutions, but they don't handle edge cases well; they just implement a high-level idea of the transformation with missing details. We further identified examples of tasks solved by smaller models but unsolved by the 32B model that we make available here: https://anonymous.4open.science/r/arc_example-EBB9/.
See more details in our answer to Reviewer FfTX discussing generated programs qualitatively. **Comparison to transduction approaches to ARC (Akyürek et al.):** Our approach relies on program induction and indeed underperforms some transduction approaches that predict output grids directly (e.g. Akyürek et al.'s 47.1% with test-time training, or OpenAI's o1). One thing to note is that transduction approaches are more susceptible to data contamination: the test set output grids have been publicly available for years (e.g. on GitHub and Kaggle). Induction methods, on the other hand, need to predict correct transformation programs, which are not found online. Akyürek et al. also note that their method was developed using a subset of the test set, raising concerns about overfitting to those specific examples. Whether transduction methods overfit or not, we believe it is valuable to pursue both approaches independently, as we do not know which one will solve ARC in the end. Program synthesis also has many applications beyond the ARC domain. Moreover, programs are more interpretable and open a window on the reasoning process of LLMs that transduction methods do not offer. **Reliance on Qwen base models:** We selected Qwen series models for their strong coding skills relative to their size. Experiments show that better base models yield superior final performance, though smaller models can match the initial performance of larger ones using SOAR. We expect SOAR to work with any model capable of finding correct solutions via search, with stronger models producing better outcomes. Limited compute resources prevented testing SOAR on other models, leaving this for future validation. SOAR could be adjusted to boost weaker models by increasing compute per iteration (longer searches, more finetuning) or adding iterations. Future work might explore pooling outputs from diverse models (e.g., Mistral, Gemma, Llama) to enhance complementarity, as seen in our experiments.
While Qwen models currently excel, smaller alternatives such as Mistral 3.1 or Gemma-3, whose reasoning abilities keep improving, could substitute for them. These ideas are now in the updated manuscript. **Assumption of access to full test set:** We now discuss this assumption in the manuscript, but we note that it is a common one, as it is the format used in the original Kaggle competition. Our approach could in principle be used on one task at a time. This would require training a separate model for each task, which would be computationally expensive and might hinder possible generalization effects across tasks. **Confusing terminology Generation vs Search:** The Sample approach (pure generation and ensembling) is indeed a form of (simple) search. We updated the paper with less ambiguous terms: Sample vs Sample&Refine. Please let us know if the above discussion answers your concerns and consider raising your score if it does. If it doesn't, let us know which concerns remain so we can try to address them. Thank you!
Summary: This paper introduces SOAR (Self-improving Operators for Automated program Refinements), a framework that enhances language models' program synthesis capabilities through an iterative self-improvement process.
- SOAR alternates between a search phase (using a language model to generate and refine candidate solutions) and a learning phase (fine-tuning the model on these search attempts).
- Instead of relying on fixed model capabilities, SOAR allows models to learn from both successful and failed synthesis attempts, leveraging hindsight relabeling to learn from all generated programs, not just correct ones.
## Update after rebuttal The authors answered my questions in great detail and provided convincing elements to address my concerns. I have, therefore, updated my rating to accept. Claims And Evidence: Claims supported. Methods And Evaluation Criteria: ARC-AGI is the reference program synthesis benchmark for measuring reasoning abilities. Theoretical Claims: Nothing to report. Experimental Designs Or Analyses: Nothing to report. Supplementary Material: Nothing to report. Relation To Broader Scientific Literature: Nothing to report. Essential References Not Discussed: Nothing to report. Other Strengths And Weaknesses: None. Other Comments Or Suggestions: None. Questions For Authors: None. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer xjKq for their review of our manuscript and appreciate the recognition of the key aspects of our approach. The review itself did not include any critique or suggestion for improvement, but it did not come with the highest recommendation either ("Weak accept"). This decision suggests there may be opportunities to strengthen our work, and we would be happy to address any additional feedback or suggestions for improvement. Please consider raising your score if you have no concerns. If you do, please let us know what they are so we can try to address them. Thank you!
Learnware Specification via Dual Alignment
Accept (poster)
Summary: The learnware system is a model reuse system designed to choose the optimal model from a model repository based on rules derived from user datasets. The core of this system lies in the use of specifications for model selection. This paper introduces a novel specification generation method called Dual Alignment, which consists of two components: discriminative alignment and distribution alignment. Compared to traditional RKME-based specification generation, Dual Alignment demonstrates performance improvements across various metrics. Claims And Evidence: Yes Methods And Evaluation Criteria: There is an issue with the selection of the evaluation dataset. In Section 5.1, it is mentioned that “we extract 4 label spaces from the overlapping classes of 11 domains across the two datasets.” However, in the later testing in Section 5.3, only label spaces A and B were used. Why were only these two chosen? And how were they selected? Theoretical Claims: Yes Experimental Designs Or Analyses: In Table 1, the selection of size introduces a hyperparameter K, but its exact definition is unclear. What does K specifically refer to? Do K×5 and K×10 correspond to 20 and 100 in the RKME algorithm? In Table 2, two metrics are introduced: superclass accuracy and quality. However, their explanations are somewhat abstract, especially superclass accuracy. How is superclass accuracy actually calculated? Also, why can different quality values exist when superclass accuracy is 100%? This part is a bit confusing. Additionally, compared to the RKME algorithm, the Dual Alignment method generates labels (check mark on label selection). Do these labels refer to the pseudo-labels generated during the Submitting Stage? Supplementary Material: I reviewed the Algorithm Details section, including the step-by-step process of the algorithm. I didn't find any major issues. 
Relation To Broader Scientific Literature: Yes, this paper makes a significant contribution to the learnware paradigm. Previous learnware systems did not distinguish between discriminative ability and feature distribution, but this paper introduces and analyzes both in detail. By bringing in a new perspective, it enhances the overall capability of the learnware system and could serve as valuable inspiration for future research. The paper introduces a new classification dimension, helping users find the models they need more effectively. Additionally, the performance improvements from multiple analysis dimensions might complement each other, further boosting the system's effectiveness. I'm curious whether there will be any future work on combining this approach with the traditional RKME algorithm. Essential References Not Discussed: No Other Strengths And Weaknesses: Pros: This paper provides very detailed proofs. It gives upper bounds for both of the proposed loss functions and also includes an analysis of privacy issues in this scenario. From a theoretical perspective, it is a solid and comprehensive paper. Cons: The Section 5.1 Experimental Setup does not clearly explain the metrics, which might make it difficult for readers who are not familiar with these metrics to fully understand the experimental results. Other Comments Or Suggestions: No Questions For Authors: No question Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for the valuable feedback and appreciation of our work. We hope that our responses could mitigate your concerns. Q1: Question on the mixed task setting Ans: In the experiments of this paper, we set up four label spaces. It is not difficult to see that label space A (B) contains label space C (D). In the mixed task setting, if the domains corresponding to all label spaces are set as developer tasks, this leads to duplicate tasks in the dock system. For duplicate tasks, the learnware paradigm keeps only the task whose model performs best, so we do not use label spaces C and D in the mixed task setting. These two label spaces are constructed mainly for the heterogeneous label space setting. Q2: Question on the parameter K. Ans: The parameter K is the number of dataset classes. If the number of task dataset classes is 5, the specification size is $K\times 5 = 25$. In the latest version, we will further clarify this. Q3: Questions on the evaluation metrics. Ans: In the Mixed Task Setting, both the superclass accuracy and quality metrics are actually classification accuracy metrics, just assessed differently. In this setting, we have 11 domains $\times$ 2 label spaces (A and B) $=$ 22 developer tasks as superclasses, i.e., 22 superclasses; and label space A$\cup$B $\times$ 11 domains $=$ 11 user requirement tasks. Thus, each requirement task contains two superclasses, i.e., each requirement task requires two corresponding learnwares to be solved. This ensures that no single learnware model in the system can solve the requirement independently; rather, a combination of models is required to address the requirement. Furthermore, to evaluate the performance of the learnware dock system constructed by each specification method, we evaluate the accuracy of the system in identifying useful learnware, i.e., the superclass accuracy. 
Meanwhile, our proposed method generates a class specification. To verify the inter-class discriminability of the class specification, we assess the class accuracy of the corresponding specifications through the developer's pre-trained model, i.e., the quality metric. In the latest version, we will further emphasize the description of the metrics. In addition, since smaller specification sizes lose more information about the original data, they can lead to poorer discriminability of the specifications, and thus poorer quality. However, since the deploying stage of the learnware paradigm based on neural embedding specifications identifies useful learnware based on unbiased mean estimation, the discriminability of the specification does not greatly affect the performance of the learnware paradigm (the superclass accuracy metric). This is why the experiments in this paper show different quality metric values alongside 100% superclass accuracy. Q4: Question on the label (check mark on label selection) in Table 2. Ans: In Table 2, the label (check mark on label selection) refers to the label of the specification, i.e., the real label. The specifications generated by the \textsc{Lane} method and the \textsc{Dali} method are class specifications with labels. Per the previous answer, assessing the quality metric requires labels, so the RKME and RKME-W methods cannot obtain quality metric values, whereas \textsc{Lane} and \textsc{Dali} can.
Summary: This paper introduces an approach (DALI) to generating high-quality model specifications in the learnware paradigm. Unlike existing methods that rely solely on distribution alignment, DALI incorporates discriminative alignment, which captures the model's intrinsic discriminative performance. By jointly considering both alignments, DALI enhances specification quality, enabling more effective model reuse in a learnware dock system. Theoretical and empirical results demonstrate DALI's superiority in characterizing model capabilities and handling diverse label space scenarios. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes, I have checked the theoretical claims. Experimental Designs Or Analyses: Yes, I have checked the experimental designs and analyses. Supplementary Material: No, this paper does not provide supplementary material. Relation To Broader Scientific Literature: Simultaneously considering the true label distribution and the model output distribution to generate the learnware specification may be helpful, but this should be more carefully discussed and verified with more experiments in different scenarios (such as tabular tasks). Essential References Not Discussed: No. Other Strengths And Weaknesses: 1. The learnware paradigm is useful when handling well-trained models; it is an interesting paradigm, and specification design is a quite important problem. 2. This paper gives a theoretical analysis of the proposed learnware specification. Other Comments Or Suggestions: 1. To verify the effectiveness of the proposed learnware specification, it would be better to provide more experiments on tabular tasks. 2. More discussion of the learnware specification design could be provided; please see the questions for details. Questions For Authors: 1. 
Previous work [Tan et al., 2024a] also encodes the marginal distribution of the data and the conditional distribution of the model; what is the difference between DALI and this previous work? In [Zhou and Tan, 2024], the specification is generated by sketching the distribution of the feature concatenated with the model output; what is the potential advantage of additionally considering the true labels of the task data, and is there any experiment to verify the effectiveness of this design? 2. When the developer's task is easy, the model outputs are more likely to be accurate, and the two objectives are more likely to align. However, when the task is challenging, the objectives can conflict. Would it be beneficial to separately model the distribution of true labels and model outputs? In designing learnware recommendation rules, both the information of p(Y|X) from true labels and model outputs can be utilized with techniques like class-wise MMD. 3. How does the newly proposed learnware specification perform in tabular tasks? 4. In the discriminative alignment, how about using an embedding network and MMD distance as in the distribution alignment? Is there any experiment to verify the effectiveness of random feature mapping? Code Of Conduct: Affirmed. Overall Recommendation: 4
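The class-wise MMD the reviewer mentions in question 2 can be made concrete. The sketch below is a generic RBF-kernel squared MMD, computed per shared class and averaged; the function names (`rbf_mmd2`, `classwise_mmd2`) are illustrative and not part of the paper or any specific library:

```python
import numpy as np


def rbf_mmd2(X, Y, gamma=1.0):
    """Biased estimator of the squared MMD between samples X (n, d)
    and Y (m, d) under an RBF kernel exp(-gamma * ||a - b||^2)."""
    def k(A, B):
        # Pairwise squared distances via ||a||^2 + ||b||^2 - 2 a.b
        sq = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2 * A @ B.T
        return np.exp(-gamma * sq)

    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()


def classwise_mmd2(X, y, Z, t, gamma=1.0):
    """Average squared MMD over the classes shared by (X, y) and (Z, t),
    i.e. a class-conditional comparison of the two samples."""
    classes = sorted(set(y) & set(t))
    return float(np.mean([rbf_mmd2(X[y == c], Z[t == c], gamma)
                          for c in classes]))
```

With the biased estimator, the value is a squared RKHS distance between kernel mean embeddings, so it is non-negative and close to zero when the per-class distributions match.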
Rebuttal 1: Rebuttal: Thanks for the insightful feedback and the interest in our work! We hope our responses can address your concerns. Q1: Questions on differences with similar works. Ans: The method proposed by [Tan et al. 2024a] adds conditional distributions from the pre-trained model's output labels to the marginal distribution of the data, enhancing the discriminativeness of the generated specification distributions. This idea is similar to the \textsc{Dali} method, but differences remain in two aspects: (1) [Tan et al. 2024a] is an RKME-based method, and the quality of the generated specification heavily depends on the chosen kernel function. In contrast, our proposed \textsc{Dali} method replaces the kernel function with a neural embedding, making it more adaptable to complex scenarios. (2) \textsc{Dali} collaboratively utilizes pseudo labels and true labels to characterize the model's properties, whereas [Tan et al. 2024a] only leverages pseudo labels to enhance the discriminability of the class specification. As for the potential advantage of considering the true labels of the task data, existing work [Chen et al. 2025] has demonstrated the effectiveness of using true labels to characterize feature distributions. This paper can be seen as building on that work by additionally considering model discriminative performance through pseudo-labeling, thereby achieving superior specification generation. Q2: Concerns about conflict of objectives. Ans: In the learnware paradigm, the dock system stipulates that the performance of pre-trained models submitted by developers will not be very poor. Therefore, the proposed \textsc{Dali} method does not face the issue of objective conflict in most scenarios. On the contrary, simultaneously considering pseudo labels and real labels allows the generated specification to more fully characterize the model, thus promoting model search and reuse. Our experimental results clearly verify this. 
However, if we consider scenarios where the performance of submitted pre-trained models is not restricted, which go beyond the scope of this paper, we believe your idea of separately modeling real labels and model outputs is a promising concept for future research, but it needs to be empirically verified in more open environments. Q3: Questions about the tabular tasks. Ans: This work is specifically designed for image-based scenarios and is not readily applicable to tabular data. Tabular data exhibits significant differences from image data in terms of feature sparsity, distribution skewness, and feature engineering nuances [Ye et al. 2024]. Handling tabular data may require additional customized modeling designs, which go beyond the scope of this paper. However, exploring the potential of the \textsc{Dali} method for more diverse modalities is an interesting direction for future research. Q4: Questions about the modeling of discriminative alignment and random feature mapping. Ans: In the discriminative alignment, inspired by [Konstantinov and Lampert 2019], [Mohri and Munoz 2012], and [Dong et al. 2022], we adopt the $\mathcal{H}$-discrepancy method to measure distribution differences under limited data. The results of the ablation experiments (Table 4) clearly validate the effectiveness of this modeling approach. Furthermore, [Zhao and Bilen. 2022] demonstrated that random feature mapping can serve as an interpretation of the input data, preserving the data information in a low-dimensional embedding space. Meanwhile, the proof of Proposition 4.5 in Appendix C provides a theoretical demonstration of why discriminative alignment can be as effective as distribution alignment. [Chen et al. 2025] Chen, W., Mao, J.-X., and Zhang, M.-L. Learnware specification via label-aware neural embedding. In Proceedings of the 39th AAAI Conference on Artificial Intelligence, Philadelphia, Pennsylvania, 2025. [Ye et al. 2024] Ye, H. J., Liu, S. Y., Cai, H. 
R., Zhou, Q. L., Zhan, D. C. A closer look at deep learning on tabular data. arXiv preprint arXiv:2407.00956, 2024. [Konstantinov and Lampert 2019] Konstantinov, N. and Lampert, C. Robust learning from untrusted sources. In Proceedings of the 36th International Conference on Machine Learning, volume 97, pp. 3488–3498, 2019. [Mohri and Munoz 2012] Mohri, M. and Munoz Medina, A. New analysis and algorithm for learning with drifting distributions. In Algorithmic Learning Theory: 23rd International Conference ALT, volume 7568 of Lecture Notes in Computer Science, pp. 124–138, 2012. [Dong et al. 2022] Dong, T., Zhao, B., and Lyu, L. Privacy for free: How does dataset condensation help privacy? In Proceedings of the 39th International Conference on Machine Learning, volume 162, pp. 5378–5396, 2022. [Zhao and Bilen. 2022] Zhao, B. and Bilen, H. Dataset condensation with distribution matching. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 6514–6523, 2023. --- Rebuttal Comment 1.1: Comment: Thank you for the author’s detailed response. I would like to revise my score upward. --- Reply to Comment 1.1.1: Comment: Dear Reviewer jxaa: Thank you so much for your kind reply and for adjusting the score! We will revise our paper according to the constructive reviews. Best Authors
Summary: The paper shows that existing specification methods primarily rely on distribution alignment to generate specifications and introduces DALI, which incorporates both discriminative and distribution alignments in the process. Theoretical and empirical results demonstrate that DALI improves specification quality, thereby facilitating model reuse in the learnware system. Claims And Evidence: The claims made in the submission are supported by evidence. Methods And Evaluation Criteria: Yes, the proposed method and the evaluation criteria make sense for the problem at hand. Theoretical Claims: I have generally checked the proofs, but some details have not been thoroughly verified. Experimental Designs Or Analyses: I have generally checked the experimental design and analysis, and they appear sound. Supplementary Material: I have roughly checked the proofs in the supplementary material. Relation To Broader Scientific Literature: The paper contributes to the field of the learnware paradigm by introducing a new learnware specification. Essential References Not Discussed: No, essential prior works are appropriately cited and discussed. Other Strengths And Weaknesses: Strengths: 1. This paper proposes a new specification that incorporates the model's discriminative performance. 2. The experiments compare the precision of model search and analyze the privacy protection of the new specification. Weaknesses: 1. The theoretical analysis (Propositions 4.3-4.6) in this paper is unrelated to the subsequent model search and reuse, and it does not theoretically explain how this specification helps with model search and reuse. 2. DALI improves the consistency of model performance over $R$ and $\mathcal{D}$ by optimizing $\mathcal{L}_{dis}$ during specification generation (which is one of the main contributions of the paper). 
However, the model search process does not explicitly leverage this characteristic and still primarily relies on a feature distribution-based matching approach. 3. Assuming the number of existing models in the system is $c$, the complexity of solving Eq. (11) should be at least $O(c^2)$. This implies that the proposed model search algorithm has a time complexity that scales at least quadratically with the number of models, whereas existing model search algorithms have linear time complexity, indicating that the proposed algorithm is more computationally expensive. 4. In the experiments, the performance of DALI shows only a small improvement over the competing method, RKME-W. Other Comments Or Suggestions: I have no other suggestions. Questions For Authors: 1. How does DALI compare to RKME in terms of specification generation efficiency? 2. What are the benefits of obtaining $c'$ candidate useful learnwares during the model search process? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your detailed feedback; we hope our responses will address your concerns. Q1: The theoretical analysis is unrelated to the subsequent model search and reuse. Ans: Our theoretical analysis is closely related to subsequent model search and reuse. Notably, the quality of the generated specification is crucial in the learnware paradigm (please refer to the “Introduction” section), as it directly influences the accuracy of subsequent model search and reuse to a certain extent. From the perspectives of the loss upper bound (Propositions 4.3 and 4.4), optimization (Proposition 4.5), and privacy protection (Proposition 4.6), our theoretical analysis demonstrates that the proposed \textsc{Dali} method can generate superior specifications, thereby indirectly providing theoretical support for subsequent model search and reuse. Q2: The model search process still relies on a feature distribution-based matching approach. Ans: In the model search process, we apply a class feature distribution-based matching approach to the generated specifications (Eq.(11)). Notably, the specifications are optimized by Eq.(4), and their generation process fully accounts for the distribution properties and discriminative performance of the model. Therefore, the matching approach in Eq.(11) does not overlook these characteristics of the model. Q3: Questions on the efficiency of model search. Ans: The model search process of our \textsc{Dali} method is comparable in time complexity to existing methods such as RKME, RKME-W, and LANE, all of which require solving a quadratic programming problem similar to Eq.(11) with a complexity of at least $\mathcal{O}(c^{3})$. This is because the submitted requirement task may not be solvable by any single specification in the learnware dock system, and may instead require combining multiple specifications. 
To the best of our knowledge, efficiently conducting model search remains an open problem in learnware research, requiring further in-depth exploration in the future. Q4: Small improvement compared to RKME-W. Ans: RKME-W is an extension of the RKME method, incorporating knowledge distillation to retain network parameters as part of the specification. As a result, this method has a significantly larger resource overhead. In contrast, our proposed method outperforms RKME-W while maintaining a lower resource overhead, which we consider a significant improvement. We will emphasize this point in the revised version. Q5: Comparison of specification generation efficiency with RKME. Ans: We empirically evaluated the specification generation time of the \textsc{Dali} and RKME methods on datasets under the homogeneous label space setting. The average runtime of \textsc{Dali} was 22.51s, while RKME required 14.05s. Since our approach incorporates random neural embedding mappings, its generation time is indeed slightly longer than RKME's kernel mapping in this scenario. However, the difference remains within an acceptable range. Notably, \textsc{Dali} achieves a significant performance improvement over RKME at a lower computational cost, a point we will emphasize in the revised version. Q6: The benefits of obtaining $c'$ candidate useful learnwares. Ans: One important purpose of the learnware paradigm is to enable well-trained models in the dock system to be used "beyond the capabilities of any single model". That is, in the learnware dock system, the user can obtain more than one learnware to solve their requirement. For example, suppose the dock system contains learnwares related to cucumber, tomato, orange, apple, and cabbage, and the user's requirement task concerns fruit. Then the candidate useful learnwares for the requirement may be orange, tomato, and apple, but we do not yet know which part of the task corresponds to orange or apple. At this point, we need to further determine the relationship between the task and the candidate useful learnwares to enable model reuse. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for the responses and clarifications. Part of my questions have been addressed. I have adjusted my score accordingly. --- Reply to Comment 1.1.1: Comment: Dear Reviewer PFro: Thank you so much for your kind reply and for adjusting the score! If you have any questions, don't hesitate to let us know, and we'll do our best to address your concerns. Best Authors
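For context, the quadratic program the rebuttal refers to (an Eq.(11)-style learnware matching rule over $c$ specifications) has a generic simplex-constrained shape. The sketch below is a hedged illustration only: the objective, the inputs `K` and `k`, and the name `match_weights` are assumptions standing in for the paper's actual Eq.(11):

```python
import numpy as np
from scipy.optimize import minimize


def match_weights(K, k):
    """Solve min_w  w^T K w - 2 k^T w  s.t.  w >= 0, sum(w) = 1.

    K[i, j] stands for an inner product between specifications i and j,
    and k[i] for an inner product between specification i and the user
    task. The solution w assigns mixture weights over the c learnwares.
    """
    c = K.shape[0]
    objective = lambda w: w @ K @ w - 2 * k @ w
    constraints = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
    res = minimize(objective, np.full(c, 1.0 / c),
                   bounds=[(0.0, None)] * c,
                   constraints=constraints, method="SLSQP")
    return res.x
```

Because the weights live on the $c$-dimensional simplex and the objective couples all pairs through `K`, a general-purpose solver scales polynomially in $c$, which is the cost the rebuttal discusses.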
On the Importance of Gaussianizing Representations
Accept (poster)
Summary: The authors propose adding a "Gaussianizing" step into normalisation layers such as batchnorm, which transforms the features so that they are approximately Gaussian-distributed. Specifically, they use the "power transform" originally proposed in the field of hypothesis testing, but propose approximating its objective by a quadratic so that the "correct" transform may be determined using a single Newton-Raphson step (which is important since this must be done every time a normalisation is used during a forward pass). The authors present many arguments, largely from an information-theoretic perspective, as to why having Gaussian features is desirable / important. This also motivates the authors' use of additive Gaussian noise as regularisation. The authors present a wide range of experiments exploring the effect of their method on generalisation under different ablations, with models / datasets focusing on image classification tasks using ResNets and ViTs. They also verify the Gaussianity of the features when using their method vs standard normalisation layers. They find almost universal benefits to generalisation using their method, though, as detailed in a plot in the appendix, the proposed method does increase runtime by about 50% during training, and about 25% during testing. ### Update after rebuttals I have maintained my score of "accept" - please see rebuttal comment Claims And Evidence: The main claim made by the paper is that forcing neural networks to have Gaussian distributed features throughout training can have desirable effects on performance, which is supported by their experiments. I have a couple of minor issues / questions about some of the experiments, which I will detail below. Methods And Evaluation Criteria: Yes. The hypothesis is that Gaussianizing representations could have positive effects on model performance. 
The proposed method effectively Gaussianises the features (as shown in Figure 5) and results in an improvement to validation accuracy on realistic tasks (Figure 1, Table 1, Table 2, etc.). Theoretical Claims: The paper does not make any theoretical claims (e.g. theorems, bounds). Derivations for the formulas used by their method are given in appendix C, which I did not check. Experimental Designs Or Analyses: The experiments only consider image classification tasks using ResNets and ViTs, though this includes a variety of architectures and datasets, and are run from multiple random seeds with error bars given. In an ideal world, the authors would have evaluated their method in another domain, such as language modelling with transformers, though given the extensive ablations and other investigations, I think this is an appropriate amount of experiments for a conference paper. In the ResNet experiments, Table 2 says that data augmentation was not used. Data augmentation is usually used for ResNet experiments on datasets like CIFAR-10 and CIFAR-100, and greatly improves performance (e.g. we would expect closer to 93 or 94% for a ResNet18 on CIFAR-10 using data augmentation, rather than the 89% achieved by the paper's baseline model). This is especially strange given that Table 1 says that data augmentation **was** used in the ViT experiments. I would have preferred to see data augmentation for the ResNet experiments, as this is standard practice. Supplementary Material: The authors have included code in a zip file, though I have not reviewed this. I looked at many of the sections and plots in the appendix and found that many of the questions I had about the method (how does it affect runtime, what happens if you remove the additive gaussian regularisation, etc) were answered there. 
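The power-transform step described in the summary can be sketched with an off-the-shelf Yeo-Johnson transform. This is an illustration only: scipy fits the parameter λ by full maximum likelihood per channel, whereas the paper's layer takes a single Newton-Raphson step on a quadratic approximation of the NLL for speed, and the function name `gaussianize` is hypothetical:

```python
import numpy as np
from scipy import stats


def gaussianize(features):
    """Per-channel power transform toward normality (illustrative only).

    features: (batch, channels) array of already-normalized activations.
    Each channel is Yeo-Johnson-transformed with an MLE-fitted lambda,
    then re-standardized so the output stays zero-mean, unit-variance.
    """
    out = np.empty_like(features, dtype=float)
    for c in range(features.shape[1]):
        transformed, _lmbda = stats.yeojohnson(features[:, c])
        out[:, c] = (transformed - transformed.mean()) / transformed.std()
    return out
```

On a skewed input (e.g. standardized exponential samples) the transformed channels come out markedly closer to Gaussian, which is the effect Figures 5 and 6 of the paper measure via QQ-plot fits.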
Relation To Broader Scientific Literature: The paper proposes a modification that can be applied to any neural network which utilises normalisation layers, and therefore has very broad scope. Moreover, the specific algorithm proposed in this paper is less important than the proof of concept that Gaussianing features in normalisation layers is beneficial. Further refinements or alternative algorithms to efficiently Gaussianise features in neural networks could be developed in the future if required. Essential References Not Discussed: I am not aware of any essential references that have not been discussed. Other Strengths And Weaknesses: **Strengths** - The paper is well written. In particular, the connections to information theory and motivations are clearly and extensively explained. - I have not seen any work that explicitly Gaussianises the features in a neural network before, so as far as I'm aware, this is novel. - The suggested method does offer convincing improvements in validation accuracy - The authors have included a wide range of relevant ablations, analyses, and justifications, either in the main text or the appendix, which answered many questions I had while reading the paper. - The authors identify that, when your features are Gaussian, decorrelating / whitening transformations actually imply independent features. This could have important implications for recent works that utilise whitening / decorrelating transformations in optimisation, e.g. https://kellerjordan.github.io/posts/muon/ or https://arxiv.org/abs/2412.13148. - The authors try their method on 4 different types of normalisation (batchnorm, layernorm, instancenorm, and group norm) and show improvements in all cases. **Weaknesses** - The ResNet experiments do not use data augmentation, which is standard practice, and no explanation is given as to why. - The specific implementation proposed in this paper increases runtime by 25-50%, which is quite significant. 
- Some of the technical details weren't very clearly explained. Specifically, slightly more explanation of where the NLL for the power transform comes from would make the paper easier to read. Also, it is not entirely clear what the QQ plots in the main paper (Fig 5 and 6) are showing (e.g. which layers you are plotting for, where each value on the plot comes from). Other Comments Or Suggestions: - It would be useful if the authors state at the start of section 3.2 that the additive Gaussian noise is an additional regularisation technique enabled / enhanced by the Gaussianisation of the features, and not actually part of the Gaussianisation process itself (assuming I have understood this correctly). I was confused by this while reading the paper. - Related to the previous point, I think it'd be nice to have figure 9 (comparing base model vs. base + gaussianised features vs base + gaussianised features + additive gaussian noise regularisation) in the main text. The Motivation section (section 5) could probably be shortened to accommodate this. - Please state in the main text a briefly summarised version of the speed results, e.g. "the method increases training time by roughly 50%" etc. It'd also be nice to briefly explain what specific part of the algorithm causes such large slowdowns. - The authors may be interested to try their method on some "speedrun" benchmarks e.g. https://github.com/KellerJordan/cifar10-airbench or https://github.com/KellerJordan/modded-nanogpt. Obviously the wallclock time will not be impressive given the method's current slowdowns, but loss vs iteration / epoch would be interesting. Questions For Authors: - Why did you not use data augmentation on the ResNets? Can you rerun these experiments with data augmentation and include them in the paper? - In Section 5.1.2. I didn't fully understand the point (line 350) about not wanting to corrupt the distributions when regularising. 
Is corruption not the point of random noise / regularisation like dropout? Code Of Conduct: Affirmed. Overall Recommendation: 4
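To make the method under discussion concrete, a rough sketch of "normality normalization" as described in this review (a reconstruction from the reviews' summaries, not the authors' implementation; scipy's full Yeo-Johnson MLE stands in for the paper's single-Newton-step λ estimate, and `noise_std` is an illustrative choice):

```python
import numpy as np
from scipy import stats

def normality_normalize(x, noise_std=0.1, training=False, rng=None):
    """Sketch of the described layer: standardize, Yeo-Johnson Gaussianise,
    re-standardize, and (during training) add scaled Gaussian noise.
    NOTE: scipy's full MLE for lambda stands in for the paper's
    single-step Newton-Raphson estimate; this is not the authors' code."""
    z = (x - x.mean()) / (x.std() + 1e-5)
    z, _ = stats.yeojohnson(z)             # Gaussianising power transform
    z = (z - z.mean()) / (z.std() + 1e-5)  # re-standardize the transformed values
    if training:                           # additive Gaussian noise regularisation
        rng = rng if rng is not None else np.random.default_rng()
        z = z + noise_std * rng.normal(size=z.shape)
    return z

# A heavily right-skewed sample comes out with mean ~0, std ~1, and skew near 0.
skewed = np.random.default_rng(0).lognormal(size=4096)
out = normality_normalize(skewed)
```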
Rebuttal 1: Rebuttal: Dear Reviewer ps2s, We address all of your comments below.

> >Regarding the use of data augmentations in the ResNet experiments.
> We have run an experiment to verify the performance of ResNet18 x CIFAR10 using BatchNormalNorm (BNN), and contrasted this with a well-documented baseline (https://github.com/kuangliu/pytorch-cifar), which reflects the level of performance one would expect when using BatchNorm (BN). We use the same data augmentations of transforms.RandomCrop(32, padding=4) and transforms.RandomHorizontalFlip() as listed in the repository. Across $M=6$ runs, we obtained a mean validation set accuracy of $94.93$%$\pm0.05$, which surpasses the reference performance of $93.02$% listed in the repository - the latter figure being in line with the expected level of performance for BN when data augmentations are employed. This demonstrates that BNN continues to scale and outperform with the use of data augmentations in the ResNet experiments, analogous to the findings for LayerNormalNorm (LNN) and LayerNorm (LN).

> >Regarding the justification for not employing data augmentations in Table 2.
> We next provide our justification for this choice in our experimental design. Our goal was for the set of experiments with data augmentations (Table 1) and without data augmentations (Table 2) to serve as an ablation - showing that the method performs well regardless of the specific augmentation techniques used. Crucially, in many application areas, such as in time series analyses and in fine-grained medical imaging tasks, it is often not clear what data augmentations are appropriate. Therefore, we believe demonstrating that our method performs strongly relative to other normalization layers - with and without the use of data augmentations - is extremely valuable.

> >Regarding clarifying Figures 5 & 6.
> Figure 5 shows the following: in a given layer of a neural network, we take all the post-normalization features for a given channel and minibatch combination. We then compute a QQ-plot and its associated $R^{2}$ value for the line of best fit. Now consider such a plot for three layers at various depths in the network. Thus Figure 5 serves to demonstrate graphically that normality normalization leads to higher normality in the features, as demonstrated by the higher $R^2$ values for the line of best fit. Crucially, Figure 6 then substantiates these findings quantitatively: for each layer of the neural network (x-axis), we take $200$ QQ-plots corresponding to $20$ channels and $10$ validation minibatch combinations, compute the $R^{2}$ values for each of these $200$ QQ-plots, and then plot the mean $R^{2}$ value for that layer. Thus the figure demonstrates that throughout the layers of a network, normality normalization leads to much higher normality.

> >"slightly more explanation of where the NLL for the power transform comes from"
> Please see the paragraph about the NLL in our response to Reviewer Eseb, which precisely addresses your comment here.

> >Regarding clarifying that Gaussian noise is an additional regularization technique.
> We believe this is an excellent suggestion and have modified the paper accordingly.

> >Regarding the inclusion of Figure 9 in the main paper.
> We have made this change now - the camera-ready version of the paper affords an additional page (9 instead of 8) for the main text, thereby making its inclusion very natural.

> >Regarding summarizing the speed results in the main text and describing what parts of the algorithm cause slowdowns.
> We have now added your suggested note on the speed difference in Section 4.6 Additional Experiments & Analysis, under the paragraph Speed Benchmarks. The main speed differences occur due to the operations log(1+x) ("log1p") and raising to the power.
During the work, we investigated making series expansion approximations to these operations, and substantiating their efficacy is a promising direction for future work. > >"In Section 5.1.2. I didn't fully understand the point about (line 350) about not wanting to corrupt the distributions when regularising." > Here we are suggesting the following subtle distinction: we want to be able to add as much regularizing noise as possible, but without fundamentally corrupting the underlying signal. The key being that when using Gaussian encodings, the threshold for the amount of noise we can add is higher. You may also find the reference (Guo et al. 2005) (also referenced in our manuscript) to be of interest. We have furthermore added experimental results contrasting decorrelated BatchNorm (DBN) with decorrelated BatchNormalNorm (DBNN); please see the thread with Reviewer 7TPa. These experimental results provide further evidence for the strong performance of normality normalization across various normalization layers. We believe we have comprehensively addressed your comments here. We would be highly appreciative if you would consider increasing the score for our submission; thank you. --- Rebuttal Comment 1.1: Comment: Thank you for your response. My concerns have been adequately addressed. I would suggest putting the explanation you have me of Figures 5 and 6 somewhere in the paper, if it is not already in there. Whilst other reviewers have raised some concerns and interesting points about related work and adversarial robustness, these are all minor in my opinion, so I would like to maintain my original score of "Accept", and emphasise that I think this is very interesting work that could spur further investigation into (efficiently) Gaussianising representations.
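The QQ-plot $R^2$ procedure described in the rebuttal above can be reproduced generically (a sketch using scipy's `probplot`; the exact fitting procedure in the paper may differ, and the sample arrays here are stand-ins for post-normalization features of one channel/minibatch combination):

```python
import numpy as np
from scipy import stats

def qq_r2(features):
    """R^2 of the line of best fit in a normal QQ-plot of `features`."""
    (osm, osr), (slope, intercept, r) = stats.probplot(features, dist="norm")
    return r ** 2

rng = np.random.default_rng(0)
gaussian_like = rng.normal(size=256)           # well-Gaussianised channel
heavy_tailed = rng.standard_t(df=2, size=256)  # poorly Gaussianised channel
# The Gaussian-like sample yields an R^2 much closer to 1.
```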
Summary: Full disclosure: I was a reviewer for a paper for ICLR 2025 which seems to be largely mirroring this paper, and I assume that this is a resubmission of that paper (I'm reviewer G4ZL here: https://openreview.net/forum?id=9ut3QBscB0).

This paper introduces normality normalization as a new type of normalization layer that attempts to impose stronger Gaussianity on the activations in neural networks. They motivate this via information theory by invoking classical facts about the Gaussian distribution, namely that the Gaussian is at the same time the “best-case signal” and “worst-case noise,” so adopting Gaussian representations plus Gaussian noise can maximize information capacity and tolerance to perturbations. The key technical part is applying a Yeo-Johnson power transform on the normalized activations, making them marginally closer to normal (marginal likelihood). They also propose an additive Gaussian noise. Since this is only a way of correcting statistics, this can be applied to both Layer and Batch Norm to create “BatchNormalNorm,” “LayerNormalNorm,” etc. In experiments on CIFAR-10/100, SVHN, STL10, TinyImageNet, Caltech101, Food101, and ImageNet (for ResNets, WideResNets, & Vision Transformers), they show normality normalization outperforms standard normalization (Tables 1 & 2). The authors also claim enhanced robustness to random noise via quantitative attenuation metrics.

## Update after rebuttal
After reviewing the rebuttal responses, I am happy to increase my score from 3 to 4 (Accept). I am happy with the main paper and the explanations provided here, and consider this paper to be a good contribution for ICML.

Claims And Evidence: While the central claim that normality normalization outperforms classical normalization seems to be validated by the results in Tables 1 & 2, there seems to be some gap between the reported values for the baselines and those reported in other papers.
This leaves me with the impression that the experimental setup (hyperparameters and training) for the baselines may not be good enough to fully substantiate the empirical claims made in the paper. For example, I adapted the code found here for a small ResNet: https://www.kaggle.com/code/kmldas/cifar10-resnet-90-accuracy-less-than-5-min, and even the version without data augmentation reaches 90.25% accuracy. I understand that this is not the same ResNet they used, but the fact that a small ResNet can achieve >90% accuracy without data augmentation makes the improvement to 90.4% in Table 1 when using BNN somewhat less significant. I'm eager to hear the authors' clarification on this. The sample code I used:

```python
import os
import time

import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.transforms as tt
from torchvision.datasets import CIFAR10
from torch.utils.data import DataLoader

# Device setup
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

def to_device(x, device):
    return x.to(device, non_blocking=True) if isinstance(x, torch.Tensor) else [to_device(item, device) for item in x]

class DeviceDataLoader:
    def __init__(self, dl, device):
        self.dl, self.device = dl, device
    def __iter__(self):
        return (to_device(batch, self.device) for batch in self.dl)
    def __len__(self):
        return len(self.dl)

# Model definition
def accuracy(outputs, labels):
    _, preds = torch.max(outputs, dim=1)
    return torch.tensor(torch.sum(preds == labels).item() / len(preds))

def conv_block(in_ch, out_ch, pool=False):
    layers = [nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
              nn.BatchNorm2d(out_ch),
              nn.ReLU(inplace=True)]
    if pool:
        layers.append(nn.MaxPool2d(2))
    return nn.Sequential(*layers)

class ResNet9(nn.Module):
    def __init__(self, in_channels, num_classes):
        super().__init__()
        self.conv1 = conv_block(in_channels, 64)
        self.conv2 = conv_block(64, 128, pool=True)
        self.res1 = nn.Sequential(conv_block(128, 128), conv_block(128, 128))
        self.conv3 = conv_block(128, 256, pool=True)
        self.conv4 = conv_block(256, 512, pool=True)
        self.res2 = nn.Sequential(conv_block(512, 512), conv_block(512, 512))
        self.classifier = nn.Sequential(nn.MaxPool2d(4), nn.Flatten(), nn.Linear(512, num_classes))

    def forward(self, x):
        x = self.conv1(x)
        x = self.conv2(x)
        x = self.res1(x) + x
        x = self.conv3(x)
        x = self.conv4(x)
        x = self.res2(x) + x
        return self.classifier(x)

    def training_step(self, batch):
        images, labels = batch
        out = self(images)
        return F.cross_entropy(out, labels)

    def validation_step(self, batch):
        images, labels = batch
        out = self(images)
        loss = F.cross_entropy(out, labels)
        acc = accuracy(out, labels)
        return {'val_loss': loss.detach(), 'val_acc': acc}

    def validation_epoch_end(self, outputs):
        batch_losses = [x['val_loss'] for x in outputs]
        epoch_loss = torch.stack(batch_losses).mean()
        batch_accs = [x['val_acc'] for x in outputs]
        epoch_acc = torch.stack(batch_accs).mean()
        return {'val_loss': epoch_loss.item(), 'val_acc': epoch_acc.item()}

# Training functions
@torch.no_grad()
def evaluate(model, val_loader):
    model.eval()
    outputs = [model.validation_step(batch) for batch in val_loader]
    return model.validation_epoch_end(outputs)

def train_model(epochs, max_lr, model, train_dl, valid_dl, weight_decay=1e-4, grad_clip=0.1):
    optimizer = torch.optim.Adam(model.parameters(), max_lr, weight_decay=weight_decay)
    scheduler = torch.optim.lr_scheduler.OneCycleLR(optimizer, max_lr, epochs=epochs,
                                                    steps_per_epoch=len(train_dl))
    history = []
    start_time = time.time()
    for epoch in range(epochs):
        # Training
        model.train()
        train_losses = []
        lrs = []
        for batch in train_dl:
            loss = model.training_step(batch)
            train_losses.append(loss)
            loss.backward()
            # Gradient clipping
            if grad_clip:
                nn.utils.clip_grad_value_(model.parameters(), grad_clip)
            optimizer.step()
            optimizer.zero_grad()
            # Record & update learning rate
            lrs.append(optimizer.param_groups[0]['lr'])
            scheduler.step()
        # Validation
        result = evaluate(model, valid_dl)
        result['train_loss'] = torch.stack(train_losses).mean().item()
        result['lrs'] = lrs
        # Print progress
        print(f"Epoch [{epoch}], lr: {lrs[-1]:.5f}, train_loss: {result['train_loss']:.4f}, "
              f"val_loss: {result['val_loss']:.4f}, val_acc: {result['val_acc']:.4f}")
        history.append(result)
    train_time = time.time() - start_time
    print(f"Training completed in {train_time/60:.2f} minutes")
    return history

if __name__ == "__main__":
    stats = ((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010))
    train_tfms = tt.Compose([tt.ToTensor(), tt.Normalize(*stats)])
    valid_tfms = tt.Compose([tt.ToTensor(), tt.Normalize(*stats)])

    # Dataset
    data_dir = './data'
    os.makedirs(data_dir, exist_ok=True)
    train_ds = CIFAR10(root=data_dir, train=True, download=True, transform=train_tfms)
    valid_ds = CIFAR10(root=data_dir, train=False, download=True, transform=valid_tfms)

    # DataLoaders
    batch_size = 400
    train_dl = DeviceDataLoader(DataLoader(train_ds, batch_size, shuffle=True,
                                           num_workers=3, pin_memory=True), device)
    valid_dl = DeviceDataLoader(DataLoader(valid_ds, batch_size * 2,
                                           num_workers=3, pin_memory=True), device)

    # Model
    model = ResNet9(3, 10).to(device)

    # Training
    history = train_model(epochs=15, max_lr=0.01, model=model,
                          train_dl=train_dl, valid_dl=valid_dl,
                          weight_decay=1e-4, grad_clip=0.1)
```

Methods And Evaluation Criteria: Yes. They evaluate classification performance on mainstream benchmarks (CIFAR, SVHN, TinyImageNet, etc.), measure standard top-1 (and occasionally top-5) accuracy, and compare to widely used baselines (BatchNorm, LayerNorm, etc.). They also run controlled studies on batch size, network width, and depth, making a strong case that normality normalization is robust across conditions. These are all common and accurate choices to back up their claims.

Theoretical Claims:
- The lemma (Appendix B) states that in the bivariate normal case, uncorrelatedness implies independence, and also that normality minimizes mutual information for a given correlation. This is known from standard information-theoretic references and is stated correctly.
- The second-order series expansion for the negative log-likelihood (Appendices C & D) and the single-step Newton–Raphson for estimating λ look coherent to me. The authors also provide empirical checks.

Experimental Designs Or Analyses: The experimental setup ticks the basic boxes:
- Cross validation.
- They use 6 runs with independent seeds to obtain mean and standard error.
- They ablate key pieces: the power transform alone, noise alone, partial transform strength.
- They measure Q–Q plots to show actual Gaussianity, which is a key claim of the proposed layer.

However, as mentioned in the Claims And Evidence section, I have some reservations about the optimality of the baseline results and would be eager to hear clarifications from the authors.

Supplementary Material:
- Appendix A: comparisons with Gaussian dropout, partial transforms, Q–Q plots, joint normality tests, timing benchmarks.
- Appendices B, C, D: theoretical statements.

Relation To Broader Scientific Literature:
- Normalization layers: The paper gives a lot of focus to batch and layer normalization, which is well deserved; they also mention some other key works such as Decorrelated BN and Switchable Whitening. They also cite crucial references on maximum-entropy properties of the Gaussian distribution (Cover & Thomas), Box–Cox and Yeo–Johnson transforms, and randomized smoothing (Cohen et al.).
- Whitening/Orthogonalization: Methods like Decorrelated Batch Normalization (Huang et al.) or Iterative Normalization attempt to whiten features, which is tangentially similar to making them normal. The authors focus on univariate normality; whitened or orthonormal constraints tackle correlations as well.
- Robustness-Enhancing Approaches: The paper cites randomized smoothing (e.g., Salman et al. 2019), which can provide certified $\ell_2$-robustness via Gaussian noise injection at inference. Data augmentation (mixup, AugMix) also yields stable, noise-resistant features.
Comparing normality normalization to these could further elucidate how it complements or surpasses them.

Essential References Not Discussed:
- Weight Normalization (Salimans & Kingma)
- Filter Response Normalization (Singh & Krishnan)
- Normalization Propagation (Arpit 2016)
- Though not normalization layers per se: copula-based or rank-based inverse normal transforms as direct ways to Gaussianize data, and Lambert W transformations for heavy-tailed distributions.
- Copula-Based Gaussianization – In statistics, any multivariate data can be transformed to have normal marginals by using a copula. A Gaussian copula method assumes the data can be mapped into a joint Gaussian via monotonic marginal CDF mappings. While the paper focuses on certain transforms, it doesn’t mention the general copula framework, which might be interesting.

Other Strengths And Weaknesses:

Strengths:
- Straightforward to implement: only adding a power transform step plus scaled Gaussian noise.
- Theoretically very well motivated: the idea of having Gaussian activations is quite an interesting approach, and this paper takes an important step in that direction.
- Clear discussion of theoretical motivations (information-theoretic and statistical underpinnings).

Weaknesses:
- As mentioned earlier, the paper gives somewhat questionable baselines in Tables 1 & 2; I might be wrong, but they may be sub-optimal.
- The experiments on robustness seem rather thin.
- It would be interesting to account for the training overhead from computing $\lambda$ in each iteration in comparison to standard BN.

Other Comments Or Suggestions:
- A direct side-by-side experiment with at least one of the alternative norms that also claim small-batch stability (e.g., FRN or SwitchableNorm) might strengthen the empirical section.
- Consider using baselines that are reported in other papers, namely for ResNets or ViTs, to make sure that the baselines are not sub-optimal.

Questions For Authors:
- Have the authors thought about connections to other domains, namely loss of plasticity in continual learning? That is, in preserving the diversity of hidden units and features during prolonged training.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
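On the single-step Newton–Raphson point raised under Theoretical Claims, the idea can be illustrated generically with finite-difference derivatives of scipy's Yeo-Johnson log-likelihood (a sketch only; the paper derives a closed-form second-order expansion around $\lambda_0 = 1$, which this does not reproduce):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.chisquare(df=3, size=4000)   # mildly right-skewed sample

def nll(lam):
    # negative profile log-likelihood of the Yeo-Johnson transform
    return -stats.yeojohnson_llf(lam, x)

lam0, h = 1.0, 1e-4
grad = (nll(lam0 + h) - nll(lam0 - h)) / (2 * h)
hess = (nll(lam0 + h) - 2 * nll(lam0) + nll(lam0 - h)) / h ** 2
lam1 = lam0 - grad / hess            # one Newton-Raphson step from lambda0 = 1

lam_mle = stats.yeojohnson(x)[1]     # full MLE, for comparison
```

For data that is already close to Gaussian the single step lands very near the MLE; how much Gaussianity further steps buy is exactly the question raised above.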
Rebuttal 1: Rebuttal: Dear Reviewer 7TPa, We address all of your comments below.

> >Regarding the baseline performance levels, and the code snippet you provided.
> To address your inquiry regarding the baselines, we ran experiments with the additional use of mixup (Zhang et al. 2017) for several of the model & dataset combinations listed in Table 1 (with the experimental setup otherwise identical to that listed in Appendix E.2) across $M=6$ random seeds. The results are as follows:

|Dataset|LN|LNN|
|-|-|-|
|CIFAR10|89.97 $\pm$ 0.16|**91.18 $\pm$ 0.13**|
|CIFAR100|66.40 $\pm$ 0.42|**70.12 $\pm$ 0.22**|
|Food101|73.25 $\pm$ 0.19|**79.11 $\pm$ 0.09**|

These results precisely address your inquiry regarding the baseline performance levels, and provide further substantiating evidence that models trained with normality normalization continue to improve with the use of additional techniques, and consistently outperform other normalization layers. To further supplement this, please also see our rebuttal comment to Reviewer ps2s regarding the performance of ResNet18 x CIFAR10 with data augmentations, showing again the improvement in performance of BNN.

Next we address the code snippet provided and the performance therein. We actually ran the code you provided, replacing BatchNorm2d with BatchNormalNorm2d: we were able to obtain a performance of $90.95$%, which - especially considering the small number of training epochs ($15$) used in the code snippet - is a significant improvement over the figure you quoted. (This is also notable because the difference in performance would likely grow with further training iterations: techniques using stochastic regularization - such as our additive Gaussian noise with scaling, or dropout - generally improve more with the number of training iterations.)
This adds to the evidence that, across a wide array of experimental setups - for example, in the code snippet you provided, the optimizer employed (Adam) differs from our ResNet experiments (SGD), the LR scheduler differs (OneCycleLR vs StepLR), and gradient clipping is employed whereas it is not in our setting - normality normalization consistently outperforms other normalization layers. Altogether these experiments serve to substantively address your inquiry regarding the baseline performance levels.

> >Regarding a direct side-by-side experiment with an alternative normalization layer.
> We have addressed this as follows: since we had already invoked decorrelated BatchNorm (DBN) in the text of our paper, we ran experiments using DBN, and compared this with the implementation we developed for decorrelated BatchNormalNorm (DBNN). The experimental setup is consistent with Appendix E.1, and $M=6$ random seeds are also used throughout:

|Dataset|Model|DBN|DBNN|
|-|-|-|-|
|CIFAR10|RN18|90.66 $\pm$ 0.05|**91.50 $\pm$ 0.03**|
|CIFAR100|RN18|65.11 $\pm$ 0.06|**67.53 $\pm$ 0.10**|
|STL10|RN34|66.76 $\pm$ 0.29|**69.36 $\pm$ 0.14**|

These results demonstrate a consistent improvement of DBNN over DBN.

> >Regarding accounting for the training overhead from computing $\hat{\lambda}$.
> We believe our Subsection A.5 Speed Benchmarks and Figure 11, referenced in the main body of the text in Subsection 4.6 Additional Experiments & Analysis under the paragraph Speed Benchmarks, do just this. We have evaluated both the training time and test time overheads.

> >Regarding additional noise robustness experiments.
> Here we provide additional noise robustness results for ResNet18 x CIFAR100:

|$\phantom{-}$|$\phantom{-}$|L5|L9|L13|L17|
|-|-|-|-|-|-|
|**L1**|BNN|**0.047 $\pm$ 0.002**|**0.074 $\pm$ 0.001**|**0.100 $\pm$ 0.002**|**0.386 $\pm$ 0.005**|
||BN|0.166 $\pm$ 0.005|0.316 $\pm$ 0.006|0.410 $\pm$ 0.008|1.881 $\pm$ 0.026|
|**L5**|BNN||**0.027 $\pm$ 0.002**|**0.040 $\pm$ 0.003**|**0.155 $\pm$ 0.012**|
||BN||0.069 $\pm$ 0.007|0.088 $\pm$ 0.006|0.438 $\pm$ 0.030|
|**L9**|BNN|||**0.043 $\pm$ 0.000**|**0.149 $\pm$ 0.002**|
||BN|||0.061 $\pm$ 0.001|0.250 $\pm$ 0.003|
|**L13**|BNN||||**0.258 $\pm$ 0.002**|
||BN||||0.396 $\pm$ 0.011|

This provides even further evidence for the findings we presented in Subsection A.3 Noise Robustness.

> >Regarding the suggested references you mentioned.
> We have now included a discussion on the related normalization layers you listed: weight normalization, filter response normalization, normalization propagation, as well as iterative normalization and EvoNorm. Regarding the copula-based, inverse normal transform, and Lambert W transformation approaches to gaussianizing: we agree these are very interesting avenues for exploration, and have now included a discussion on them in Section 6 Related Work & Future Directions.

We believe we have comprehensively addressed your comments here. We would be highly appreciative if you would consider increasing the score for our submission; thank you.

---

Rebuttal Comment 1.1: Comment: I thank the authors for these clarifications. I do find the rebuttal response convincing and I'm supportive of this paper being accepted. So I will increase my score from 3 to 4 (Accept). I have a few more pending questions; if the authors can clarify these, or perhaps discuss them later in the manuscript, it might be helpful to future readers.
- Because the Normality Normalization layers contain essentially two main components, the power transform and the noise injection, a natural question is: what is the effect of each component in isolation, and what is the effect of them combined? Maybe there is already a table or figure I missed? Otherwise, a controlled experiment showing the effect of each and then their combination - or, even better, varying the noise magnitude on a grid alongside the power transform - would make clear how the two components contribute to the overall effect.
- The authors suggest that just one step of Newton-type root finding suffices for finding $\lambda$; an experiment on the additional benefit of more steps (in terms of Gaussianity metrics) would be nice.
- Finally, from my understanding, the power transform is only an approximate way of ensuring Gaussianity, which works best if the issue is the distribution being long-tailed. Is my impression accurate? If yes, are there more powerful ways of ensuring Gaussianity in a differentiable way?
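The last question can be checked directly (using scipy's Yeo-Johnson MLE as a stand-in for the paper's estimator): a monotone power transform removes skew well, but it cannot merge the modes of a bimodal sample:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Right-skewed sample: the power transform Gaussianises it effectively.
skewed = rng.lognormal(size=5000)
skewed_t, _ = stats.yeojohnson(skewed)

# Bimodal sample: any monotone map preserves both modes, so the
# transformed data stays strongly platykurtic (Fisher kurtosis near -2).
bimodal = np.concatenate([rng.normal(-3, 0.5, 2500), rng.normal(3, 0.5, 2500)])
bimodal_t, _ = stats.yeojohnson(bimodal)
```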
Summary: The paper proposes a normality normalization that enforces a Gaussian feature distribution using a power transform and additive Gaussian noise. The motivation for using the normal distribution is to enhance the model's robustness to random perturbations, improving generalization.

## Update after rebuttal
The authors have kindly pointed out the parts of their work which I missed or misunderstood. However, additional clarifications regarding the information-theoretical content of their paper exposed the lack of a strong connection between the method and the information-theoretical framework employed. In the manuscript, it is claimed that

> The normal distribution plays a central role in information theory – it is at the same time the best-case signal and worst-case noise distribution,

> the mutual information game suggests gaining robustness to Gaussian noise is optimal because it is the worst case noise distribution

However, Gaussian noise is only the worst-case noise for maximizing $I(X;X+Z)$ if we restrict the second moments of $X$; for other constraints (like restricted support, mean absolute value, etc.), the worst-case noise and best-case distributions **are not Gaussian**. Thus, robustness to AGN is not a generally desirable outcome: information theory suggests that there may be other cases in which robustness to, e.g., uniform noise should yield better results. Therefore, I insist on additional theoretical analysis (and an ablation on the noise distribution) being conducted. Otherwise, the work in its current state is not well-supported by information theory (as the constraints imposed and the corresponding "worst-case noise" seem arbitrary). For more information (and additional concerns), please refer to my final reply. As a result, I decided to keep my score.
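For concreteness, the classical saddle-point statement behind the "mutual information game" (Cover & Thomas) that this argument rests on:

```latex
% Gaussian saddle point of the mutual information game:
% for independent X and Z under second-moment constraints,
\max_{p_X:\,\mathbb{E}[X^2]\le P}\ \min_{p_Z:\,\mathbb{E}[Z^2]\le N} I(X;\,X+Z)
  \;=\; \min_{p_Z}\ \max_{p_X}\ I(X;\,X+Z)
  \;=\; \frac{1}{2}\log\!\left(1 + \frac{P}{N}\right),
```

with both extrema attained by Gaussian $X$ and $Z$. The saddle point is Gaussian only because of the second-moment constraints; under other constraint sets the extremal distributions generally differ.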
Claims And Evidence:
- Supported claims include improved generalization (Tables 1–2 show accuracy gains over traditional BatchNorm/LayerNorm) and Gaussianity in the features (Q-Q plots and $R^2$ metrics validate increased normality).
- The method’s generality claim ("wherever existing normalization layers are used" from lines 42, 411) suggests validation also on non-CV tasks. However, such results are absent from the work.
- Robustness to adversarial as well as random perturbations ("improving model robustness to random perturbations" from lines 42-43) is not convincingly substantiated. Without adversarial experiments, these broader claims seem unsupported.

Methods And Evaluation Criteria:
- The proposed method combines a power transform with additive Gaussian noise without introducing extra learnable parameters. The approach is implemented as an augmentation to conventional normalization layers.
- While the method is clear and evaluated on several vision architectures and datasets, there is a notable absence of comparisons to more recent or alternative normalization methods (e.g. EvoNorm [3a], Iterative Normalization [Huang et al., 2019]) beyond the classical baselines.
- The Q-Q plots and $R^2$ metrics do not serve as a proper multivariate Gaussianization metric. Perhaps special statistical tests should be employed, e.g., the Henze–Zirkler test [3b].

[3a] Liu et al. "Evolving Normalization-Activation Layers". arXiv:2004.02967
[3b] Norbert Henze and Bernd Zirkler. "A class of invariant consistent tests for multivariate normality". Communications in Statistics – Theory and Methods, 19:3595–3617, 1990

Theoretical Claims:
- The paper’s derivation of the quadratic approximation for the negative log-likelihood (NLL) to estimate the power transform parameter λ is central. It relies on a series expansion around $\lambda_0 = 1$, and assumes activations are locally Gaussian-like near $\lambda_0$.
While empirically valid for the tested cases (Figure 14), it risks failure in deeper layers or complex datasets where activations are multi-modal or heavily skewed. Theoretical guarantees are limited to an idealized setup, and the method’s robustness depends on the (unverified) assumption that activations are "close enough" to Gaussian. This constitutes a significant weakness if one wishes to claim universality. Therefore, the authors should test on multi-modal datasets and analyze layer-wise approximation quality.
- There is no clear theoretical evidence that $I(X;X+Z)$ is maximized during training, which is crucial for the application of Theorem 5.1. Therefore, it is unclear whether noise injection serves any meaningful role in this setup (at least, from the perspective of Theorem 5.1). Please compare this to the work [8a] from **Essential References Not Discussed**, where the mutual information is maximized explicitly.

Experimental Designs Or Analyses: The experiments are extensive within the computer vision domain, demonstrating performance on multiple datasets and architectures. Despite this, the experimental design omits comparisons with several state-of-the-art (post-BatchNorm) normalization methods and does not explore non-CV applications, even though the method is promoted as generally applicable.

Supplementary Material: I did review the Appendix. I did not review other supplementary materials.

Relation To Broader Scientific Literature: The work is positioned as an extension of BatchNorm (Ioffe & Szegedy, 2015) and LayerNorm (Lei Ba et al., 2016) by explicitly aiming to make activations Gaussian, building on classical power transforms (Box & Cox, 1964; Yeo & Johnson, 2000). It also relates to previous work on decorrelation and whitening (Chen & Gopinath, 2000) and on noise-based regularization.
However, the discussion would benefit from a more thorough comparison with recent normalization methods and an explicit discussion of how this approach differs fundamentally from methods that already induce some form of Gaussianity (e.g., from the work [8a], see **Essential References Not Discussed**).

Essential References Not Discussed: While the paper cites classical works, it omits several recent advances. For instance, EvoNorm (Liu et al., 2021) is not discussed despite being a contemporary normalization-activation layer method. Additionally, although Iterative Normalization (Huang et al., 2019) is referenced, a more detailed discussion comparing it with the proposed approach would emphasise the contribution. The work [8a] also uses noise injection and Theorem 5.1 to achieve a Gaussian distribution, and is therefore closely related to the method proposed in this manuscript.

[8a] Butakov et al. "Efficient Distribution Matching of Representations via Noise-Injected Deep InfoMax". Proc. of ICLR 2025.

Other Strengths And Weaknesses: The impact of BNN on the runtime compared to BN is not very significant, which is a merit of the method.

Other Comments Or Suggestions: Perhaps the "booktabs" table style should also be used for Table 3.

Questions For Authors:
1. Could the authors rigorously prove theoretical bounds on the approximation error or provide another theoretical analysis to strengthen their claim that the quadratic approximation is universally valid under the power transform’s Gaussianization? In particular, how does the approximation error change in deeper layers or with non-Gaussian activations? What are potential limitations of your approach in situations where the activations exhibit strong multimodality or heavy skewness?
1. Could you evaluate normality normalization on non-vision tasks to justify the claim of “wherever existing normalization layers are used”?
1. Could you compare your method with alternative normalization methods (e.g.
EvoNorm) beyond the classical baselines? 1. How is the maximization of $I(X;Y)$ from Theorem 5.1 enforced? 1. Have you tried leaving only the noise injection (i.e., not applying the power transform)? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Dear Reviewer zhYV, We address all of your comments below.

> >"The Q-Q plots and $R^2$ metrics do not serve as a proper multivariate Gaussianization metric. Perhaps, special statistical tests should be employed, e.g., the Henze-Zirkle test"
> We very kindly note that we in fact already did precisely this in the paper, using the Henze–Zirkler (HZ) test statistic as well. Please see Section A.7. Joint Normality and Independence Between Features, which is referenced in the main text in Section 4.6 under the paragraph Normality normalization induces greater feature independence. This precisely addresses your inquiry.

> >Regarding a comparison to an alternative normalization method.
> Please see the experimental results comparing decorrelated BatchNorm (DBN) (which is also closely related to iterative normalization) with decorrelated BatchNormalNorm (DBNN) in the thread with Reviewer 7TPa, which precisely addresses your inquiry here.

> >"Robustness to adversarial as well as random perturbations ("improving model robustness to random perturbations" from lines 42-43) — are not convincingly substantiated."
> We very kindly point out that we did in fact substantiate the claims we made through the experiments in Section A.3 Noise Robustness, which we also referenced in the main text of the paper in Section 4.6 under the paragraph Normality normalization induces robustness to noise at test time. Furthermore, we had only mentioned adversarial robustness as it pertains to deep neural networks in general being susceptible to perturbations; we did not explicitly claim robustness to adversarial perturbations in the paper. However, in Section 6 we provide a line of reasoning which suggests greater adversarial robustness may be attainable, given the connection between robustness to random perturbations and adversarial perturbations.

> >Regarding "justify the claim of “wherever existing normalization layers are used”?".
> We would very kindly like to point out that we indeed investigated normality normalization across several normalization layers, as evidenced by Subsection 4.3 and Figure 1, where we also compared it with InstanceNorm and GroupNorm. > >Regarding exploration of non-CV tasks. > In terms of application areas, our intent was to be extremely comprehensive in our experiments in a domain of choice. Because we evaluate across several normalization layers, and in general extensively demonstrate the effectiveness of normality normalization, we believe this is a very strong and reliable indicator for the method's success translating to other (non-CV) domains. > >Regarding the series expansion of the NLL. > The power transform we use specifically addresses skewed data distributions - please see (Yeo & Johnson 2000) for a detailed investigation. Furthermore, we have indeed assessed the ability for normality normalization to achieve a high degree of gaussianity on complex datasets, as evidenced by Figures 5 & 6, and our work demonstrates that we can achieve better performance by enforcing a unit's pre-activations to be unimodal-normally distributed. > >Regarding a discussion on alternative normalization layers. > We have now cited several works on more recent normalization layers in our paper, including EvoNorm and iterative normalization, as well as weight normalization, filter response normalization, and normalization propagation. > >"There is no clear theoretical evidence that $I\left(X; X+Z\right)$ is maximized during the training, which is crucial for the application of Theorem 5.1." > We would with excitement like to clarify this point of inquiry: Our discussion of the mutual information term $I\left(X; X+Z\right)$ uses the following argument as motivation for gaussianizing pre-activations: if the pre-activations are gaussianized, then by Theorem 5.1 $I\left(X; X+Z\right)$ is *necessarily* maximized. 
This is because a Gaussian distributed variable $X$ maximizes $I\left(X; X+Z\right)$ (as shown by the Theorem) relative to any other distribution for $X$. Thus $I\left(X; X+Z\right)$ is maximized when the gaussianity of $X$ is maximized; even if this occurs implicitly. > >Regarding comparison to work [8a]. > We found the paper you referenced very interesting and have now cited it in our work. Interestingly, it differs from our work due to the aforementioned point; we discuss the implicit maximization of $I\left(X; X+Z\right)$ and use this idea only as motivation in our work for encoding pre-activations using the Gaussian distribution, whereas to the best of our understanding the work you reference develops a framework for explicitly maximizing this term; making the approaches in the two works quite distinct and complementary. We are eager to explore the possible interplay between our work and this work in follow-up work. We believe we have comprehensively addressed your comments here. We would be highly appreciative if you would consider increasing the score for our submission; thank you. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for additional clarifications and pointing out the parts which I missed or misunderstood during the review. Below, I reply to the rebuttal provided. 1. I acknowledge the answer to my first question ("Regarding the series expansion of the NLL."). Although the authors provide additional literature on the question regarding skewness, I see no comments addressing the multimodality problem. I still insist on a proper theoretical investigation of the solution proposed. If no rigorous theoretical results can be achieved, I kindly ask the authors to emphasise that the second-order method is selected mostly due to empirical success, and no theoretical guarantees are provided. 2. I am now even more confused about the information-theoretical part of the work. 
If Theorem 5.1 is not used to achieve Gaussian distribution, but to justify Gaussianization, several questions arise: - There are other noisy channels with different optimal distributions. For example, if $Z$ is small, and we restrict $\mathbb{E} |X| = const$, $I(X;X+Z)$ is maximized for the Laplace distribution. If we restrict $\text{supp}\, X = [0;1]$, $I(X;X+Z)$ is maximized for the uniform distribution, etc. For more details, please refer to "maximum entropy distributions". Therefore, choosing Gaussianization over achieving other maximum entropy distributions seems arbitrary. - The motivation behind the Gaussian noise injection is now more obscure. The authors say: > the mutual information game suggests gaining robustness to Gaussian noise is optimal because it is the worst case noise distribution However, for other min-max games for $I(X;X+Z)$, Gaussian noise is no longer the worst case; see the previous point. - Appendix A.1 and A.4 suggest that adding noise (Gaussian noise in particular) is crucial for the accuracy gains. However, as no MI maximization is performed, there is no rigorous theoretical explanation for this phenomenon. Perhaps, some sort of implicit MI maximization is occurring. In my opinion, the authors should explore this and also provide an ablation study on other min-max MI games to support the hypothesis that MI maximization and performance gains due to selecting the optimal distribution are indeed connected. 3. If the work is focused on empirical results, I still believe that other domains should be explored (e.g., NLP).
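For reference, the standard information-theoretic facts behind this exchange can be stated compactly (a sketch; the constraint constants are illustrative). For an additive channel $Y = X + Z$ with noise $Z$ independent of $X$,

$$I(X; X+Z) = h(X+Z) - h(Z),$$

and when the noise is small, $h(X+Z) \approx h(X)$, so the maximizing input distribution is approximately the maximum-entropy distribution under the given constraint:

$$\arg\max_{p_X} h(X) = \begin{cases} \mathcal{N}(0, P) & \text{if } \mathbb{E}[X^2] \leq P, \\ \text{Laplace} & \text{if } \mathbb{E}|X| \leq c, \\ \text{Uniform}[0,1] & \text{if } \operatorname{supp} X = [0,1]. \end{cases}$$

Under the power constraint with Gaussian noise, the Gaussian input is exactly optimal at any noise level, since a Gaussian $X+Z$ maximizes $h(X+Z)$; the Laplace and uniform cases are small-noise approximations.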
Summary: The paper presents a novel approach to improving the feature representations in deep neural networks by encouraging normality in activations. The authors introduce Normality Normalization (NormalNorm), a normalization technique based on the power transform to Gaussianize feature distributions and enhance robustness through additive Gaussian noise during training. The paper argues that the normal distribution is optimal for encoding information in neural networks, improving generalization and robustness. Extensive experiments demonstrate the superiority of NormalNorm over traditional normalization techniques like Batch Normalization (BatchNorm), Layer Normalization (LayerNorm), Group Normalization (GroupNorm), and Instance Normalization (InstanceNorm) across multiple model architectures and datasets. Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence Methods And Evaluation Criteria: The proposed methods and/or evaluation criteria (e.g., benchmark datasets) make sense for the problem or application Theoretical Claims: I did not check the correctness of any proofs for theoretical claims Experimental Designs Or Analyses: I checked the soundness/validity of the experimental designs and analyses Supplementary Material: I reviewed the supplementary material, except for the theoretical part Relation To Broader Scientific Literature: The key contributions of the paper are not related to the broader scientific literature Essential References Not Discussed: I am not familiar with this area and unsure whether there is more essential literature that should be cited Other Strengths And Weaknesses: **Strengths**: 1. Theoretical Justification: The authors provide a solid information-theoretic foundation for why Gaussianity in activations is beneficial. They reference the mutual information game framework to argue that normal distributions maximize information transmission and robustness. 2. 
Methodological Novelty: The proposed Normality Normalization combines the power transform for Gaussianization with additive Gaussian noise with scaling, a distinct approach compared to conventional normalization methods (BatchNorm, LayerNorm, etc.). 3. Comprehensive Empirical Validation: The method is tested on multiple architectures (ResNets, Vision Transformers, WideResNets). Evaluations span diverse datasets, including CIFAR-10, CIFAR-100, SVHN, TinyImageNet, and ImageNet. Experiments consider factors such as network width, depth, and batch size, demonstrating that the method generalizes well. 4. Robustness & Generalization Benefits: The paper shows that Normality Normalization enhances test-time robustness to noise. It improves model generalization, often outperforming conventional normalization techniques. Feature representations exhibit greater independence, an attractive property for reducing redundancy. 5. Strong Quantitative Support: Statistical analysis using Q-Q plots demonstrates that activations become more Gaussianized. The impact of the power transform and Gaussian noise is separately analyzed to isolate their contributions. **Weaknesses**: 1. Computational Overhead: The power transform involves estimating a transformation parameter ($\lambda$) per feature channel, which increases computational complexity. Although the Newton-Raphson method approximates $\lambda$ efficiently, the added computations may slow down training, as evidenced in the runtime benchmarks. 2. Lack of Analysis on Adversarial Robustness: While Normality Normalization improves robustness to random noise, its effectiveness against adversarial perturbations is not fully examined. Given prior studies linking Gaussian robustness to adversarial defense, further testing in this area would be valuable. Other Comments Or Suggestions: 1. To enhance clarity, I recommend adding an explanation for the derivation of Equation (2). 
Specifically, it would be helpful to outline the reasoning behind this objective function and why it serves as the appropriate optimization target. Providing an intuitive justification—such as its connection to maximizing the Gaussianity of transformed activations or minimizing divergence from a normal distribution—would strengthen the reader’s understanding of its significance within the broader framework of Normality Normalization. Questions For Authors: 1. **Justification of the Optimization Objective in Equation (2)**: Could you provide a more detailed explanation for why Equation (2) is the appropriate objective function? Specifically: What is the intuition behind minimizing this negative log-likelihood (NLL) in the context of Normality Normalization? Does this directly encourage activations to follow a normal distribution, or is there an implicit assumption about the data distribution? 2. **Adversarial Robustness Claims**: The paper suggests that Normality Normalization improves robustness to random noise, which could imply improved adversarial robustness. Have you tested Normality Normalization against adversarial perturbations (e.g., FGSM, PGD)? If not, do you expect that the method will improve adversarial robustness, and why? 3. **Potential Issue with Baseline Model Choice**: In Table 1, the reported performance of ViT on ImageNet appears significantly lower than that of the standard ViT model, likely due to the smaller model depth and reduced number of attention heads. The improvement of Layer Normality Normalization (LNN) over Layer Normalization (LN) is observed on this smaller ViT model. Why did the authors choose this particular ViT architecture instead of using the standard ViT model configurations commonly used for ImageNet? Have the authors evaluated whether the performance gap between LNN and LN remains consistent for larger ViT models (e.g., ViT-B/16, ViT-L/16)? 
If the improvement diminishes on larger models, does this suggest that the benefits of Normality Normalization are more pronounced in smaller-scale networks? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer Eseb, We address all of your comments below. > >"While Normality Normalization improves robustness to random noise, its effectiveness against adversarial perturbations is not fully examined." > and > >"Adversarial Robustness Claims: The paper suggests that Normality Normalization improves robustness to random noise, which could imply improved adversarial robustness. Have you tested Normality Normalization against adversarial perturbations (e.g., FGSM, PGD)? If not, do you expect that the method will improve adversarial robustness, and why?" > We would very kindly like to point out that we invoked adversarial robustness only as it pertains to deep neural networks in general being susceptible to perturbations; we did not explicitly claim robustness to adversarial perturbations in the paper. However, in Section 6 Related Work & Future Directions we provide a line of reasoning which suggests greater adversarial robustness may be attainable, given the connection between robustness to random perturbations and adversarial perturbations. Thus we do expect that on average, greater adversarial robustness should be attainable. > >Regarding the derivation and justification of Equation 2: the NLL. > It is great that you inquire about this. We originally decided to defer the derivation for the NLL as it can be found in the literature, for example an outline is provided in (Yeo & Johnson 2000), and a sketch is provided in (Hernandez 1979) for the (related) Box-Cox power transform (https://drive.google.com/file/d/1__hvD4GgwSA3aj2OK9eVnlZg9JOmpSMs/view). We have now included the derivation in the appendix of the paper, and we give the idea here: Begin by taking a random variable $H$ (which can be arbitrarily distributed) and apply the power transform to it (or in practice, a data sample taken from $H$) to obtain $X$. We want $X$ to be as normally distributed as possible, which means we want to maximize the likelihood of $X$ under the Gaussian. 
This is given by taking the log of the Gaussian PDF for $X$ and maximizing it (this is equivalent to minimizing the negative log-likelihood (NLL)). That is, if you take the (negative) log of the PDF of a Gaussian random variable $X$, then substitute for $x$ the power transform as a function of $h$ (with correct consideration for change of variables), what you will obtain is precisely the NLL we have in our paper. > >Regarding the choice of ViT architecture. > We chose to use a somewhat smaller-scale ViT architecture to enable our extensive experiments on several datasets, and to enable high precision in the reporting of our experimental results through the multiple random seeds. In fact, we were able to obtain $M=6$ total seeds for the ImageNet experiments post-submission, for each of LNN and LN. The updated results for ImageNet, across these $M=6$ seeds, are: |Dataset|LN|LNN| |----------|----------|----------| |ImageNet Top1|71.54 $\pm$ 0.16|**75.25 $\pm$ 0.07**| |ImageNet Top5|89.40 $\pm$ 0.11|**92.23 $\pm$ 0.04**| These enable even greater confidence in our experimental results on ImageNet. To further address your inquiry, we ran experiments with the additional use of mixup (Zhang et al. 2017) for several of the model & dataset combinations listed in Table 1 (with the experimental setup otherwise identical to that listed in Appendix E.2). |Dataset|LN|LNN| |----------|----------|----------| |CIFAR10|89.97 $\pm$ 0.16|**91.18 $\pm$ 0.13**| |CIFAR100|66.40 $\pm$ 0.42|**70.12 $\pm$ 0.22**| |Food101|73.25 $\pm$ 0.19|**79.11 $\pm$ 0.09**| These results provide strong evidence that models trained with normality normalization continue to improve with the use of additional techniques for improving generalization performance, and that they continue to outperform models trained with other normalization layers. This also demonstrates that the network size is not an obstacle. 
Finally, we found that for both small and large width networks, and for small and large depth networks, normality normalization outperforms competing normalization layers. In Section 4.4 Effectiveness Across Model Configurations, in paragraphs Network Width and Network Depth and through Figures 2 & 3 respectively, we provide experimental evidence demonstrating this. We also found this trend to hold true in experiments with various ViT architectures, and we ultimately used the chosen architecture in the paper to enable extensive experiments and with multiple random seeds, as mentioned. We have furthermore added experimental results contrasting decorrelated batch normalization (DBN) with decorrelated batch normality normalization (DBNN); please see the thread with Reviewer 7TPa. These experimental results provide further evidence for the strong performance of normality normalization across various normalization layers. We believe we have comprehensively addressed your comments here. We would be highly appreciative if you would consider increasing the score for our submission; thank you. --- Rebuttal Comment 1.1: Comment: Thank you for all the responses. I will revise my score.
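The change-of-variables derivation sketched in the rebuttal above can be written out explicitly (a sketch; $\psi_\lambda$ denotes the power transform, and $\hat\mu_\lambda, \hat\sigma^2_\lambda$ the sample mean and variance of the transformed values). For a monotone transform $x = \psi_\lambda(h)$ under the assumption $X \sim \mathcal{N}(\mu, \sigma^2)$, the density of $H$ is

$$p_H(h) = \mathcal{N}\!\left(\psi_\lambda(h); \mu, \sigma^2\right) \left|\psi'_\lambda(h)\right|,$$

so the log-likelihood of a sample $h_1, \dots, h_n$ is

$$\ell(\lambda, \mu, \sigma^2) = -\frac{n}{2}\log(2\pi\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^n \left(\psi_\lambda(h_i) - \mu\right)^2 + \sum_{i=1}^n \log\left|\psi'_\lambda(h_i)\right|.$$

Substituting the maximum-likelihood estimates $\hat\mu_\lambda, \hat\sigma^2_\lambda$ gives, up to additive constants, the profile negative log-likelihood minimized over $\lambda$:

$$\mathrm{NLL}(\lambda) \propto \frac{n}{2}\log\hat\sigma^2_\lambda - \sum_{i=1}^n \log\left|\psi'_\lambda(h_i)\right|,$$

where for the Yeo-Johnson transform $\log\left|\psi'_\lambda(h)\right| = (\lambda - 1)\,\operatorname{sign}(h)\log(|h| + 1)$.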
Beyond Matryoshka: Revisiting Sparse Coding for Adaptive Representation
Accept (oral)
Summary: This paper presents a novel approach to the problem of learning adaptive-length representations. While previous methods, particularly MRL, have shown good performance, this work carefully studies the utility of high-dimensional but sparse representations, as opposed to lower dimensional but dense representation, for the adaptive-length setting. To accomplish this, the authors use sparse autoencoders and introduce a contrastive-based loss for training. Their method has significantly reduced training time, and the authors claim better performance over MRL and relevant baselines. Claims And Evidence: The claims are clear, although I have concerns on the experimental evidence (see my concerns on experimental designs or analyses below). Methods And Evaluation Criteria: Proposed methods and evaluation criteria are relevant Theoretical Claims: N/A Experimental Designs Or Analyses: The experiments are well-designed, and most are sound. The experiment that requires careful attention is the one proposed in Section 4.1 and shown in Figure 3. Namely, I'm not convinced of this paper's timing results, and the authors lack a clear explanation as to why their method is so much faster than MRL. This is especially clear from the language; around line 246 in the second column, the authors say that the decrease in retrieval time is $\textit{likely}$ due to efficient sparse matrix multiplications, and that their results $\textit{suggest}$ that "higher sparsity enables more effective utilization of sparse matrix operations." To explain why I'm skeptical, a >2x speedup is achieved when the number of active dimensions is 2. In this case, the comparison in speed is between a length-2 dense representation and a length-(8192,16384,32768) representation with only 2 active dimensions. According to Section E.3, csr format is used along with sparse matrix operations. 
However, there is still an overhead associated with using csr format, and, considering that both CSR and MRL have only two active dimensions in this case, I find it very counterintuitive that CSR can be over 2x faster. I've noted that the authors reported the normalized retrieval time, and I'm wondering if this may be a reason for this discrepancy. The authors introduced a base-time metric $\mathcal{T}$, the utility of which is unclear to me. How $\mathcal{T}$ is recorded is also unclear to me. It sounds like the authors used CSR with $h=16384, k=32$, and then normalized all timings by this value? First, I hope the authors can clarify if my understanding is correct or not. Second, I hope the authors can elaborate on why $\mathcal{T}$ is needed. They say "This metric enables a more realistic simulation of large-scale retrieval scenarios for fair computation comparison." Still, they do not elaborate on why normalizing by $\mathcal{T}$ makes the comparison more fair. Indeed, the timing results are somewhat plausible, but I hope the authors can clarify how this speedup is achieved. Is it strictly a result of sparse matrix operations as opposed to some architecture changes over MRL? How does the base time metric affect the timings over the raw timings? Supplementary Material: I reviewed Section E.3 as a means to better understand the details of the timing experiments. Relation To Broader Scientific Literature: This work presents an interesting and useful alternative to MRL, the leading method for length-adaptive representations. The use of sparse autoencoders in this setting is novel. 
Essential References Not Discussed: None Other Strengths And Weaknesses: Some strengths include: - Clear presentation of ideas, experiments, and results - Paper is written well, aside from a few typos - Thoughtful and extensive experiment design Other Comments Or Suggestions: Some typos that I found: - first column, line 69: "more fast" --> "faster" - second column, line 272: this sentence seems incomplete/incorrect - first column, line 297: "from high-dimensional to high-dimensional" --> "from high-dimensional to low-dimensional" (?) Questions For Authors: Please see the questions in the Experimental Designs section. Answering these questions is important and may warrant additional clarification in the paper. I'll repeat the questions here to make them available in a list format: 1. How $\mathcal{T}$ is measured is also unclear to me. It sounds like the authors used CSR with $h=16384, k=32$, and then normalized all timings by this value? 2. Why is $\mathcal{T}$ needed? Why does it make the timing comparisons more fair? 3. How does the base time metric affect the timings over the raw timings? 4. Are timings improvements strictly a result of sparse matrix operations as opposed to some architecture changes over MRL? Is there anything the authors don't currently consider that may explain the speedup? Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: Thanks for your detailed reading and valuable comments. We will address your concerns as follows. --- **Q1** Typos in lines 69, 272 and 297 **A1.** Thanks for your suggestions and we will fix the typos in the revision. --- **Q2** The experiment that requires careful attention is the one proposed in Section 4.1 and shown in Figure 3. Namely, I'm not convinced of this paper's timing results, and the authors lack a clear explanation as to why their method is so much faster than MRL. > To explain why I'm skeptical, a >2x speedup is achieved when the number of active dimensions is 2. In this case, the comparison in speed is between a length-2 dense representation and a length-(8192,16384,32768) representation with only 2 active dimensions. **A2.** We believe there may be some misunderstandings, and we'd like to clarify a key point. Most importantly, the speedup shown in Figure 3 (comparing sparse over dense computation) is **not the main reason** for CSR's efficiency gains. The main goal of Figure 3 is simply to show that sparse MM can have similar (sometimes faster) computation to dense MM under the same active dimension (x-axis). The main gain of CSR over MRL is that it can use a smaller active dimension while better preserving the model accuracy. As shown in the table below (quoted from Table 4 (in Sec B.4.)), CSR only degrades accuracy by 1.8\% with 8 active dimensions, whereas MRL degrades it by 12.4\%. Therefore, to attain the same accuracy level, CSR models can utilize a **much smaller active dimension**. As a result, **even if the ``retrieval time per dim`` is similar** among sparse and dense MM, CSR can still attain significant speedup. And this part is the main reason for CSR's significant improvement in efficiency as we demonstrated in Figure 1(b). 
*Table 1: 1-NN performance comparison between MRL and CSR across various active dimensions*

| Active Dim | MRL | CSR |
|-|-|-|
| 2 | - | 66.17 |
| 4 | - | 69.97 |
| 8 | 62.19 | 73.84 |
| 16 | 67.91 | 74.39 |
| 32 | 69.46 | 74.53 |
| 2048 (Full rep with ResNet50) | 70.97 | 75.19 |

--- **Q3** Why is $\mathcal{T}$ needed? **A3.** The datasets used in our paper vary by orders of magnitude. For example, FiQA-2018 has 57,638 entries, while ImageNet-1k contains 1.3M entries in its database. Considering this, we give an ablation study on database size $N$ in Figure 3(c). Our study demonstrates that the efficiency advantage of sparse methods scales with database size (e.g., 1M, 10M entries), making them ideal for real-world applications. Thus, to simulate real-world scenarios (e.g., massive-scale databases as in RAG), we standardize all datasets to ImageNet-1k's scale (1.3M entries) and use a time-based counter $\mathcal{T}$ to eliminate the effect of $N$ across different datasets. --- **Q4** How is $\mathcal{T}$ measured? It sounds like the authors used CSR with $h$=16384, $k$=32, and then normalized all timings by this value? **A4.** Indeed, we calculated the average time $\mathcal{T}$ for performing 2000 matrix multiplications of two sparse matrices using ``torch.sparse.mm()``. Here is a more detailed breakdown of the evaluation protocol for the retrieval time: 1. We precompute the embeddings of all ImageNet training data and store them in standard CSR (compressed sparse row) format on GPU memory as the database for retrieval. 2. We compute the retrieval time as the average over 2,000 rounds of retrieval, after a warm-up period of 100 rounds. Each time, the query consists of 512 samples randomly drawn from the database. The warm-up procedure is common in measuring GPU computation time, as it eliminates initialization bias. --- **Q5** How does the base time metric affect the timings over the raw timings? 
**A5.** It does not affect raw timings, as the relative retrieval time is computed by $(\text{raw timing})/\mathcal{T}$, as shown in Figure 3 and Table 1. We also present the raw timing comparison for MRL in Figure 1(b) to demonstrate our improvements. We will add a more detailed explanation of the $\mathcal{T}$ calculation in Sec. 4 and E.3 to clarify our motivation. --- **Q6** Are timing improvements strictly a result of sparse matrix operations as opposed to some architecture changes over MRL? Is there anything the authors don't currently consider that may explain the speedup? **A6.** As detailed in A2, we would like to reiterate that the core reason CSR is more efficient than MRL lies in its ability to maintain high fidelity even with very small active dimensions. The primary advantage of CSR is not simply that sparse matrix multiplication outperforms dense computation (as shown in Figure 3), but rather that CSR can achieve comparable model accuracy with significantly fewer active dimensions. --- Thank you for your thoughtful questions. We hope the responses provided adequately address your concerns. Please don't hesitate to reach out if any further clarification is needed. --- Rebuttal Comment 1.1: Comment: Thanks for your response and clarifications, especially on the role of $\mathcal{T}$. I acknowledge that Figure 3a is not the main result for the efficiency gains. CSR achieves much better accuracy over MRL for the same number of active dimensions. However, the claim of Figure 3a is that extremely sparse but high-dimensional matrix multiplies are on average faster than dense but low-dimensional matrix multiplies. Again, it is very surprising to me that the speedups are >2x when the active dimension is 2. At the very least, I would expect that there is a crossing point between the MRL and CSR lines such that MRL is faster with respect to the experiment done for Figure 3a. 
By inspection, this might occur when $k=32$, as the growth rate of the normalized retrieval time for CSR is faster than MRL's. I think this is a good paper overall, and will therefore keep my score for now. But I would be happy to raise the score if there is a more definitive explanation of why being more sparse improves the timings. According to Figure 3a, when $k \geq 4$, higher sparsity leads to even better timings for a fixed active dimension. What I want to ask the authors is the following about Figure 3a: Is the discrepancy among the timings (between MRL and CSR, and then also the timing improvement of CSR as $h$ changes) purely a result of efficient sparse matrix operations in PyTorch, or is it a difference in MRL vs CSR, or a difference in the way the timings were recorded for each method? --- Reply to Comment 1.1.1: Comment: Thank you for your prompt response and clarifying your remaining concerns! We are happy to address them point by point below. --- **Q1.** I expect a crossing point between MRL and CSR performance curves in Figure 3a, potentially at k=32, as CSR's normalized retrieval time appears to increase more rapidly than MRL's. **A1.** Indeed, there is a crossover point around $k=32$, where dense retrieval begins to be slightly faster than sparse retrieval (e.g., 0.0019s vs 0.0022s when $k=32$ and 0.0029s vs 0.0036s when $k=64$) -- but overall, it is on the same scale. This speed difference is not related to the CSR or MRL methods, but stems from differences between dense and sparse matrix multiplication implementations on GPU. Since the two have the same complexity in theory ($O(mkn)$ for matrices of $m\times k$ and $k\times n$), this difference could be due to nuanced hardware (e.g., GPU) and software (e.g., CUDA) reasons (explained more in A3 below). Overall, we believe that it is still fair to say that dense and sparse retrieval have similar compute as long as $k$ is relatively small (e.g. 
$k\leq 64$), which is exactly the region in which one wants to use efficient embedding methods for fast retrieval -- and CSR could outperform MRL by very large accuracy margins in these regions, even beating MRL with much larger $k$. We will elaborate on this discussion in the revision. --- **Q2.** About Figure 3a: Is the discrepancy among the timings (between MRL and CSR, and then also the timing improvement of CSR as $h$ changes) purely a result of efficient sparse matrix operations in PyTorch, or is it a difference in MRL vs CSR, or a difference in the way the timings were recorded for each method? **A2**. Indeed, this discrepancy is merely a result of PyTorch's sparse/dense matrix operations, not of the MRL/CSR methods themselves. We evaluate it following exactly the same recording protocol for fair comparison. The main goal of this figure is exactly to get rid of these hardware nuances by benchmarking the runtime of sparse and dense operators under the same active dimension. The figure shows that the two have roughly the same time, which lets us focus on the complexity measure of ``active dimension`` when comparing MRL and CSR. We will further clarify this in the discussion of Figure 3a as well. Thank you for noting this. --- **Q3** I would be happy to raise the score if there is a more definitive explanation of why being more sparse improves the timings. **A3.** Thanks! As discussed above, this difference in timing is caused by the implementation of dense/sparse matrix operations in PyTorch and GPU. While a rigorous analysis of this issue is actually beyond the scope of our work and ICML, we have since looked more deeply into this problem, and as a result, we have a preliminary insight into this problem that might be helpful for understanding this difference. Recall that for sparse multiplication, only the **overlapped** non-zero elements contribute to the outcome. 
For example, for calculating the $ij$-th output, we only need to use the overlapped activations in sparse vectors $s_i,s_j$. If there is no overlap at all, we can even omit the computation. Examining overlap only requires **comparing indices (int)** in the sparse matrices, which is much faster than **multiplying float vectors**. In a very sparse matrix (e.g., if the dimension $h$ is very large), the overlap could be rarer, leading to even faster retrieval. This is a very nice property of CSR: it means that ***we can use a larger embedding dimension $h$ (more information) while achieving even faster retrieval at the same time***! In comparison, in MRL/dense embedding, a larger dimension always leads to slower retrieval. To verify this in practice, we benchmark the number of multiplications using both dense and sparse matrices in CSR format (with row-wise product [1]) under the same default setup.

| Active Dim | MRL | CSR (h=8192) | CSR (h=16384) | CSR (h=32768) |
|-|-|-|-|-|
| 2 | 1.3×10^9 | 3.2×10^5 | 1.7×10^5 | 8.4×10^4 |
| 4 | 2.6×10^9 | 1.3×10^6 | 6.7×10^5 | 3.4×10^5 |

We can see that the number of operations for the sparse methods can be several orders of magnitude smaller than that of the dense methods. Besides, a larger $h$ further reduces computation and leads to faster retrieval in practice, which verifies our analysis above. Besides, there could also be other nuanced factors. For example, in PyTorch, sparse and dense multiplications call different backends: dense ones use cuBLAS GEMM, which is highly optimized but heavyweight, while sparse ones use cuSPARSE, which has lower launch overhead. As this difference is more system-related, we leave a more comprehensive analysis for future work. We will add this discussion for a well-rounded understanding. [1] Gustavson. "Two fast algorithms for sparse matrices: Multiplication and permuted transposition." ACM Transactions on Mathematical Software, 1978 --- We hope the elaboration helps alleviate your concerns and please let us know if there is more to clarify!
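The overlap argument above can be illustrated with a small self-contained sketch (pure Python for illustration; this is not the authors' benchmark code, and dict-of-index vectors merely stand in for the CSR format). For two $k$-sparse vectors in dimension $h$, a sparse dot product only performs a float multiply at indices present in both supports, so the expected work per query shrinks as $h$ grows:

```python
import random

def sparse_dot(u, v):
    """Dot product of two sparse vectors stored as {index: value} dicts.
    Only indices present in BOTH supports cost a float multiply;
    everything else is cheap integer index comparison."""
    small, large = (u, v) if len(u) <= len(v) else (v, u)
    total, mults = 0.0, 0
    for i, x in small.items():
        if i in large:
            total += x * large[i]
            mults += 1  # one float multiply per overlapping index
    return total, mults

def random_k_sparse(h, k, rng):
    """A k-sparse vector in dimension h with a random support."""
    return {i: rng.random() for i in rng.sample(range(h), k)}

rng = random.Random(0)
k = 4  # active dimension of each embedding
for h in (8192, 16384, 32768):
    pairs = [(random_k_sparse(h, k, rng), random_k_sparse(h, k, rng))
             for _ in range(200)]
    avg_mults = sum(sparse_dot(u, v)[1] for u, v in pairs) / len(pairs)
    # A dense dot product in dimension h always costs h multiplies;
    # a k-sparse one costs at most k, and on average about k*k/h,
    # so a LARGER h means LESS float work per query.
    print(f"h={h:6d}  avg float multiplies per query: {avg_mults:.4f}")
```

With random supports the expected overlap per pair is roughly $k^2/h$, so raising $h$ from 8192 to 32768 cuts the average multiply count by about 4x, mirroring the trend in the multiplication-count table above.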
Summary: In this paper, the authors propose Contrastive Sparse Representation (CSR) as an alternative to Matryoshka Representation Learning (MRL) for adaptive embeddings. MRL requires retraining models and suffers from performance drops at shorter embedding lengths, while CSR achieves adaptive representation through sparse coding, preserving high-dimensional semantic quality. CSR combines reconstruction-based sparse autoencoding with a contrastive loss to maintain accuracy and retrieval efficiency at various sparsity levels. Experiments on image, text, and multimodal benchmarks show that CSR outperforms MRL in accuracy and retrieval speed while requiring significantly less training time (up to 69× faster). CSR maintains the semantic integrity of the original embeddings and achieves strong generalization across downstream tasks with fewer computational resources, making it a more efficient and scalable approach to adaptive representation learning. Claims And Evidence: CSR is shown to achieve higher performance and speed than MRL, supported by extensive experiments on ImageNet, MTEB, and MS COCO. The reduction in training costs is also well-supported, with experimental results showing that CSR requires significantly less training time than MRL, achieving up to a 69× speedup on ImageNet1k tasks. Furthermore, the paper provides evidence that CSR preserves semantic quality while improving efficiency by using a reconstruction-based sparse coding approach with contrastive loss, as shown through consistent accuracy at different sparsity levels. The generalization claim across modalities is overstated, since the multimodal experiments are limited to MS COCO and Flickr30K, which may not represent broader multimodal challenges. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are clearly defined and appropriate for the problem.
The authors present a detailed explanation of the CSR framework and evaluate it on a range of benchmarks (ImageNet, MTEB, MS COCO) using relevant metrics like retrieval accuracy and inference time. The comparison with MRL and other baselines is fair and well-structured. However, I believe the authors can do a better job of highlighting the novelty of the work. A lot of material in Section 3.2, especially Section 3.2.2, would fit better in the Preliminaries or Related Work section. This would help keep the focus on the core contributions of the paper, preventing the audience from being distracted by too much technical detail before understanding the main idea. Theoretical Claims: N/A - there are no theoretical claims. Experimental Designs Or Analyses: Yes - the experimental designs and analyses are sound and well-constructed. The authors evaluate CSR across multiple benchmarks (ImageNet, MTEB, and MS COCO) and provide thorough comparisons with MRL and other baselines using consistent and relevant metrics like retrieval accuracy, inference time, and training costs. The analysis includes ablation studies, scaling experiments, and tests on different sparsity levels. The inclusion of both vision and text tasks, along with multimodal settings, adds further credibility to the evaluation. Supplementary Material: Yes, all of them. Relation To Broader Scientific Literature: The paper builds on prior work in adaptive representation learning, particularly MRL, but proposes a more efficient sparse coding approach using reconstruction and contrastive learning. It also draws from earlier work on sparse autoencoders and contrastive learning to improve retrieval accuracy and efficiency while reducing training costs. Essential References Not Discussed: No, the paper discusses the key related works thoroughly.
Other Strengths And Weaknesses: The paper does not explore how CSR handles complex multimodal tasks beyond simple retrieval tasks (e.g., cross-modal generation or reasoning). The scalability of CSR on extremely large datasets (beyond ImageNet or MS COCO scale) is not tested, which limits its generalizability to real-world large-scale applications. The impact of different backbone architectures (e.g., transformer vs. convolutional models) on CSR’s performance is not fully examined. Other Comments Or Suggestions: Providing a brief discussion on potential limitations or failure cases of CSR would further strengthen the paper. Questions For Authors: Can you clarify how CSR performs under extreme sparsity constraints (e.g., TopK = 4 or 8)? While the paper shows strong performance at moderate sparsity levels, it would be helpful to see a more detailed analysis of how CSR handles extreme sparsity, especially in comparison to MRL. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your careful reading and critical review. Following your suggestions, we have added more discussion on the ability to handle complex multimodal tasks and on the scalability of CSR. We further address each of your concerns below and hope you find the responses satisfactory. --- **Q1** Move extensive technical details (especially Section 3.2.2) to the Preliminaries or Related Work section to highlight the paper's core innovations. **A1.** We sincerely appreciate this suggestion. As you suggested, we will move the background part of Sec 3.2.2 to Preliminaries for better readability. In this way, Sec 3.2 will be mostly devoted to the design of CSR and highlight the novelty of our design. --- **Q2** CSR's ability to handle complex multimodal tasks (e.g., cross-modal generation). **A2.** In this work, our primary goal is to develop a more efficient approach to representation learning. To evaluate the quality of learned representations, we follow the standard evaluation protocol in the efficient representation learning literature, such as MRL [1], which focuses mainly on well-known image (e.g., ImageNet) and text embedding (MTEB [2]) benchmarks. We further evaluate CSR on multimodal embeddings as well in Table 2 (in the main text) and *Table 1* below (zero-shot retrieval performance) and follow common evaluation protocols of CLIP embeddings (e.g., CLIP\_benchmark [3]) with standard metrics like 1-NN accuracy, NDCG@10, and Recall@5. Although CLIP embeddings can be used for multiple downstream tasks (including generation), our focus here is not to explore these alternatives, and thus we follow the standard protocol of CLIP evaluation for our experiments. Please let us know if you need any further clarification. Ref: [1] Kusupati, Aditya, et al. "Matryoshka representation learning." NeurIPS, 2022. [2] Muennighoff, Niklas, et al. "MTEB: Massive text embedding benchmark." arXiv preprint arXiv:2210.07316 (2022).
[3] https://github.com/LAION-AI/CLIP_benchmark --- **Q3** The scalability of CSR on extremely large datasets (beyond ImageNet or MS COCO scale) is not tested, which limits its generalizability to real-world large-scale applications. **A3.** Our evaluation on ImageNet-1k follows the standard setup in MRL, which likewise did not report results far beyond this scale. Indeed, **within the constraints of our academic compute**, ImageNet-1k is already a large-scale dataset for us, and it is not feasible for us to experiment on industry-scale datasets like JFT-300M. While MRL only evaluates image and text scenarios, we are the first to include an evaluation on multimodal embeddings as well, showing gains consistent with the other domains. Therefore, we believe that our results (marked as ``comprehensive`` by Reviewers 11DH and cpsE) do support the generality of our approach. Following your suggestion, we evaluated CSR against MRL on the larger CC3M dataset (3M images, compared to ImageNet's 1M and MS COCO's 0.3M). Results in *Table 1* (zero-shot retrieval on MS COCO) demonstrate CSR's consistent superiority across various active dimensions, confirming its scalability. *Table 1: Zero-Shot Retrieval Performance on MS COCO* |Model|Active Dim|I2T@5|T2I@5| |-|-|-|-| |ViT-B/16(Pre-trained)|512|69.23|83.03| |+MRL|256|54.46|61.06| |+CSR|256|**57.75**|**70.34**| |+MRL|128|48.96|55.86| |+CSR|128|**49.97**|**63.12**| |+MRL|64|38.71|45.72| |+CSR|64|**40.19**|**52.39**| --- **Q4** The impact of different backbone architectures (e.g., transformer vs. convolutional models) on CSR’s performance is not fully examined. **A4.** In fact, we have included results for CSR under both transformer (ViT) and convolutional (ResNet-50) backbones in Figures 4 and 5, where CSR behaves quite similarly under different backbone architectures, indicating that CSR is rather general and agnostic to the underlying backbone architecture.
--- **Q5** Providing a brief discussion on potential limitations or failure cases of CSR. **A5.** CSR faces dead-latent issues under extreme sparsity, particularly in multimodal settings (Sec. D.4). Based on our extensive experiments in other domains and our ablation studies, we identify this as a technical challenge requiring solutions such as alternative loss designs or deeper architectures. --- **Q6** Can you clarify how CSR performs under extreme sparsity constraints (e.g., TopK = 4 or 8)? While the paper shows strong performance at moderate sparsity levels, it would be helpful to see a more detailed analysis of how CSR handles extreme sparsity. **A6.** Good question! Our additional experiments (*Table 2* below) demonstrate CSR's robust performance even under extreme sparsity constraints (TopK = 2 or 4), consistently outperforming MRL. *Table 2: 1-NN results on ImageNet 1k* |Active Dim|2|4|8|16| |-|-|-|-|-| |CSR|66.17|69.97|73.84|74.39| |MRL|-|-|62.19|67.91| Thank you for your constructive feedback. We hope our responses have addressed your concerns, and we welcome any further questions or discussion.
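For readers unfamiliar with the TopK sparsification behind these active-dimension constraints, a minimal sketch of the mechanism (illustrative dimensions and random weights, not the authors' implementation) is:

```python
import numpy as np

rng = np.random.default_rng(0)

class TopKEncoder:
    """Sketch of a TopK sparse encoder; dimensions are illustrative only."""
    def __init__(self, d_in=2048, h=8192, k=8):
        self.W_enc = rng.standard_normal((d_in, h)) * 0.02
        self.W_dec = rng.standard_normal((h, d_in)) * 0.02
        self.k = k

    def encode(self, x):
        z = np.maximum(x @ self.W_enc, 0.0)  # non-negative pre-activations
        # zero out everything except the k largest activations per row
        idx = np.argpartition(z, -self.k, axis=-1)[:, -self.k:]
        z_sparse = np.zeros_like(z)
        np.put_along_axis(z_sparse, idx,
                          np.take_along_axis(z, idx, axis=-1), axis=-1)
        return z_sparse

    def decode(self, z):
        return z @ self.W_dec  # reconstruction from the sparse code

model = TopKEncoder()
x = rng.standard_normal((4, 2048))
z = model.encode(x)
x_hat = model.decode(z)
print((z != 0).sum(axis=-1))  # at most k active dimensions per sample
```

Lowering `k` (to 2 or 4, as in Table 2 above) directly trades the number of active dimensions, and hence retrieval cost, against accuracy.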
Summary: The paper presents Contrastive Sparse Representation (CSR) as a novel approach to adaptive representation learning, addressing the limitations of Matryoshka Representation Learning (MRL), which requires extensive retraining and suffers from performance degradation at shorter representation lengths. Claims And Evidence: Claim 1: CSR Outperforms MRL in Accuracy and Speed Evidence: Compared to the MRL method, CSR achieves average performance gains of 4.6% and 6.8% on image-to-text retrieval, and 9.1% and 6.5% on text-to-image retrieval across the two datasets. This indicates that CSR provides higher accuracy while maintaining efficient retrieval times. Claim 2: CSR Reduces Training Time Significantly Evidence: The training time for CSR is reported to be a fraction of that required by MRL. For instance, CSR can be trained on ImageNet in about half an hour with a single GPU, compared to the extensive retraining needed for MRL. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are well-aligned with the problem of adaptive representation learning. CSR effectively addresses the limitations of previous approaches, and the chosen metrics provide a comprehensive assessment of its performance across multiple dimensions. This makes the methodology robust and applicable to real-world scenarios, thereby contributing valuable insights to the field. Theoretical Claims: Several theoretical claims are made regarding the effectiveness and efficiency of Contrastive Sparse Representation Learning (CSR). However, the paper does not provide formal proofs for these claims. Experimental Designs Or Analyses: The experimental designs and analyses in the paper are largely valid and appropriate for evaluating CSR. However, improvements could be made in terms of documentation for reproducibility, broader comparisons with other methods, and the inclusion of statistical significance testing. Supplementary Material: Yes. 
The appendix at the bottom of the paper has been reviewed. Relation To Broader Scientific Literature: no Essential References Not Discussed: no Other Strengths And Weaknesses: no Other Comments Or Suggestions: no Questions For Authors: no Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank you for your thoughtful assessment of our paper. We appreciate the recognition of our work's contributions, particularly the notes that CSR ``Outperforms MRL in Accuracy and Speed`` and ``Reduces Training Time Significantly``. Meanwhile, thank you for acknowledging that our ``methods and evaluation criteria are well-aligned with the problem of adaptive representation learning`` and provide a ``comprehensive assessment of its performance across multiple dimensions``. Next, we address your comments on the aspects where this work can be further improved. --- **Q1** Documentation for reproducibility. **A1.** Experimental details with key hyperparameter settings are provided in Appendix B.3, C.4, and D.3. We have included a preliminary [code release](https://anonymous.4open.science/r/ICML_rebuttal-78D1/) and will complete the resources available for reproducibility. --- **Q2**. Broader comparisons with other methods. **A2.** Thanks for your suggestions! We acknowledge that several other methods (e.g., pruning, quantization, and distillation) also enable acceleration. However, it is important to clarify that while these methods primarily focus on accelerating the backbone model and embedding generation, CSR distinctively focuses on optimizing the post-processing phase, specifically the transition from embedding to retrieval. This distinction positions CSR as fundamentally **orthogonal** to existing acceleration approaches. To illustrate, we combine CSR with Int8 quantization, as demonstrated in *Table 1* below. This combination achieves additional acceleration beyond quantization alone while incurring only minimal degradation in the compressed model’s performance. The effectiveness of CSR arises from its capability to maintain high performance even when employing significantly reduced active dimensions (e.g., 8).
Compared to alternative approaches, applying CSR to the original model consistently results in the highest performance retention. We will incorporate this detailed discussion into the revised manuscript. *Table 1: 1-NN Acc Comparison on Different Methods* | Method | Active Dim | Vanilla | Int8 Quant | Retrieval Time | |-----------|------------|---------|------------|----------------| | Resnet50 | 2048 | 75.19 | 73.48 | 5.17 | | +CSR | 8 | 73.84 | 72.32 | 0.28 | | Resnet50 | 2048 | 70.97 | 69.11 | 5.16 | | +MRL | 8 | 62.19 | 59.38 | 0.42 | --- **Q3** Statistical significance testing. **A3.** Thank you for your suggestion! Due to time constraints, we first report the standard deviation results in *Table 2* below for the comparison under 8 active dimensions, calculated over 5 independent runs. We can see that the stdev is only around 0.5, while the gap between MRL and CSR can be as large as 15.9 (for MRL, we adopted the original paper's results, which did not report stdev). Consequently, our method consistently outperforms MRL by a significant margin. *Table 2: CSR statistical results on ImageNet1k* | Methods | Active Dim | 1-NN Acc | | ------- | ---------- | ------------ | | MRL | 8 | 62.19 | | MRL-E | 8 | 57.45 | | CSR | 8 | 73.39 ± 0.51 |
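Given the numbers in Table 2 above, even an assumed one-sample t-test (treating MRL's 62.19 as a fixed reference, since no stdev was reported for it) makes the significance apparent:

```python
import math

# CSR: mean 73.39, stdev 0.51 over n = 5 runs (Table 2 above);
# MRL's point estimate is 62.19 with no reported stdev.
mean_csr, std_csr, n = 73.39, 0.51, 5
mrl_ref = 62.19

t_stat = (mean_csr - mrl_ref) / (std_csr / math.sqrt(n))
print(f"t = {t_stat:.1f} with {n - 1} degrees of freedom")
# the two-sided critical value at p = 0.01 with df = 4 is about 4.60,
# so the gap is significant by a very wide margin
assert t_stat > 4.60
```

This is only a rough check under the assumption that the five runs are independent; a paired test over matched seeds would be the stronger protocol if MRL run-level results were available.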
Summary: The authors propose a method for converting pretrained dense embedding vectors into sparse embedding vectors and show that it often outperforms standard approaches such as Matryoshka Representation Learning (MRL) in terms of accuracy, training time, and retrieval speed. Their CSR method is inspired by Sparse Autoencoders: it projects fixed embeddings into a higher-dimensional space, activating only the TopK dimensions for a compact representation. CSR uses an SAE-style reconstruction loss as well as a non-negative contrastive loss. They compare across various domains, architectures, and tasks. This is an exciting contribution to the field and has the potential to become the new standard for efficient vector representations. Claims And Evidence: The authors claim that Contrastive Sparse Representation (CSR) 1. Has higher accuracy than MRL when controlling for the number of active dimensions * Figure 7(a)-(b) on ImageNet 1-NN accuracy * Table 2 on MS COCO and Flickr30k (image to text and text to image) * Figure 7(c) on a subset of the MTEB retrieval dataset * Table 1 on a subset of data from MTEB 2. Has shorter training time than MRL * Figure 1(c) 3. Has faster retrieval time than MRL * Figure 1(b) * Table 1 on subset of data from MTEB Methods And Evaluation Criteria: The proposed methods and benchmark datasets make sense for the problem at hand. They do a very thorough comparison against Matryoshka Representation Learning (MRL) by evaluating across similar domains and benchmarks used in the original MRL paper. Theoretical Claims: The authors reference one theoretical proof for motivation from Wang et al. 2024, but do not rely on this for their experimental results. I did not check the correctness of this theorem. Experimental Designs Or Analyses: The authors have done a good job of benchmarking their approach across multiple modalities and benchmarks. They also benchmark against prior work (MRL by Kusupati et al.
2022) Supplementary Material: I have read the supplementary materials. Relation To Broader Scientific Literature: The results in this paper are related to the broader literature around efficient vector representations. The main target of their approach, MRL, is widely used by modern embedding models including OpenAI’s text-embedding-3-large. Their approach has the potential to become the new standard for efficient representations. Essential References Not Discussed: No Other Strengths And Weaknesses: This work is potentially quite significant for the field of retrieval, text embedding and image embeddings. Other Comments Or Suggestions: Typos: Figure 1 caption “Compared to MLR” should be “MRL” Figure 1 caption “we outperform MSR on 1-NN accuracy” should be “we outperform MRL”? Section C.3 “MTEB benchmar” should be “benchmark” Table 5 “NV-EmbeV2” should be “NV-EmbedV2” The embedding models in Table 1 all use MRL; while this is stated clearly in the supplementary material, it would be helpful to state this explicitly in the main text. My review is contingent on the authors releasing the code for CSR so that others can replicate their results. Questions For Authors: Figure 1 (c) how many active dims for the data points in this plot? Is this the same data as in Table 1? If so please state this explicitly. In Figure 7(c), are the results for active dimensions 32? If so, please state this explicitly in the caption. Currently the caption hints at this by stating “the results of CSR-32…”. In Figure 7(c), it would be helpful to see the full performance of each model (e.g. NV Embed, Nomic-v1.5 etc.) with the full vector representation. This would remind the reader that sparse representations with low active dimensions usually have lower accuracy than the full embedding. It would be helpful to briefly discuss MRL-E, SVD and Rand-LP in the main text (and not just in the supplementary materials), as these appear in multiple figures and tables. 
It would be helpful to elaborate on the Section E.3 with regards to retrieval time evaluation. It is slightly confusing that the method is called CSR (contrastive sparse representation), while the embeddings are stored in the CSR (compressed sparse row) format. I assume this was intentional - if so, it could be worth explicitly mentioning this by saying something like “the training method is called CSR, and the vector is stored in the standard CSR format…” Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and for appreciating the quality of our work. We address the concerns below: --- **Q1** Typos in Figure 1 caption, Section C.3, and Table 5. **A1.** Thank you for pointing these out! Following your great suggestions, we will fix them in the revised manuscript. --- **Q2** It would be helpful to state that the embedding models in Table 1 use MRL. **A2.** Thank you for your great suggestion! We will clarify this distinction in Table 1 in the revised manuscript. --- **Q3** Releasing the codebase. **A3.** We will certainly open-source the entire project upon acceptance. For now, we have released the [codebase](https://anonymous.4open.science/r/ICML_rebuttal-78D1/) for the ImageNet results as a reference implementation. --- **Q4** How many active dims in Figure 1(c)? **A4.** Here, we computed the average 1-NN accuracy across active dimensions 8 to 128 for each method to provide a holistic evaluation. We will add this explanation to the experiment setup in Appendix B for further clarification. --- **Q5** In Figure 7(c), are the results for active dimensions 32? **A5.** Thank you for pointing this out! Indeed, the results in Figure 7(c) are for 32 active dimensions. We will explicitly clarify this detail in the figure caption in the revised manuscript. --- **Q6** In Figure 7(c), it would be helpful to see the full performance of each model (e.g., NV Embed, Nomic-v1.5, etc.) with the full vector representation. **A6.** Thank you for pointing this out! We will include the full performance of each model for better reference. One major advantage of CSR compared to these MRL models is that it is a very lightweight method and can easily be built upon the latest SOTA embedding models, while the others have to rely on their own heavy pretraining, which leads to inferior full-representation performance as well.
Therefore, the advantages of CSR are essentially twofold: 1) we can use off-the-shelf SOTA embedding models for the best full-representation performance, and 2) we incur only minimal degradation when converting to efficient embeddings in a lightweight way. --- **Q7** Briefly discuss MRL-E, SVD and Rand-LP in the main text. **A7.** Thank you for this valuable suggestion! We will elaborate on their setups in the main text as well in the revision. --- **Q8** It would be helpful to elaborate on Section E.3 with regard to retrieval time evaluation. **A8.** Thanks for your valuable suggestion! Here is a more detailed breakdown of the evaluation protocol for the retrieval time: 1. We first precompute embeddings for the entire ImageNet training set, storing them in a standard CSR (compressed sparse row) format in GPU memory as the retrieval database. 2. We compute the retrieval time as the average over 2,000 retrieval rounds, following an initial warm-up period of 100 rounds. In each retrieval round, the query set consists of 512 samples randomly drawn from the database. The warm-up phase is standard practice when benchmarking GPU computations, as it effectively eliminates initialization bias. --- **Q9** It is slightly confusing that the method is called CSR (contrastive sparse representation), while the embeddings are stored in the CSR (compressed sparse row) format. I assume this was intentional - if so, it could be worth explicitly mentioning this by saying something like “the training method is called CSR, and the vector is stored in the standard CSR format…” **A9.** Indeed, we intentionally call our method CSR (contrastive sparse representation) to be the "ML version" of CSR, i.e., a way to learn a model that converts dense embeddings into highly sparse ones. And thanks for your suggestion -- we will add an explicit discussion of this terminology connection and distinction to make it clearer.
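As a rough illustration of the two-step protocol in A8 above, here is a scaled-down CPU sketch using SciPy's compressed-sparse-row format (sizes and round counts are assumptions, not the GPU benchmark itself):

```python
import time
import numpy as np
from scipy import sparse

n_db, h, k, n_q = 20000, 8192, 8, 512   # scaled-down hypothetical sizes
rng = np.random.default_rng(0)

# Step 1: precompute a TopK-sparse database in compressed sparse row format
rows = np.repeat(np.arange(n_db), k)
cols = rng.integers(0, h, size=n_db * k)
vals = rng.standard_normal(n_db * k).astype(np.float32)
db = sparse.csr_matrix((vals, (rows, cols)), shape=(n_db, h))

def one_round():
    # each round: 512 queries drawn from the database, scored against it
    queries = db[rng.integers(0, n_db, size=n_q)]
    return queries @ db.T  # sparse similarity scores, shape (n_q, n_db)

# Step 2: warm-up rounds first, then averaged timed rounds
for _ in range(10):       # warm-up (100 rounds in the actual protocol)
    one_round()
t0 = time.perf_counter()
rounds = 20               # 2,000 rounds in the actual protocol
for _ in range(rounds):
    scores = one_round()
avg_ms = (time.perf_counter() - t0) / rounds * 1e3
print(f"avg round: {avg_ms:.2f} ms, score matrix shape {scores.shape}")
```

The warm-up loop plays the same role as in the GPU protocol: it removes one-time initialization cost from the averaged timings.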
--- We are genuinely grateful for your thoughtful feedback, which has been instrumental in helping us refine the manuscript. Please do not hesitate to reach out if you have any additional comments or questions.
Summary: This paper focuses on the problem of creating adaptive representations from foundation models, focusing on contrastive sparse coding (CSR) as a novel method applied after pre-training to produce efficient representations for a range of downstream tasks. CSR is compared with Matryoshka Representation Learning (MRL), which creates dense representations at multiple scales by truncating dense feature vectors. In contrast to MRL, CSR creates sparse representations where a target number of dimensions activates per input. The paper demonstrates several advantages of CSR, including higher fidelity, faster retrieval, and lower training cost compared to MRL and other simple baselines. The experiments focus on image, text, and text-image tasks, showing consistent gains over MRL in all cases. Ablations analyze the design choices, including sparsity level, input and hidden dimensionality, and data scaling. The method seems practically useful as a simple post-training method with which one can reasonably trade off computational efficiency against accuracy in some relevant downstream tasks. Claims And Evidence: The paper does a good job of comparing CSR to popular adaptive representation baselines within the MRL family along with some simpler baselines, though it may be useful to also compare with or at least discuss other approaches that focus on pruning, quantization, and/or distillation. The method shows clear gains over MRL in most cases examined across the sparsity spectrum, both in terms of fidelity and runtime, though there is some loss in fidelity — the degree to which this matters is probably out of scope of this contribution, but is important to acknowledge. Methods And Evaluation Criteria: Methods and evaluation criteria appear comprehensive, and not designed to favor the CSR method from what I can tell. That being said, ablation studies could go deeper, for example to try to understand the limitations
in fidelity at lower sparsity levels of the sparse coding approach, and whether wider hyperparameter searches across design factors may help reduce approximation error further. Theoretical Claims: The paper did not introduce any new theoretical claims, though they did make use of Wang et al., Theorem 5. Given that the empirical evidence backed up the theoretical claim, I do not have specific concerns here. Of course, additional theory characterizing the limits of this CSR method would be welcome! Experimental Designs Or Analyses: No specific concerns here. It could be useful to examine whether CSR holds in other representation learning settings beyond 1-NN probes. What about linear probes and/or few-shot scenarios? Supplementary Material: Yes. All expected supporting material was included. Relation To Broader Scientific Literature: The paper relates to recent sparse autoencoder findings on interpretability that gained some interest in the past year, and may serve to interest researchers beyond that particular use. There’s also a potential connection to other recent works on sparsity, including sparse mixtures-of-experts. In addition, there is a wide literature on post-training methods including pruning, distillation, quantization, etc., that could be discussed given the practical importance to efficiency, especially on device! Essential References Not Discussed: N/A Other Strengths And Weaknesses: The application of modern sparse coding (with contrastive task regularization) to adaptive feature representations is not something I’ve seen, which could be of good practical relevance in efficiency research. That being said, there is not much in the way of theoretical or empirical insights from this paper. Other Comments Or Suggestions: Since Theorem 5 is taken from Wang et al. and is not a contribution of this paper, make sure to refer to it as such in the text anywhere it’s mentioned to minimize confusion.
Questions For Authors: From Figure 3, it’s not clear to me what makes TopK=16 a “sweet spot”. In terms of retrieval time, it is the highest, though it is indeed the case that benefits over MRL baseline are maintained. Can you explain why this is considered a sweet spot? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate your constructive comments and suggestions, which are helpful for us to improve the quality of our paper further. The concerns have been addressed as below: --- **Q1** May be useful to add a discussion on pruning, quantization, and distillation methods. **A1.** Indeed, CSR, pruning, quantization, and distillation all enable acceleration. However, while the other three methods accelerate the backbone and inference embedding generation, our approach (CSR) focuses on post-processing optimization from embedding to retrieval. Because of this, CSR is **orthogonal** to those other methods. For example, combining CSR with Int8 quantization, as shown in *Table 1* below, provides additional speed-up beyond quantization alone, with minimal loss in the compressed model’s performance. This benefit arises because CSR utilizes a significantly smaller number of active dimensions (e.g., 8) while maintaining overall effectiveness. Compared to other methods, applying CSR to the original approach results in the highest retention of performance. Thanks for bringing this up, and we will include this discussion in the revised version. *Table 1: 1-NN Acc Comparison on Different Methods* |Method| Active Dim | Vanilla | Int8 Quant | Retrieval Time | |-|-|-|-|-| |Resnet50| 2048|75.19|73.48|5.17| |+CSR|8|73.84|72.32|0.24| |Resnet50|2048|70.97|69.11|5.16| |+MRL|8|62.19|59.38|0.42| --- **Q2** Ablation study on fidelity at lower sparsity levels. **A2.** Please see A6. --- **Q3** Whether wider hyperparameter searches across design factors may help reduce approximation error? **A3.** Thank you for raising this point. CSR's primary influential parameters are the activation dimension $k$ and the hidden dimension $h$, both of which we analyze extensively in **Figure 5**. As for other standard training hyperparameters (learning rate, loss coefficients, etc.), Gao et al. 
[1] conducted extensive ablations in the context of SAEs, and we adopted their default configuration. We will include this clarification in the revised version. Ref: [1] Gao, Leo, et al. "Scaling and evaluating sparse autoencoders." arXiv preprint arXiv:2406.04093 (2024). --- **Q4** Further evaluation with linear probes and in few-shot scenarios. **A4.** Good question! Following your suggestions, we have conducted additional experiments evaluating CSR with linear probing (*Table 2*) and few-shot learning (*Table 3*). *Table 2* shows the Top-1 accuracy of CSR under linear probing on ImageNet1K, while *Table 3* reports 1000-way {3-, 5-, 7-}-shot average performance across three test sets on ImageNetV2. We utilize the same backbone architecture as MRL [2] in the few-shot scenario for a fair comparison. *Table 2* compares the linear probing performance of CSR and MRL across varying active dimensions, demonstrating that CSR exhibits minimal performance degradation relative to MRL. A similar trend appears in *Table 3*, where CSR shows stronger robustness in few-shot classification. These additional results underscore CSR’s consistently high performance across different downstream tasks beyond 1-NN. We hope this addresses your concerns, and we welcome any further discussion. *Table 2: Linear probing performance comparison between different methods.* |Methods|Active Dim|Top-1 Acc| |-|-|-| |ResNet50|2048|80.59| |+CSR|128|79.76| |+CSR|32|78.94| |+CSR|8|78.60| |ResNet50|2048|76.80| |+MRL|128|76.30| |+MRL|32|75.03| |+MRL|8|66.63| *Table 3: Few-shot performance comparison between different methods.* | Methods | Active Dim | 3-Shot | 5-Shot | 7-Shot | |-|-|-|-|-| |ResNet50|2048|0.57|0.62|0.65| |+MRL|8|0.52|0.55|0.57| |+CSR|8|0.56|0.61|0.63| Ref: [2] Kusupati, Aditya, et al. "Matryoshka representation learning." NeurIPS, 2022. --- **Q5** Reference to Wang et al.'s Theorem 5. **A5.** Thanks for the reminder. We will revise the statement and include a proper reference to Wang et al.
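As a side note on the linear-probing protocol referenced in A4 above: it follows the standard recipe of fitting a logistic-regression head on frozen features. A minimal sketch with synthetic stand-in features (not the authors' CSR embeddings or setup) looks like:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, k, n_classes = 600, 8, 3
y = rng.integers(0, n_classes, size=n)
# synthetic "frozen features": a class signal plus unit noise, k dims
X = rng.standard_normal((n, k)) + y[:, None]

# fit the probe on frozen features; only the linear head is trained
probe = LogisticRegression(max_iter=1000).fit(X[:500], y[:500])
acc = probe.score(X[500:], y[500:])
print(f"probe accuracy: {acc:.2f}")
```

For sparse CSR codes, the same probe would simply be fit on the TopK-sparse feature matrix instead of dense features.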
--- **Q6** What makes TopK=16 a “sweet spot”? **A6.** Thank you for highlighting this point. Here, we consider $k=16$ the “sweet spot” because it attains the optimal balance between accuracy and efficiency. As demonstrated in Figures 4 and 5, $k=16$ outperforms smaller $k$ values (e.g., 8) while maintaining higher efficiency than MRL (Figure 3(b)). *Table 4* further confirms that $k=16$ is a strong trade-off point. Nevertheless, using a smaller $k$ remains an option for those who prioritize speed over performance. *Table 4: Performance vs Efficiency under Different Sparsity* |Top K|2|4|8|16|32| |-|-|-|-|-|-| |1-NN Acc|66.17|69.97|73.84|74.39|74.53| |Relative Retrieval Time|0.2|0.2|0.3|0.5|1.4| --- We are sincerely grateful for your valuable feedback, which has been instrumental in refining our manuscript. Should you have any additional comments or questions, please do not hesitate to contact us. --- Rebuttal Comment 1.1: Comment: Thanks for addressing my questions. I like the additional experiments on few-shot and linear probing, and some of the points for discussion. Therefore I raise my score and believe this to be an interesting approach for the community to know about. One point I want to clarify though: I’m confused by Table 1 because there are two different rows for Resnet50 with 2048 dimensions with different accuracy and runtime numbers. I’m not sure why one is used to benchmark with vs without CSR and the other is used to benchmark MRL. Can you clarify? --- Reply to Comment 1.1.1: Comment: Thank you very much for your thoughtful feedback and for improving your score. We’re glad to hear that you found the updated results satisfactory. Below, we address your remaining question: --- **Q1** I’m confused by Table 1 because there are two different rows for Resnet50 with 2048 dimensions with different accuracy and runtime numbers. I’m not sure why one is used to benchmark with vs without CSR and the other is used to benchmark MRL.
**A1.** Thank you for pointing this out! As illustrated in Figure 2, CSR can be added **on top of any SOTA backbones**, whereas MRL requires training the backbone **from scratch** in order to learn adaptive representations. Therefore, in *Table 1* of the rebuttal, the two ResNet-50 entries correspond to these two paradigms. Specifically:
- The first row uses a ResNet-50 backbone pre-trained via the timm library [1], which serves as the fixed SOTA baseline for evaluating CSR.
- The second row uses the ResNet-50 backbone from the original MRL paper [2], which was trained from scratch. We directly adopted their released weights for this comparison. More implementation details can be found in Sec B.3.

To further clarify, we also include an additional experiment in *Table 5* below, where both CSR and MRL are applied on **the same backbone weights**. Even in this matched setting, CSR outperforms MRL by **a notable margin (+5.59)**. Given that CSR also requires significantly less training time (see Figure 1(c)), we believe this highlights CSR’s effectiveness and efficiency in learning adaptive representations. Thanks for your valuable comments! We will include these additional results in the revision.

*Table 5: 1-NN results of different methods with the same backbone weights*

|Method|Active Dim| Vanilla | Int8 Quant | Retrieval Time |
|-|-|-|-|-|
|Resnet50|2048|70.97|69.11|5.16|
|+CSR|8|**67.78**|**65.44**|0.28|
|+MRL|8|62.19|59.38|0.42|

Ref: [1] https://huggingface.co/timm/resnet50d.ra4_e3600_r224_in1k [2] Kusupati, Aditya, et al. "Matryoshka representation learning." NeurIPS 2022.

---

**Q2** Runtime difference between the two ResNet50 rows.

**A2.** Thank you for taking the time to carefully review our rebuttal! In *Table 1*, we report **relative retrieval time**, consistent with the main paper. For additional clarity, we also provide the **raw absolute timing values** here: 0.014476 vs. 0.014448.
The difference only appears at the fifth decimal place, which we attribute to **GPU-level randomness** rather than any difference in evaluation methods. Overall, both methods operate at the same scale. Importantly, CSR with an active dimension of 8 can **significantly improve efficiency on top of this baseline**, while maintaining strong performance. We appreciate you pointing this out, and we will include the raw timing values for all methods in the revision to improve clarity. --- We hope the above explanations address your concerns. Please don’t hesitate to let us know if any further clarification would be helpful.
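As background for the TopK sparsity discussed in Q6 above: the "Active Dim" in the tables is the number of nonzero code entries kept per embedding, obtained by retaining only the $k$ largest activations. A minimal NumPy sketch of that operation (our own illustration, not the authors' implementation):

```python
import numpy as np

def topk_sparsify(z, k):
    """Keep the k largest activations of each row of z; zero out the rest."""
    z = np.asarray(z, dtype=float)
    out = np.zeros_like(z)
    # indices of the k largest entries per row (unordered within the top-k)
    idx = np.argpartition(z, -k, axis=-1)[..., -k:]
    np.put_along_axis(out, idx, np.take_along_axis(z, idx, axis=-1), axis=-1)
    return out

codes = np.array([[0.1, 0.9, 0.3, 0.7, 0.0, 0.5]])
sparse = topk_sparsify(codes, k=3)  # only the 3 largest activations survive
```

With $k=16$, only 16 dimensions are stored and compared per embedding, which is the source of the retrieval-time savings shown in Table 4.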
SGD Jittering: A Training Strategy for Robust and Accurate Model-Based Architectures
Accept (poster)
Summary: The paper introduces SGD Jittering, a training method for model-based architectures (MBAs) solving inverse problems. By adding small, random noise to gradient updates during training, SGD Jittering improves robustness and generalization accuracy without modifying input data or increasing computational cost like adversarial training (AT). Theoretical analysis proves its advantages over standard mean squared error (MSE) training. Experiments on denoising, seismic deconvolution, and MRI reconstruction show superior performance.

Claims And Evidence: The claims are supported by clear evidence and proofs.

Methods And Evaluation Criteria: The method and evaluations make sense, but the method is similar to previous works like SGLD. Therefore, the novelty is limited.

Theoretical Claims: I checked some proofs but not all proofs.

Experimental Designs Or Analyses: Experiments are OK but could be improved by adding more inverse tasks like super-resolution.

Supplementary Material: No supplementary material.

Relation To Broader Scientific Literature: The key contributions are similar to stochastic gradient Langevin dynamics.

Essential References Not Discussed: The key contribution is similar to stochastic gradient Langevin dynamics (SGLD), but the comparison between the proposed method and SGLD is not discussed.

Other Strengths And Weaknesses: Strengths: 1. Provides analysis linking noise injection to implicit regularization of gradient/Hessian smoothness, advancing understanding of robustness in iterative inverse solvers. 2. Demonstrates effectiveness on real-world tasks (seismic deconvolution, MRI) where robustness and generalization are critical. The extension to proximal gradient (SPGD Jittering) highlights broader applicability. 3. Well-structured presentation of theory, experiments, and ablation studies. Weaknesses: 1. SGD jittering is similar to stochastic gradient Langevin dynamics, which makes the method less novel.
In addition, the theoretical or empirical differences are not discussed. 2. Theoretical guarantees are limited to denoising tasks; broader analysis for general inverse problems (e.g., ill-posedness, nonlinearity) is not provided. 3. Experiments focus on a few inverse problems. Testing on more diverse tasks (e.g., super-resolution) could better validate generality. 4. While outperforming MSE and adversarial training, comparisons to other robustness methods (e.g., gradient penalty, Lipschitz regularization) are missing.

Other Comments Or Suggestions: The section structure should be reorganized; e.g., Sec. 3 only has one paragraph.

Questions For Authors: Please see the weaknesses above.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful comments and helpful questions. Please find detailed responses below. > Comparing to SGLD We thank R-x3Tj for mentioning SGLD, but we clarify that our SGD jittering is fundamentally different from SGLD in both goal and mechanisms. SGLD adds noise directly to network parameter updates to approximate sampling from the Bayesian posterior. It aims to capture model uncertainty, not necessarily to improve robustness or generalization. This process requires careful scheduling of the noise and often results in slower training. In contrast, SGD jittering injects noise into the hidden inputs (intermediate activations) during training—not into the parameters—with the explicit goal of enhancing robustness and generalization in IPs. Importantly, no noise is used at inference time, so reconstructions remain deterministic. Moreover, our method integrates naturally with MBAs and does not require any modification to the optimizer. > Theoretical guarantees are limited to denoising tasks; provide broader analysis for general IPs We agree that extending the theoretical guarantees to more general IPs, including nonlinear and ill-posed settings with data/model mismatch, remains a rich direction for future work. We see our current analysis as an important first step toward building a theoretical foundation for robust training schemes in MBAs, and we hope it will inspire further progress in this area. > Testing on more diverse tasks (e.g., super-resolution) could better validate generality In addition to the main experiments, we also evaluated our method on a natural image deblurring task using the CelebA dataset. Due to space constraints, these results were not included in the main text. Below, we provide test performance on in-distribution (CelebA), out-of-distribution (FairFace), and adversarial settings. Table 2 shows improved ID/OOD accuracies and robustness over MSE training, with robustness nearly matching AT. 
We will include these results and discussion in the revised manuscript to further support the generality of our approach.

| PSNR/SSIM | ID-CelebA | Adv. Attack | OOD-FairFace |
|----------------------|-------------------|-------------------|-------------------|
| MSE training | <34.14 / 0.954> | 29.81 / 0.812 | <32.94 / 0.940> |
| AT | 32.10 / 0.928 | **31.83 / 0.902** | 31.26 / 0.918 |
| Input Jittering | 34.06 / 0.942 | 31.28 / 0.857 | 31.04 / 0.912 |
| SGD jittering (Ours) | **35.12 / 0.960** | <31.46 / 0.884> | **33.23 / 0.945** |

Table 2: Image deblurring. Best performances in **bold**, second best in <...>.

> Other robustness methods

We thank the reviewer for the suggestion. Prior work [1] compared MBAs trained with AT to end-to-end randomized smoothing (RS) for MRI reconstruction, and showed that AT improves robustness significantly more than RS. This supports our choice of **AT as a strong baseline for evaluating robustness in IPs**. As suggested, we evaluated our method against a widely used robustness technique, Lipschitz regularization via spectral normalization (SN). SN constrains the Lipschitz constant of the network by bounding the spectral norm (i.e., the largest singular value) of each weight matrix, which helps stabilize training and improve robustness. As shown in the Table 3 results for 4x MRI reconstruction, SN promotes adversarial robustness, but at the cost of somewhat reduced accuracy. This is likely due to the restrictive nature of SN, which limits the model’s expressive power.

| PSNR/SSIM | ID | Adv. Attack | OOD |
|:--------------------:|:-------------:|:-------------:|:-------------:|
| MSE training | <28.21 / 0.603> | 25.68 / 0.382 | 29.92 / <0.779> |
| AT | 27.68 / 0.564 | **27.17** / <0.549> | 27.74 / 0.597 |
| Input Jittering | 28.18 / 0.595 | 25.05 / 0.420 | <29.97> / 0.740 |
| SGD jittering (Ours) | **28.22 / 0.607** | 26.77 / **0.552** | **30.36 / 0.788** |
| SN (new baseline) | 27.71 / 0.576 | <27.09> / 0.542 | 27.92 / 0.594 |

Table 3: MRI reconstruction. Best performances in **bold**, second best in <...>.

[1] Alkhouri et al. Robust physics-based deep MRI reconstruction via diffusion purification

We sincerely appreciate the reviewer’s constructive feedback. We believe the added experiments and clarifications will strengthen the manuscript and hope they satisfactorily address your concerns.
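To make the SN baseline above concrete: spectral normalization divides each weight matrix by an estimate of its largest singular value, typically computed by power iteration. The sketch below is our own minimal illustration, not the code used in the experiments:

```python
import numpy as np

def spectral_norm(W, n_iters=50, seed=0):
    """Estimate the largest singular value of W via power iteration."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(W.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(n_iters):
        u = W @ v
        u /= np.linalg.norm(u)
        v = W.T @ u
        v /= np.linalg.norm(v)
    return float(u @ (W @ v))

# Dividing by the estimate bounds the layer's Lipschitz constant by ~1.
W = np.array([[3.0, 0.0], [0.0, 0.5]])
W_sn = W / spectral_norm(W)
```

In practice one power-iteration step per training update is usually enough, since the weights change slowly between updates.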
Summary: The paper introduces "SGD Jittering," a new training strategy designed to enhance the robustness and generalization of Model-Based Architectures (MBAs) for image inverse problems. Specifically, the authors propose to inject random zero-mean Gaussian noises into gradient updates at each iteration within deep unrolling networks during training. Theoretically, they demonstrate that this simple noise injection improves average-case robustness and generalization accuracy compared to standard mean-squared-error and adversarial training, respectively. Empirically, they validate their method across several inverse problem tasks, including a toy denoising example, seismic deconvolution, and single-coil MRI reconstruction. Claims And Evidence: Yes. The paper provides both empirical and theoretical results on non-convex deep neural networks, systematically comparing SGD Jittering with standard MSE training and worst-case adversarial attacks. Additionally, to the best of the reviewer's knowledge, this work is the first to offer a theoretical analysis of generalization accuracy in inverse problems, particularly in the presence of small perturbations in test data. Methods And Evaluation Criteria: Yes, particularly in the empirical evaluation, where the model is trained on the fastMRI dataset and tested on a different knee dataset from Bickle & Jin (2021). Note that the test set contains giant tumor cells absent from the training data, ensuring a robust out-of-distribution (OOD) evaluation. Theoretical Claims: The assumptions used for the theorems align well with existing literature. The training convergence analysis under the SGD optimizer is a direct extension of Garrigos & Gower (2023). Additionally, I do not see major flaws in Theorems 7.2 and 7.4. Experimental Designs Or Analyses: Yes, the empirical design is methodologically sound and provides informative insights. 
Supplementary Material: Yes, the analysis of the theoretical conclusions and additional implementation details.

Relation To Broader Scientific Literature: Prior work (Fawzi et al., 2016) showed that small perturbations in data space can significantly degrade performance, particularly for deep networks trained on ill-posed inverse problems. This paper’s approach of introducing SGD perturbations at the optimization level offers a new perspective.

Essential References Not Discussed: N/A

Other Strengths And Weaknesses:

Strengths
1, This paper presents an interesting and easily implementable training strategy to improve robustness and generalization in model-based deep learning.
2, The paper is well-written and easy to follow, with clear mathematical formulations and consistent notations.
3, The theoretical analysis effectively supports the intuition behind the proposed method, explicitly linking SGD jittering to improved robustness and generalization.

Weaknesses
1, The main theorems, 7.4 and 7.5, are primarily derived for image denoising, leaving their applicability to more general inverse problems (e.g., MRI or seismic reconstruction) unclear. Extending the theory may require additional assumptions on the forward model.
2, The results on proximal-based MBAs appear to be an ad-hoc empirical extension, rather than a direct consequence of the theoretical analysis. It is unclear whether the current theoretical framework applies directly or if additional assumptions are needed for deep neural network priors.
3, The paper lacks a more comprehensive review of generalization and robustness in inverse problems. While Definition 3.2 introduces a generalization risk formulation, it does not fully account for forward model mismatches between training and inference, which are critical in practical applications.

Other Comments Or Suggestions: N/A

Questions For Authors:
1, Can the authors clarify what regularization functionals are used for constructing $r_\theta$?
2, Can a similar approach and theoretical analysis be applied to deep priors without explicit potentials, such as those based on regularization-by-denoising (RED) frameworks?
3, Can the authors further clarify why the chosen definition of generalization risk (Eq. (3)) is suitable for general inverse problems? Would other definitions (such as those involving a distributional shift of the forward operator $A$ or the noise $z$) be potentially more relevant?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: We thank the reviewer for their thoughtful comments and helpful questions. Please find detailed responses below.

> Generality of Theoretical Results

We agree that Theorems 7.4 and 7.5 were established specifically in the denoising setting. Extending the theoretical analysis to more complex inverse problems would require additional assumptions and a broader treatment of the forward model. We will explicitly acknowledge this limitation in the revised manuscript and outline it as a promising direction for future work.

> Extension to Proximal-Based MBAs

The analysis of robustness and generalization for SPGD is not a direct extension of the current SGD theoretical results. While GD-LU and PGD-LU differ structurally—based on whether the consistency update and neural network modules (which learn $\nabla r$ in GD-LU or act as a proximal operator in PGD-LU) are applied in parallel or sequentially—both architectures allow for noise injection into hidden states during training. This perturbs intermediate representations, thus promoting robustness. While our theoretical analysis focuses on SGD, prior work has established convergence for both SGD and SPGD in related settings. These results ensure that the reconstructed outputs remain consistent with the forward model $A$, thus preserving reconstruction accuracy. Motivated by this, we extend our approach empirically to the proximal setting. Our experiments confirm that jittering improves both robustness and generalization in proximal settings.

> Generalization Risk Definition Eq.3

Our current definition aims to capture small shifts in test data, but assumes the same, known forward model is used at inference. We agree that explicitly addressing forward-model mismatches between training and inference would be an important direction. This work serves as an initial step toward addressing generalization and robustness issues arising solely from data shift.
> Clarification of Regularization Functional $r$

In model-based architectures, the regularization function $r$ is implicitly defined through its gradient, which is learned by a neural network $NN_\theta = \nabla r$. Thus, we do not explicitly specify the functional form of $r$. Please refer to L:117 under Eq.4 for a detailed discussion.

> Clarifying Connections to RED frameworks

Their motivation and implementation are fundamentally different. MBA is a supervised learning strategy whose training is formulated as a bilevel optimization problem (Eq.5-Eq.8), where the network is trained end-to-end and allows for noise injection during training. In contrast, RED uses a pre-trained denoiser as an explicit regularization function during inference, without end-to-end training. The denoiser in RED is fixed and not learned as part of the reconstruction process, so SGD jittering is not applicable to RED.

> Review of robustness, generalization for IPs

We thank the reviewer for the suggestion, and we will include the following paragraph in the manuscript: "To improve robustness, several strategies have been proposed, including training-time exposure to diverse perturbations such as noise injection and data augmentation [1-2], and adversarial training, enabling models to better handle noise during inference. Other approaches focus on architectural innovations—for example, diffusion models have shown inherent robustness to noise in MRI reconstruction [4], while PINNs [5] embed domain knowledge to enhance stability without compromising interpretability. To improve generalization in IPs, methods like data augmentation with synthetic perturbations [6] and domain adaptation via techniques such as CycleGAN [7] have been used to expand the training distribution and improve adaptability to unseen data. Additionally, incorporating geometric constraints, as in [8] for electrocardiographic image reconstruction, has shown improved generalization by embedding prior knowledge into the learning process.
While these approaches typically target either robustness or generalization, achieving both simultaneously remains an open and challenging problem."

[1] Krainovic et al. Learning Provably Robust Estimators for Inverse Problems via Jittering
[2] Zhou et al. Towards Understanding the Importance of Noise in Training Neural Networks
[4] Dar et al. Adaptive diffusion priors for accelerated MRI reconstruction
[5] Peng et al. Robust Regression with Highly Corrupted Data via Physics Informed Neural Networks
[6] Guan et al. Solving Inverse Problems with Model Mismatch using Untrained Neural Networks within Model-based Architectures
[7] Zhu et al. Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks
[8] Jiang et al. Improving Generalization by Learning Geometry-Dependent and Physics-Based Reconstruction of Image Sequences

We sincerely appreciate the reviewer’s constructive feedback. We believe the added literature review and clarifications will strengthen the manuscript and hope they satisfactorily address your concerns.

---

Rebuttal Comment 1.1:

Comment: I thank the authors for their rebuttal, and I’m maintaining my original rating.
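For readers who want to see the mechanism discussed in this thread in code: a GD loop-unrolling step combines a data-consistency gradient with the learned regularizer gradient $NN_\theta = \nabla r$, and SGD jittering injects zero-mean Gaussian noise into each hidden update during training only. The sketch below is our own minimal illustration, with a hand-written quadratic regularizer standing in for the learned network:

```python
import numpy as np

def unrolled_gd(y, A, grad_r, K=10, eta=0.1, jitter_std=0.0, rng=None):
    """Unrolled gradient descent for min_x ||Ax - y||^2 / 2 + r(x).
    jitter_std > 0 injects zero-mean Gaussian noise into each hidden
    update (training-time jittering); at inference jitter_std = 0,
    so the reconstruction stays deterministic."""
    rng = rng if rng is not None else np.random.default_rng(0)
    x = A.T @ y  # simple back-projection initialization
    for _ in range(K):
        grad = A.T @ (A @ x - y) + grad_r(x)  # data consistency + regularizer
        x = x - eta * grad + jitter_std * rng.standard_normal(x.shape)
    return x

# Denoising example (A = I) with a Tikhonov stand-in for NN_theta = grad r.
y = np.array([1.0, 2.0, -1.0])
A = np.eye(3)
x_hat = unrolled_gd(y, A, grad_r=lambda x: 0.1 * x, K=200, jitter_std=0.0)
```

With $A = I$ and $\nabla r(x) = \lambda x$, the iterates converge to $y/(1+\lambda)$, which makes the fixed point easy to check; in the paper's setting, `grad_r` would be the trained network and `jitter_std > 0` only during training.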
Summary: The authors study the robustness and generalization properties of model-based architectures. The goal is to solve inverse problems with interpretable algorithms, such as loop-unrolling networks, and maintain two desirable properties: i) robustness to adversarial attacks, ii) generalization to small natural shifts in test-time data. The authors propose an algorithm called SGD jittering that makes progress in achieving these properties without sacrificing performance. The algorithm is based on a noisy version of Gradient Descent in the loop-unrolling architecture. Claims And Evidence: The claims are supported by the evidence. Methods And Evaluation Criteria: The authors only include comparisons to other model-based algorithms. I think it would be better if stronger baselines were included, such as solving these problems with diffusion models. The latter lack the interpretability of model-based methods, but it would be nice to see what price we pay in performance to gain this interpretability. It would also be nice to include examples of where this interpretability is important. Theoretical Claims: I did not rigorously check all the proofs, but I read the theoretical results and the assumptions and they make sense. Experimental Designs Or Analyses: I checked the experimental results (seismic deconvolution and MRI). As I mentioned above, I believe the biggest weakness is the lack of stronger baselines. Supplementary Material: I did not review the supplementary material. Relation To Broader Scientific Literature: The paper improves the robustness and generalization of model-based algorithms that offer interpretable solutions to inverse problems. Essential References Not Discussed: N/A. Other Strengths And Weaknesses: Some weaknesses are mentioned above. I would further add that it is unclear what is the fundamental limit on the trade-off between generalization and robustness. How should the reader think of $x_g$ in Section 3? 
As an adversarial input or as a natural shift of $x$? It would also be nice for the paper to include some convincing evidence on why model-based algorithms are interpretable in the context of MRI/Seismic Deconvolution and compare the performance with stronger baselines. In terms of strengths, the paper has interesting theoretical results, it is well-presented and it has a clear motivation.

Other Comments Or Suggestions: N/A.

Questions For Authors: Could you include comparisons with stronger baselines for the problems you want to address? MBAs do not necessarily need to improve upon these baselines, but it needs to be clearer what the trade-off between interpretability, performance, robustness and generalization is.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: We thank the reviewer for the insightful suggestions; please find our point-by-point responses below.

> Stronger baselines such as diffusion models (DM)

We agree that diffusion models (DMs) have demonstrated impressive results in image generation. In response, we added DDPM-based experiments for MRI reconstruction. To ensure a fair comparison under similar computational constraints, we adapted the denoising U-Net to fit within the same GPU memory as other methods. Since our work proposes a general framework for IPs, we chose to compare against the standard DDPM rather than specialized task-specific DMs, consistent with prior work [1]. Table 1 compares DDPM with other methods. DDPM achieves comparable in-distribution (ID) performance to MBAs trained with both MSE loss and the proposed SGD jittering, and shows stronger robustness than standard MSE-trained MBAs. However, it underperforms in OOD generalization. While DDPM serves as a strong baseline for robustness and ID accuracy, SGD jittering achieves better generalization under distribution shifts. We will add the comparison to DM in the main manuscript as an interesting baseline. It is also worth noting that DDPM requires ~10× more parameters and is significantly more data-intensive, whereas MBAs are more data-efficient [2] due to their optimization-inspired iterative structure. We also refer to prior work [1], which compares DDPM, DiffRecon, and AdaDiff to MSE-trained MBAs for MRI reconstruction. Their results show that while DMs can generalize well in some cases, MBA methods consistently perform better on ID data. The results in [1] show that **MSE-trained MBA is a strong baseline**, and our proposed SGD jittering further improves its robustness and generalization.

| PSNR/SSIM | ID | Adv. Attack | OOD |
|:--------------------:|:-------------:|:-------------:|:-------------:|
| MSE training | <28.21> / 0.603 | 25.68 / 0.382 | 29.92 / 0.779 |
| AT | 27.68 / 0.564 | **27.17** / <0.549> | 27.74 / 0.597 |
| Input Jittering | 28.18 / 0.595 | 25.05 / 0.420 | <29.97> / 0.740 |
| SGD jittering (Ours) | **28.22** / <0.607> | 26.77 / **0.552** | **30.36 / 0.788** |
| DDPM (new baseline) | 28.17 / **0.611** | <27.13> / 0.536 | 29.72 / <0.782> |

Table 1: MRI reconstruction. Best performances in **bold**, second best in <...>.

> Robustness and generalization tradeoff

We thank the reviewer for raising this important question regarding a fundamental challenge. The tradeoff has been studied from various perspectives (i.e., distributional and optimization) [3,4], with evidence that no single training objective can simultaneously optimize both. In our work, we contextualize this tradeoff within IPs by explicitly defining robustness and generalization accuracy in relation to the forward model. Our theoretical and empirical results show that AT and standard MSE training prioritize robustness and accuracy, respectively, but fail to address both objectives effectively. In contrast, the proposed SGD jittering implicitly regularizes the model, enabling simultaneous improvement in both metrics.

> How to interpret $x_g$

For robustness (Eq.2), $g$ is interpreted as artifacts due to noise in the measurement $y$. Eq.2 measures how the reconstruction $H_\theta(y_g)$ deviates from the clean $x$. For generalization (Eq.3), $x_g$ is considered a natural shift of $x$. Eq.3 measures how well the model reconstructs $x_g$ from its corresponding measurement $y_g$, or maintains consistency with the physics model.

[1] Dar et al. Adaptive diffusion priors for accelerated MRI reconstruction
[2] Monga et al. Algorithm Unrolling: Interpretable, Efficient Deep Learning for Signal and Image Processing
[3] Zhang et al.
Theoretically Principled Trade-off between Robustness and Accuracy
[4] Krainovic et al. Learning Provably Robust Estimators for Inverse Problems via Jittering

We sincerely appreciate the reviewer’s constructive feedback. We believe the added experiments and clarifications have strengthened the manuscript and hope they satisfactorily address your concerns.

---

Rebuttal Comment 1.1:

Comment: I thank the authors for their rebuttal and I am raising my score to 3.
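The distinction drawn in this thread between the robustness risk (Eq.2) and the generalization risk (Eq.3) can be made concrete with a Monte-Carlo estimate of each; the sketch below is our own illustration, with a generic callable `H` standing in for the reconstruction network $H_\theta$:

```python
import numpy as np

def robustness_risk(H, A, x, eps, n=100, rng=None):
    """Average-case robustness (Eq.2 style): reconstruct from a perturbed
    measurement y + e and compare to the clean ground truth x."""
    rng = rng if rng is not None else np.random.default_rng(0)
    y = A @ x
    errs = [np.sum((H(y + eps * rng.standard_normal(y.shape)) - x) ** 2)
            for _ in range(n)]
    return float(np.mean(errs))

def generalization_risk(H, A, x, shift, n=100, rng=None):
    """Generalization under natural shifts (Eq.3 style): reconstruct the
    shifted x_g = x + g from its clean measurement A x_g and compare to x_g."""
    rng = rng if rng is not None else np.random.default_rng(1)
    errs = []
    for _ in range(n):
        x_g = x + shift * rng.standard_normal(x.shape)
        errs.append(np.sum((H(A @ x_g) - x_g) ** 2))
    return float(np.mean(errs))

A = np.eye(3)
x_true = np.array([1.0, -2.0, 0.5])
identity_H = lambda y: y  # a perfect reconstructor when A = I
```

Note the asymmetry: in Eq.2 the perturbation enters the measurement and the target stays clean, while in Eq.3 the signal itself shifts and the measurement stays consistent with the forward model.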
Summary: The paper investigates robustness-accuracy tradeoffs, where the authors focus on unrolling-based methods. The authors consider different training strategies for increasing the robustness to average-case perturbations or distribution-shifts. As a specific solution for unrolling-based methods, the authors propose to add jittering noise in each step of unrolling, and demonstrate a good robustness-accuracy tradeoff for practical setups. ## update after rebuttal Thanks again to the authors for their rebuttal. With the exception of my comment regarding the choice of noise levels, the authors addressed my concerns well. I think this is an interesting paper and keep my score as (weak) accept. Claims And Evidence: - The main claim of the paper is that SGD jittering (the proposed training strategy) yields better generalization and higher average-case robustness compared to standard MSE training. This is not surprising, since the same holds true for other robustness-enhancing methods (input jittering or adversarial training). The authors support this by proving it for denoising setup (and technical assumptions), and provide sufficiently convincing experiments. - Another claim of the paper is that SGD jittering improves accuracy at the same time as robustness, and thereby overcomes or mitigates the robustness-accuracy-tradeoff (see e.g. L061-066). This is very interesting, but not supported by theory, and I have some concerns and questions regarding the experimental setup. Methods And Evaluation Criteria: The authors train unrolled networks with the four different training strategies (MSE, adversarial training, input jittering, SGD jittering) and evaluate on in-distribution data, adversarial attacks and out-of-distribution examples (Tables 1 and 2), which is appropriate to investigate the claims. See also experimental design and analyses. Theoretical Claims: I checked the proof of Theorem 7.5 (the average-case robustness result) and don't see any major issues. 
Experimental Designs Or Analyses: I find the visualization of the toy problem results interesting, but think that real-world dataset examples would be more beneficial for the reader. The MRI results are relatively convincing; the seismic deconvolution problem is more niche. For the MRI results (and also the toy problem), I have some concerns regarding the in-distribution performance of standard MSE training. Specifically, it is very surprising to me that jittering improves the in-distribution accuracy compared to MSE training (which optimizes for it) (see also questions). Moreover, the authors write that the jittering levels (of SGD and input methods) are chosen based on 'robustness and accuracy' (L.831) and only state the concrete values, which are hard to interpret. Since the main conclusion is that the method performs better in robustness and accuracy than competing methods, I think there should be a more principled approach to choosing them (or a description of it if applicable).

Supplementary Material: I reviewed the convergence proof (B), skimmed over (C) and reviewed the robustness result in part (D).

Relation To Broader Scientific Literature: The authors sufficiently describe the related work, including existing demonstrations of robustness-enhancing methods via noise injection.

Essential References Not Discussed: NA

Other Strengths And Weaknesses: The investigated problem of addressing the robustness-accuracy tradeoff is important, and proposing methods which are less expensive than adversarial training is valuable. The idea of injecting noise for increasing robustness to the input, or the intermediate layers, has already been presented in the literature. However, the analysis and theoretical arguments of this approach with respect to average-case robustness are interesting and novel.

Other Comments Or Suggestions:
- I find the title a bit misleading, as "unrolling methods" (considered in the paper) are only a small subset of "model-based architectures".
Moreover, the strengths of the paper are with respect to robustness, but there is noticeably less support for the accuracy part (minor comment).
- I am a bit confused by the notion of generalization, and I think the paper actually discusses robustness to distribution shifts. While there are varying notions in the literature, generalization results are often associated with the finite-sample case and in-distribution data, but here the authors effectively investigate a distribution shift by adding noise to the training distribution.
- L. 758,759: A term is missing in the inner product.
- Notation of the robustness risk: The authors denote the average-case robustness risk by $R_{\epsilon}$ and the worst-case robustness risk by $R_{e}$, which leads to confusion at first I think.
- L. 173,174: The authors write that AT learns to 'ignore' $A^{-1} e$, but this is not true in general (e.g. only when the perturbations are very large). AT finds a tradeoff between reconstructing the signal and minimizing the error.
- L. 355,356: "tradeoff in resolution"

Questions For Authors:
- In L. 167, the authors write that AT is slow and requires iterative solvers for the attack vector. How many steps were used in PGD for adversarial training / testing?
- Regarding L. 831, could you give more details on how the jittering noise levels were chosen?
- In Figure 1, I would expect that MSE training yields better in-distribution results (particularly compared to adversarial training)?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: We thank the reviewer for the thoughtful feedback and helpful suggestions. We address the reviewer’s comments and questions below.

> Jittering outperforms MSE training in in-distribution results

We acknowledge the reviewer’s observation and appreciate the opportunity to elaborate further. As noted in L.310–313 (prior to the Seismic Deconvolution section), training with the standard MSE loss may lead to suboptimal solutions due to the highly non-convex loss surface with respect to network parameters. Our proposed SGD jittering acts similarly to layer-wise noise injection, which is known to help escape local minima by promoting exploration of the loss landscape [1]. Consequently, jittering can achieve better in-distribution and generalization performance than MSE training in some cases, as supported by both our empirical results and prior work [2].

> L.167: number of steps used in PGD for AT

As detailed in Appendix G (L.836), we used 20 PGD steps with a step size of 0.1 for both the seismic deconvolution and MRI reconstruction tasks. While the runtime efficiency is a useful byproduct, our main point is that methods like AT and input noise injection introduce perturbations directly to the input, which often breaks forward-model consistency and results in overly smoothed reconstructions. Our contribution is to introduce SGD jittering, which promotes robustness in a more principled manner while preserving high accuracy and physical model fidelity.

> How jittering noise levels were chosen

As shown in Figure 5 (L.420), we compared robustness and accuracy across a range of jittering noise levels and selected the setting with the best overall trade-off. A similar hyperparameter search was conducted for the input injection baseline. Notably, we observed that input injection exhibited greater sensitivity to noise level variations, whereas our proposed SGD jittering showed more stable performance across a range of noise levels.
While some tuning is still required, this suggests that SGD jittering is more robust to hyperparameter selection in practice. We will include a short discussion of this in the main paper. > Notion of Generalization We agree that in classical learning theory, generalization typically refers to a model's ability to perform well on unseen in-distribution data, often analyzed in the finite-sample setting. However, in this work, we adopt a broader and increasingly common notion of generalization under distribution shift, which aligns with recent literature on robustness and transferability. To avoid confusion, we will revise the manuscript to clarify this broader usage of "generalization" and explicitly distinguish it from classical in-distribution generalization. > Typos and clarification We thank the reviewer to pointing it out, we will fix the typo and make clarification of the terms in the manuscript. [1] Orvieto et al. Explicit Regularization in Over-parametrized Models via Noise Injection [2] Lim et al. Noisy Recurrent Neural Networks We sincerely appreciate the reviewer’s time for reviewing and the constructive suggestions, and we believe that the additional clarifications improve the quality of the submission. We hope this addresses your concerns. --- Rebuttal Comment 1.1: Comment: Many thanks to the authors for their rebuttal and for addressing my concerns. The paper is interesting and I keep my score as (weak) accept. One comment further to the choice of noise levels: The rebuttal states that SGD jittering is less sensitive to the choice compared to input jittering. Thats an interesting observation, but the systematic way to choose the final noise levels should be clearly stated (beyond "best robustness-accuracy"), since these choices are important for baselines (e.g. input jittering). I encourage to include this in the dicussion as well and add more details in the plots (e.g. Figure 5) regarding noise levels or adversarial attack levels $\epsilon$.
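As a point of reference for the PGD configuration quoted in this exchange (20 steps, step size 0.1), a minimal sketch of a PGD attack loop is given below. The least-squares toy objective, the `pgd_attack` helper, and the $\epsilon$-ball radius are illustrative assumptions for the example, not the authors' implementation.

```python
import numpy as np

def pgd_attack(x, grad_fn, eps=0.5, step_size=0.1, n_steps=20):
    """Projected gradient ascent on the loss: take sign-gradient steps of
    size `step_size`, then project back onto the L-infinity ball of radius
    `eps` around the clean input x."""
    x_adv = x.copy()
    for _ in range(n_steps):
        g = grad_fn(x_adv)                        # gradient of the loss w.r.t. the input
        x_adv = x_adv + step_size * np.sign(g)    # ascent step (maximize the loss)
        x_adv = np.clip(x_adv, x - eps, x + eps)  # projection onto the eps-ball
    return x_adv

# Toy objective: loss(x) = 0.5 * ||A x - y||^2, with analytic gradient.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
y = rng.standard_normal(4)
x0 = rng.standard_normal(3)
grad = lambda x: A.T @ (A @ x - y)

x_adv = pgd_attack(x0, grad, eps=0.5, step_size=0.1, n_steps=20)
# The projection guarantees the perturbation stays inside the eps-ball.
assert np.max(np.abs(x_adv - x0)) <= 0.5 + 1e-9
```

In adversarial training, a loop like this would be run on each training batch to generate perturbed inputs before the model update; for a neural network the gradient would come from automatic differentiation rather than an analytic formula.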
TuCo: Measuring the Contribution of Fine-Tuning to Individual Responses of LLMs
Accept (poster)
Summary: This paper proposes a novel method, Tuning Contribution (TuCo), to measure the contribution of fine-tuning to individual responses of large language models (LLMs). The authors introduce a decomposition framework that splits an LLM’s response into a Pre-Training Component (PTC) and a Fine-Tuning Component (FTC), enabling a more fine-grained analysis of fine-tuning effects. Experimental results show that TuCo is sensitive to different inputs, which sheds more light on how to monitor and control the model’s behavior after fine-tuning. However, as stated in the weakness part, the paper needs more carefully defined experiments to justify the proposed metric. In summary, I believe the paper is quite novel and has the potential to have a bigger impact. But the current version is not good enough for ICML. I would be very happy to increase my evaluation if the core issues are addressed.
Claims And Evidence: Not quite. See the weakness part.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: I checked the main text and part of the Appendix. The proofs look good.
Experimental Designs Or Analyses: Not good enough and could be improved. See the weakness part.
Supplementary Material: I read part of the appendix.
Relation To Broader Scientific Literature: The related work and background parts are well written.
Essential References Not Discussed: There are plenty of papers discussing fine-tuning behaviors, like [1]. Discussing the differences from their theoretical framework would be helpful. [1] Ren, Yi, and Danica J. Sutherland. "Learning dynamics of LLM finetuning." ICLR 2025
Other Strengths And Weaknesses:
## Strength:
1. Unlike previous approaches that focus on benchmark-level fine-tuning effects, this work quantifies fine-tuning effects at the individual response level, which is quite novel and provides new perspectives on understanding the model’s behavior.
2. The discussions about the relationship between jailbreak prompts and TuCo are inspiring. Given the fact that the FT model is trained on some safety-related dataset, the TuCo for this sensitive information should be large. However, a carefully defined jailbreak prompt can circumvent it by triggering some “un-updated” region of the original model. This finding could inspire more robust alignment strategies in the future.
## Weakness:
1. The authors claim in their introduction that instead of simply comparing its final hidden states, TuCo considers more detailed representations. However, the superiority of TuCo over this simple method (i.e., directly comparing last hidden states) is not well justified. Appendix B compares OutputCo and TuCo. But which one is better, and why? Plus, from Proposition 4.2 we know that TuCo is part of the upper bound of the L1 distance of the two hidden states; then, why not directly observe $||x_{FT}-x_{PT}||$?
2. I find the experiments in the current version cannot support the claims well. The dataset used in fine-tuning is very important in measuring the model’s behavioral change. So, ablation studies varying the fine-tuning data would make the conclusion more solid.
Other Comments Or Suggestions: N/A
Questions For Authors:
## Questions:
1. I am not quite sure how to understand $f_\theta(x,l)$. Could it be understood as the function of the network's first L layers? In other words, it converts the input x to the hidden embeddings of the L-th layer. If my understanding is true, why do we need the “circuit modeling” in this paper?
2. In Definition 4.3, what does the map $(x_1, \cdots, x_n) \rightarrow x_n$ look like? Directly select the last column of FTC?
3. Many experiments require a pretrained model and a fine-tuned version of it. But on what datasets are these models trained? I tried to find the answer in the paper but could not find it. It is important for many conclusions, e.g., those in Section 5.2: if the model is fine-tuned on web-text data rather than chat data, will the claim still hold?
Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful comments and constructive feedback. We would like to clarify some points:

> Appendix B compares OutputCo and TuCo. But which one is better, and why?

They have different interpretations, and are most appropriate for answering different research questions. OutputCo tells us how large the distance between the pre-trained and fine-tuned models' final hidden states is. This could be used in a similar way to e.g. computing distribution distances between the final logprobs of each model. Meanwhile, TuCo tells us how large the aggregate change in intermediate layer outputs due to fine-tuning is. In this sense, it gives a quantitative view of how the *computation performed by the model* is affected, rather than only the final outcome. We are not aware of comparable metrics in the literature.

> I find the experiments in the current version cannot support the claims well. The dataset used in finetuning is very important in measuring the model’s behavioral change. So, ablation studies varying the finetuning data will make the conclusion more solid.

We would like to ask the reviewer if they could point to specific claims they consider are not well-justified, so that we can make the appropriate improvements to the manuscript. Further, we clarify that, in our experiments, we seek to demonstrate the applicability of TuCo in the wild, i.e. on real-world, widely-used open-weight models, without relying on bespoke toy datasets. Rather, in Section 5.1, we make controlled interventions by varying the magnitude of the fine-tuning component $FTC$. We demonstrate this can be used to control model behavior, and even improve its performance on certain MMLU tasks. This validates the relevance of measuring the magnitude of $FTC$ when studying the interactions between prompt content, model behavior and capabilities.

> [...] why not directly observe $||x_{FT}-x_{PT}||$

As we argue in Appendix A, an effective metric should be interpretable for practitioners, useful for empirical analysis, and practical to compute. The fact that TuCo is normalized (i.e. between 0 and 1) allows it to be more intuitively interpreted as a fraction (i.e. "30% contribution of fine-tuning"). An unnormalized metric, such as $||x_{FT}-x_{PT}||$, is potentially subject to significant changes in scale across models and prompts, harming its interpretability and usefulness. Further, per Section 5.5 and the prior answer, TuCo is qualitatively distinct from simply comparing final hidden states, even if one uses a normalized metric (i.e. OutputCo). We use TuCo to quantitatively show that jailbreaks attenuate the effect of fine-tuning (Section 5.3), that the attenuation is strongest the stronger the jailbreak (Section 5.3, MSJ results), and that successful jailbreaks show stronger attenuation (Section 5.4).

> I am not quite sure about how to understand $f_\theta(x, l)$.

As pointed out in Section 3, most commonly-used GPT architectures have residual connections around self-attention and MLP layers. This means that, on a layer $l$ computing a function $f_{\theta_l}$ (where $\theta_l$ are the parameters of the layer), the residual stream is updated as $x_{out} \leftarrow x_{in} + f_{\theta_l}(x_{in})$. Since there are $L$ layers, we have functions $f_{\theta_1}, \cdots, f_{\theta_L}$. For notational simplicity, instead of writing the function computed by the $l^{th}$ layer as $x \mapsto f_{\theta_l}(x)$, we write it as $x \mapsto f_\theta(x, l)$.

> What does the map $(x_1, \cdots, x_n) \mapsto x_n$ look like? Directly select the last column of FTC?

Yes, that is correct: this map picks out only the hidden state for the final token of the prompt.

> [...] But on what dataset are these models trained on?

We thank the reviewer for pointing this out as an area of improvement. Llama 2, Llama 3 and Gemma use a combination of public, private and synthetic instruction-tuning and preference data, including conversational data and safety data. Mistral and Vicuna are only fine-tuned for instruction following. Zephyr-Gemma is fine-tuned on synthetic chat and preference data. The preference ratings take honesty into account, but, per Tunstall et al. (2024), the samples are focused on helpfulness rather than harmlessness. We have added a more detailed overview to the appendix.

> if the model is finetuned on web-text data rather than chat data, will the claim still hold?

In this case, we would expect not to see such a clear separation.

### Conclusion

In the above, we hope to have addressed the reviewer's mentioned concerns, with particular regard to providing more details on model data mixes, and on what TuCo contributes over a simple comparison of final hidden states. Given the above, we would like to ask the reviewer to consider increasing their score. If any concerns remain, we are happy to provide further clarifications and improvements to the manuscript.
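To make the decomposition discussed in this exchange concrete, here is a minimal numpy sketch of the kind of normalized quantity described: per-layer residual-stream updates from the pre-trained layers serve as $PTC$, the extra update added by the fine-tuned layers serves as $FTC$, and the score is the fraction of total update magnitude attributable to $FTC$. The `tuco` helper and its array inputs are hypothetical illustrations, not the paper's Algorithm 1.

```python
import numpy as np

def tuco(pt_layer_outputs, ft_layer_outputs):
    """Sketch of a per-prompt tuning-contribution score in [0, 1].

    Inputs have shape (L, d): for each of L layers, the residual-stream
    update computed by the pre-trained / fine-tuned version of that layer
    at the final token position.  PTC is the pre-trained update; FTC is
    the extra update introduced by fine-tuning.
    """
    ptc = pt_layer_outputs
    ftc = ft_layer_outputs - pt_layer_outputs
    ptc_mag = np.linalg.norm(ptc, axis=1).sum()
    ftc_mag = np.linalg.norm(ftc, axis=1).sum()
    return ftc_mag / (ptc_mag + ftc_mag)

L, d = 8, 16
pt = np.random.default_rng(1).standard_normal((L, d))
print(tuco(pt, pt))      # 0.0  (fine-tuning changed nothing)
print(tuco(pt, 2 * pt))  # 0.5  (FTC has the same magnitude as PTC)
```

Being a ratio, the score reads directly as e.g. "30% contribution of fine-tuning", which is the interpretability property the rebuttal contrasts with the unnormalized distance $||x_{FT}-x_{PT}||$.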
Summary: The authors seek to understand the effect of finetuning on a model. They propose to decompose the forward pass of a finetuned model into the pretrained component (PTC) and fine-tuned component (FTC). They then propose Tuning Contribution (TuCo) as a measure of the relative effect sizes. They subsequently analyze TuCo within many empirical settings. The authors provide a constructive algorithm for calculating TuCo. Theoretically, they relate it to prior literature on transformer circuits. Empirically, they show that scaling the FTC can act like a form of steering. They also perform various other ad-hoc analyses, relating TuCo to jailbreaks and instruction tuning.
## Update after review
The authors have addressed some of my concerns. While the method does not beat baselines on downstream tasks, I agree that it is an interesting proof of concept for a new analysis technique. Hence, I will update to a weak accept.
Claims And Evidence: One of the authors' central claims is that the FTC approximates the effect of finetuning. However, no direct evidence is provided for this claim. One way to check this would be to take the FTC after 1 epoch of finetuning (FTC-1ep), and the FTC after 2 epochs of finetuning (FTC-2ep). Would FTC-2ep be approximately double the magnitude of FTC-1ep, while having the same direction? It is also unclear why the authors settled on this definition of TuCo. From first principles, it seems much more natural to use other definitions, e.g. the difference in model weights between the two models under comparison.
Methods And Evaluation Criteria: In Section 5.1, the authors try controlling model behaviors by scaling the FTC, similar to existing work on activation steering. However, there are many methodological problems here. The Model-Written Evals dataset is not good.
- Firstly, it has substantial spurious correlations. For example, in the 'subscribes to Christianity' task, the question is a binary MCQ, and the 'answer matching behaviour' is always 'Yes'. Thus, the reported results could simply be explained by the model learning to say 'Yes' more. Other prior work which uses MWE does preprocessing to fix these issues [1], [2].
- The Model-Written Evals dataset has a variety of other data quality issues, as identified here: https://www.lesswrong.com/posts/yxdHp2cZeQbZGREEN/improving-model-written-evals-for-ai-safety-benchmarking#E__Issues_Identified_in_Anthropic_Model_Written_Evals

In Figure 2, the authors plot the change in aggregate model propensities ('agreement' across all samples) as a result of scaling the FTC. However, they do not report variance between individual examples. Prior work [2] indicates that, often, there is a large difference in the magnitude of the steering effect between individual samples. Looking only at the population aggregate obscures this effect, making steering look more effective than it actually is.
[1] https://arxiv.org/abs/2312.06681
[2] https://arxiv.org/abs/2407.12404
Theoretical Claims: The authors motivate TuCo from the perspective of 'generalized components'. It is very unclear what these 'generalized components' are and how they work. The authors should substantially revamp Sections 4.2 and 4.3 to provide a clearer explanation. I find the argument in 4.2 unclear and at times controversial. It is not clear that a transformer can be decomposed into a linear sum of circuits; the authors should explain their reasoning more clearly. It is also a very controversial claim that finetuning works by adding more circuits. I would like to see more justification of this perspective, preferably with references to existing empirical case studies. The authors claim that TuCo is a generalization of earlier work on circuit analysis.
I find this claim controversial, as they do not explain how their framework subsumes earlier theory such as https://transformer-circuits.pub/2021/framework/. Furthermore, the algorithm for computing TuCo (Algorithm 1) only uses model activations, and does not discuss circuit components. Experimental Designs Or Analyses: Yes. Supplementary Material: No. Relation To Broader Scientific Literature: Understanding the effect of finetuning is generally valuable for building an empirical science of ML. Essential References Not Discussed: The authors work aims to develop insight by looking at changes in model activations before and after finetuning. However, they do not discuss related literature on model diffing. It seems important to discuss other related techniques like model stitching [1] and sparse crosscoder analysis [2], which have the same 'type signature'. [1]: https://arxiv.org/abs/2106.07682 [2]: https://transformer-circuits.pub/2024/crosscoders/index.html The authors also do not discuss Other Strengths And Weaknesses: I am not convinced of the significance of TuCo. From a practical perspective, the authors demonstrate signs of life with activation steering, but do not compare to relevant baselines such as CAA. They also use flawed evaluation methods which raise significant concerns about the validity of results. In the jailbreak setting, the authors show that TuCo is lower on successful jailbreaks, but this does not seem to yield any technique for preventing the jailbreak, nor does it provide an especially clear insight as to why specific jailbreaks work as opposed to other. Overall, I am not convinced that "TuCo is a relevant interpretability tool", as it has not yet led to interesting insights. I encourage the authors to show how TuCo can be used for a practical problem of interest. 
I also am not convinced by the claim that "Model developers can use TuCo to detect inputs where finetuning has less impact and adjust accordingly"; if this were the case I would encourage the authors to include a case study where they do this. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1:
Rebuttal: We would like to address the queries raised. Some claims dismissing our experiments are incorrect and unjustified.

> No direct evidence [...] FTC approximates finetuning

FTC is exactly the difference in layer outputs between the finetuned and pretrained models; if FTC is zero then FT=PT. Therefore it is a rigorous and universal notion of "effect of fine-tuning" for a given prompt.

> difference in model weights [instead]?

Comparing model weights would be agnostic to the prompt, which is not our problem setting (Section 4.1). TuCo quantifies how much a prompt's $FTC$ contributes to the final hidden state, as an interpretable fraction (0 to 100%). This seems natural.

> [In MWE] the 'answer matching behaviour' is always 'Yes'.

**This is not true.** The matching answers are balanced (50% "Yes" and 50% "No"). This is easily seen in https://github.com/anthropics/evals/blob/main/persona/subscribes-to-Christianity.jsonl.

> [MWE has] data quality issues identified in [...]

This is an appendix of a blog post, which cannot be taken at face value. Moreover, **it does not claim any issues with the Persona section of the MWE dataset, the only one we use in our evaluations**. This source is both unreliable and inapplicable.

> do not report variance
> large difference in the magnitude of the steering

We added variance to the plots. But this is redundant: there is no "difference in magnitude" that can skew the mean estimator, since we are averaging booleans (constant magnitude).

> unclear what these 'generalized components' are and how they work
> generalization of earlier work on circuit analysis

For a circuit computing $g: (x_l, l) \mapsto g(x_l, l)$ at layer $l$, when the input hidden state is $x_l$, $g$ is a generalized component (Def. 4.1). Thus, Def. 4.1 applies to circuits in Elhage et al. (2021). We updated the paper to explicitly point this out. We add that this seemed clear to other readers.

> transformer [as] linear sum of circuits?

We do not claim full circuit decompositions exist or are known. We only make this assumption in the thought experiment in Section 4.2, in light of the great diversity of circuits identified in prior work.

> only uses model activations, [not] circuit components

This is an important strength of our method: exact circuit decompositions need not exist or be known, but TuCo can nonetheless be computed for any model, because it assumes access to only intermediate model activations and pre-trained and fine-tuned models.

> [discuss] model stitching and sparse crosscoder

We appreciate the suggestion, and updated our related work section. But the connection is indirect. These methods do not yield scalar-valued metrics, so their "type signatures" are different.

> demonstrate signs of life [...] but do not compare to [...] CAA

We politely ask for a reference to this CAA. We also believe "signs of life" unnecessarily diminishes our results.

> flawed evaluation methods [MWE]

As mentioned above, the reviewer's discrediting of MWE is unjustified and based on false claims. We kindly ask the reviewer to either clarify the perceived methodological flaws, or to remove the claim.

> [no] technique for preventing the jailbreak

Per Section 5.4 and Appendix F.4, applying a threshold to TuCo detects jailbreaks, and model outputs can then be halted. Note TuCo is an analysis technique not designed to detect jailbreaks, and yet has out-of-the-box predictive power.

> [no] insight as to why specific jailbreaks work

As pointed out in line 371, our results in Section 5.4 indicate that the attenuation of the contribution of fine-tuning to the model's final hidden state (which is what TuCo directly measures) is associated with jailbreak success.

> not yet led to interesting insights

Our work yields novel scientific insights on the interplay between LLM jailbreaking and fine-tuning, which are both of widespread interest in the community. In this sense, we consider TuCo to have produced interesting insights. For example, we quantitatively identify a clear link between jailbreaking and the attenuation of the effects of fine-tuning, which had been merely hypothesized in prior work (Kotha et al. 2023, Wei et al. 2024).

> claim "Model developers can use TuCo [...] and adjust accordingly"

This claim is made in the Conclusion and Future Work section -- we leave to future work the application of TuCo to improving fine-tuning methodology and dataset construction.

### Conclusion

We hope to have addressed points raised by the reviewer, and pointed out incorrect statements dismissing our experimental results. In light of these clarifications and corrections, which address the basis for the negative review, we would like to ask the reviewer to consider increasing their score.

---

Rebuttal Comment 1.1:
Comment: Thank you for the extensive theoretical clarifications. I appreciate being corrected on incorrect claims re MWE and will update my opinion accordingly. Here is your reference to CAA: https://arxiv.org/abs/2312.06681

> We also believe "signs of life" unnecessarily diminishes our results.

I understand that the authors contribute significant theoretical work. I maintain that, without a comparison to strong baselines, the results remain a 'sign of life'. For example, in the steering experiments presented in Fig 2, there is no comparison to existing steering methods. I think the steering experiments are important because they are an example of a causal intervention - the authors intervene on the magnitude of the FTC and show that this affects the model's likelihood of predicting the correct answer. Causal interventions are important to validate hypotheses generated through interpretability research. In mechanistic interpretability and related fields, my prior remains that new bodies of theory should be validated against downstream tasks as soon as possible.
While the analysis sections on jailbreaking and web text are interesting, these ancillary observations cannot serve as the primary support in favor of a new method. > Per Section 5.4 and Appendix F.4, applying a threshold to TuCo detects jailbreaks, and model outputs can then be halted. Note TuCo is an analysis technique not designed to detect jailbreaks, and yet has out-of-the-box predictive power. Thank you for the clarification. This seems important if true, since it is another example of a causal intervention. If this is the case, then I did not understand section 5.4 on the first read through and I still do not understand it. Please help me understand how you halt model outputs based on TuCo and how effective this is at preventing jailbreaks. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the engaging discussion, and appreciate their openness to updating on the claims about MWE in the initial review. > Here is your reference to CAA: https://arxiv.org/abs/2312.06681 The now-referenced method CAA (Contrastive Activation Addition, Panickserry et al., 2024) seems to be relevant related work, so we have included it in the related works section. This method computes vectors that can be added to the residual stream to steer the model to exhibit a behavior. Panickserry et al. do this by averaging the difference in model hidden states between contrastive pairs of prompts, with one showcasing the behavior and the other not. We remark that *the purpose of us including the $FTC_\alpha$-scaling experiments in the paper is not to show we have the "best" steering technique for LLMs. In fact, this is not the goal of the paper - rather, our goal is to propose a universal, prompt-level analysis technique for measuring the effects of fine-tuning.* They serve instead as validation of the relevance of our chosen "object of study" (i.e. the magnitude of $FTC$) by intervening on it, and showing it can be used to control model behaviors and capabilities. 
As such, we do not seek to establish $FTC_\alpha$ scaling is better than existing steering methods, since TuCo is not designed with this in mind. Rather, it suffices to show steering is possible and statistically significant, which we do show in Section 5.1 across various tasks and models. As such, while we consider a comparison with CAA will strengthen our work and will include such a comparison in a final revision, we see our current experiments in Section 5.1 as sufficient to prove our point. > steering experiments [...] should be validated against downstream tasks as soon as possible There are in fact a variety of "downstream tasks": we assess the interventions on MMLU eval tasks encompassing 17 different areas (each with several subtasks; see Appendix F.1.1), including areas such as biology, CS, maths, etc, and with model sizes ranging from 7 B to 13 B (see Fig. 6, Appendix F.1.1). We also include MMLU humanities tasks, including logic, philosophy or history (Fig. 7). Interventions on more specific tasks in social sciences, STEM and others are evaluated separately (Fig. 8-10). This is in addition to the MWE dataset we already discussed, which assesses deviations in tens of different viewpoints and biases (Appendix F.1.2). While we subscribe to the reviewer's view that interventional experiments are crucial to establish causal explanations for interpretability, we believe that our suite of intervened tasks is already comprehensive. We will be keen to hear specific additional downstream tasks the reviewer considers are missing, and will strive to include them in the appendix of a final revision. With that said, the theoretical and observational results should not be dismissed, as they all are consistent and reinforce each other, together with the interventional results. 
> how you halt model outputs based on TuCo and how effective this is at preventing jailbreaks In Section 5.4 and Appendix F.4, we report that applying thresholds to TuCo to predict jailbreak success yields an AUC score of over 0.8 for all models under consideration except for Vicuna v1.5 13B, where it is 0.78. This means one could in principle pick a threshold (depending on one's relative tolerance for false positives and false negatives) and use TuCo to detect jailbreaks, and obtain a non-trivially-performant classifier. This means TuCo has meaningful jailbreak detection power. We remark, however, that TuCo is not intended as a jailbreak detection method. We include this experiment to display the relationship between jailbreaks being successful and them decreasing the effects of fine-tuning. Still, as the reviewer points out, this does indicate our framework produces non-trivial performance in a downstream task, despite not being designed with it in mind. The effects we are observing through TuCo are useful for predicting an important characteristic of model outputs, before the output itself is even generated. This suggests such effects are not spurious or accidental. ### Conclusion We thank you for your pointer to CAA, and hope to have addressed your concerns regarding the presence of interventional experiments and downstream task evaluations in our work, together with the initial concerns from the initial review. Given this, we would like to politely ask if the reviewer would consider increasing their score.
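The threshold-based detection described in this reply (pick a TuCo threshold, flag prompts below it, measure AUC) can be illustrated as follows. The TuCo values here are made-up toy numbers, not the paper's data, and `auc` is a plain rank-statistic implementation rather than the authors' evaluation code; it ignores ties, which is fine for the illustration.

```python
import numpy as np

def auc(scores, labels):
    """Probability that a randomly drawn positive outscores a randomly
    drawn negative (ties ignored)."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    return (pos[:, None] > neg[None, :]).mean()

# Toy per-prompt TuCo values; successful jailbreaks are reported to show
# *lower* tuning contribution, so we score with -TuCo for the AUC and
# flag prompts whose TuCo falls below a chosen threshold.
tuco_scores = np.array([0.12, 0.18, 0.15, 0.40, 0.35, 0.45])
jailbroken = np.array([1, 1, 1, 0, 0, 0])

print(auc(-tuco_scores, jailbroken))  # 1.0 on this cleanly separated toy data

threshold = 0.25
flags = tuco_scores < threshold  # halt generation for flagged prompts
print(flags.tolist())  # [True, True, True, False, False, False]
```

In practice the threshold would be chosen from the ROC curve according to one's relative tolerance for false positives versus false negatives, as the reply notes.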
Summary: This paper investigates the impact that fine-tuning has on the forward pass representations of large language models (LLMs). The authors define the Tuning Contribution (TuCo) as a metric measuring the contribution of fine-tuned model representations as compared to pre-trained representations on the model’s forward pass for a specific input. The authors propose this metric as a tool to measure the degree of impact that fine-tuning has on individual model inputs. TuCo’s utility is assessed via empirical experiments focussing on a range of LLMs of up to 13 billion parameters, including Llama3, Gemma, and Vicuna. In a first experiment, the authors use TuCo to control model behavior by scaling the extent to which fine-tuning should contribute to the model’s final output. Second, the authors compare TuCo scores for web-crawled and chat-completion data and show that this score is substantially higher for chat-completion data. Finally, the paper shows that TuCo notably decreases when jailbreak attacks are applied to initially harmless prompts. ## Update after rebuttal I appreciate the authors' response to my questions and comments. I kept my score as it already indicates acceptance. Claims And Evidence: The claims stated in the paper are supported by empirical evidence. Methods And Evaluation Criteria: The proposed evaluation criteria are comprehensive and make sense in the context of the paper's problem statement and proposed solution. Theoretical Claims: I did not check the correctness of the proof for Proposition 4.2 in Appendix D in great detail. Experimental Designs Or Analyses: The experiments reported in Section 5 are comprehensive and technically sound. They largely contribute to a better understanding of the paper's proposed method and provide empirical evidence of TuCo's utility. Supplementary Material: I inspected the supplementary material but did not check / verify the provided code. 
Relation To Broader Scientific Literature: The paper provides a brief but detailed overview of the related literature. Section 3 (Background) of the paper is largely redundant as knowledge of Transformers as well as pre-training and fine-tuning of LLMs can in my opinion be assumed by the reader. This space is better spent on moving additional details of the empirical results out of the appendix and into Section 5. Essential References Not Discussed: None that I am aware of. Other Strengths And Weaknesses: I overall found the paper to be very well-written and easy to understand, despite presenting a complex approach to better measure contributions of pre-training and fine-tuning on model representations, and as such represents a solid contribution. The empirical evaluations are detailed and comprehensive and demonstrate TuCo's utility. The paper spends too much time focussing on "setting the scene" and providing background information as well as deriving TuCo. I believe that it would benefit from moving parts of this into the appendix and instead increase its focus on empirical evaluations in the main manuscript (critical tables and figures mentioned in Section 5 have been moved to the appendix but would help the reader understand the results better in the main manuscript). Other Comments Or Suggestions: None Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for recognizing the original contributions of our work, the comprehensiveness and soundness of our experiments, and the quality of our technical exposition. In the following, we address the reviewer's points regarding the allocation of space to background and experiments in the manuscript. > The paper spends too much time focussing on "setting the scene" and providing background information as well as deriving TuCo. I believe that it would benefit from moving parts of this into the appendix and instead increase its focus on empirical evaluations in the main manuscript We thank the reviewer for the suggestions on improving the focus of our paper. We agree that we would like to move some of the figures/tables from the appendix to the main paper, and that, for many readers, an extensive description of transformers is not required. Because the later sections depend on equations introduced in the background, it serves also to introduce necessary notation. However, we will strive to reduce it while keeping essential notation, to make space for some of the appendix's content.
Summary: This paper introduces “Tuning Contribution” (TuCo), a new method to measure how much fine-tuning affects the outputs of a large language model (LLM) on a per-prompt basis. Formally, TuCo is calculated as the ratio of the total magnitude of the "fine-tuning component" to the sum of the "pre-training component" and "fine-tuning component", each of which is computed using the model’s hidden states at every layer. Empirical results demonstrate that TuCo aligns with the controllability of model behavior during fine-tuning and has implications for LLM safety (e.g., jailbreaks decrease Tuning Contribution, especially successful attacks). ## update after rebuttal The authors' rebuttal addresses my concerns. I will keep my score. Claims And Evidence: Most of the claims are well supported (e.g., Empirical Evidence That TuCo Reflects Fine-Tuning Effects, Definition of Tuning Contribution). However, some claims need more justification: (1) the Lipschitzness assumption on the layers may be strong in practice for the theoretical bound; (2) the paper tests a nice spread of open-source models (various LLaMA 2 and 3 sizes, Vicuna, Mistral, Gemma, etc.) but only up to 13B parameters, and it is not yet shown how TuCo behaves on 30B, 70B, or even larger-scale systems; (3) other model architectures also need to be taken into consideration, like MoE structures. Methods And Evaluation Criteria: The paper’s methods and evaluations (MMLU for academic performance, curated chat datasets for alignment style, multiple jailbreak tactics for adversarial stress-testing) align well with the goal of measuring fine-tuning’s real-time influence on the model’s output. Theoretical Claims: Yes. No major issues are found. Experimental Designs Or Analyses: Yes. No major issues are found. Supplementary Material: Yes. Appendix D Proofs and E Experimental details.
Relation To Broader Scientific Literature: The paper might help contribute to the fields of mechanistic interpretability and LLM safety by tying these threads together into a computationally tractable framework for the effect of a single data point on fine-tuning. Essential References Not Discussed: N/A. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: (1) Given that TuCo is a model-dependent metric (e.g., depending on the model architecture), how do you suggest practitioners use the metric in real-world practice? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their recognition of our extensive experimental suites and the relevance of our method to interpretability, as well as their thoughtful suggestions on areas of improvement. We would like to address some of the points raised: > (1) Lipschitzness of the layers may be strong in practice in the theoretical bound In Appendix D.5, we rigorously justify our assumption of Lipschitzness for the commonly-used transformer layer with root-mean-square normalization applied before attention and MLP layers. Intuitively, normalization ensures the input of attention and MLP layers is always of bounded norm, and such layers are locally Lipschitz (or, for MLP layers, globally Lipschitz). Further, the fact that a numerical $\epsilon$ is used during normalization (i.e. one normalizes $x \mapsto \frac{x}{\sqrt{||x||^2 + \epsilon}}$) ensures the normalization map itself is Lipschitz. Hence, the resulting layers are Lipschitz. The boundedness of PTC hence also follows. > (2) The paper only tests a nice spread of open-source models (various LLaMA 2 and 3 sizes, Vicuna, Mistral, Gemma, etc.) but still only up to 13B parameters, but it is not yet shown how TuCo behaves on 30B, 70B, or even larger-scale systems. The computation of TuCo requires access to model parameters and a modified forward pass. As such, we would need to host a model ourselves to run our experiments on it. Given GPU and budget constraints, we were unable to evaluate TuCo on models of larger scale. Instead, we sought to evaluate a large suite of models up to 13B parameters to demonstrate the general applicability of our method. > (3) Other model architeictures also need to taken into consideration, like MoE structure. TuCo is agnostic to the specific architecture of model layers, and applies without modification e.g. to MoE architectures. 
We will implement and evaluate TuCo for JetMoE-8B (https://huggingface.co/jetmoe), a recent MoE model of tractable size for which pre-trained and fine-tuned checkpoints are freely available. This includes modifying the HuggingFace implementation of the forward pass to support TuCo computation and running our experimental suite, which we were unable to complete in the short rebuttal period. We will include results in a camera-ready version if this work is accepted. > (1) Given the TuCo is amodel-dependent metric (e.g., depending on the model architecture). In real-world practice, how do you suggest the practitioners to use the metric? We clarify that TuCo places very light assumptions on model architecture (i.e. only that the intermediate hidden states are updated as $x_{l+1} = x_{l} + f_\theta(x_l, l)$). In particular, TuCo does not depend on the use of any particular kind of layer (e.g. self-attention). As mentioned in the conclusion (Section 9), we suggest practitioners use TuCo to detect inputs where fine-tuning is less effective, allowing them to adjust their datasets and mitigate potential vulnerabilities. This approach not only aids interpretability research by identifying prompts that attenuate finetuning effects but also lays the groundwork for integrating adversarial attack prevention in user-facing applications. ### Conclusion In the above, we hope to have addressed the reviewer's concerns regarding justifications of our theoretical assumptions, the scale and architectures of models considered, and downstream applications for TuCo. We would like to ask if the reviewer would consider increasing their score in case their points have been addressed. Otherwise, we are happy to provide further clarification.
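The layerwise quantities described above can be made concrete with a small sketch. This is an illustrative reconstruction from the description in this thread, not the paper's exact definition: it assumes the fine-tuning component of layer $l$ is the difference between the fine-tuned and pre-trained layer updates evaluated along the fine-tuned forward pass, and the layer functions are hypothetical stand-ins.

```python
import numpy as np

def tuco(pretrained_layers, finetuned_layers, x0):
    """Sketch of a TuCo-style ratio: total magnitude of the fine-tuning
    component over the total magnitude of both components, accumulated
    across layers of a residual network x_{l+1} = x_l + f(x_l)."""
    ptc_total, ftc_total = 0.0, 0.0
    x = x0
    for f_pre, f_ft in zip(pretrained_layers, finetuned_layers):
        ptc = f_pre(x)            # pre-training component of this update
        ftc = f_ft(x) - f_pre(x)  # fine-tuning component (assumed decomposition)
        ptc_total += np.linalg.norm(ptc)
        ftc_total += np.linalg.norm(ftc)
        x = x + f_ft(x)           # fine-tuned model's residual update
    return ftc_total / (ptc_total + ftc_total)
```

If fine-tuning leaves every layer unchanged the ratio is 0, and it approaches 1 as the fine-tuning component dominates, matching the intuition that TuCo measures fine-tuning's share of the forward pass.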
From Individual Experience to Collective Evidence: A Reporting-Based Framework for Identifying Systemic Harms
Accept (poster)
Summary: This paper introduces a method for identifying systemic discrimination or harm by aggregating individual reports of adverse events. The authors formalize this as the incident database problem, where reports arrive sequentially and are analyzed to detect subgroups that experience disproportionate harm. The authors propose a sequential hypothesis testing framework that determines whether specific subgroups are overrepresented in reports of harm. Claims And Evidence: The paper proposes two statistical algorithms (a sequential Z-test and a betting-style test) that effectively identify subgroup disparities, and provides guarantees by deriving bounds on the error probability and stopping time. Methods And Evaluation Criteria: The proposed method is evaluated with synthetic and real-world datasets. The paper measures how quickly the method can detect harm, which is critical for real-world applications. Theoretical Claims: I checked the proofs of Props. 3.2 and 3.4. Experimental Designs Or Analyses: The experimental design in the paper is generally sound and well-structured, with strong empirical validation. I have checked the real-world and simulated experiments. For the simulated experiment, it would be better to show results from synthetic experiments with different parameters for analysis. Supplementary Material: I didn't review the supplementary materials. Relation To Broader Scientific Literature: Earlier work formalized fairness auditing as a batch hypothesis testing problem; this paper formulates the problem as sequential hypothesis testing, enabling the use of existing methods for sequential hypothesis testing. Essential References Not Discussed: I'm not aware of essential references that are missed. Other Strengths And Weaknesses: - The paper tackles a well-motivated real-world problem. - The paper experiments with both synthetic and real-world datasets.
- The clarity and structure of the paper could be improved, for example, by explicitly stating the problem definition in a more direct and structured manner. - The evaluation of the experiments seems limited. For example, an explicit false positive/negative rate analysis could help the reader understand better. - The assumption that baseline rates $\mu_G^0$ are known is unclear in practice. Other Comments Or Suggestions: NA Questions For Authors: Are there other existing methods that could be compared to the proposed approach? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thanks for your time writing the review! We have grouped responses to your comments below. If there are any further weaknesses in the work that are concerning for you, please don’t hesitate to let us know. **Experiments** > _“For the simulated experiment, it would be better to show the result from synthetic experiment with different parameters for analysis.”_ To clarify, the mortgage experiments in 5.2 are only partially simulated: they draw from real-world data, and the component that we simulate is reporting behavior, which is exactly the main parameter of interest for our problem setting. To this end, we simulate three different patterns of reporting, as discussed on L396. In Appendix D.2, we give more details on these simulated reports, as well as how they relate to the parameters that affect modeling (i.e. $\rho_G/\rho$) that were discussed in Section 3.1. As our computations (in D.2) show, the three ways that we simulate reports do correspond to meaningfully different values of the parameter $\rho_G/\rho$. If you meant something else by “parameter,” please let us know! > _“a explicit false positive/negative rate analysis could help reader understand better.”_ In our problem setting, we felt that false positives/negatives were not quite the right abstraction for understanding the performance of algorithms. Thus, while we covered measures that are conceptually similar, we did not use the language of false positives/negatives explicitly. We will outline how these ideas relate below. For false positives: The notion of “false positive” in our setting is subtle. For the pure hypothesis testing setting, a false positive would be a group for which $\mu_G \leq \beta\mu_G^0$ but is returned by the algorithm; the likelihood of this error is what is provably controlled at level $\alpha$. For both the vaccine and mortgage experiments, all tests identified groups where $\mu_G \geq \mu_G^0$, i.e. “FPR” of 0. 
For our application, we are also interested in a notion of “true” harm — i.e., we hope that the groups identified by the algorithms actually reflect groups that (post-hoc) we know to have been harmed. For the vaccine experiment, this was broadly the “young men” category; all our algorithms only identified the groups (M, 12-17) and (M, 18-24), also suggesting a “FPR” of 0. For the mortgage experiment, we wanted to identify groups with a high relative risk of denial in general, but there are not necessarily hard cutoffs for what would have counted as a “true/false positive.” Tables 2-3 show that our algorithms generally found groups with high true relative risk, suggesting a low “FPR” overall. For false negatives: Because our tests are sequential and could run for arbitrarily many timesteps, it is impossible to ever fully conclude that a non-null hypothesis has not been rejected. (In fact, our power results indicate that, in the limit of $t \to \infty$, both our proposed tests will, with probability 1, identify any group with $\mu_G > \beta\mu_G^0$.) Furthermore, all of our algorithms stop fairly quickly (see Tables 1, 2, 3) in almost all the settings we test. As we discuss in Table 3, there are a handful of settings where a group has not been identified within 40k steps; heuristically, these could be considered “false negatives” and we report those rates in Table 3. **Writing (clarity & structure; problem definition)** While our work does involve many moving parts, we have done our best to keep the presentation modular. Section 2 is focused fully on notation and model; the beginning of Section 3 gives the problem statement explicitly; and the beginning of Section 4 outlines our general solution concept. We would love to hear if there are specific aspects of the presentation that were unnecessarily confusing, or any concrete suggestions for revision that would improve clarity. 
**Knowledge of base rates** > _“Assumption that baseline rates ($\mu_G^0$) are known, which is unclear in practice.”_ We believe that there is a strong case to be made that it is reasonable to expect $\mu_G^0$ to be tracked by system owners. For example, it is already mandatory, by the Home Mortgage Disclosure Act, for banks to record and publicize the demographic details of all home loan applicants, and the CDC tracks vaccine uptake rates. Generally, we expect that most organizations track some internal metrics for system usage even if this information is not released publicly. On the other hand, we hope that for future systems, our work is one motivation to actively ask or mandate organizations to collect/share this data. **Other methods** We considered the question of baselines for this problem carefully, but for our problem setting, the two algorithms proposed in Section 4 are adaptations of the main approaches to sequential testing given in the literature. One notable alternative that we excluded was Wald’s SPRT, which requires making more parametric decisions and is thus not directly comparable.
Summary: The authors propose a framework to identify subgroups that are more likely to experience adverse events in an incident database. To this end, they construct two algorithms that can deal with sequentially arriving events to perform hypothesis testing. They show that their algorithms work nicely in empirical practice. Claims And Evidence: The authors break the argument for their claim (identifying subgroups with adverse events by leveraging reports of negative interactions) into three parts: 1) They relate reported incidence rates to true incidence rates, under assumptions on the reporting behavior of the group (proofs in appendix). 2) They show the theoretical validity and power of their two algorithms (proofs in appendix). 3) They further support this with convincing empirical evidence. They show how assumptions on the reporting behavior might be chosen and how to relax them bit by bit (while still performing valid tests). Methods And Evaluation Criteria: Their approach addresses how to assess fairness claims from individual reports, and how to do so on a regular basis. Hence, they show a way to implement mechanisms to ensure fair treatment by AI systems in practice. Theoretical Claims: I've checked the proofs of Proposition 3.2, Proposition 3.4, Theorem 4.1 on validity of the sequential z-test, and Theorem 4.3 on validity of the betting-style test. They are well-written and sound. There is one tiny error in the proof of Theorem 4.1. When showing that $M$ is a supermartingale, they take the $\exp(-t\eta^2/8)$ out of the conditional expectation in the second equality, but the factor written inside is still $\exp(-(t+1)\eta^2/8)$ instead of the remaining $\exp(-\eta^2/8)$. In my opinion, it would be worth showing the two steps for the next inequality in detail (upper-bounding $-\beta\mu$ by minus the expectation, and sub-Gaussianity). Showing the supermartingale property is the crucial proof step after all, and might be checked by others too.
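For reference, the factoring step in question typically reads as follows in the standard argument (a generic sketch with hypothetical notation, assuming $M_t = \exp(\eta \sum_{i \le t}(X_i - \beta\mu) - t\eta^2/8)$ with increments $X_i \in [0,1]$; not copied from the paper's proof):

```latex
\begin{aligned}
\mathbb{E}[M_{t+1} \mid \mathcal{F}_t]
  &= e^{-(t+1)\eta^2/8}\, e^{\eta \sum_{i \le t}(X_i - \beta\mu)}\,
     \mathbb{E}\!\left[e^{\eta (X_{t+1} - \beta\mu)} \,\middle|\, \mathcal{F}_t\right] \\
  &= M_t\, e^{-\eta^2/8}\,
     \mathbb{E}\!\left[e^{\eta (X_{t+1} - \beta\mu)} \,\middle|\, \mathcal{F}_t\right] \\
  &\le M_t\, e^{-\eta^2/8}\,
     e^{\eta\left(\mathbb{E}[X_{t+1} \mid \mathcal{F}_t] - \beta\mu\right) + \eta^2/8}
   \;\le\; M_t,
\end{aligned}
```

where the first inequality is Hoeffding's lemma ($1/2$-sub-Gaussianity of a $[0,1]$-valued variable) and the second uses the null hypothesis $\mathbb{E}[X_{t+1} \mid \mathcal{F}_t] \le \beta\mu$; the leftover $e^{-\eta^2/8}$ after factoring is exactly the term flagged here.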
Experimental Designs Or Analyses: The authors show two experiments, one fully empirical (real-world data and reporting) and one semi-synthetic (real-world data and synthetic reporting). They are well-conducted and give insights into the expected stopping times of their algorithm. Supplementary Material: I checked the related work, practical considerations, and most of the proofs (see Theoretical Claims). Relation To Broader Scientific Literature: They expand the literature on sequential testing for monitoring adverse incidents, with a focus on group fairness. Essential References Not Discussed: No. Other Strengths And Weaknesses: The paper is well-written and addresses an important question, especially in light of the recent political developments. Other Comments Or Suggestions: The authors could also stress in the main paper that it is statistically valid to rerun tests with different $\beta$. This is done in the empirical results, but the argument that it is statistically valid is only mentioned in the appendix. Questions For Authors: 1. The authors mention in the practical considerations that varying baseline preponderance can be handled under their framework. How? Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: Thanks for your time writing the review and reading our paper! It is a good catch on the 4.1 proof, and we agree it would be clearer to break it up as you suggested — we’ll do so in the revision! To answer your question about handling variations in $\mu_G^0$, [1] show in Section 3 how to extend their standard algorithm to handle a variety of settings where the problem varies over time. These properties come almost “for free” from the testing by betting setup, and doing something analogous for our betting-style algorithm is straightforward despite our tests themselves being different. We glossed over this point a bit in the version of the draft we submitted but will be more explicit in our revision. [1] Chugg et al., Auditing Fairness by Betting
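The "testing by betting" setup referenced here can be sketched in a few lines. This is a generic e-process construction under stated assumptions, not the paper's exact algorithm: `x` indicates whether report $t$ comes from group $G$, `base_rate` plays the role of $\mu_G^0$, and `lam` is an illustrative fixed bet size.

```python
def betting_test(reports, base_rate, beta=1.0, alpha=0.05, lam=0.5):
    """Generic testing-by-betting sketch: wealth multiplies by
    1 + lam * (x_t - beta * base_rate) at each step. Under the null
    E[x_t] <= beta * base_rate, the wealth is a nonnegative supermartingale,
    so by Ville's inequality P(wealth ever reaches 1/alpha) <= alpha."""
    wealth = 1.0
    for t, x in enumerate(reports, start=1):
        wealth *= 1.0 + lam * (x - beta * base_rate)
        if wealth >= 1.0 / alpha:
            return t  # reject: group G is overrepresented in reports
    return None  # not enough evidence accumulated within this stream
```

Handling a drifting baseline, as asked in the question above, amounts to letting `base_rate` vary with $t$ inside the loop, which is the sense in which such extensions come almost "for free."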
Summary: This paper introduces methods for identifying subgroups disproportionately affected by AI-related harms. It does so by applying sequential hypothesis testing methods to a stream of incidents incoming into a database. Two methods are proposed: sequential Z testing and “betting-style” approach where the test essentially “bets against” the null hypothesis. The paper includes some theoretical results on validity and shows that the two algorithms are essentially equivalent from a validity perspective. The work also tests two real-world examples: myocarditis reports from COVID-19 vaccines and mortgage allocations. In both cases, they report empirical “times to first alarm” from their tests as well as relative risk metrics. Claims And Evidence: The claims in the work do seem to be supported by clear and convincing evidence. Both theoretical proofs and empirical results on real datasets are provided. I do not find any problematic claims. Methods And Evaluation Criteria: The methods and evaluation criteria are appropriate for the problem being solved. The framing of the problem as a sequential hypothesis test is both a novel formulation and clever method to elucidate impacted groups. Theoretical Claims: I did not verify the proofs in detail. Experimental Designs Or Analyses: As far as I could tell without running code or exploring data myself, the experimental designs and analyses seem sound. Supplementary Material: I did not review the supplementary material. Relation To Broader Scientific Literature: This work is an important contribution to the broader literature on AI harms. Previous work has established and described some of the incident databases of the type described in this paper. Others have taxonomized systemic risks of harm to people. None, to my knowledge, have proposed such a method for identifying groups that are more likely at risk of harm based on existing and incoming incident reports. 
Essential References Not Discussed: I am not aware of any essential references that were not discussed. Other Strengths And Weaknesses: The primary strength of this paper is in its clever formulation of the subgroup detection problem as a hypothesis testing problem. I think this gives the method considerable flexibility to detect a wide variety of vulnerable groups. The work is clearly organized and presented, and the experimental results are convincing. I don’t really see any major weaknesses with this work. I think it’s a well done contribution that is important and unique. It should be in ICML because it deals with the ever more important realm of AI harm detection in a novel and sound way. Other Comments Or Suggestions: N/A Questions For Authors: 1. Given the times to first alarm that you observe, do you have a sense of how well these methods might perform in terms of runtime, etc. in practice in a real system? For example, could you see this being implemented on the existing AI incident database at https://incidentdatabase.ai/. 2. Do you have thoughts on how to handle a situation where you do not have access to sensitive group variables or demographic data? Would it be possible to model the group membership as a latent variable with some uncertainty? Or try to extract relevant variables from the report itself? Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: Thanks for your time writing the review and the thoughtful questions! (Q1) We are overall optimistic about how our methods might work for a real-world system, and in future work we hope to develop and/or highlight collaborations with practitioners with real incident reporting databases. The linked AI incident database is not directly compatible with our framework (see also our discussion in Appendix A). While incidentdatabase.ai collects one-off stories of problematic incidents, they are not necessarily linked to specific systems — any incident with any AI system is eligible for report — which makes it hard to make claims about patterns of problems with specific systems. While perhaps those incident reports could be separated by an associated AI system, an additional challenge is to formalize and identify appropriate “base rates” or “subgroups” for this setting. (Q2) This is definitely a question on our minds for future work. One related work we mention [1] identifies subgroups by clustering online, but the task of running a hypothesis test is much more stringent than making predictions. Dealing with the intricacies of a sequential hypothesis test is the main technical challenge in this setting — such an approach must address data reuse for both identifying subgroups and running a valid test (i.e., avoiding the sequential analogue of p-hacking). [1] Dai et al., Learning With Multi-Group Guarantees For Clusterable Subpopulations
Summary: This paper studies the problem of identifying systemic harms through individual reporting mechanisms, using incident databases where individuals can report negative interactions with a system (such as loan denials or vaccine side effects) to identify subgroups disproportionately experiencing harm. The authors frame this as a sequential hypothesis testing problem and, for each subgroup, test whether that group is overrepresented in reports relative to their representation in the base population by a factor of β. Under their assumptions about reporting behavior, this overrepresentation serves as a proxy for actual disparities in harm. Two approaches are considered for operationalizing this: a sequential Z-test and a betting-style approach. The authors present results on two real-world applications: identifying myocarditis risk from COVID-19 vaccines in young men, and detecting racial disparities in mortgage loan approvals. In both cases, the methods successfully identify known instances of disproportionate harm using only a fraction of the data that was originally used to discover these issues. Claims And Evidence: Yes, I think the technical claims made in this paper are well supported. The authors provide a nice formulation of the problem in terms of hypothesis testing, with Theorems 4.1-4.4 establishing the control of false positives and the power of the proposed approaches. It would seem that we may encounter difficulty as the number of groups grows large, but I don't feel that is a fundamental flaw. Methods And Evaluation Criteria: Yes, I think the evaluation is reasonable. I'm not a public health expert so I'm not sure whether the scenarios are exactly aligned with what would be used in practice, but to my reading this makes sense. Theoretical Claims: Yes, I read through all proofs, and they are all sound.
Experimental Designs Or Analyses: See above re: my comment on not being a subject matter expert, but the designs themselves are good and demonstrate the central claims of the paper. Supplementary Material: Yes. I read the entire supplement. Relation To Broader Scientific Literature: This work builds on pre-deployment auditing and batch post-hoc methods (Cen & Alur, 2024; Cherian & Candès, 2023) by creating a continuous post-deployment monitoring framework, dynamically discovering affected subgroups rather than relying solely on predefined protected categories, sharing philosophical goals with multicalibration (Hebert-Johnson et al., 2018). From a statistical perspective, the work applies sequential testing frameworks to fairness monitoring, connecting with recent applications by Chugg et al. (2024) and Feng et al. (2024) but with distinct test objectives. The betting-style algorithm leverages cutting-edge advances in sequential hypothesis testing (e.g., Waudby-Smith & Ramdas, 2024). Essential References Not Discussed: N/A Other Strengths And Weaknesses: I found this paper to be well described and implemented. The paper creatively combines individual reporting, sequential testing, and fairness auditing into a coherent framework. This integration addresses a real gap in current practice for assessing disparate impacts. I thought the authors do a nice job of looking at multiple statistical approaches and assessing their efficacy. The empirical evidence is also quite good. My main concern is how scalable this method would be in practice, but I believe that to be second order. Other Comments Or Suggestions: N/A Questions For Authors: 1. What guarantees or assumptions are needed regarding access to the reporting system across different demographic groups?
Since the method relies on comparing the proportion of reports from a group to their base population proportion, would systemic barriers to report submission (e.g., technology access, language barriers) impact the validity of your conclusions? 2. While you demonstrate effectiveness with 29 and 115 groups respectively, how does your approach scale to settings where the number of potential subgroups grows combinatorially with the number of features? Are there modifications to make this more efficient beyond the Bonferroni correction? Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for your time writing this review and the thoughtful questions! (Q1) This is a good question --- differential rates of (access to) reporting is something that we’ve thought about a lot. In the current version of this work, this can be modeled with the group-specific reporting parameters discussed in Section 3 — and, though we don’t focus on it in the main exposition, in principle a different $\beta$ could be set for each group. That said, while it is natural to model known underreporting (e.g. that arises due to access reasons), the current version of our framework doesn’t help with identifying or estimating the _degree_ of underreporting (which, e.g., some of the related work on L80-86 addresses). We think better understanding this question is an important direction of future work. (Q2) Our theory shows that the stopping time increases by only a logarithmic factor in the number of groups, and only additively (rather than multiplicatively). Thus, for settings where the number of groups is combinatorially large (e.g. we have $2^d$ groups in the case of $d$ binary features), we would expect an additive impact on the stopping time of approximately $O(d)$. Our experiments with 29 and 115 groups suggest that the impact of Bonferroni in practice is even less pronounced than the $\log(|\mathcal{G}|)$ suggested by theory, and we suspect this to be true in general. As for algorithmic improvements, it is not obvious that current mathematical tools allow for any improvement over the Bonferroni correction. Some recent developments in sequential testing with e-values can handle composite null testing (e.g. [1]) — however, their guarantee is subtly different, in that they are only able to confirm that a harm _has_ occurred to one of the groups, but not identify which one it was. This is an area of future work we are definitely interested in, though it seems that it will require developing more sophisticated theory. [1] Cho et al.,
Peeking with PEAK: Sequential, Nonparametric Composite Hypothesis Tests for Means of Multiple Data Streams
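The claimed additive, logarithmic dependence on the number of groups can be illustrated with a toy calculation (a back-of-the-envelope model assuming each group's test wealth grows at a constant per-step rate in log-scale; `growth_rate` is hypothetical):

```python
import math

def time_to_alarm(num_groups, alpha=0.05, growth_rate=0.01):
    """Steps until a wealth process with constant per-step log-growth
    clears the Bonferroni-corrected rejection threshold num_groups / alpha."""
    return math.log(num_groups / alpha) / growth_rate
```

With $2^d$ groups the threshold is $\log(2^d/\alpha) = d\log 2 + \log(1/\alpha)$, so even squaring the number of groups only adds a constant number of steps, matching the additive $O(d)$ impact described in the rebuttal.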
Understanding Model Ensemble in Transferable Adversarial Attack
Accept (poster)
Summary: The authors investigated the issue of transfer attacks based on ensembles. They provided a theoretical framework for the transferability of adversarial examples, which can be controlled by their loss and the variance among models. The authors conducted some experiments to validate their theoretical findings. Claims And Evidence: Most of the claims are clear and reasonable. Methods And Evaluation Criteria: This paper does not introduce new methods or datasets. Theoretical Claims: There are some flaws in the authors' theory. Firstly, in transfer attacks, the surrogate model and the target model have different model architectures and parameter numbers. However, the authors implicitly assume that the surrogate and target models share the same parameter space (right column, Line 111-112), which significantly limits the generalizability of their theoretical results. Secondly, I suggest revising some of the notation. It is strange that (x, y) represents an adversarial example (right column, Line 121-122). Finally, the theoretical results do not seem particularly novel, as they represent a widely accepted conclusion. Experimental Designs Or Analyses: The experiments are limited. The authors' experimental results do not differ significantly from those in previous empirical studies. Additionally, the authors mentioned conducting experiments on ImageNet, yet I could not find any ImageNet results. The model architectures and attack methods used in the experiments are also limited. The authors could include vision transformers and more transfer attack methods. Supplementary Material: I have read the theoretical proof in the Appendix. Relation To Broader Scientific Literature: The authors establish a theory regarding ensemble transfer adversarial attacks. Essential References Not Discussed: Some theoretical papers have been overlooked, e.g., [1]. [1] Transferability Bound Theory: Exploring Relationship between Adversarial Transferability and Flatness, NeurIPS 2024.
Other Strengths And Weaknesses: The authors' theory appears to offer little benefit for the future development of transfer adversarial attacks. It is also unclear how tight the suggested bound is. Other Comments Or Suggestions: I have no further comments. Questions For Authors: Please refer to the above points. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you very much for your constructive comments! We address all your questions and concerns in the following responses. >**Q1**: In transfer attacks, the surrogate model and the target model have different model architectures and parameter numbers. However, the authors implicitly assume that the surrogate and target models share the same parameter space (right column, Line 111-112), which significantly limits the generalizability of their theoretical results. **A1**: Thank you for your comments. - Firstly, in Section 4.2 (Remark 2), we explicitly discuss how Theorem 4.3 can be generalized to cases where the surrogate and target models have distinct parameter distributions. This addresses part of the concern regarding parameter space assumptions. - Secondly, as the first theoretical work to formally link adversarial transferability with generalization theory, our primary contribution is a foundational framework rather than an exhaustive treatment of all scenarios. To the best of our knowledge, transferable model ensemble adversarial attacks remain theoretically underexplored, and the field still lacks work that explains them theoretically. We hope our work inspires follow-up studies to fully address the problems you mention in the future. >**Q2**: I suggest revising some of the notation. It is strange that (x, y) represents an adversarial example (right column, Line 121-122). **A2**: Thank you for your constructive comments. We agree with your suggestion and will use $(x^{\text{adv}}, y)$ instead to represent an adversarial example in our final version. >**Q3**: The theoretical results do not seem particularly novel, as they represent a widely accepted conclusion. **A3**: We sincerely appreciate the reviewer's comments. Firstly, we're encouraged that multiple other reviewers (uk2i, kypp, and Ejwu) have explicitly recognized the novelty of our theoretical framework in their comments. It suggests our approach does offer fresh insights to the field.
Secondly, we theoretically analyze and validate three important practical guidelines for improving adversarial transferability: (1) incorporating more surrogate models, (2) increasing their diversity, and (3) reducing their complexity in cases of overfitting. Our work provides the first theoretical framework to explain these empirical observations in adversarial transferability, while offering practical insights for algorithm design. We hope this foundation inspires deeper understanding and future advances in the field. >**Q4**: The experiments are limited. The authors' experimental results do not differ significantly from those in previous empirical studies. Additionally, the authors mentioned conducting experiments on ImageNet, yet I could not find any results on ImageNet. The model architectures and attack methods used in the experiments are also limited. The authors could include visual transformers and more transfer attack methods. **A4**: We appreciate the reviewer's constructive feedback, which has helped us strengthen our work. Our experimental design was carefully crafted to validate our novel theoretical contributions previously unexplored in the literature. In response to the reviewer's valuable suggestions, we have expanded our experiments to include additional attack methods, diverse model architectures (including transformers), and a larger dataset (ImageNet). These new experiments, detailed in our response to Reviewer Ejwu's Question 3, provide important insights for future research in transfer adversarial attacks. Furthermore, in our response to Reviewer uk2i's Question 1, we have also expanded the experimental validation by employing another effective attack methodology. This additional analysis not only confirms our theoretical findings but also broadens the scope and strengthens the practical implications of our research. >**Q5**: Some theoretical papers have been overlooked, e.g., [1]. 
> >[1] Transferability Bound Theory: Exploring Relationship between Adversarial Transferability and Flatness, NeurIPS 2024. **A5**: We appreciate you bringing this work to our attention. We will ensure proper citation of [1] in our final version. While [1] examines flatness and transferability, we establish the theoretical link between statistical learning theory and adversarial transferability. Our framework provides new insights into transferability through the lens of generalization theory, offering complementary (rather than overlapping) perspectives to [1]. We’re happy to further clarify these differences if needed. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' response. However, I find the theoretical contributions of the proposed variant insufficiently compelling for generalizing bounds across diverse model architectures and parameterizations. The domain distance (Line 1816) is defined as maximum per-sample loss between the proxy model and the target model over the sample space. Any adversarial difference is inherently bounded by the domain distance, making the resulting bound appear vacuous in practice (Line 1831). Moreover, the empirical validation is inadequate. The authors should benchmark the proposed method against SOTA attacks and conduct isolated evaluations rather than just combined evaluation ("proposed method + existing techniques"). [1] Learning to transform dynamically for better adversarial transferability, CVPR 2024. [2] Transferability Bound Theory: Exploring Relationship between Adversarial Transferability and Flatness, NeurIPS 2024. --- Reply to Comment 1.1.1: Comment: Thank you for your feedback on our theoretical analysis in Appendix D.4.2. We emphasize that Appendix D.4.2 is not intended as a core contribution of our work, and it can be deleted without affecting the overall contribution of this paper because Appendix D.4.1 serves a similar role as Appendix D.4.2. 
Regarding the experiments, our method consistently enhances SVRE across diverse model architectures, and SVRE itself is one of the SOTA attack methods in adversarial transferability. The methods in [1-2] are also not ensemble attack algorithms and cannot be compared in our validation experiments. While the focus of this paper is theory, engineering an attack algorithm that outperforms the SOTA is out of the scope of this work. We will also ensure that [2] is properly cited in Sections 2.1, 2.2 and Appendix C.1 in our revision. We will clarify that [2] represents the first theoretical study in this field (although our theoretical framework remains fundamentally distinct from theirs). [1] Learning to transform dynamically for better adversarial transferability, CVPR 2024. [2] Transferability Bound Theory: Exploring Relationship between Adversarial Transferability and Flatness, NeurIPS 2024. --- Finally, we reiterate the key contributions of our work, and we genuinely hope to earn your support. The primary objective of our research is to construct a theoretical framework that bridges statistical learning theory with adversarial transferability. As demonstrated in Section 4.3, the analogy between them has already inspired numerous studies to develop innovative attack algorithms. Our main contributions include: 1. Introducing transferability error, diversity, and ensemble complexity as novel analytical tools for adversarial transferability research, drawing inspiration from learning theory literature [1] (Section 3 and Appendix B.1) 2. Proposing vulnerability-diversity decomposition for both squared loss and KL divergence loss, extending concepts from bias-variance decomposition [2] and ensemble learning [3] to explain the effect of ensemble attack algorithms (Section 4.1 and Appendix B.2-B.3) 3. 
Deriving an upper bound for ensemble complexity in adversarial transferability through analysis inspired by Rademacher complexity bounds [4] (Section 4.2 and Appendix A.1-A.4) 4. Establishing a transferability error bound using ensemble complexity and novel information-theoretic tools to address the "independent surrogate models assumption", building upon uniform convergence theory [1] and recent information theory advances [5] (Section 4.2 and Appendix B.4) 5. Developing an information-theoretic analysis of transferability error inspired by information-theoretic generalization error analysis [6] (Section 4.2 and Appendix B.5) These novel contributions systematically extend and unify results from papers spanning 1992 to 2024 within a cohesive theoretical framework. We also discuss dozens of recent papers on adversarial transferability. Beyond these theoretical innovations, we have: 1. Conducted comprehensive validation experiments across three datasets to substantiate our theoretical claims, supplemented by additional experiments including the evaluation of another attack algorithm, the role of disjoint training sets, a further explanation of ensemble model complexity, and a practical algorithm demonstration (Section 5.1-5.2 and rebuttal) 2. Provided intuitive examples, extensive analyses, and thorough discussions of related works to elucidate the connections between our findings and existing understanding of adversarial transferability (Appendix D.1-D.8) 3. Extended our theoretical framework through preliminary explorations of alternative parameter spaces (Appendix D.4.1-D.4.2) Given the novel theoretical contributions of our work, the additional experimental validation provided during rebuttal, and the consistent positive evaluations of Reviewers uk2i, kypp, and Ejwu, we sincerely hope you recognize the significance of this paper. [1] Rademacher and gaussian complexities: Risk bounds and structural results. JMLR 2002. 
[2] Neural networks and the bias/variance dilemma. Neural computation 1992. [3] Diversity and generalization in neural network ensembles. AISTATS 2022. [4] Size-independent sample complexity of neural networks. COLT 2018. [5] Concentration without independence via information measures. TIT 2024. [6] Information-theoretic analysis of generalization capability of learning algorithms. NeurIPS 2017.
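As a side note on contribution 2 above: for the squared loss, the vulnerability-diversity decomposition parallels the classical ambiguity decomposition from the ensemble learning literature [3]. A minimal numerical sketch (with synthetic predictions and targets, not the paper's actual models) is:

```python
import numpy as np

# Synthetic stand-ins for N surrogate-model predictions on n points.
rng = np.random.default_rng(0)
N, n = 5, 100
preds = rng.normal(size=(N, n))   # f_i(x) for each surrogate model i
y = rng.normal(size=n)            # targets

ens = preds.mean(axis=0)                    # ensemble prediction
ensemble_loss = np.mean((ens - y) ** 2)     # loss of the ensemble
avg_loss = np.mean((preds - y) ** 2)        # average individual loss ("vulnerability")
diversity = np.mean((preds - ens) ** 2)     # spread around the ensemble ("diversity")

# Ambiguity decomposition: ensemble loss = average loss - diversity,
# so more diverse surrogates lower the ensemble loss at fixed average loss.
assert np.isclose(ensemble_loss, avg_loss - diversity)
```

The identity holds per sample, so it survives averaging over data; the paper's vulnerability-diversity decomposition is stated for the attack objective, which this toy identity only mirrors.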
Summary: The paper presents a theoretical framework for model ensemble adversarial attacks, focusing on transferable adversarial examples. It defines transferability error, diversity, and Rademacher complexity, and decomposes transferability error into vulnerability and diversity. The authors apply information theory to derive bounds on transferability error and suggest practical strategies for improving adversarial transferability. The framework is validated through experiments on multiple datasets. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes. Relation To Broader Scientific Literature: The key contributions of the paper are well-related to the broader scientific literature. The authors build on existing work in adversarial attacks and model ensemble methods, providing a theoretical foundation for transferable model ensemble attacks. They reference relevant studies on adversarial transferability, model ensemble diversity, and generalization in machine learning. The paper also discusses the connection between adversarial transferability and model generalization, drawing insights from statistical learning theory. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: 1. The paper introduces a new theoretical framework to understand the role of model ensembles in transferable adversarial attacks. This framework combines Rademacher complexity and information-theoretic tools, which adds some theoretical novelty. 2. The authors provide a mathematical decomposition of the transferability error in model ensemble adversarial attacks, highlighting the trade-offs between vulnerability and diversity. This new perspective contributes to the understanding of adversarial transferability. 3. The experimental results are comprehensive and validate the theoretical claims across multiple datasets and model architectures. Weaknesses: 1. 
While the framework is theoretically innovative, its practical effectiveness remains unclear. The paper does not provide a sufficient comparison with existing state-of-the-art adversarial attack methods, nor does it demonstrate the practical performance of the method. 2. The paper could benefit from a more detailed discussion of the practical implications of the theoretical results, particularly in terms of designing more effective adversarial attack algorithms. 3. The experiments could be expanded to include a wider range of model architectures and datasets to further strengthen the empirical validation. 4. The mathematical derivations in the paper are highly complex and lack intuitive explanations, which may pose difficulties for readers, especially those without a deep background in information theory. To improve its practical applicability, the paper should provide a clearer connection between the theoretical framework and real-world use cases. 5. Although the paper presents a new theoretical framework, it does not sufficiently compare it to current adversarial attack methods, such as gradient-based attacks, input transformation techniques, or other model ensemble methods. Without demonstrating that the proposed framework performs better than existing approaches, its contribution remains unclear. 6. Despite the theoretical contributions, the paper fails to demonstrate significant practical benefits or applications. The framework's real-world impact is not well established. Other Comments Or Suggestions: Suggestions: 1. The authors should include direct comparisons with state-of-the-art adversarial attack methods in the experiments, especially focusing on black-box attacks and model ensemble techniques, to highlight the advantages of the proposed approach. 2. The authors should consider simplifying the mathematical derivations and providing more intuitive explanations or examples, making the theoretical framework more accessible to a broader audience. 3. 
The authors should provide more experimental validation of the method’s performance in real-world adversarial tasks, and compare it directly with other methods to establish its practical value. 4. To improve the paper's usability, the authors should consider releasing the code and providing practical examples to help other researchers implement and test the framework. Questions For Authors: See weaknesses and suggestions above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you very much for your insightful review of our work! >**Q1**: The authors should include direct comparisons with state-of-the-art adversarial attack methods in the experiments. **A1**: We sincerely appreciate the reviewer's constructive feedback. In direct response to Reviewer uk2i's Question 1, we have significantly expanded our evaluation to incorporate additional attack algorithms as suggested. >**Q2**: The mathematical derivations in the paper are highly complex and lack intuitive explanations, which may pose difficulties for readers, especially those without a deep background in information theory. **A2**: Thank you for your constructive comments. We will provide more step-by-step explanations of the information-theoretic analysis in Appendix B.5 in the final version. >**Q3**: The paper could benefit from a more detailed discussion of the practical implications of the theoretical results, particularly in terms of designing more effective adversarial attack algorithms. **A3**: We thank the reviewer for the constructive suggestion. We show an example here to investigate how properly controlling the model complexity of surrogate models can contribute to more effective adversarial attack algorithms, which is in line with our theory. We use ImageNet as the dataset here. To conduct the model ensemble attack, we fine-tune surrogate models (VGG16, InceptionV3, and Visformer) using a sparse Softmax cross-entropy loss [1]. 
This modification encourages sparsity in the model's output distribution, and we observe in our experiments that the model complexity (the L2 norm of the weight matrix) is reduced after using such a loss:

| | VGG16 | Visformer | InceptionV3 |
|:-------------------:|:-----:|-----------|:-----------:|
| Original | 37.37 | 25.94 | 49.24 |
| Sparse Softmax Loss | 33.12 | 20.6 | 48.53 |

Building upon three advanced transfer attack methods, MI-FGSM [2], SVRE [3], and SIA [4], we propose their sparsity-enhanced variants (MI-FGSM-S, SVRE-S, and SIA-S) through the integration of the sparse Softmax loss during surrogate model training. We consider eight model architectures and measure attack performance via the attack success rate, where a higher value corresponds to stronger attack effectiveness.

| | ResNet50 | VGG16 | MobileNetV2 | InceptionV3 | ViT-B16 | PiT-B | Visformer | Swin-T |
|-----------|----------|--------|-------------|-------------|---------|--------|-----------|--------|
| MI-FGSM | 66.0 | **99.9** | 76.8 | 97.5 | 37.3 | 53.8 | 88.9 | 66.7 |
| **MI-FGSM-S** | **68.9** | 99.7 | **79.2** | **99.1** | **39.0** | **54.5** | **90.6** | **68.1** |
| SVRE | 65.2 | **99.9** | 79.0 | 98.6 | 32.4 | 49.2 | 92.3 | 64.3 |
| **SVRE-S** | **66.9** | **99.9** | **81.2** | **98.9** | **34.2** | **51.3** | **93.0** | **65.9** |
| SIA | 97.2 | **100.0** | **98.4** | **99.7** | 75.9 | 91.9 | 99.0 | 96.1 |
| **SIA-S** | **98.1** | **100.0** | 98.2 | 99.6 | **79.2** | **93.2** | **99.5** | **97.5** |

As can be seen in the table, these variants outperform their standard counterparts in most cases, demonstrating the benefit of controlling model complexity in both CNN and visual transformer settings to improve adversarial transferability. Beyond the example shown above, we believe that our work can also inspire the development of even stronger attack algorithms in the future. [1] From Softmax to Sparsemax: A Sparse Model of Attention and Multi-Label Classification. 
ICML 2016 [2] Boosting Adversarial Attacks with Momentum. CVPR 2018. [3] Stochastic Variance Reduced Ensemble Adversarial Attack for Boosting the Adversarial Transferability. CVPR 2022. [4] Structure Invariant Transformation for better Adversarial Transferability. ICCV 2023. >**Q4**: To improve the paper's usability, the authors should consider releasing the code and providing practical examples to help other researchers implement and test the framework. **A4**: We appreciate the reviewer's valuable suggestion. We are happy to release the code and provide implementation examples to facilitate reproducibility and adoption by the research community. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response and clarifications. I appreciate the effort to address my concerns, particularly the expanded experiments and additional explanations. While the theoretical contributions are valuable, I still feel the experimental section could be more comprehensive. As shown in the table, the improvements under the three advanced transfer attack methods and their sparsity-enhanced variants remain limited. Given the theoretical focus of the paper and the modest experimental gains, I am inclined to maintain my current evaluation. The work has merit, but it would benefit from more substantial and comprehensive experimental validation in future iterations. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate the reviewer's constructive feedback and continued positive evaluation of our work. As the reviewer rightly noted, our paper primarily focuses on theoretical contributions. The novelty and significance of our theoretical framework has also been acknowledged by other reviewers like uk2i and kypp. Beyond our theoretical innovations, we have also: 1. (Section 5.1-5.2) Conducted comprehensive validation experiments across three datasets to substantiate our theoretical claims. 2. 
(Rebuttal) Provided additional experiments, including - the evaluation for another attack algorithm, - the role of disjoint training set, - a further explanation of ensemble model complexity, - a practical algorithm demonstration (our method consistently enhances SVRE across diverse model architectures. Notably, SVRE itself is one of the SOTA attack methods in adversarial transferability). 3. (Appendix D.1-D.8) Provided intuitive examples, extensive analyses, and thorough discussions of related works to elucidate the connections between our findings and existing understanding of adversarial transferability. 4. (Appendix D.4.1-D.4.2) Extended our theoretical framework through preliminary explorations of alternative parameter spaces. We sincerely appreciate your valuable feedback and will revise our paper according to your suggestions, such as incorporating the additional experiments from the rebuttal. Thank you once again for your time and insightful comments.
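For concreteness, the sparse Softmax (sparsemax) operation underlying the loss in A3 above is the Euclidean projection of the logits onto the probability simplex [1]. A minimal NumPy sketch of that projection (illustrative only; the fine-tuning pipeline itself is not shown) is:

```python
import numpy as np

def sparsemax(z):
    """Euclidean projection of a logit vector z onto the probability simplex [1]."""
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]                 # logits in descending order
    k = np.arange(1, z.size + 1)
    cumsum = np.cumsum(z_sorted)
    support = 1 + k * z_sorted > cumsum         # support-size condition k(z)
    k_z = k[support][-1]                        # largest k in the support
    tau = (cumsum[k_z - 1] - 1) / k_z           # threshold
    return np.maximum(z - tau, 0.0)

# A confident logit vector yields a sparse (here one-hot) distribution.
print(sparsemax([10.0, 0.0, 0.0]))   # → [1. 0. 0.]
```

Unlike softmax, sparsemax can assign exactly zero probability to unlikely classes, which is the sparsity the response above exploits.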
Summary: The paper provides a theoretical study on the transferability of model ensemble adversarial attacks. The authors formulate the problem by considering the expected value of the attacked loss over the distribution of the model ensemble (equation 1) and the averaged attacked loss over the set of considered models (equation 2). The goal is to use concentration and uniform convergence analysis to bound the transferability error in Definition 3.1 through bounding the gap between (1) and (2) over all input samples (Lemma 3.2). The authors define the model ensemble Rademacher complexity in (8) and bound it in Lemma 4.2. In Theorem 4.3, they connect the bound on the defined Rademacher complexity to bound the transferability error. Section 5 includes experimental results supporting the theorems. Claims And Evidence: Mostly yes. First, let me say that I like the authors' idea on how to extend the mathematics of uniform convergence analysis in statistical learning theory to the transferability of model ensemble attacks. The theorems look correct and make sense to me. The only gap that I can find in the authors' analysis is the term $H_{\alpha}^{\frac{1}{\alpha}}(P_{\theta^N}\Vert P_{\otimes_{i=1}^N \theta})$. The main difference between uniform convergence analysis in statistical learning theory and the authors' formulation is that in generalization analysis, we commonly assume the samples are drawn independently from a distribution, and so there is no term $H_{\alpha}^{\frac{1}{\alpha}}(P_{\theta^N}\Vert P_{\otimes_{i=1}^N \theta})$. However, in the standard model ensemble attack scenario, the models may have been trained with fully or partially identical training data, and therefore the models could be quite correlated. Therefore, it seems to me that the term $H_{\alpha}^{\frac{1}{\alpha}}(P_{\theta^N}\Vert P_{\otimes_{i=1}^N \theta})$ could be quite large and make the bound vacuous in practice. 
I suggest the authors clearly discuss the above point in the paper, because I find it a major difference between the two problem settings. However, I still tend to rate the work positively, as I find the authors' idea very interesting. Methods And Evaluation Criteria: The experimental methodology appears well-structured and aligned with the theoretical claims. Theoretical Claims: While I have not verified every derivation in full detail, the results appear correct and consistent with existing theoretical techniques. Experimental Designs Or Analyses: One key aspect of the experimental design that needs discussion is the correlation between ensemble models due to shared training data. Have the authors examined how the results change if disjoint training sets are used for the ensemble? Based on the theoretical framework, using disjoint training sets should reduce the correlation term, thereby improving transferability. I encourage the authors to conduct and report such an experiment. Supplementary Material: I have reviewed the supplementary proofs. Relation To Broader Scientific Literature: The work extends the mathematics of uniform convergence analysis to the setting of adversarial transferability in model ensembles. The results parallel standard generalization bounds and adapt them to this new context. Essential References Not Discussed: No missing key references as far as I can tell. Other Strengths And Weaknesses: See my previous comments. Other Comments Or Suggestions: See my previous comments. Questions For Authors: See my previous comments. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you very much for your insightful review of our work! >**Q1**: ...However, in the standard model ensemble attack scenario, the models may have been trained with fully or partially identical training data, and therefore the models could be quite correlated. Therefore, it seems to me that the term $H\_\alpha^{\frac{1}{\alpha}}\left(\mathcal{P}\_{\Theta^N} \\| \mathcal{P}\_{\bigotimes\_{i=1}^N \Theta} \right)$ could be quite large and make the bound vacuous in practice. **A1**: To make it clear and easy to understand, we provide an intuitive approximation below. We choose $\alpha=10$, $\delta=0.01$, $\beta=1$ in our Theorem 4.3. Let $P=\mathcal{P}\_{\Theta^N}$ and $Q=\mathcal{P}\_{\bigotimes\_{i=1}^N \Theta}$. We consider the model parameters at a given precision so that $P$ and $Q$ are discrete distributions.
- Equation (8) from [1] tells us that $H\_\alpha(P \\| Q)=e^{(\alpha-1)D\_\alpha(P,Q)}$, where $D\_\alpha(P,Q)$ is the Rényi divergence.
- Let $\delta\_{\mathrm{TV}} \in [0,1]$ be the TV distance between $Q$ and $P$ (written $\delta\_{\mathrm{TV}}$ to avoid a clash with the confidence parameter $\delta$ above), and let $\beta\_1=\min\_{a \in \mathcal{A}} \frac{Q(a)}{P(a)}$ be defined as in Equation (8) from [2], i.e., the minimum ratio of the probability mass functions of $Q$ and $P$.
- Now we approximate $\beta\_1$. Consider there are $t$ parameter configurations for each model. For simplicity, we assume that a subset of the models ($f(N)$ models) plays a key role in adversarial transferability, and the other $N-f(N)$ models are randomly sampled from these $f(N)$ models.
- For the product of marginal distributions $Q$, the parameters of each model are random. Consider the case of a uniform distribution, where every parameter configuration of the $N$ models has the same probability, i.e., $Q(a)=\frac{1}{t^N}$.
- For the joint distribution $P$, we also consider the case of a uniform distribution, where the $f(N)$ models are fixed and the $N-f(N)$ models are randomly sampled, i.e., $P(a)=\frac{1}{t^{N-f(N)}}$.
- Therefore, $\beta\_1 \approx \frac{Q(a)}{P(a)}=t^{-f(N)}$, which is less than 1.
- Substituting the above into Theorem 3 from [2], and using $\delta\_{\mathrm{TV}} \le 1-\beta\_1$, we have $$H\_\alpha(P \\| Q) \le 1+\frac{\delta\_{\mathrm{TV}}\left(\beta\_1^{-1}-1\right)}{1-\beta\_1} \le \beta\_1^{-1} \approx t^{f(N)}$$
- Substituting the above into Theorem 4.3 in our paper, we have $$\sqrt{\frac{18 \gamma \beta^2}{N} \ln \frac{2^{2+\frac{1}{\gamma}} H\_\alpha^{\frac{1}{\alpha}}\left(\mathcal{P}\_{\Theta^N} \\| \mathcal{P}\_{\bigotimes\_{i=1}^N \Theta}\right)}{\delta}} \le \sqrt{\frac{20}{N} \ln \left(800 \cdot t^{\frac{f(N)}{10}}\right)} \approx \sqrt{\frac{140}{N} + \frac{2 f(N)}{N} \ln t}$$

Here are several cases:
1. $f(N)=\mathcal{O}(N^{s})$, where $s \in (0,1)$
2. $f(N)=\mathcal{O}(\ln N)$
3. $f(N)=s N$, where $s \in (0,1)$

For Cases 1 and 2, the above term asymptotically converges to zero as $N$ becomes large. Notably, the true Hellinger term may be smaller than our derived upper bound above. Quantifying the core subset of models $f(N)$ that dominates ensemble attack performance presents a theoretically profound and practically significant research direction. This problem is particularly well-suited for future exploration, as it could fundamentally advance our understanding of transferable adversarial model ensemble attacks. [1] https://arxiv.org/pdf/2303.07245. TIT 2024. [2] https://arxiv.org/pdf/1503.03417. arXiv preprint. >**Q2**: ...Based on the theoretical framework, using disjoint training sets should reduce the correlation term, thereby improving transferability. I encourage the authors to conduct and report such an experiment. **A2**: Thank you for your valuable insight! As suggested by the reviewer, we evaluate three settings:
- Full: All models are trained on the full training set.
- Split: Models are trained on disjoint partitions of the training data.
- Split-FT: Models are first trained on disjoint data partitions and then fine-tuned on the full dataset.

We conduct experiments on MLPs across two datasets. 
We measure attack performance via accuracy, where lower accuracy corresponds to stronger attack effectiveness.

| | MNIST | | | Fashion MNIST | | |
|:----------:|:-----:|--------|--------|---------------|--------|--------|
| $\epsilon$ | 8/255 | 16/255 | 32/255 | 8/255 | 16/255 | 32/255 |
| Split | **80.44** | **55.12** | **12.04** | 61.59 | 36.33 | **11.83** |
| Split-FT | 81.99 | 59.31 | 15.15 | **61.57** | **36.21** | 12.27 |
| Full | 84.26 | 68.05 | 23.65 | 64.35 | 41.49 | 15.65 |

The results in the table demonstrate that employing disjoint training sets (Split) indeed enhances adversarial transferability and reduces target model accuracy, consistent with our theory. Also, if we consider larger model architectures, the limited training data in each split subset may result in model underfitting, which may adversely impact attack effectiveness. We will incorporate the full results in our final version.
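The three data regimes in A2 can be sketched as follows (a hypothetical `make_splits` helper; the authors' exact training pipeline is not specified):

```python
import numpy as np

def make_splits(n_samples, n_models, seed=0):
    """Per-model index sets for the 'Split' regime:
    disjoint, roughly equal partitions of the training data."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    return np.array_split(idx, n_models)

splits = make_splits(n_samples=60000, n_models=5)

# Disjointness: no index appears in two partitions, and all data is covered.
all_idx = np.concatenate(splits)
assert len(all_idx) == len(set(all_idx)) == 60000
```

'Full' would instead train every model on all 60000 indices, and 'Split-FT' trains each model on its partition first and then fine-tunes on the full index set.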
Summary: This paper proposes novel definitions for theoretically analyzing the adversarial transferability of adversarial attacks with a model ensemble; then, it provides three practical guidelines to improve the transferability of the model ensemble attacks. Specifically, the paper first defines the transferability error, the gap between the adversarial risks between the most transferable example $z^*$ and the adversarial example $z$ that the model ensemble outputs. The paper analyzes this transferability error in two different ways. In the first analysis, the paper decomposes the population risk on $z$ into the vulnerability term (which measures the attack power of the ensemble attack) and the diversity term (which measures the diversity of the ensemble attack). This suggests two guidelines: improving the ensemble attack’s power and diversity are both beneficial to the transferability of the model ensemble attack. In the second analysis, the paper provides an upper bound of transferability. This upper bound contains two terms. The first term represents the complexity of the surrogate models, and the second term decreases as the number of surrogate models increases. This upper bound gives us another guideline: having more surrogate models with less model complexity is beneficial to the transferability of the model ensemble attack. With a set of experiments, the paper experimentally supports the theoretical findings. Claims And Evidence: The paper supports all of its claims well with proof and experimental results. Methods And Evaluation Criteria: I checked the evaluation methods and criteria. Although the evaluation can be improved further, the experiments are well-designed to support the theoretical findings. Theoretical Claims: I did not have enough time to check the proofs in the appendices. Experimental Designs Or Analyses: The experimental designs are overall well-designed, and the analyses make sense. 
In particular, the analysis of the different behaviors on the CIFAR dataset seems interesting. Supplementary Material: Most of the appendices are written for the proofs, and I’m interested in checking those proofs. However, I did not have enough time to read the proofs during the review period. I read appendices D and E, which do not contain the proofs. Relation To Broader Scientific Literature: This paper makes many contributions to adversarial machine learning. This paper provides theoretical understandings of model ensemble attacks and practical guidelines for ML practitioners. Essential References Not Discussed: The paper cited the needed references well. Other Strengths And Weaknesses: ### Strengths 1. To the best of my knowledge, all the proposed concepts (transferability error, prediction variance, and model ensemble Rademacher complexity) are novel. 2. The decomposition of the transferability error into two interpretable terms seems impressive. 3. The paper presents all of the theoretical analysis, practical discussions, and experimental validation of the proposed idea. 4. The experiments support the findings well. ### Weaknesses 1. Only one attack method is used for the evaluation. Comparing another attack method (either extremely strong or weak) would give us more insights about the vulnerability-variance tradeoffs. 2. I’m not very convinced whether or not the experiments are enough to explore the model complexity. Only three values of $\lambda$ are used, and no other quantity controls the model complexity in the experiments. 3. The effect of increasing the $\lambda$ parameter is not clearly explained, and it is unclear whether it definitely lowers the model complexity or not. In my opinion, the empirical model ensemble Rademacher complexity is computable for low values of $N$, so the model complexity can be quantified during the experiments. Other Comments Or Suggestions: 1. 
Please clarify the effect of increasing $\lambda$ in the paper and explain how it relates to the model complexity. 2. If there are other factors that can quantitatively control model complexity, consider adding more experiments involving them. 3. Please consider experiments with other attack methods. 4. A comparison between CIFAR-10 and CIFAR-100 could also be interesting. They contain the same set of images, only more labels in CIFAR-100, but this changed the variance behaviors. Does variance on CIFAR-100 experiments decrease for lower $\lambda$? Questions For Authors: * Could you explain what factors would increase/decrease the model complexity in practice? Code Of Conduct: Affirmed. Overall Recommendation: 3
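Regarding the remark in Weakness 3 that the empirical model ensemble Rademacher complexity is computable for small $N$: for a finite function class it can be estimated by Monte Carlo over random sign vectors. A toy sketch (the function class is a random matrix here, purely for illustration) is:

```python
import numpy as np

def empirical_rademacher(value_matrix, n_trials=2000, seed=0):
    """Monte-Carlo estimate of E_sigma[ sup_f (1/N) sum_i sigma_i f_i ]
    for a FINITE class, where value_matrix[f, i] is function f evaluated
    on the i-th model/sample."""
    rng = np.random.default_rng(seed)
    n_funcs, N = value_matrix.shape
    total = 0.0
    for _ in range(n_trials):
        sigma = rng.choice([-1.0, 1.0], size=N)    # Rademacher signs
        total += np.max(value_matrix @ sigma) / N  # sup over the finite class
    return total / n_trials

rng = np.random.default_rng(1)
values = rng.uniform(0, 1, size=(20, 8))           # 20 candidate functions, N=8
rc = empirical_rademacher(values)
assert 0.0 < rc < 1.0   # bounded since each entry lies in [0, 1]
```

For the paper's infinite hypothesis class the sup would itself have to be approximated, which is one reason an analytical bound such as Lemma 4.2 is useful.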
Rebuttal 1: Rebuttal: Thank you very much for your insightful review of our work! >**Q1**: ...Comparing another attack method (either extremely strong or weak) would give us more insights about the vulnerability-variance tradeoffs. **A1**: We sincerely appreciate the reviewer's constructive suggestion. We have conducted additional experiments using the VMI-FGSM attack [1] on MNIST. $\lambda=10^{-4}$ | Steps | 1 | 3 | 6 | 9 | 12 | 16 | 19 | 22 | 25 | |-------|-------|-------|-------|-------|-------|-------|-------|-------|-------| | ASR | 2.1 | 6.9 | 8.4 | 17.1 | 24.6 | 29.5 | 37.3 | 38.4 | 40.2 | | loss | 0.012 | 0.044 | 0.225 | 0.351 | 0.378 | 0.390 | 0.392 | 0.397 | 0.401 | | Variance | 0.007 | 0.025 | 0.033 | 0.019 | 0.008 | 0.006 | 0.004 | 0.003 | 0.003 | $\lambda=10^{-3}$ | Steps | 1 | 3 | 6 | 9 | 12 | 16 | 19 | 22 | 25 | |-----------|-------|-------|-------|-------|-------|-------|-------|-------|-------| | ASR | 2.2 | 7.5 | 8.3 | 16.9 | 25.4 | 30.1 | 38.2 | 38.9 | 40.6 | | loss | 0.011 | 0.039 | 0.214 | 0.337 | 0.365 | 0.381 | 0.385 | 0.392 | 0.399 | | Variance | 0.004 | 0.017 | 0.029 | 0.015 | 0.006 | 0.005 | 0.003 | 0.003 | 0.003 | The observed vulnerability-variance tradeoffs demonstrate consistent alignment with our paper. We will include the complete experimental details and analysis on other datasets in our final manuscript. [1] Enhancing the transferability of adversarial attacks through variance tuning. CVPR 2021. >**Q2**: Please clarify the effect of increasing $\lambda$ in the paper and explain how it relates to the model complexity... **A2**: Thank you for your insightful question. 
Firstly, as suggested in Lemma 4.2 and Appendix A, the empirical model ensemble Rademacher complexity can be upper bounded by model complexity and the number of ensemble components (for instance, the analysis in Section 4.2 states that reducing the weight norm of the model and increasing the number of the models will reduce the empirical model ensemble Rademacher complexity). Both effects have been reported in our experiments: - In Section 5.1, we adjust the weight decay factor $\lambda$ to change the model complexity and investigate the trends in how complexity interacts with other factors. - In Section 5.2, we increase the number of ensemble components and observe an increasing trend of attack success rate. Following the reviewer’s suggestion, we conduct a deeper investigation into the impact of model complexity by applying a max norm constraint to the model parameters. This technique limits the L2 norm of each weight vector to a predefined threshold to control the model complexity. - Larger max norms enable richer feature representations at the cost of potential overfitting. - Smaller max norms promote simpler models and may induce underfitting by limiting model capacity. As illustrated in the table below, this trade-off manifests across architectures (MLPs and CNNs with 1–3 layers) and varying max norm values. We measure attack performance via accuracy, where lower accuracy corresponds to stronger attack effectiveness.
| Max norm | FC1 | FC2 | FC3 | CNN1 | CNN2 | CNN3 | Avg | |----------|--------|--------|--------|-------|-------|-------|-------| | 0.1 | 84.66 | 87.80 | 85.39 | 97.57 | 98.31 | 98.59 | 92.05 | | 0.5 | 59.37 | 68.31 | 74.05 | 96.50 | 97.66 | 98.34 | 82.37 | | 1.0 | 64.31 | 55.27 | 57.12 | 95.37 | 97.08 | 97.93 | 77.85 | | 2.0 | 68.00 | 57.40 | 57.86 | 95.41 | 97.04 | 97.87 | 78.93 | | 4.0 | 68.19 | 57.94 | 58.12 | 95.53 | 97.00 | 97.85 | 79.11 | | 5.0 | 69.68 | 59.40 | 59.26 | 97.48 | 98.02 | 98.87 | 80.45 | The results demonstrate a clear trend: as the max norm constraint is relaxed from very small values (e.g., 0.1) to moderate levels (e.g., 5.0), the attack effectiveness first increases and then decreases. This pattern indicates that excessively restrictive constraints can impair model expressiveness, whereas an optimally tuned max norm effectively balances model complexity and representational capacity. More importantly, these findings support our paper's claim regarding the influence of the weight decay factor $\lambda$ on model complexity. >**Q3**: A comparison between CIFAR-10 and CIFAR-100 could also be interesting... **A3**: We sincerely appreciate your insightful question. As shown in our paper, the attack success rate and loss exhibit an initial increase followed by stabilization with growing steps, while the variance first rises and then declines. We fully agree with the reviewer's insightful comment regarding dataset-dependent characteristics; indeed, the inflection points vary across datasets due to their distinct characteristics. Due to the space limitations of this rebuttal, we will extend the step range and include a comparative analysis of CIFAR-10 and CIFAR-100 in Appendix E in our final version.
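The max-norm constraint described in A2 can be sketched in a few lines. The snippet below is a minimal, hypothetical numpy illustration (not the code used in the experiments): each row of `W` stands for one weight vector, and rows whose L2 norm exceeds the threshold are rescaled onto it.

```python
import numpy as np

def apply_max_norm(W, max_norm):
    """Rescale each row (weight vector) of W so that its L2 norm does not
    exceed max_norm; rows already within the bound are left unchanged."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.minimum(1.0, max_norm / np.maximum(norms, 1e-12))
    return W * scale

# Toy example: one weight vector over the bound and one under it.
W = np.array([[3.0, 4.0],   # norm 5.0 -> rescaled down to norm 2.0
              [0.6, 0.8]])  # norm 1.0 -> unchanged
W_clipped = apply_max_norm(W, max_norm=2.0)
```

In a training loop, such a projection would typically be applied after each gradient update, so that the weight norms (and hence the complexity term in the bound) stay controlled throughout training.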
TANGO: Clustering with Typicality-Aware Nonlocal Mode-Seeking and Graph-Cut Optimization
Accept (poster)
Summary: The paper introduces TANGO (Typicality-Aware Nonlocal Mode-Seeking and Graph-Cut Optimization), a clustering algorithm that leverages typicality, a global measure of a point's confidence to be a mode, to address the limitations of traditional mode-seeking methods that rely on local data characteristics and case-by-case threshold settings. TANGO integrates typicality-aware mode-seeking with graph-cut optimization and an improved path-based similarity to aggregate data into clusters. Experimental results on synthetic and real-world datasets demonstrate TANGO's effectiveness and superiority over state-of-the-art clustering algorithms. ## update after rebuttal Our evaluation remains unchanged after reviewing the rebuttal; the paper deserves an Accept (4) score. Claims And Evidence: Yes, the main claims in the paper are supported by clear and convincing evidence. Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria are well-suited for the problem and application at hand. Theoretical Claims: Yes Experimental Designs Or Analyses: Yes Supplementary Material: Yes, I reviewed the supplementary material, specifically focusing on the ​code implementation Relation To Broader Scientific Literature: The paper advances clustering by introducing ​typicality, a global measure inspired by ​PageRank, to address limitations of local mode-seeking methods like ​Mean Shift and ​DPC. It integrates ​graph-cut optimization and ​path-based similarity, building on spectral clustering techniques. Theoretical analysis ensures efficiency, while experiments on diverse datasets demonstrate superiority over state-of-the-art methods, bridging local and global perspectives in clustering. 
Essential References Not Discussed: no Other Strengths And Weaknesses: The paper’s strengths lie in its originality, introducing typicality as a global measure inspired by PageRank, and its significance, demonstrated through superior performance on diverse datasets and theoretical rigor. It is well-structured and clear, with effective illustrations. Weaknesses include sensitivity to the hyperparameter $k$, potential scalability issues with path-based similarity, and limited validation on highly noisy or imbalanced datasets. Other Comments Or Suggestions: 1. It is necessary to further explain how Typicality adjusts contributions through density-weighted mechanisms (e.g., rank-based dependencies in Eq. (6)), distinguishing it from PageRank’s uniform jump probability assumption. Additionally, clarify how the recursive formula (Eq. (1)) reflects the "attraction" mechanism of density peaks. 2. Add runtime comparisons between TANGO and other algorithms on datasets of varying scales in Table 2 or the appendix to validate the practical efficiency of the claimed time complexity $O(nk^2d)$. 3. Expand Figure 10 to include experiments isolating the contributions of sub-cluster generation (Typicality module), path-based similarity (PBSim), and spectral clustering, demonstrating the necessity of their synergy. 4. Analyze the fluctuations in TANGO’s performance with varying $k$ values in Figure 9 5. Provide 1–2 cases where TANGO underperforms to discuss its limitations. 6. Define the variable $p$ in $O(p^3 + n)$ (whether it aligns with the $p$ in Algorithm 3). 7. Discuss classical spectral clustering methods in the related work section. Discuss why spectral clustering was chosen for merging sub-clusters, unlike hierarchical clustering in other DPC-based techniques. 8. The adoption of SNN-based density estimation indeed enhances the TANGO algorithm's performance, as it better captures local structures, delivering superior capability in generating sub-clusters. 
While other comparative algorithms likely rely on simpler density estimation methods, it remains to be verified whether the proposed algorithm would maintain its performance advantage, as demonstrated in the experiments, if the comparative algorithms (e.g., LDP-MST, DPC, DEMOS, LDP-SC) were to adopt equivalent density estimation techniques. 9. Replace the equality symbol in Line 253, “$T(x_j) = T(x_j) + T(x_i) \cdot B_{ij}$”, with the assignment operator “$T(x_j) \leftarrow T(x_j) + T(x_i) \cdot B_{ij}$”. 10. Add a footnote to the “Par.” column in Table 2 to clarify it represents “hyperparameter settings”. 11. Fix inconsistent citations (e.g., “Eq.2” → “Eq. (2)”). Questions For Authors: no Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you so much for reviewing our paper. We answer your main concerns below. Concern about the validation on highly noisy or imbalanced datasets: TANGO can indeed perform well on noisy and imbalanced datasets such as "cluto-t4-8k", "cluto-t5-8k", "cluto-t8-8k", "cluto-t7-10k" and "unbalance". This may be because the mode-seeking step naturally alleviates the impact of noise, while path-based spectral clustering is well suited to capturing complex data distributions. We can include these in the experiments as an additional discussion. Other Comments: 1: From the PageRank perspective, for each point $x_i$ and its nearest higher density neighbor (leader) $x_j$, $B_{ij}$ denotes the probability of $x_i$ jumping to $x_j$, and $x_i$ has the probability $(1-B_{ij})$ to stay. The right side of Figure 2 shows how Eq. (1) reflects a density peak $x_i$ collecting typicality from all points in its "attraction". For example, as shown in the right side of Figure 2, when $T(x_i) = B_{1i} T(x_1) + B_{2i} T(x_2) + B_{3i} T(x_3) + \rho_i$ from Eq. (1), and $T(x_1)=\rho_1$, $T(x_2)=B_{42}T(x_4)+\rho_2$, $T(x_3)=\rho_3$, $T(x_4)=\rho_4$, also from Eq. (1) respectively, then $T(x_i)=B_{1i} \rho_1 + B_{2i} (B_{42}\rho_4+\rho_2) + B_{3i} \rho_3 + \rho_i$, indicating that $x_i$ collects typicality from $x_1$, $x_2$, $x_3$ and $x_4$, which are all points in its "attraction". We will make this clearer in the revision. 2: The datasets in Table 2 are not particularly large, and most of the algorithms, as well as TANGO, can complete their execution within a very short time (no more than 3 seconds). That is why we have further presented the experimental results and corresponding running times on the image segmentation task, which is done by clustering a dataset containing 154,401 samples (each image is a dataset and each pixel is a sample, as mentioned in the right side of Line 411), to show the efficiency on larger datasets. 3: We will include these in the ablation study.
4: We will include an analysis in the revision. In Figure 9, as $k$ increases, the clustering performance of TANGO initially rises and then stabilizes. The parameter $k$ affects the similarity measure. When $k$ is small, the similarity may not be comprehensive enough to capture the complex distribution around two points, thus increasing $k$ can lead to better performance. When $k$ is relatively large, increasing $k$ will introduce new shared nearest neighbors of two data points $x_i$ and $x_j$, which, however, will have relatively small contribution to the similarity as these neighbors have large distance to both $x_i$ and $x_j$, and similarity values become stable. In this case, the subsequent process of the algorithm will have similar results and thus the performance will also become stable. 5: One possible example of limitation is that TANGO would perform worse when dealing with the right side of the "Compound" dataset, where the low density points lie uniformly around the high density points with extreme density discrepancy between them. In such a case, the typicality of low density points would be less than that of higher density ones, making them become a single subcluster. Future work could explore whether other types of dependency can address this situation. We will include this discussion in the revision. 6: $p$ is the number of modes (subclusters) detected by Algorithm 2. We will make this more formal and clearer. 7: We will include more related works about spectral clustering in the revision. The reason why choosing spectral clustering is that it comprehensively considers a global graph-cut cost of the whole partition. On the other hand, hierarchical clustering partitions the data greedily and ignores the global impact on the whole partition at each greedy step, making it always achieve an inferior and imbalanced partition. 8: Thank you for the suggestion. 
We tested applying SNN-based similarity and the corresponding density to LDP-MST, LDP-SC and DEMOS, but they still performed worse than TANGO, and in some cases even worse than their original implementations. This might be because these methods do not employ typicality to construct tree-like subclusters, and their aggregation approaches for the subclusters also miss some important information that can be revealed by the path-based spectral clustering technique. Note that an SNN-based approach has also been used in LDP-MST and LDP-SC by their authors, to determine the similarity between subclusters. 9, 10 and 11: We will correct all the issues you have mentioned.
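As a complement to the explanation of Eq. (1) in point 1 above, the typicality recursion $T(x_i) = \sum_j B_{ji} T(x_j) + \rho_i$ can be computed in closed form by solving the linear system $(I - B^\intercal) T = \rho$. Below is a minimal numpy sketch on a hypothetical three-point dependency chain (toy weights and densities, not taken from the paper):

```python
import numpy as np

# Hypothetical dependency chain x0 -> x1 -> x2: each point links to its
# nearest higher-density neighbor, and x2 is the density peak.
# B[i, j] is the dependency weight from point x_i to its leader x_j.
B = np.array([[0.0, 0.8, 0.0],
              [0.0, 0.0, 0.9],
              [0.0, 0.0, 0.0]])
rho = np.array([1.0, 2.0, 3.0])  # toy density values

# T = B^T T + rho, equivalently (I - B^T) T = rho.
T = np.linalg.solve(np.eye(3) - B.T, rho)
# T = [1.0, 2.8, 5.52]: the peak x2 collects typicality from x0 and x1.
```

Because the dependency graph is acyclic (every point links to a strictly higher-density neighbor), the series $\sum_{k=0}^{\infty}(B^\intercal)^k\rho$ terminates, and the typicality of a peak exceeds its raw density by exactly the contributions accumulated from the points in its "attraction".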
Summary: This paper introduced the notion of "typicality" in density-based clustering, which measures the likelihood or confidence that a certain point should be a mode (a center) of a cluster. Existing techniques determine modes based on local measures (e.g., density of a point), but the premise of the paper is that in some situations, global features of the dataset will dictate whether a point should really be a mode. The measure of typicality introduced is defined recursively depending on a point's density and the typicality of other nearby points. With this measure in hand, the paper presents TANGO, a framework for density-based clustering that incorporates the typicality measure with a path-based similarity measure and spectral clustering. Experiments are run on several datasets. ## Update after rebuttal Thanks to the authors for the updates. I appreciate the larger experiments and the accompanying runtimes. Overall, I still have a positive view of the paper. Claims And Evidence: The motivation for typicality is well reasoned and Figure 1 is nice. The experimental results do provide an indication that the method is outperforming other methods. This is not major but there is a claim in the supplement that seems overstated. "As the in-degree distribution in a graph always follows the power law behavior..." This is overstated. There is evidence for power law distributions being strong but claiming that in-degree distributions "always" follow this distribution is not true. Post rebuttal: thanks to the authors for acknowledging this concern. Methods And Evaluation Criteria: Strengths: * The definition of typicality and the overall methodological approach of TANGO seem reasonable * As far as evaluation, I appreciate the additional detail in the supplement on the ablation study, the study on the effects of the hyperparameter k, and the image segmentation experiments.
* The evaluation across several different datasets and multiple baselines in Figure 5 is good Weaknesses: * The datasets considered are on the smaller side * The paper mentions using spectral clustering, but there's more than one method that has been called spectral clustering and there can be many different variations of this. It would be better to state in more detail what is meant (e.g., computing top how many eigenvectors, and then clustering them with what? k-means? Or do you find a low-conductance cut in the graph and use recursive bipartitioning?) Post-rebuttal: thanks to the authors for clarifying what approach they use. Theoretical Claims: I did not check the proofs of the theoretical claims, but the theoretical claims appear to be straightforward algorithm runtime results. There are no surprises with these results or concerns about correctness. Experimental Designs Or Analyses: I checked the experimental results and comparisons with other methods. The baselines, datasets, and cluster quality metrics (e.g., ARI, NMI, ACC) are reasonable. For the results in Figure 5, it seems a little strange to optimally tune hyperparameters. While this might help make comparisons fair (by in some sense finding the best case scenario for each approach), it feels somewhat unnatural in that in practical applications one typically cannot tune all hyperparameters in this way. So this somewhat hides one of the difficult parts of running these methods. Nevertheless, this is not a major concern since at least this was done for all methods. Supplementary Material: I skimmed the claims about power-law distributions and looked over the supplementary experiments. Relation To Broader Scientific Literature: Yes, the paper outlines the related work. Essential References Not Discussed: None that I know of. Other Strengths And Weaknesses: Strengths: * The definition of typicality is reasonable and the paper does a good job motivating it.
I think Figure 1 is helpful and its placement at the front of the paper is good. * The fact that typicality can be computed efficiently is a strength * The paper overall is fairly well written and organized Weaknesses: * The paper mentions "some theoretical analysis" as a contribution but it's vague early on. When we get to the details, the theoretical analysis seems mostly to be straightforward results about the complexity of computing typicality and runtime results. It would be better if the authors simply stated up front what "theoretical analysis" they provide, if it is really a big contribution. Otherwise, it feels a bit like the paper is trying to get credit for establishing theoretical results without stating what theory is provided. * TANGO seems to perform well in practice but the experimental results do not seem that extensive. The datasets considered are a bit on the small side. * The fact that all choices of k were tried makes me worry about runtime. Other Comments Or Suggestions: Just some minor suggestions: * Typo: "garph" instead of graph in at least one place * Figure 3 is good but far removed from Figure 1, which makes it cumbersome to page back and forth to get the point of this figure. Both figures also have a lot of whitespace. You may be able to make Figure 3 smaller and just have it be a subfigure, and then use another subfigure to show the important part of Figure 1 again. Just a suggestion. * In the preliminaries, the paper mentions similarities, the density of a point, and the dependency. The meaning of these becomes clear later but is not specified at first, making this a confusing read for me. It might be good to either define these more carefully up front, or at least mention to the reader that specific measures of similarity, density, and dependency would follow later to keep them from thinking they are missing something important about the setup. Questions For Authors: What are the runtimes for the methods?
How expensive is it to run TANGO for so many choices of k? How does TANGO perform on larger datasets? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you so much for reviewing our paper. We answer your main concerns below. Overstatement about power law distribution: Thank you for pointing this out. We will correct it. Weakness 1: Thank you for the comment. To demonstrate scalability, we have expanded our evaluation to a substantially larger image segmentation dataset (each image is a dataset containing 154,401 samples, as mentioned in the right side of Line 411) in Appendix C.4, which shows the promising results and efficiency of TANGO on larger datasets. We will add a more detailed analysis of the image segmentation experiments in the revision. Weakness 2: Thank you for pointing this out. Specifically, we use the spectral clustering method called Normalized Cut, which computes the Symmetric Normalized Laplacian of the similarity matrix, and applies k-means++ on the rows of the matrix whose columns consist of the eigenvectors of the Laplacian corresponding to the smallest $nc$ eigenvalues ($nc$ is the number of target clusters). For implementation, we used the "SpectralClustering" module from "scikit-learn". Other Comments and Weaknesses: Thank you for the suggestions! We will address them in the revision. Question 1: For finding the optimal choice of $k$ in TANGO, we use "gp_minimize" from "scikit-optimize" to find the value $k$ that maximizes the ARI by iteratively selecting $k$ based on a Gaussian Process model and an acquisition function. In practice, we found that we can run just around 20 different values of $k$ from the range 2 to 100, to find the optimal one. For Question 2, please refer to the response to Weakness 1.
--- Reply to Comment 1.1.1: Comment: Thanks for your reply! We answer your concern about the practical running times in the following. For datasets in Table 2, most of the competing methods as well as TANGO can complete their execution within a very short time. That's why we further evaluated running times on the image segmentation task, where each image is a dataset containing 154,401 samples and each sample refers to a pixel. Figure 8 in the appendix also shows the running times for TANGO and 4 representative competing methods (see the numbers above each image). For TANGO, we also presented the running times of the similarity matrix calculation (parallelized with 20 threads) in parentheses, which is the main cost of the overall algorithm and can be easily parallelized, as described in Line 820. It can also be seen that the remaining part of TANGO is highly efficient, which aligns with the theorems about efficiency. We also present the running times in Figure 8 below, as well as for some of the largest datasets in Table 2. | | TANGO | QKSPP | CPF | LDP-MST | LDP-SC | |:---:|:---:|:---:|:---:|:---:|:---:| | Image 1 | 23.53s (20.15s) | 39.51s | 40.56s | 25.13s | 47.65s | | Image 2 | 23.94s (20.47s) | 41.15s | 33.45s | 33.71s | 44.53s | | Image 3 | 23.45s (19.79s) | 38.04s | 25.39s | 33.94s | 59.11s | | Image 4 | 23.03s (20.26s) | 37.58s | 38.36s | 33.77s | 31.39s | | Image 5 | 23.51s (19.86s) | 38.04s | 34.82s | 23.52s | 35.14s | | Image 6 | 23.23s (19.92s) | 36.63s | 34.78s | 26.61s | 34.19s | | MNIST(AE) | 7.31s (7.06s) | 3.51s | 12.43s | 2.47s | 3.15s | | isolet1234 | 7.53s (7.24s) | 16.18s | 51.12s | / | 2.43s | | waveform | 6.77s (6.61s) | 0.42s | 1.35s | 0.42s | 1.95s |
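For readers unfamiliar with the Normalized Cut procedure mentioned in the reply to Weakness 2 above, the embedding step it describes (symmetric normalized Laplacian, then the eigenvectors of the smallest $nc$ eigenvalues, whose rows are subsequently clustered with k-means++) can be sketched as follows. This is a generic numpy illustration with made-up similarities, not the authors' implementation (which relies on scikit-learn's `SpectralClustering`):

```python
import numpy as np

def ncut_embedding(S, nc):
    """Spectral (NCut-style) embedding: returns one row per point, whose
    columns are the eigenvectors of the symmetric normalized Laplacian
    corresponding to its nc smallest eigenvalues."""
    d = S.sum(axis=1)                         # degrees of the similarity graph
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L_sym = np.eye(len(S)) - D_inv_sqrt @ S @ D_inv_sqrt
    eigvals, eigvecs = np.linalg.eigh(L_sym)  # eigenvalues in ascending order
    return eigvecs[:, :nc]                    # rows would be fed to k-means++

# Two obvious blocks of mutually similar points (hypothetical similarities).
S = np.array([[1.0, 0.9, 0.0, 0.0],
              [0.9, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.8],
              [0.0, 0.0, 0.8, 1.0]])
U = ncut_embedding(S, nc=2)
# Rows 0 and 1 coincide in the embedding, as do rows 2 and 3, so any
# reasonable k-means run recovers the two blocks.
```

With disconnected blocks as above, the two smallest eigenvalues are (numerically) zero and the embedding separates the blocks exactly; on real similarity graphs the separation is only approximate, which is why a k-means step follows.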
Summary: The paper introduces TANGO, a novel clustering algorithm that integrates typicality with graph-cut optimization. The primary contribution is the concept of typicality, a novel measure to quantify the confidence of a point being a mode for a cluster. Experimental results demonstrate the efficacy of the proposed algorithm. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes, I have checked the proofs of Theorems 1 and 2. The proof of Theorem 1 is confusing. The introduction of the R matrix is not clear, and it is unclear why R is a symmetric matrix. The definitions are not clearly explained. Experimental Designs Or Analyses: Yes, I have checked the whole experiment section. Supplementary Material: Yes, I have reviewed the supplementary material. I have reviewed appendices A and B. Relation To Broader Scientific Literature: TANGO builds on density-based clustering by introducing typicality, a global confidence measure to identify cluster modes without manual tuning, unlike DPC and Quick Shift. It integrates graph-cut optimization with a path-based similarity metric, improving upon spectral clustering and density-peak methods. TANGO achieves superior clustering performance, outperforming 10 state-of-the-art methods across 16 real-world datasets. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: 1. The method is theoretically sound and well-motivated. 2. The remarks in between the definitions and other theories are refreshing and contribute to the explanations of these definitions/theorems. 3. The visualizations showing which exact dependencies are broken by typicality are helpful in seeing that TANGO does actually address the drawbacks of other methods such as Quick Shift/DPC. 4. The results presented are significant. Weaknesses: 1. It is unclear what the parameter k denotes in Line 331, Algorithm 3.
k is used for the number of nearest neighbours (Line 198), the most similar density points (Line 211), and a number-of-hops parameter (Line 245). 2. The method takes the total/desired number of modes as a parameter (p), so it cannot find modes on its own. 3. It is explained why typicality as a measure is important but not why the proposed implementation through hierarchical dependencies is a good choice for typicality. 4. What does the dependency matrix B mean? The note on Line 265 should be expanded. 5. The path-based similarity between two sub-clusters G_i, G_j (Line 285) can be explained more clearly by formally defining C and explaining it to be the connectivity matrix before Definition 5. 6. Also, a more convincing reason (than being intuitive) for the connectivity formulation being what it is (max over all paths of min C in that path) would be appreciated. 7. Are the presented results reproducible? 8. The results are limited to only small datasets, with the largest being MNIST (10k nodes). It would be important to know if the results scale to very large datasets such as ogbn-arxiv/etc. as well. 9. TANGO performs qualitatively worse in the image segmentation results (p. 15). For example, in row 3, the cloth and the hand completely blend, and there are many “blemishes” in the segmentation mask compared to all the other methods. In row 4, TANGO completely fails to segment the wave. 10. The ablation study, while present, is not extensively discussed or analyzed. 11. In the ablation study, typicality appears to contribute only marginally to performance, with a low but noticeable uplift. How do its theoretical benefits translate into real-world advantages? Other Comments Or Suggestions: 1. Mention the method used to generate visualizations for the datasets (e.g., UMAP/t-SNE). 2. Line 193, right column.
The following statement is unclear and would benefit from rephrasing/breaking up: Therefore, we define a similarity measure based on shared nearest neighbors, where we distinguish the varying contribution to similarity of each shared neighbor to have better robustness. 3. Clarify the difference between TANGO (typicality) and TANGO (final) in Fig. 4. Questions For Authors: Refer to the weaknesses Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you so much for reviewing our paper. We answer your main concerns below. Weaknesses: 1: Sorry for the confusion. $k$ in Line 331 is the number of nearest neighbors to define similarity (Line 198) and density (Line 211), which are the same $k$ that is the input parameter of TANGO. $k$ in Line 245 is just related to the summation of infinite series $T = \sum_{k=0}^{\infty}(B^\intercal)^k\rho$, and has nothing to do with the former ones. We will make these notations clearer. 2: $p$ is not an input parameter but the number of modes (subclusters) automatically identified by Algorithm 2, and it is used to help the analysis of time complexity. We will make this more formal and clearer. 3: In the hierarchical dependency, each data point only links to its nearest higher density neighbor, and this is an effective and efficient "density hill-climbing" procedure to assign data points to their corresponding modes. It has been widely used in many density-based clustering methods such as Quick Shift, DPC, Quick Shift++, DEMOS, CPF, and LDP-MST. Its consistency guarantee has also been theoretically proven by several articles such as "On the consistency of quick shift" and "A theoretical analysis of density peaks clustering and the component-wise peak-finding algorithm". Future research could also explore other types of dependency, as described in the right side of Line 435. 4: In the dependency matrix $B$, each element $B_{ij}$ denotes the weight of dependency from point $x_i$ to $x_j$. The notation of $B$ is first specified in the Preliminaries section (Line 119, 151 and 152). We will also make the note on Line 265 more thorough. 5 and 6: Thank you for your suggestions. We will make the definition of $C$ more formal. The detailed clarification on the connectivity formulation is included in the proof of Theorem 3 in the appendix. We will make it more thorough in the main body of the revision. 
7: We have provided the code and datasets in the supplementary material so that reviewers can validate reproducibility. The parameter settings are also included in Table 2 in the appendix. 8: Thank you for your suggestion to evaluate our algorithm on larger datasets. In fact, we have already provided experiments on the image segmentation task with 154,401 samples per image (each image is a dataset and each pixel is a sample, as mentioned in the right side of Line 411). 9: The performance of TANGO is indeed not perfect in the image segmentation task. However, we use this task as an extension to test the efficiency on larger datasets, and also as a preliminary result (without tuning the hyperparameters) to demonstrate its promising application for other tasks. The performance of TANGO varied for different images, with overall better performance on several images (row 1, row 2, and row 5) but some flaws in some areas of other images. Note that in row 3, although the cloth and the hand blend for TANGO, it is the only method that successfully segments features in the face (mouth and eyes). 11: In fact, the contribution of typicality can be observed by comparing TANGO-b and TANGO in the ablation study. There are significant performance drops when the typicality component is removed, such as "semeion" (ARI from 65.37 to 52.18), "ionosphereEW" (from 49.15 to 39.13), "isolet1234" (from 59.57 to 49.75) and "Umist (AE)" (from 85.22 to 78.67). Figure 6 also shows the real-world advantages of typicality. Other Comments: 1: We have already mentioned t-SNE in Line 712 for Figure 6 and will make it clearer. 3: TANGO (typicality) visualizes the breaking of dependency via typicality, and TANGO (final) represents the final clustering results by TANGO, as mentioned in the right side of Line 330. We will make this clearer in the revision.
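As background for points 5 and 6 of the review (addressed above), the path-based connectivity (the maximum over all paths of the minimum similarity along the path) admits a simple Floyd-Warshall-style bottleneck computation. The sketch below uses hypothetical sub-cluster similarities and only illustrates the formulation, not the paper's implementation:

```python
import numpy as np

def maximin_connectivity(C):
    """Path-based connectivity: conn[i, j] is the maximum over all paths
    from i to j of the minimum edge weight along the path (a
    Floyd-Warshall-style bottleneck computation)."""
    conn = C.astype(float).copy()
    n = len(conn)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                via_k = min(conn[i, k], conn[k, j])
                if via_k > conn[i, j]:
                    conn[i, j] = via_k
    return conn

# Hypothetical sub-cluster similarities: 0-1 strong, 1-2 strong, 0-2 weak.
C = np.array([[1.0, 0.9, 0.1],
              [0.9, 1.0, 0.8],
              [0.1, 0.8, 1.0]])
conn = maximin_connectivity(C)
# The path 0 -> 1 -> 2 lifts conn[0, 2] from 0.1 to min(0.9, 0.8) = 0.8.
```

This captures the intuition behind the formulation: two sub-clusters are considered well connected if some chain of intermediate sub-clusters links them without any weak step, even when their direct similarity is low.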
Summary: The authors first propose a global perspective metric, typicality, to quantify the confidence of a point being a mode. This addresses the limitation of current mode-seeking methods, which require manually setting thresholds or human intervention to identify modes. They also design an efficient and effective algorithm to compute typicality and provide theoretical analysis. Furthermore, they introduce the TANGO clustering method, which leverages typicality to detect modes and form subclusters, and aggregates data into final clusters using an improved graph-cut technique based on path-based similarity. Claims And Evidence: TANGO addresses the issue that current mode-seeking methods identify modes by breaking certain dependency connections but rely heavily on local data characteristics, requiring case-by-case threshold settings or human intervention to be effective for different datasets. Experimental results demonstrate the effectiveness of this approach. Methods And Evaluation Criteria: TANGO introduces a novel evaluation metric for computing the typicality of a point and employs an improved spectral clustering technique to aggregate typical subclusters. Theoretical Claims: The paper presents an innovative density-based algorithm, but provides limited discussion on the theory. Experimental Designs Or Analyses: The experimental results are comprehensive. Supplementary Material: The supplementary material includes additional experiments and theoretical analysis of the TANGO algorithm. Relation To Broader Scientific Literature: Based on the challenges in existing work, the authors propose a novel approach. Essential References Not Discussed: None. Other Strengths And Weaknesses: Strengths: 1. The authors integrate both local and global distribution characteristics of data and propose a novel clustering framework that fuses local and global information, introducing a new global perspective into mode-seeking methods. 2. 
By introducing typicality, the authors reveal the global significance of data points under locally defined density-based dependencies and use typicality to detect modes in a fully automated manner. Additionally, they provide theoretical analysis and an efficient method for computing typicality. 3. The authors design an improved path-based similarity method to comprehensively and effectively assess the similarity of subclusters and adopt a graph-cut method to determine the final clustering. Weaknesses: 1. Section 4.4: The authors claim to aggregate subclusters using the graph-cut method after obtaining the modes and corresponding tree-like subclusters based on typicality. However, further clarification is needed on how the aggregation operation is performed. 2. Section 4.4: The authors apply spectral clustering to tree-like subclusters based on path-based similarity to obtain the final clustering results. Since it is well known that spectral clustering requires a specified number of clusters, the method could potentially lead to a trivial solution where all data points are assigned to a single cluster if the number of clusters is not pre-specified. The authors should address this concern and clarify how this issue is resolved. 3. Appendix C.6 (Ablation Study): The authors should further clarify the differences between TANGO-a and TANGO-b and whether the experimental settings are identical. Additionally, it is recommended that they include an ablation study examining the effect of using only the Typicality-Aware Mode-Seeking technique without applying the Aggregating Mode-Centered Subclusters technique. 4. Minor textual errors: Line 350 contains a redundancy: "clustering labels labels." Other Comments Or Suggestions: The authors are encouraged to carefully address the points raised in the Weaknesses section and provide strong responses. Questions For Authors: None. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you so much for reviewing our paper. We answer your questions below. Weakness 1: The aggregation operation is done by considering each tree-like subcluster as a vertex in a similarity graph, where the similarity between these subclusters is determined by path-based connectivity, and finally spectral clustering (specifically, the NCut method) is applied to aggregate these vertices into a final partition with the specified number of clusters. A more formal clarification of path-based similarity between subclusters is included in the proof of Theorem 3 in the appendix. We will further clarify this operation in the revised version. Weakness 2: In our experiments, we have pre-specified the target number of clusters for spectral clustering as the number of ground-truth clusters for each dataset, and fixed the target number of clusters at $5$ for the image segmentation task. Determining the number of clusters when it is not pre-specified is a common topic in spectral clustering, and there exist many classical methods, such as eigengap-based methods, modularity-based methods and Self-Tuning Spectral Clustering, to deal with this situation. We can also integrate these methods to automatically determine the target number of clusters when it is not pre-specified. Weakness 3: We will make the differences between TANGO-a and TANGO-b clearer. In TANGO-a, we removed the whole typicality-aware mode-seeking step and directly applied spectral clustering to all points in the dataset with path-based similarity. This is to show that the typicality-aware mode-seeking step is essential. In TANGO-b, we include the mode-seeking step but without typicality-awareness, to further validate the significance of typicality.
We have conducted the ablation study you suggested by removing the aggregating component to validate its necessity, and observed significant performance decrements (ARI dropped from 39.4, 64.21, 65.37, 63.49, 39.77, 49.15, 59.57, 82.88 to 13.96, 39.44, 36.85, 36.42, 23.16, 6.01, 47.24, 37.19, respectively) on the 8 datasets used in the current ablation study. We will include this discussion in the revision. Weakness 4: Sorry for the confusion. In fact, the second "labels" is the name of the variable, and "clustering labels" is a description of it. We will make this clearer.
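To make the aggregation step in the Weakness 1 response concrete, here is a minimal, hypothetical sketch of a minimax-style path-based similarity between subclusters: a path's similarity is its weakest edge, and two vertices' similarity is the best such path. This is only an illustration of the general idea; the paper's exact definition is given in the proof of Theorem 3 in its appendix and may differ in details.

```python
# Hypothetical minimax formulation of path-based similarity between
# subclusters (NOT the paper's exact definition): the similarity of a path
# is the minimum edge similarity along it, and the path-based similarity of
# two vertices is the maximum over all paths, computed here with a
# Floyd-Warshall-style update.

def path_based_similarity(sim):
    """sim: symmetric n x n matrix of direct similarities (0 = no edge)."""
    n = len(sim)
    s = [row[:] for row in sim]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # best path through k: bottleneck is the weaker of the two legs
                via_k = min(s[i][k], s[k][j])
                if via_k > s[i][j]:
                    s[i][j] = via_k
    return s

# Three subclusters: 0-1 strongly connected, 1-2 moderately, 0-2 weakly direct.
direct = [
    [1.0, 0.9, 0.1],
    [0.9, 1.0, 0.6],
    [0.1, 0.6, 1.0],
]
s = path_based_similarity(direct)
print(s[0][2])  # path 0 -> 1 -> 2 has bottleneck min(0.9, 0.6) = 0.6 > 0.1
```

The resulting matrix `s` would then play the role of the affinity matrix that spectral clustering (NCut) partitions into the specified number of clusters.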
A Stronger Mixture of Low-Rank Experts for Fine-Tuning Foundation Models
Accept (poster)
Summary: The authors propose to apply the Riemannian Preconditioner introduced in prior work to improve the Mixture of LoRA framework. The Riemannian Preconditioner enhances LoRA training by projecting the full-matrix gradient onto the subspace of the LoRA matrices, which better approximates full fine-tuning compared to unscaled gradient descent. However, applying the preconditioner with Mixture of LoRA is coupled with a further rescaling of the manifold constructed for each expert, leading to underestimated gradients. The authors incorporate a new scaling mechanism and develop an engineering approximation to address the issue. Extensive experiments across various downstream tasks, including Question Answering, the GLUE Benchmark, and vision-language tasks, are conducted to validate the approach's efficacy. Claims And Evidence: The overall claims are clearly stated and well supported by extensive experiments across different benchmarks: a) The authors claim that incorporating a Riemannian Preconditioner into the Mixture of LoRA framework yields superior performance. b) They claim that the scaling mechanism introduced in the preconditioner helps address the issue of underestimated gradients. c) They claim that the engineering approximation improves training dynamics and model performance. Methods And Evaluation Criteria: Yes. The proposed method builds upon the Riemannian Preconditioner approach from prior work to develop modifications aligned with the Mixture of LoRA framework. Extensive experiments across various downstream tasks, including Question Answering, the GLUE Benchmark, and vision-language tasks, are conducted to validate the approach's efficacy. Theoretical Claims: The theoretical claims in this paper are generally correct and sound. The Riemannian Preconditioner is built upon previous work. They formally derived the preconditioner's form under the Mixture of LoRA scenario. 
They further propose a rescaling mechanism to address the underestimation issue—although this component lacks a fully rigorous theoretical proof, it is validated by experimental results, which demonstrate improved performance. Experimental Designs Or Analyses: The experimental methodology is valid. The authors evaluate their approach across different tasks, from language to vision understanding. I did not detect any significant issues with their design or analysis. Supplementary Material: Yes, I have reviewed all the supplementary materials provided at the end of the paper. Overall, they further support the efficacy of the proposed approach. However, the multi-task learning results pose a potential concern regarding how efficiently this method performs in that scenario. In particular, the rescaled gating mechanism with the AdamW optimizer does not demonstrate a performance improvement under multi-task conditions. Relation To Broader Scientific Literature: This work combines two established research directions: Mixture of LoRA (Low-Rank Adaptation) and Riemannian Preconditioning. Mixture of LoRA extends the low-rank fine-tuning paradigm by introducing multiple “expert” components. These experts are activated selectively for each token via the gating mechanism. Riemannian Preconditioners are proposed to ensure the update is done in accordance with the full rank gradient projection onto the subspace of LoRA matrices, stabilizing the training process. By merging these ideas and introducing a rescaling mechanism to address gradient underestimation, the paper demonstrates improved performance on various benchmarks. This approach introduces moderate innovation supported by extensive experiment results. Essential References Not Discussed: The references cover the related works about Mixture of LoRA and the gradient preconditioners. Works about LoRA and LoRA variants are also covered. 
Other Strengths And Weaknesses: This paper combines Mixture of LoRA and Riemannian preconditioning with a rescaling mechanism to address gradient underestimation. While this integration is useful and supported by thorough experiments, the approach remains incremental. The theoretical justification for rescaling appears partially heuristic. Overall, the work provides a moderate advance that could benefit practitioners interested in more effective low-rank fine-tuning. Other Comments Or Suggestions: None. Questions For Authors: None. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate your reviews and thank you for **acknowledging our efforts on theoretical and experimental analysis**. For the concerns mentioned in the review, we provide corresponding responses below: **Response to your concern** about AdamW performance under multi-task scenarios Our supplementary material indeed does not indicate an outperformance of our method for the AdamW optimizer under multi-task scenarios. We believe this may be due to insufficient exploration across different multi-task scenarios, since we only conducted the multi-task experiments under a single mixture of two tasks (ScienceQA and MRPC). Consequently, we conducted more experiments on multi-task scenarios in our revision, including two mixtures of tasks and two different configurations of the MoE structure. In our revision, we grouped six tasks from the GLUE Benchmark into two mixtures. The first mixture consists of the CoLA, SST-2 and MRPC tasks, serving as a multi-task scenario involving grammar checking, sentiment classification, and sentence-equivalence judging; the second mixture consists of the STS-B, QQP and QNLI tasks, serving as another multi-task scenario involving sentence similarity scoring, duplicate-question judging, and question-answering NLI. For evaluation, we tested candidates on each of the tasks individually and then averaged per-task performances within the mixture as the overall evaluation for that mixture. To sufficiently assess the multi-task performance of our proposed gate-based rescaling method, we conducted experiments under two different MoE configurations, i.e., $20/10/4$ and $10/5/4$. We trained each candidate for 2000 steps under the RAdamW and gRAdamW (RAdamW with our proposed gate-based rescaling method) optimizers. The following two tables illustrate our performances under the first and the second mixtures respectively. 
**Mixture 1: CoLA + SST-2 + MRPC**

| **Configuration** | **$RAdamW$** | **$gRAdamW$** |
| ----------------- | ------------ | ------------- |
| $20/10/4$ | 70.15 | **71.39** |
| $10/5/4$ | 71.64 | **72.13** |

**Mixture 2: STS-B + QQP + QNLI**

| **Configuration** | **$RAdamW$** | **$gRAdamW$** |
| ----------------- | ------------ | ------------- |
| $20/10/4$ | 74.61 | **75.74** |
| $10/5/4$ | 74.81 | **75.36** |

As a result, we concluded that, overall, our proposed method is still effective at boosting AdamW optimization under multi-task scenarios. We have revised our supplementary material to integrate these experiments and conclusions.
Summary: This paper introduces a new approach to enhance the performance of MoE-LoRA for fine-tuning foundation models by incorporating Riemannian Preconditioners. This approach ensures that the gradient updates align more closely with the full-rank optimization, thereby stabilizing and accelerating the training process. Moreover, they identify a previously overlooked issue: the gate values in MoE-LoRA introduce additional scaling that distorts the gradient updates and undermines the effectiveness of Riemannian Preconditioners. To resolve this, the authors propose a novel gate-value-based rescaling method that adjusts the gradients of each expert to account for the impact of gate values. The results show substantial improvements in performance. Claims And Evidence: Yes, the claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria, including the use of benchmark datasets, are appropriate and well-suited for the problem or application at hand. Theoretical Claims: Yes, I checked the correctness of the derivation for the theoretical claims. Experimental Designs Or Analyses: Yes, I checked the soundness and validity of the experimental designs and analyses presented in the submission. There are no issues with the experimental design. Supplementary Material: Yes, the supplementary material was reviewed, some details are reported. Relation To Broader Scientific Literature: The manuscript provides a thorough discussion of the relevant literature. Essential References Not Discussed: N/A Other Strengths And Weaknesses: **Strengths are as follows,** 1-This manuscript is written with a theoretical style. 2-This manuscript features comprehensive and in-depth related work, such as conducting theoretical analyses of the most relevant work. 3- This manuscript presents experiments conducted on both QA datasets, the GLUE benchmark, and Multimodal benchmarks. 
The results consistently show performance improvements compared to RAdamW and RSGD. 4-The experimental results demonstrate that the authors have effectively addressed the two limitations they declared, showing that the lift from the method is remarkable. **Weaknesses are as follows,** 1-They effectively showcased the scalability of their method on the Multimodal Large Model LLaVA, which speaks to its broader utility. However, I noticed that they only tested a single configuration of expert numbers. This limits the contribution of this section compared to the rest of the manuscript. 2- Despite testing the performance on multiple foundation models, I am curious to see how the method performs on LLaMA models of varying sizes. 3- In the MoE, the sum of all gating values (g) is constrained to 1. I'm not entirely clear on the rationale behind this choice, and it would benefit from a clearer explanation. 4- While the authors have done a thorough job reviewing related work, they should more clearly highlight the contributions. 5- In Subsection 4.5, it should be "Table 4" instead of "table 4." Other Comments Or Suggestions: Please see weaknesses above, and I recommend that the authors more prominently highlight the contributions of this work in the abstract, introduction, and conclusion. Questions For Authors: Please see weaknesses 1 and 3. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for your valuable reviews and your **agreement with our proposed method and our efforts on the literature review**. We have conducted several new experiments and provide responses to all your concerns: **Response to W1** about the limitation of the LLaVA experiments In our revision, we conducted more LLaVA experiments on both Visual7W and VMCBench using LLaVA-v1.5-7B. Specifically, we implemented candidates under two new MoE configurations with different expert numbers: $16/8/4$ and $10/5/4$. The following table illustrates the results. An overall improvement from our method can still be observed under different configurations, especially for SGD.

| **Candidates** | **Visual7W** | **VMCBench** |
| :----- | -----: | -----: |
| $RSGD_{16,8,4}$ | 0.72 | 0.59 |
| $gRSGD_{16,8,4}$ | **0.74** | **0.69** |
| $RSGD_{10,5,4}$ | 0.71 | 0.63 |
| $gRSGD_{10,5,4}$ | **0.74** | **0.73** |
| $RAdamW_{16,8,4}$ | 0.76 | 0.71 |
| $gRAdamW_{16,8,4}$ | **0.77** | 0.71 |
| $RAdamW_{10,5,4}$ | 0.76 | 0.76 |
| $gRAdamW_{10,5,4}$ | 0.76 | **0.77** |

Our performance boosts for LLaVA under AdamW might not be that remarkable. Therefore, to conduct a further significance analysis of our AdamW boosting, we also implemented more candidates and trained them with AdamW. Please refer to the following table. A similar phenomenon can be consistently observed.

| **Candidates** | **Visual7W** | **VMCBench** |
| :------ | ------: | ------: |
| $RAdamW_{5,5,4}$ | 0.77 | 0.75 |
| $gRAdamW_{5,5,4}$ | 0.77 | **0.76** |
| $RAdamW_{5,2,4}$ | 0.73 | 0.75 |
| $gRAdamW_{5,2,4}$ | **0.77** | **0.78** |
| $RAdamW_{3,2,4}$ | 0.75 | 0.75 |
| $gRAdamW_{3,2,4}$ | **0.76** | 0.75 |

**Response to W2** about performance on different LLaMA models We conducted more experiments on LLaMA models besides Llama-3.2-3B. Specifically, among all the LLaMA 3.2 models, only the 1B and 3B models are purely textual. Therefore, we decided to include further experiments on Llama-3.2-1B. 
Four QA benchmarks have been tested, each trained for 2000 steps. Due to limited resources and time, we set a relatively smaller MoE configuration to speed up training, namely $10/5/1$. Results are illustrated in the following table. We have also included this table in the revision of our paper.

| **Candidates** | **ScienceQA** | **CommonsenseQA** | **OpenBookQA** | **SIQA** | **Avg.** |
| :----- | -----: | ----: | ----: | -----: | -----: |
| $RSGD_{10,5,1}$ | 47.71 | 49.47 | 48.80 | 50.41 | 49.10 |
| $gRSGD_{10,5,1}$ | **49.87** | **59.30** | **54.00** | **57.06** | **55.06** |
| $RAdamW_{10,5,1}$ | 46.18 | 42.92 | 41.60 | 44.11 | 43.70 |
| $gRAdamW_{10,5,1}$ | **46.58** | **43.82** | **43.40** | **45.50** | **44.83** |

Besides the LLaMA 3.2 models, we also tested Llama-3.1-8B. So far we have only conducted four ScienceQA evaluations, with each candidate trained for only 600-800 steps. Results are below:

| **Candidates** | $RSGD$ | $gRSGD$ | $RAdamW$ | $gRAdamW$ |
| :------------- | -----: | ------: | -------: | --------: |
| **ScienceQA** | 71.49 | **76.35** | 87.50 | **87.68** |

**Response to W3** about the sum-to-1 constraint in MoE In general MoE, constraining the sum of all gate values to 1 contributes to model stability and probabilistic interpretation. Firstly, it normalizes the gate outputs to avoid uncontrolled issues during training and inference, such as gradient explosion or vanishing, overflow of variables from accumulated forwarding, etc. During training, by fixing the sum to 1, the gating network focuses solely on allocating relative importance among experts, rather than learning absolute weight magnitudes. This also reduces the complexity of the optimization problem and stabilizes training. Secondly, the sum-to-1 constraint allows the gate values to be interpreted as a probability distribution over experts, aligning with the requirements of probabilistic models and enabling a clearer theoretical foundation for MoE. 
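As an aside, the sum-to-1 property discussed above typically comes from applying a softmax to the gating network's logits. The following is a minimal, generic sketch of such normalization (not the paper's actual gating implementation):

```python
import math

def softmax_gate(logits):
    """Generic softmax gating sketch: map raw logits to gate values.

    Subtracting the max before exponentiating keeps the computation
    numerically stable; dividing by the total makes the values sum to 1,
    so they can be read as a probability distribution over experts.
    """
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Three experts: the gating network favors the first one.
g = softmax_gate([2.0, 1.0, 0.1])
print(sum(g))  # 1.0 up to floating-point rounding
```

Because the normalization fixes the total mass, training can only redistribute importance among experts, which matches the stability argument given in the response.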
**Response to W4 and your suggestion** of prominently highlighting our contributions Thank you for your suggestions. Our contributions include integrating mixture of LoRAs with Riemannian preconditioners to alleviate both the limited representation and sub-optimality issues; emphasizing the distortion issue behind per-expert preconditioning; and proposing a gate-based rescaling method and its engineering approximation to further boost MoE-LoRA training. We have already revised our abstract, introduction and conclusion sections to explicitly include these statements. **Response to W5** about small errors We revised the mentioned "table 4" to "Table 4" in Section 4.5 in our revision, together with checking and fixing some other small notation errors. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed feedback. My main concerns focus on: (i) The underlying principles of the gating values. The authors provide a detailed explanation, which makes it easier for me to understand their core contribution. (ii) The evaluation on LLaVA and LLaMA, which further strengthens their technical contribution. Moreover, I have carefully read the comments from other reviewers and agree with the theoretical contribution and general value of this work. As an additional suggestion, I also recommend that the authors release their code to the community. Overall, the authors addressed all my concerns, and no further issues need to be resolved. I am inclined to accept this work and therefore raised my initial score. --- Reply to Comment 1.1.1: Comment: We really appreciate your acknowledgements and your raising the evaluation of our work. We will release our code to the community after this paper is published. Thank you very much.
Summary: This paper introduces a training strategy for Mixture-of-Experts (MoE) models with LoRA. It uses Riemannian preconditioning and gate-value scaling to address gradient sub-optimality and representation limitations. The proposed method modifies traditional preconditioners to stabilize gradient updates and improve training robustness. Experiments on NLP and VQA tasks, including QA datasets, the GLUE benchmark, and VG/VMCBench, demonstrate faster convergence and enhanced performance. ## update after rebuttal The authors have addressed my concerns, and I would like to retain my positive score. Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence. The authors provided a solid theoretical foundation for their method, integrating a Riemannian preconditioner and gate-value scaling to address key challenges. Methods And Evaluation Criteria: Methods and evaluation criteria are highly appropriate and well-suited for addressing the problem and application at hand, including benchmarks such as QA datasets, the GLUE benchmark, and VG/VMCBench. Theoretical Claims: I reviewed the theoretical claims and their corresponding proofs in the submission. The authors provided detailed derivations and mathematical justifications for their methods, mainly focusing on the integration of the Riemannian Preconditioner, such as Limitation 1/2 (Limited representation and Gradient Sub-optimality), the Riemannian Preconditioner in LoRA Experts and Rescaling Preconditioners (Section 3.1/3.2). Experimental Designs Or Analyses: I checked them. The experimental setup and analyses appear to be well-structured and appropriate for assessing the claims made, such as multiple benchmarks (Table 1/2/3), convergence analysis (Figure 2), and the ablation study (Table 4/5). Supplementary Material: I reviewed all of the supplementary material. 
Relation To Broader Scientific Literature: The paper is well-aligned with recent literature, like LoRA and LoRA Variants, MoELoRA, and Gradient Preconditioners. The authors discuss literature related to these concepts, including works such as MiLoRA, LoRA+, DoRA, MoLA, MoV, etc. Moreover, the authors have done a nice job of using theory to build connections between these concepts. Essential References Not Discussed: The literature discussed by the authors is indeed comprehensive and closely related to the core topics addressed in the paper. They highlighted key advancements in LoRA, MoELoRA, and Riemannian preconditioning. Other Strengths And Weaknesses: Strengths: * Nice presentation, reasonable motivation, and an interesting theoretical contribution. The authors introduce a simple yet powerful idea inspired by mathematical principles. * The authors propose a method based on gate scaling theory to enhance the performance of MoE-LoRA, which takes into account the influence of manifold curvature. * The authors derive a rescaling method based on Riemannian preconditioning and provide a complete theoretical derivation process. * This method effectively balances the gradient updates among experts, addressing challenges such as curvature distortion in the MoE. * The engineering approximation seems to provide computational efficiency. Weaknesses: * The convergence is crucial for understanding this method. Regrettably, the authors did not carefully address this in Figure 2. A detailed explanation is necessary, such as the meaning of the dual axes and the significance of the training and validation losses. Also, the overlapping axes in the middle of the subplots should be addressed. * Equation 13 appears highly valuable, yet its explanation could be more transparent. The authors should provide a clearer and more detailed interpretation of this equation. 
* In Subsection 3.3, the authors propose a more flexible engineering approximation, which is an interesting contribution; this approach seems to achieve low computational overhead. However, more elaboration on the advantages of this approach would be helpful. * A significant advantage of this method is that it enhances MoE-LoRA as a training strategy. However, the experiments comparing the proposed method with the MoE-LoRA baseline should be clearer. For example, MoLA-SGD (2, 4, 6, 8) should ideally be presented alongside MoLA in the lower part of the table. * The legend of Figure 3 should be streamlined for clarity. Other Comments Or Suggestions: While the theoretical contributions and experimental results are well-presented, it would be beneficial to include a more detailed discussion of the practical implications of this method. And I highly recommend that the authors consider open-sourcing their implementation. Questions For Authors: Please see Strengths And Weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for providing **positive feedback on our presentation, derivations, and experiments**. We highly value the weaknesses and suggestions you pointed out, and provide responses below: **Response to W1** about further explaining the convergence and fixing the issues in Figure 2 We have added further explanations for the convergence figures in Figure 2 in our revision. For example, the x-axis represents training steps, the left y-axis in each figure represents the training or validation losses, while the right y-axis in each figure represents the accuracy metrics on the test sets. Before implementing our gate-based rescaling method, the training and validation losses of the RSGD optimizer across four tasks are significantly reduced around 100-200 steps, while after implementing our method, they are significantly reduced earlier, around 0-100 steps. In addition to convergence speed, we also notice an outperformance of our method in terms of converged loss and QA accuracy. Finally, to address the axis-overlapping issue, we re-arranged the subplots in the figure to be less tightly packed and re-drew Figure 2 in our revision. **Response to W2 and W3** about further elaborating Eq.13 We further elaborate why we implement Eq.13 ($X=\hat{W}+\sum_{i=1}^{N_{Expert}}\hat{\sqrt{g_i}}B_iA_i+(g_i-\hat{\sqrt{g_i}})\hat{B_i}\hat{A_i}$), which is our engineering approximation for achieving Eq.11 and Eq.12. 
By forwarding as in Eq.13, the gradient updating process of $X$ can be derived as follows (similar to Eq.9 and Eq.10, we treat the gate values $g_i$ as constants when focusing on the gradients of $A_i$ and $B_i$):

$$
\begin{aligned}
X_{new}=&\hat{W}+\sum_{i=1}^{N_{Expert}}[\hat{\sqrt{g_i}}(B_i-\eta\nabla_{B_i}\mathcal{L})(A_i-\eta\nabla_{A_i}\mathcal{L})+(g_i-\hat{\sqrt{g_i}})\hat{B_i}\hat{A_i}] \\\\
=& (\hat{W}+\sum_{i=1}^{N_{Expert}}\hat{\sqrt{g_i}}B_iA_i+(g_i-\hat{\sqrt{g_i}})\hat{B_i}\hat{A_i})-\eta\sum_{i=1}^{N_{Expert}}\hat{\sqrt{g_i}}[B_i(\nabla_{A_i}\mathcal{L})+(\nabla_{B_i}\mathcal{L})A_i] \\\\
=& X-\eta\sum_{i=1}^{N_{Expert}}\hat{\sqrt{g_i}}(B_i\nabla_{A_i}\mathcal{L}+\nabla_{B_i}\mathcal{L}A_i) \\\\
=& X-\eta\sum_{i=1}^{N_{Expert}}(\hat{\sqrt{g_i}})^2Proj_{col(B_i)}(\nabla_{X}\mathcal{L})^T-\eta\sum_{i=1}^{N_{Expert}}(\hat{\sqrt{g_i}})^2Proj_{row(A_i)}(\nabla_{X}\mathcal{L}) \text{ (same as the derivation of Eq.10)} \\\\
=& X-\eta\sum_{i=1}^{N_{Expert}}g_iProj_{col(B_i)}(\nabla_{X}\mathcal{L})^T-\eta\sum_{i=1}^{N_{Expert}}g_iProj_{row(A_i)}(\nabla_{X}\mathcal{L}),
\end{aligned}
$$

so that Eq.12 is achieved. The advantages of implementing Eq.13 can be elaborated from two aspects: Firstly, it achieves Eq.12 when performing the gradient update of $X$ while still keeping the original behavior of training the gates, because it yields the same gate gradient, $\nabla_{g_i}X={A_i}^T{B_i}^T$, as normal forwarding; Secondly, it provides equivalent behavior and the same result as normal module forwarding $X=\hat{W}+\sum_{i=1}^{N_{Expert}}g_iB_iA_i$, and only requires a relatively low overhead. **Response to W4** about the presentation issue of the MoLA experiments We found that the order of candidates presented in our baseline comparison experiments (Table 4) may not have been appropriate. As you suggested, we have moved the $MoLA-SGD (2,4,6,8)$ candidate to the lower part of the table, alongside the other MoLA candidates. 
**Response to W5** about the legend of Figure 3 As you mentioned, we notice that the legend of Figure 3 should be streamlined, since it risks covering part of the blue line (it does not actually overlap it, but it still reduces clarity). As a result, we shortened the names of the lines in the legend by deleting the word "Loss" and redrew the figure. **Response to your suggestion** of discussing practical implications Thank you for your suggestions. The practical implications of this work are mainly focused on boosting the training of MoE-LoRA, which may be applied to fields like efficient and low-resource model training, continual or multi-task learning, stabilized training and modular task adaptation, etc. For example, some current works use the MoE-LoRA structure to distill knowledge from a much larger dense model, such as Xu et al. [1] Our proposed method may enhance their distillation process. We have added these practical implications to our paper, as a new section before the conclusion section. **Response to your suggestion** about open-sourcing Yes, we will open-source our implementation on Github after this paper is published. [1] Xu, Haiyang, et al. "Sparse Mixture of Experts Language Models Excel in Knowledge Distillation." CCF International Conference on Natural Language Processing and Chinese Computing. Singapore: Springer Nature Singapore, 2024.
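As a supplement to the Eq.13 discussion in this rebuttal, the claimed $\sqrt{g}$ gradient scaling can be sanity-checked numerically. The following is a hypothetical single-expert sketch (with the detached copies $\hat{B}$, $\hat{A}$ modeled as fixed constant snapshots), not the authors' implementation:

```python
import numpy as np

# Hypothetical single-expert sanity check of the Eq.13 forwarding trick:
# with L = sum(X), the gradient w.r.t. A should carry a factor sqrt(g),
# versus the factor g obtained from normal forwarding X = W + g*B@A.
rng = np.random.default_rng(1)
g = 0.25
W = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 2))
A = rng.standard_normal((2, 3))
B_hat, A_hat = B.copy(), A.copy()  # detached snapshots (constants)

def forward_eq13(A_var):
    return W + np.sqrt(g) * B @ A_var + (g - np.sqrt(g)) * B_hat @ A_hat

# Central finite difference of dL/dA[0, 0] for L = sum(X).
eps = 1e-6
E = np.zeros_like(A)
E[0, 0] = eps
num_grad = (forward_eq13(A + E).sum() - forward_eq13(A - E).sum()) / (2 * eps)

# Analytic gradient through Eq.13: sqrt(g) * (B^T @ dL/dX), with dL/dX = ones.
ana_grad = np.sqrt(g) * (B.T @ np.ones((3, 3)))[0, 0]
print(np.isclose(num_grad, ana_grad, atol=1e-4))  # prints True
```

Since the forward map is linear in `A`, the central difference is exact up to floating-point rounding, so the two gradients agree to high precision.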
Summary: This work proposes an improved training strategy for MoE-LoRA, aiming to address the limited representation and suboptimal gradient issues when fine-tuning foundation models with plain MoE-LoRA. They first analyze the limitations of LoRA, including the insufficient representation capacity of low-rank matrices and gradient optimization problems. To enhance the representation power of LoRA, they introduce the MoE framework and then incorporate Riemannian Preconditioners to optimize the gradient update process. Through theoretical analysis and experimental validation, they demonstrate the effectiveness of the improved method in various downstream tasks, including question answering, language understanding, and vision-language tasks. Claims And Evidence: Yes, the claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria make sense for the problem or application at hand. Theoretical Claims: Yes, I carefully reviewed the theoretical proofs presented in the manuscript, particularly focusing on the core contributions related to the improved MoE-LoRA training strategy. Experimental Designs Or Analyses: Yes, I have reviewed the soundness and validity of the experimental designs and analyses. Supplementary Material: Yes, I have reviewed the entire supplementary material. Relation To Broader Scientific Literature: The key contributions of the paper are well-grounded and significantly advance the broader scientific literature, particularly in LoRA, MoE, and optimization techniques for foundation models. Essential References Not Discussed: All key related works are discussed in the paper, to the best of my knowledge. Other Strengths And Weaknesses: ***Strengths,*** **Innovation and Theoretical Value**: The introduction of the Riemannian Preconditioner into the MoE-LoRA is an innovative approach. 
This work addressed the instability issues encountered during the training of plain MoE-LoRA. Moreover, the manuscript provides a detailed theoretical analysis of the gradient update process in MoE-LoRA, revealing the underlying problems. They further propose a gradient rescaling method based on gating values, which offers a solid theoretical foundation for their method. **Experiments and Performance**: The method has been extensively tested across a variety of downstream tasks, including question answering, GLUE benchmark tests, and vision-language tasks, using different base models such as Llama, GLM, and LLaVA. The results demonstrate the effectiveness and generalizability of the proposed approach. The improved MoE-LoRA achieves significant performance improvements when using base optimizers. **Practicality**: The manuscript introduces an engineering approximation method, which decomposes the optimized and non-optimized parts in the forward propagation. This approach effectively resolves the difficulties associated with directly implementing the theoretical method, making it practical. Personally, I'm intrigued by this section. **Flexibility**: This work can be seamlessly integrated into existing MoE-LoRA baselines, such as MoLA. It can serve as a theoretical complement to current MoE-LoRA training strategies. ***Weaknesses,*** **Notations issues**: Some necessary notations and operators should be declared before use, even if they are commonly used conventions. For instance, symbols like X and Proj. should be defined. **Unclear Abbreviations**: Abbreviations should be fully explained, especially those that may not be universally understood. For instance, FFN should be clearly defined. Additionally, what is the meaning of FFT? Might this be a small typo? **More explicit conclusion** (Equation 12): As far as I understand, Equation 12 appears to be the core conclusion of this work. 
Therefore, it is crucial to provide a clear and detailed explanation of how and why Equation 12 can achieve full fine-tuning equivalency. This will help readers better understand the core contribution of this work. **Analytical glitches**: The parameters n/k/r seem to be of significant importance, yet their analysis appears insufficient. A more thorough investigation is needed, and the optimal candidates should be emphasized. Other Comments Or Suggestions: It is commendable that this work offers considerable theoretical depth. The authors provide rigorous derivations to demonstrate mathematically that the method can achieve full fine-tuning equivalency. Additionally, they provide robust engineering implementations and alternative approximations, making the method both practical and scalable. However, given that the manuscript involves a substantial number of formulas and derivations, it is strongly recommended that the authors carefully review each step of the derivations to ensure their rigor and accuracy. Questions For Authors: After carefully reviewing the theoretical section, although I understand how the proposed training strategy achieves full fine-tuning equivalency, it would be highly beneficial for the manuscript if the authors could provide a clearer explanation of how and why Equation 12 enables full fine-tuning equivalency. This would greatly enhance the readability and comprehension of the core contribution. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your **acknowledgment of our innovations and theoretical value**. We have checked our paper again carefully to address the issues you mentioned. Here are our responses to your valuable concerns: **Response to W1 and W2** about the notation and abbreviation issues. We have carefully reviewed our paper again to address the undeclared or unclear notations and abbreviations. For example, as you mentioned, the symbol $X$ represents the overall weight matrix after integrating the pretrained weights $W$ and the LoRA modules ($A$s and $B$s); $Proj_V(M)$ denotes a projection function that projects a given matrix $M$ onto the subspace spanned by the vectors in the set $V$. When we treat the vectors in $V$ as the rows of a matrix $P$, the projection can be computed as $Proj_{row(P)}(M) = MP^T(PP^T)^{-1}P$. FFN and FFT represent different concepts: FFN is the abbreviation for Feed-Forward Network, and FFT is the abbreviation for Fully Fine-Tuning. In our revision, we have addressed all the above notation issues, as well as some other unclear notations we found during our re-check. **Response to W3 and Q1** about more explanation of Eq. 12 and its full fine-tuning equivalency. Eq. 12 is the refined Riemannian-preconditioned backpropagation equation in the MoE case, after applying our proposed gate-based rescaling method. It further approaches global full fine-tuning for two basic reasons. Firstly, it is derived by applying Riemannian preconditioners to calibrate each LoRA expert's gradient (given by Eq. 6), thus ensuring each LoRA expert can locally approach its full-rank training behavior (i.e., per-expert full fine-tuning equivalency), according to Zhang et al. [1]. Secondly, we notice a further distortion of each expert space introduced by its gate value, leading to an inconsistency between the per-expert local optima and the global optimum.
Therefore, we further introduce the respective gate value $g_i$ as a re-scaler for each expert's Riemannian preconditioner (given by Eq. 11), to relieve the expert distortion resulting from the multiplication by the gate value during the forward pass. As a result, Eq. 12 can further approach global full fine-tuning equivalency (e.g., larger gate values introduce less distortion, so through Eq. 12, experts with larger gate values are re-scaled less than those with smaller ones). We have integrated the above discussion of full fine-tuning equivalency into our revision. **Response to W4** about insufficient $n/k/r$ analysis. We presented our $n/k/r$ analysis in Table 5 in Section 4.6, which covers seven different candidates tested under the SGD and AdamW optimizers, with Llama-3.2-3B as the foundation model. Table 5 already demonstrates our overall effectiveness across various $n/k/r$ configurations. To make the investigation more thorough, we added two new experiments with LLaVA-v1.5-7B under two different $n/k/r$ configurations (16/8/4 and 10/5/4). Please refer to the results in our rebuttal to Reviewer 1wAK; an overall improvement from our method can still be observed. We have already emphasized our preliminary conclusion in Section 4.6 that the value of $k$ is more important for the performance boost of our method under SGD optimizers. We now provide a further analysis of both the performance boost and the final overall performance. Firstly, for the performance boost, we calculate its correlation with $n$, $k$, and $r$ from Table 5, obtaining $0.357$, $0.912$, and $0.093$ respectively under SGD optimizers, which confirms our preliminary conclusion.
Under AdamW optimizers, the correlations are $0.001$, $0.197$, and $0.806$, indicating that $r$ might be more important for boosting AdamW. Secondly, for the final performance, we argue that it results from both the boosting effectiveness of our method and the fundamental features of MoE (for instance, a too-large MoE structure may lead to underfitting, while a too-small one may lead to overfitting). Under the mixed influence of both aspects, the optimal configuration for Llama-3.2-3B in our ScienceQA experiments is $10/5/1$ with our re-scaled SGD optimization, while for LLaVA-v1.5-7B in the VMCBench experiments it is $10/5/4$ with our re-scaled AdamW optimization. **Response to your suggestion** of reviewing each derivation step. Thank you for your suggestion. We have reviewed all the derivation steps in our paper again to make sure each step is theoretically correct. [1] Zhang, Fangzhao, and Mert Pilanci. "Riemannian preconditioned LoRA for fine-tuning foundation models." arXiv preprint arXiv:2402.02347 (2024).
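The row-space projection $Proj_{row(P)}(M) = MP^T(PP^T)^{-1}P$ defined in our response to W1 and W2 can be illustrated with a short NumPy sketch. The shapes below are hypothetical, not from the paper; the sketch just verifies the two defining properties of an orthogonal projection.

```python
import numpy as np

# Minimal sketch of the row-space projection Proj_{row(P)}(M) = M P^T (P P^T)^{-1} P.
def proj_row(M, P):
    """Project each row of M onto the row space of P (P assumed full row rank)."""
    G = P @ P.T                       # Gram matrix of the rows of P
    return M @ P.T @ np.linalg.inv(G) @ P

rng = np.random.default_rng(0)
P = rng.standard_normal((4, 16))      # hypothetical: 4 basis rows in R^16
M = rng.standard_normal((8, 16))      # hypothetical matrix to project

MP = proj_row(M, P)
# Projection is idempotent: projecting twice changes nothing.
assert np.allclose(proj_row(MP, P), MP)
# The residual is orthogonal to the row space of P.
assert np.allclose((M - MP) @ P.T, 0, atol=1e-9)
```

The assertions confirm idempotency and orthogonality of the residual, which is all the notation in our response relies on.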
Learning to Quantize for Training Vector-Quantized Networks
Accept (poster)
Summary: This paper proposes an improvement to the STE method for training VQ networks. While the backpropagated gradient bypasses the codebook in the STE framework, this paper proposes Meta Quantization (MQ), which adopts a bi-level optimization strategy and learns quantization with a hyper-net in a meta-learning fashion. This enables the task loss to reach the codebook through multiple routes. The hyper-net can also be discarded, retaining only the codebook weights, without affecting downstream tasks. Empirical evidence suggests better reconstruction and generation quality across various datasets. Claims And Evidence: • Claim: The paper argues that introducing the MSE loss for codebook optimization via the hyper-net could enhance training. o Question: Although the experiments show the effectiveness of the hyper-net-generated codebook on image reconstruction and generation, there is little direct discussion of the computational cost or the convergence difficulty of the meta-learning strategy. o Potential Improvement: It would be helpful to compare a conventional VQN and MQ trained for the same number of epochs, or to report the number of steps each needs to converge. • Claim: Hyper-net reparameterization and meta-learning help avoid code collapse. o Question: The authors show near-100% code usage on image reconstruction and generation; however, the process of codebook optimization itself is not clearly described. o Potential Improvement: It would be better to demonstrate how codes are generated and selected, and which part is optimized. As mentioned in Sec. 4.1, only a few codes are selected, yet a substantial part of the hyper-net still receives gradients. Methods And Evaluation Criteria: • Methods: The authors optimize the codebook with a hyper-net and introduce the task loss for explicit codebook improvement. This makes sense for the index collapse problem and for potentially better image reconstruction or generation performance.
• Evaluation Criteria: The paper relies on standard generative modeling metrics such as MSE, LPIPS, SSIM, and FID across established datasets, following [Straightening’ 23] by Huh et al., published in PMLR 2023. • Potential Weakness: Since the method focuses on explicit codebook improvement, it would be better to illustrate the codebook distribution, using tools such as t-SNE, to directly validate the effectiveness of the proposed method. Theoretical Claims: The paper uses the logic of meta-learning, referencing the established notion that bi-level optimization can jointly optimize both the model and its “hyperparameters”. Experimental Designs Or Analyses: Design Strength: The authors compare their method to a variety of widely used VQN baselines on multiple tasks. They measure codebook usage, reconstruction quality, and generation quality, providing a holistic overview. Potential Weakness: 1. The reconstruction evaluation in Tab. 1 follows [Straightening’ 23] by Huh et al., published in PMLR 2023, but some results are missing. E.g., VQVAE+Affine+OPT+replace+$l_2$ shows an MSE of 1.74 and LPIPS of 0.227, better than the MQVAE proposed in this paper, with an MSE of 3.05 and LPIPS of 0.29. It would be better to include these missing results and analyze the gap. 2. Recent research such as CVQ-VAE [Online’ 23] by Zheng et al., published in ICCV 2023, should also be considered for a more comprehensive comparison. Supplementary Material: NA Relation To Broader Scientific Literature: The authors reference VQVAE, VQGAN, and relevant codebook-improvement methods (Gumbel-VQ, VQGAN-LC, FSQ, etc.). They also draw comparisons to meta-learning methods and hyperparameter optimization. Essential References Not Discussed: The paper does cite and discuss relevant references, including meta-learning strategies and VQNs that focus on codebook collapse.
However, it would be better to expand the discussion of recent VQN research dealing with codebook collapse and to compare against it, such as CVQ-VAE [Online’ 23] by Zheng et al., published in ICCV 2023. Other Strengths And Weaknesses: Strengths: 1. This paper proposes a novel vector quantization network, combining meta-learning methods with VQNs. 2. The approach is adaptable to multiple VQN models without affecting downstream tasks, so it could be straightforward to integrate into new pipelines. Additional Weaknesses (in more detail): 1. Computational Overhead: The new approach requires unrolled gradient steps (or finite-difference approximations). The paper mentions an approximation but does not provide an in-depth breakdown of the training cost or memory usage on large codebooks. 2. Limited Sensitivity Analysis: The ablation studies are somewhat narrow. The authors partially test turning off bi-level optimization or using various hyper-net types, yet do not systematically explore the effect of different unroll lengths or hyper-net layer widths. 3. Generalization Beyond Vision: The paper focuses on visual tasks. It remains unclear how effective other task losses would be and how they would affect codebook optimization via the hyper-net. Other Comments Or Suggestions: • Implementation Details: Additional clarifications about hyperparameter tuning and computational cost would help replicate the strong results. • Downstream Utility: While the paper reports standard metrics, some real-world usage scenarios or domain-specific tasks (e.g., image segmentation, speech coding) might illustrate practical advantages. Questions For Authors: 1. Choice of Unroll Steps: Have you tried multiple unroll steps (beyond one step) or different finite-difference approximations? How does that affect training time and results? 2. Potential Memory Overheads: Could you provide more precise measurements of how memory/time usage increases for the partial unrolling or the hyper-gradient approximations?
3. Domain Generalization: Do you think the approach can directly transfer to other discrete representation tasks such as speech tokens or molecular modeling, without major changes? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate your constructive feedback very much. We provide our response to your review as follows. > Computational Cost and Memory Overheads We conducted additional experiments to address your concerns. When evaluated on the CelebA dataset with a batch size of 128, the increase in memory usage is marginal and acceptable in practice. For time comparison, we set VQVAE as the baseline, which needs around 3.6 hours to finish the training of 50k steps (and does not improve after that). We find that MQVAE only requires 2.4 hours to reach the same LPIPS score as VQVAE (around 9.5k steps), and keeps improving after that. This demonstrates that our method converges much faster than VQVAE and is able to outperform baselines with extended training time. | Method | Memory (GB) | Wall time to reach baseline (h) | Total wall time (h) | | ---------------- | ----------- | ------------------------------- | ------------------- | | VQVAE (baseline) | 7.29 | 3.6 | 3.6 | | MQVAE | 7.35 | 2.4 | 12.2 | > Difficulty of converging using the meta-learning strategy Empirically, we did not observe stability issues during training. Theoretically, related convergence analyses for this type of gradient-based bilevel optimization algorithm can be found in [5], [6], and the references therein. Our MQ belongs to this type of optimization and is guaranteed to be stable and converge under certain conditions. > Demonstrate how codes are generated and selected, and which part is optimized; the codebook distribution; implementation details Please follow this anonymous link https://anonymous.4open.science/r/MQVAE-B52C for the figure illustration and code implementation; we will open-source them once the paper is accepted. > Compare with VQVAE+Affine+OPT+replace+l_2 and CVQ-VAE According to the performance reported in [1], MQVAE performs slightly worse than the combination of VQVAE+Affine+OPT+replace+$l_2$.
Fortunately, we can show that when combined with additional techniques such as $l_2$ projection, MQVAE can still outperform [1] on the MNIST dataset, as shown in the table below. We also include a comparison with CVQ-VAE [2] for your reference. | Method | MSE ($\times 10^{-3}$) | LPIPS ($\times 10^{-1}$) | | ------------------------------ | ---------------------- | ------------------------ | | CVQ-VAE | 2.87 | 3.73 | | VQVAE+Affine+OPT+replace | 1.81 | 2.56 | | VQVAE+Affine+OPT+replace+$l_2$ | 1.74 | 2.27 | | Ours+$l_2$ | 1.64 | 2.18 | > Limited Sensitivity Analysis: Have you tried multiple unroll steps (beyond one step) or different finite-difference approximations? How does that affect training time and results? Our finite-difference (FD) approximation currently supports only one-step unrolling. We conducted additional experiments with alternative approximations, including the conjugate gradient (CG, [3]) and Neumann series (NMN, [4]) methods. Our ablation studies were done on the CelebA dataset. We found that the type of approximation largely does not affect the results. | Method | MSE | LPIPS | | ------ | ---- | ----- | | FD | 3.10 | 0.14 | | CG | 3.24 | 0.14 | | NMN | 2.98 | 0.14 | > Downstream Utility and Domain Generalization: Do you think the approach can directly transfer to other discrete representation tasks, such as speech tokens or molecular modeling, without major changes? Based on our experiments, we believe that our framework can be directly adapted to other discrete representation tasks. Our approach is flexible in accommodating different types of task loss, as demonstrated in our experiments where both MSE loss and perceptual loss have been evaluated. In both cases, MQ shows superiority over VQ. [1] Huh, Minyoung, et al. "Straightening out the straight-through estimator: Overcoming optimization challenges in vector quantized networks." International Conference on Machine Learning. PMLR, 2023. [2] Zheng, Chuanxia, and Andrea Vedaldi. "Online clustered codebook." 
*Proceedings of the IEEE/CVF International Conference on Computer Vision*. 2023. [3] Rajeswaran, Aravind, et al. "Meta-learning with implicit gradients." *Advances in neural information processing systems* 32 (2019). [4] Lorraine, Jonathan, Paul Vicol, and David Duvenaud. "Optimizing millions of hyperparameters by implicit differentiation." *International conference on artificial intelligence and statistics*. PMLR, 2020. [5] Pedregosa, Fabian. "Hyperparameter optimization with approximate gradient." *International conference on machine learning*. PMLR, 2016. [6] Rajeswaran, Aravind, et al. "Meta-learning with implicit gradients." *Advances in neural information processing systems* 32 (2019). --- Rebuttal Comment 1.1: Comment: Thanks for the explanation.
Summary: This paper proposes Meta-Quantization, a novel vector quantization training framework inspired by meta-learning, which decouples the optimization of the codebook and the autoencoder into two stages, enabling dynamic codebook generation and task-specific training. The proposed method outperforms existing vector quantization approaches on image reconstruction and generation tasks. Claims And Evidence: This paper achieves direct backpropagation to the codebook instead of using the STE, making codebook training task-specific. Experiments on both image generation and reconstruction tasks show that Meta-Quantization consistently outperforms multiple baselines and ablation methods, validating its superiority. The experiments also show that the method achieves the best codebook utilization. Methods And Evaluation Criteria: **Methods** - The paper innovatively introduces a hyper-net in place of the embedding-parameterized codebook. This substitution circumvents the need for direct optimization of the codebook itself. Moreover, after the first-stage training, only the generated codebook needs to be stored, and it can be directly applied in the subsequent training process. - To tackle the optimization of the hyper-net and the encoder-decoder, the paper employs a two-stage optimization framework in which the parameters of the two structures are optimized hierarchically, using an efficient gradient-based optimization algorithm with a finite-difference approximation. **However**, is the bi-level optimization approach adopted in this paper necessary? Since the hyper-network is also composed of linear layers or MLPs, if we backpropagated through the hyper-network parameters $\psi$ together with the encoder-decoder parameters $\phi$ and $\theta$ in a single-step training scheme, would the performance differ significantly from bi-level optimization?
Is there any practical or theoretical reason why bi-level optimization might yield superior results? **Evaluation Criteria** Evaluation with VQVAE - In the image reconstruction task, in addition to MSE and LPIPS, this paper uses the perplexity of the model as an evaluation metric to measure the uniformity of codebook usage; a higher perplexity value indicates a more uniform assignment of codes. The evaluation is carried out on the CIFAR10 and CelebA datasets. - In the image generation task, the FID metric is adopted. MaskGIT is applied to the CelebA dataset, and the results are extended to the image generation task, enabling the direct utilization of the codebook trained in the first stage. Evaluation with VQGAN - The model is trained on the ImageNet-1K and FFHQ datasets. - In the image reconstruction task, the evaluation metrics include rFID, LPIPS, PSNR, and SSIM. The assessment is conducted on the validation sets of ImageNet and FFHQ. - In the image generation task, the FID metric is used, and the evaluation is performed on the FFHQ dataset. Theoretical Claims: This paper references the gradient analysis of DARTS's bi-level optimization and generalizes the optimization algorithm; the proofs contain no errors. Experimental Designs Or Analyses: The experiments in this paper are comprehensive, with evaluations on the state-of-the-art models VQVAE and VQGAN. The proposed method is compared against other SOTA models, incorporating codebook utilization comparisons alongside the original evaluation metrics. Additionally, implicit codebook methods such as FSQ and LFQ are also analyzed. Experimental results demonstrate that the proposed approach not only utilizes the codebook efficiently but also achieves superior performance across all tasks. In the ablation studies, experiments are conducted to validate the effectiveness of bi-level optimization and to compare different hyper-net architectures.
Results show that the MLP-based hyper-net outperforms the other variants, suggesting that more complex hyper-net designs consistently yield better performance. Supplementary Material: This paper does not include any appendices or supplementary materials. Relation To Broader Scientific Literature: This paper adapts the DARTS method to the field of vector quantization, which has implications for the image tokenization domain but little impact on other areas. Essential References Not Discussed: Essential references have been discussed. Other Strengths And Weaknesses: The description of convergence is inconsistent. While Figure 2 states that $\phi$ and $\theta$ are trained to convergence before training $\psi$, Algorithm 1 shows that they are updated together. This discrepancy creates ambiguity regarding the actual optimization procedure implemented in the paper. Other Comments Or Suggestions: There are no other comments or suggestions. Questions For Authors: In the introduction, it is mentioned that the codebook utilization of previous methods is low. However, in the experiments (Tables 3, 4, 5), the codebook utilization of VQGAN-LC is also quite high. Please justify this conclusion and explain the advantages of this method over VQGAN-LC in terms of codebook utilization. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate your constructive feedback very much. We provide our response to your review as follows. > Is the bi-level optimization approach adopted in this paper necessary? Yes, it is necessary. The two components of our method address distinct challenges. Specifically, the hypernet resolves the issue of codebook collapse and enhances codebook utilization, while the bilevel optimization approach ensures that the task loss gradient reaches the codebook. As demonstrated in our ablation studies (Section 5.3), both components are essential and must be combined to achieve optimal results. > The description of convergence is inconsistent. We apologize for the confusion. The figure illustrates the workflow of bilevel optimization for a single gradient step, whereas the algorithm blocks detail our specific implementation. Our approach applies a finite difference-based approximation, wherein one gradient descent step approximates the converged solution of the autoencoder. Thus, a consistent pseudo-algorithm is to replace "Update $\psi$ using gradient descent: $\nabla_{\psi}\mathcal{L}(\phi-\xi\nabla_\phi\mathcal{L}(\phi, \theta, \psi), \theta-\xi\nabla_\theta\mathcal{L}(\phi, \theta, \psi), \psi)$" in the while loop with "Copy $\phi^\prime=\phi$ and $\theta^\prime=\theta$, and update $\phi^\prime$ and $\theta^\prime$ using gradient descent by $\nabla_\phi\mathcal{L}(\phi, \theta, \psi)$ and $\nabla_\theta\mathcal{L}(\phi, \theta, \psi)$ until converging, resulting in $\phi^{\prime*}$ and $\theta^{\prime*}$ (the computation graph is retained during descent). Update $\psi$ using gradient descent: $\nabla_{\psi}\mathcal{L}(\phi^{\prime*}, \theta^{\prime*}, \psi)$". This is consistent with the figure description, in which the lower level is optimized until convergence, followed by one update of the upper level. 
We use a one-step unroll scheme, approximating $\phi^{\prime*} \approx \phi-\xi\nabla_\phi\mathcal{L}(\phi, \theta, \psi)$ and $\theta^{\prime*} \approx \theta-\xi\nabla_\theta\mathcal{L}(\phi, \theta, \psi)$, which corresponds to the pseudo-algorithm presented in our paper. > Justification of advantages over VQGAN-LC. VQGAN-LC also addresses codebook under-utilization and thus exhibits higher codebook usage compared to the original VQGAN. In contrast, our approach employs a hypernet reparameterization, which enables simultaneous updates of all codebook entries.
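To make the one-step unroll scheme above concrete, here is a toy scalar sketch (not our actual implementation: the joint loss, step size, and the collapse of $\phi$ and $\theta$ into a single scalar `phi` are all hypothetical). It computes the hypergradient $\nabla_\psi\mathcal{L}(\phi', \psi)$ analytically through the inner step $\phi' = \phi - \xi\nabla_\phi\mathcal{L}$ and checks it against a central finite difference.

```python
import numpy as np

# Hypothetical joint loss L(phi, psi); the inner (lower-level) variable is phi,
# the outer (upper-level) variable is psi.
def loss(phi, psi):
    return (phi - psi) ** 2 + 0.1 * psi ** 2

def unrolled(psi, phi, xi):
    # One inner gradient step: phi' = phi - xi * dL/dphi = phi - xi * 2(phi - psi)
    phi1 = phi - xi * 2.0 * (phi - psi)
    return loss(phi1, psi)

phi, psi, xi = 1.5, 0.3, 0.1
# Analytic hypergradient: chain rule through the inner step, dphi'/dpsi = 2*xi,
# so dL/dpsi = 2(phi' - psi)(2*xi - 1) + 0.2*psi.
phi1 = phi - xi * 2.0 * (phi - psi)
analytic = 2.0 * (phi1 - psi) * (2.0 * xi - 1.0) + 0.2 * psi
# Central finite-difference check over psi, as in DARTS-style approximations.
eps = 1e-6
numeric = (unrolled(psi + eps, phi, xi) - unrolled(psi - eps, phi, xi)) / (2 * eps)
assert abs(analytic - numeric) < 1e-5
```

The point of the sketch is that the hypergradient picks up an extra term through the inner step (here via $d\phi'/d\psi = 2\xi$), which is exactly the indirect gradient path that the one-step unroll retains.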
Summary: The paper proposes to train VQ-VAE under a meta-learning framework. More specifically, the paper introduces a hyper-network to replace the embedding-parameterized codebook and trains the model with bi-level optimization. Experiments are conducted on image reconstruction and generation tasks. The proposed MQ-VAE improves over the VQ baseline. Claims And Evidence: The Gradient Analysis section is quite interesting. However, empirical evidence is lacking. I can only find an ablation in Table 6, which disables the indirect gradient by zeroing out $\xi$, and the improvement is modest. I am curious about the true magnitude of the indirect gradient compared to the direct one; a plot over training iterations may better support this claim. Methods And Evaluation Criteria: ++ Applying bi-level optimization to training VQ-VAE/VQGAN is reasonable. ++ The evaluation criteria include image reconstruction and image generation. Theoretical Claims: The formulation and derivations look correct. There are no particular proofs for theoretical claims. Experimental Designs Or Analyses: The experimental designs are sound and comprehensive. Specifically, the baseline methods include VQVAE and VQGAN. Supplementary Material: No supplementary material is provided. Relation To Broader Scientific Literature: The difficulty of training VQ-VAE lies in the non-differentiability of the *argmin* operator in the bottleneck. This paper proposes a bi-level optimization approach to mitigate this issue. A recent paper [1] tackles this by bounding the quantization error with a spherical vector-quantization-like bottleneck. The vector quantization is implicitly modeled by a linear projection, which resembles the hypernetwork design. However, that model can be trained without bi-level optimization, achieving very similar results on ImageNet 128x128 in Table 4.
It looks like these two papers tackle the same problem from different perspectives, and it would be good to see whether [1] can also benefit from the proposed bi-level optimization technique. [2] also proposes to propagate gradients more smoothly via a rotation. [1] Zhao, et al. "Image and video tokenization with binary spherical quantization." arXiv preprint arXiv:2406.07548 (2024). [2] Fifty, et al. "Restructuring Vector Quantization with the Rotation Trick." arXiv preprint arXiv:2410.06424 (2024). Essential References Not Discussed: See above. Other Strengths And Weaknesses: ==== Strengths ==== 1. The paper is well-written and easy to understand. The overview figure nicely illustrates the gradient flow. 2. Evaluations are comprehensive. ==== Weaknesses ==== Some weaknesses have been covered above. There is one more weakness that I would love to point out. 1. Bi-level optimization introduces computational overhead, and the paper does not report any results on the computational cost. The paper also provides few implementation details. Nevertheless, an interesting experiment would be to compare MQ-VAE with a regular single-level-optimized VQVAE under the same training budget. In other words, the vanilla VQVAE would enjoy a longer training schedule, which in my experience generally leads to consistent improvement. Other Comments Or Suggestions: No. Questions For Authors: 1. Training cost compared to vanilla VQ-VAE. 2. A fairer comparison with VQ-VAE under the same training budget, i.e., allowing the baseline to train for more iterations. 3. More illustration of the gradient analysis (see the Claims and Evidence section). Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate your constructive feedback very much. We provide our response to your review as follows. > Magnitude Comparison In our experiments, we have found that the magnitudes of both the indirect and direct gradients are around $10^{-2}$, so neither dominates the other. Please follow this anonymous link https://anonymous.4open.science/r/MQVAE-B52C for the figure. > Whether [1] can benefit from bi-level optimization We want to clarify that [1] addresses an essentially different problem. In our formulation, the core idea is to generate a codebook using (1) a learnable embedding and (2) a learnable transformation (e.g., linear projection). The method presented in [1] does not involve learnable embeddings; moreover, its projections are considered part of the backbone autoencoder architecture, whereas in our case, the projection (a hypernet) is integrated into the codebook. Thus, [1] falls outside the scope of our study, and we cannot comment on applying bilevel optimization to [1] based on our main paper's arguments. > Relation to [2] This work primarily focuses on improving the STE estimator used in the original VQVAE paper. Although it facilitates smoother gradient propagation through the non-differentiable quantization layer, the codebook update still depends solely on the distribution of codes and features and does not incorporate the task loss. Therefore, that work addresses a different issue and is orthogonal to our approach. > Computation Cost and Implementation Details We conducted additional experiments to address your concerns. When evaluated on the CelebA dataset with a batch size of 128, the increase in memory usage is marginal and acceptable in practice. For time comparison, we set VQVAE as the baseline, which needs around 3.6 hours to finish the training of 50k steps (and does not improve after that). We find that MQVAE only requires 2.4 hours to reach the same LPIPS score as VQVAE (around 9.5k steps), and keeps improving after that.
This demonstrates that our method converges much faster than VQVAE and is able to outperform baselines with extended training time. | Method | Memory (GB) | Wall time to reach baseline (h) | Total wall time (h) | | ---------------- | ----------- | ------------------------------- | ------------------- | | VQVAE (baseline) | 7.29 | 3.6 | 3.6 | | MQVAE | 7.35 | 2.4 | 12.2 | [1] Zhao, et al. "Image and video tokenization with binary spherical quantization." arXiv preprint arXiv:2406.07548 (2024). [2] Fifty, et al. "Restructuring Vector Quantization with the Rotation Trick." arXiv preprint arXiv:2410.06424 (2024).
Summary: This paper introduces Meta-Quantization (MQ), which uses a hyper-net and bi-level optimization to alternately train the codebook and the autoencoder in Vector Quantization Networks (VQN). Experiments show MQ achieves better codebook utilization, image reconstruction, and generation performance. Claims And Evidence: Yes. VQ has optimization issues, and this paper tackles them. Methods And Evaluation Criteria: - The optimization seems to be very complicated. Why not use a simpler method such as clustering the embeddings or an EMA update? What are the advantages over existing methods for better codebook utilization? - What are the training stability and efficiency of the optimization compared to other methods? - I am confused about the details of the hyper-net. In lines 287-292, it is described as an MLP. Does it mean it takes a single embedding and generates a single code entry? How does it produce the whole codebook? Theoretical Claims: Yes. Experimental Designs Or Analyses: Lack of comparison of the training stability and efficiency of the optimization with other methods. Supplementary Material: No supplementary material provided. Relation To Broader Scientific Literature: This paper is related. It proposes a meta-learning approach for codebook optimization. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: - The MQ method and the optimization algorithm make sense. - Results are strong. Weakness: - It is not clear whether the training is efficient. Other Comments Or Suggestions: Please provide more details of the hyper-net. Questions For Authors: In lines 69-74, what is the "specific task"? What is "task-aware"? Is it the VQVAE loss? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate your constructive feedback very much. We provide our response to your review as follows. > Why not use a simpler method such as clustering the embeddings or an EMA update One of the advantages and novel aspects of MQ, compared to simpler methods, is that the codebook update follows a more complete loss path, in that it directly interacts with the task loss gradient. For example, the simplest VQVAE is designed to reconstruct the input image using the mean squared error loss. However, updating the codebook by pulling codes and features closer together in latent space does not involve the MSE between the input image and the reconstructed output. In contrast, our meta-learning-based method avoids this incomplete gradient issue. Please refer to Figure 2 in the main paper for additional details. > What is the training stability and efficiency of the optimization compared to other methods? Empirically, we did not observe stability issues during training. Theoretically, related convergence analyses for this type of gradient-based bilevel optimization algorithm can be found in [1], [2], and the references therein. Our MQ belongs to this type of optimization and is guaranteed to be stable and converge under certain conditions. > It is not clear if the training is efficient. We conducted additional experiments to address your concerns. When evaluated on the CelebA dataset with a batch size of 128, the increase in memory usage is marginal and acceptable in practice. For the time comparison, we set VQVAE as the baseline, which needs around 3.6 hours to finish training for 50k steps (and does not improve after that). We find that MQVAE only requires 2.4 hours to reach the same LPIPS score as VQVAE (around 9.5k steps), and keeps improving after that. This demonstrates that our method converges much faster than VQVAE and is able to outperform baselines with extended training time.
| Method | Memory (GB) | Wall time to reach baseline (h) | Total wall time (h) | | ---------------- | ----------- | ------------------------------- | ------------------- | | VQVAE (baseline) | 7.29 | 3.6 | 3.6 | | MQVAE | 7.35 | 2.4 | 12.2 | > I am confused about the details of the hyper-net. In lines 287-292, it is described as an MLP. Does it mean it takes a single embedding and generates a single code entry? How does it produce the whole codebook? In this case, the MLP separately projects each codebook entry with respect to its dimensionality, meaning that if the embeddings are $e_n, n=1,\ldots,N$, the generated codebook is $MLP(e_n), n=1,\ldots,N$. Nevertheless, if any $MLP(e_n)$ is selected and receives a gradient, the MLP will be updated, resulting in a different codebook being generated by this updated MLP. > In lines 69-74, what is the "specific task"? What is "task-aware"? Is it the VQVAE loss? The specific task depends on the benchmark. For instance, in image reconstruction tasks, the task loss is the MSE loss in the case of VQVAE, or a combination of perceptual loss, MSE loss, and adversarial loss in the case of VQGAN. Task awareness means that the gradient used to update the codebook directly incorporates the task loss. One contribution of this work is to establish this connection. In Figure 2, the task loss reaches the codebook through various pathways; previous VQ methods updated the codebook solely based on the gradient pulling selected codes toward selected features, without incorporating the task loss. [1] Pedregosa, Fabian. "Hyperparameter optimization with approximate gradient." *International conference on machine learning*. PMLR, 2016. [2] Rajeswaran, Aravind, et al. "Meta-learning with implicit gradients." *Advances in neural information processing systems* 32 (2019).
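The per-entry behavior of the hyper-net described above can be sketched in a few lines. This is a minimal numpy sketch under stated assumptions: the two-layer tanh MLP and all sizes are illustrative, not the paper's actual hyper-net configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: N codebook entries, embedding dim D, hidden width H.
N, D, H = 8, 4, 16
E = rng.normal(size=(N, D))          # base embeddings e_1, ..., e_N

# A shared two-layer MLP acting as the hyper-network (assumed architecture).
W1 = 0.1 * rng.normal(size=(D, H))
W2 = 0.1 * rng.normal(size=(H, D))

def hyper_net(E, W1, W2):
    """Generate the whole codebook: row n is MLP(e_n), applied independently."""
    return np.tanh(E @ W1) @ W2      # shape (N, D)

codebook = hyper_net(E, W1, W2)

# An update to the shared MLP (e.g. a gradient step triggered by one selected
# code) changes every generated entry, not only the selected one.
codebook_new = hyper_net(E, W1, W2 + 0.05)
changed = np.abs(codebook_new - codebook).max(axis=1) > 1e-12
```

This illustrates the rebuttal's point that the codebook is an implicit function of the shared MLP, so any gradient reaching the MLP propagates to all entries.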
ULPT: Prompt Tuning with Ultra-Low-Dimensional Optimization
Reject
Summary: The paper proposes a more efficient prompt tuning method in that they need to optimize over fewer variables. They achieve this efficiency through a kind of sketching with the Johnson-Lindenstrauss Lemma. They experiment on NLP tasks. Claims And Evidence: The most problematic claim is with respect to efficiency. In fact, you still need to reconstruct the large matrix $\tilde{P}$ at inference time, so you do not have a memory advantage in this respect. Also, you need to consider the additional computational requirements of such computations, which I didn't see discussed or evaluated empirically. For instance, LoRA weights can be simply merged with the model weights and no overhead at inference time is induced. So it seems to me the only advantage could be that this method requires even fewer trainable parameters at finetuning stage, but the limitations should be made much clearer in the text. Moreover, many LoRA-like methods that scale better than LoRA have been proposed but not compared in the experiments. Methods And Evaluation Criteria: GLUE is fine, but more reasoning tasks should be provided. I would suggest using more standard benchmarks and methods, such as Llama for autoregressive tasks. Very little evidence on the computational overhead introduced by the method in terms of time/memory. Theoretical Claims: OK, even if not very informative theorems. Experimental Designs Or Analyses: See above Supplementary Material: OK Relation To Broader Scientific Literature: Not all references to new LoRA-based methods are discussed; in fact, many methods that scale better than LoRA have been proposed. One of them is VeRA, which is discussed but not compared empirically. Other references include LISA and ReFT. Pan, Rui, et al. "LISA: layerwise importance sampling for memory-efficient large language model fine-tuning." Advances in Neural Information Processing Systems 37 (2024): 57018-57049. Wu, Zhengxuan, et al. "Reft: Representation finetuning for language models."
Advances in Neural Information Processing Systems 37 (2024): 63908-63962. Essential References Not Discussed: See above. Other Strengths And Weaknesses: Strengths - The writing is very clear - Good use of sketching - GLUE experiments are useful Other Comments Or Suggestions: Many equations lack a comma at the end Questions For Authors: See above. Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful comments, especially for recognizing our clear paper writing, effective parameter reduction through sketching, and useful GLUE experiments. We now provide detailed responses to each of the concerns. > “You still need to reconstruct the large matrix $\tilde{P}$ at inference time, so you do not have a memory advantage in this respect.” We thank the reviewer for the comment. At inference, ULPT does reconstruct the full prompt embeddings, using the same memory as vanilla prompt tuning. However, ULPT targets LLM customization, where an enormous number of customized LLMs are stored but few are active at a time. ULPT significantly reduces the storage for customizations of foundation models, which is a novel use case. > “You need to consider the additional computational requirements of such computations” and “Very little evidence on the computational overhead introduced by the method in terms of time/memory.” We appreciate this suggestion. To clarify, the computational overhead introduced by ULPT at inference time is minimal compared with the rest of the network, as the reconstruction of the prompt embeddings occurs only once for each loading of the model. We empirically compare the runtime of ULPT’s up-projection against vanilla PT (results are averaged over 100 runs). **Table 2** | Runtime Setting| Llama 1B| Llama 3B| |-|-|-| | Vanilla PT (loading high-dim embeddings)| 0.64 ± 0.04 ms| 0.91 ± 0.04 ms| | ULPT up-projection (r=2)|0.56 ± 0.06 ms| 0.59 ± 0.04 ms| | ULPT up-projection (r=64)|1.43 ± 0.09 ms|1.87 ± 0.06 ms| | ULPT up-projection (r=256)|4.09 ± 0.10 ms|5.80 ± 0.16 ms| | Decoding| 1481.15 ± 64.26 ms| 2536.67 ± 42.14 ms| As seen in Table 2, the embedding up-projection is negligible relative to the decoding time. We will include the runtime analysis in the revised manuscript. > “LoRA weights can be simply merged with the model weights and no overhead at inference time is induced.
So it seems to me the only advantage could be that this method requires even fewer trainable parameters at finetuning stage, but the limitations should be made much clearer in the text. ” We acknowledge that our ULPT is different from LoRA, as we do not keep the original weight matrix. As mentioned in the previous point, the overhead caused by up-projection is negligible compared with the rest of the network (at most 0.3%). The advantages of our work include: - As recognized by the reviewer, our method has far fewer trainable parameters than LoRA and other methods, which is crucial to the storage of massive numbers of customized LLMs. - In addition to storage savings, our ULPT also combats the overfitting problem and achieves higher performance than full-dimensional prompt tuning and LoRA (Table 1 in our paper). In the revision, we’ll clarify that we don’t merge the low-rank embeddings. > “many LoRA-like methods that scale better than LoRA have been proposed but not compared in the experiments.”, “VeRA, which is discussed but not compared empirically.” and “more reasoning tasks should be provided … such as Llama for autoregressive tasks” We thank the reviewer for highlighting additional LoRA variants and for suggesting additional experiments on Llama for autoregressive tasks. In the rebuttal period, we included additional baselines VeRA and FourierFT, and added two generation benchmarks: GSM8K (math reasoning) and MBPP (code generation). We compare ULPT with those baselines using the Llama 3.2 models (1B and 3B). Results are presented in **Table 1 (rebuttal to Reviewer Ycju)** due to the rebuttal space limit. We see that ULPT remains highly competitive, outperforming LoRA, VeRA, and FourierFT when the number of parameters is controlled. Importantly, LoRA and VeRA cannot reach the ultra-low parameter usage of ULPT, which is at the level of a few thousand parameters. We will include these comparisons in the revised manuscript.
> “Other references include LISA and ReFT.” Thanks for suggesting additional parameter-efficient fine-tuning methods beyond prompt tuning and LoRA. We will include discussions of LISA and ReFT in the related work section of our revised manuscript. > “Many equations lack a comma at the end” Thanks for the suggestion. We’ll adopt a better style (including punctuation for equations) in our revision. --- We believe these clarifications and additional results have addressed the reviewer’s concerns. We are grateful for the reviewer’s feedback, and look forward to your support of our work! --- Rebuttal Comment 1.1: Comment: Thanks to the authors for their reply. I have the following remaining important concerns and suggestions: - The core contribution of this paper is to down-project the prompt tuning embedding matrix with a random matrix inspired by sketching methods. The down-projection saves some number of trainable parameters for finetuning. I feel that this contribution is not very original and not very significant to the literature. - The usefulness of the method is very limited. There doesn't seem to be a significant accuracy improvement and the main benefit would be a lower number of trainable parameters. For example, in Table 1, taking the highest-rank DPT and ULPT, accuracy is basically the same, and ULPT requires 27.1K parameters while DPT requires 55.6K, a saving of 28500 parameters. This means that, if using float32, you save 114 kilobytes of storage. This saving is negligible. - The authors say that their method is useful when "an enormous number of customized LLMs are stored but few are active at a time". Storage cost is very low, so you would need tens of millions of customizations before seeing any significant saving, which seems like a very hypothetical scenario. - Prompt tuning already adds tokens to the input, leading to increased inference time and KV cache memory requirements.
Even though the authors show that their method's overhead is small, when "an enormous number of customized LLMs are stored but few are active at a time", this loading operation needs to be performed every time a new customization is loaded, resulting in compounded time overheads (which is more costly than storage). - Regarding reasoning benchmarks, GSM8K is an older benchmark. I suggest the authors take a look at Tables 1 and 2 of ReFT. This is a suggestion for future versions of their paper; I understand that running these experiments now is computationally expensive. --- Reply to Comment 1.1.1: Comment: Thank you for your additional comments. We address your concerns as follows: > “I feel that this contribution is not very original and not very significant to the literature.” We respectfully disagree with these comments. The effective integration of random projections with prompt tuning has not been previously studied in the literature. Moreover, we show in our analysis (Figure 3 in our paper) that naively down-projecting the prompt embedding, specifically in the ultra-low-dimensional setting, introduces significant difficulty in learning, and our proposed learnable scaling and shifting embeddings resolve this problem while keeping the parameter efficiency. The reviewer fails to mention any specific literature but simply “feel(s)” our contribution is not very original. This is a major issue with the review, not with our paper. > “There doesn't seem to be a significant accuracy improvement and the main benefit would be a lower number of trainable parameters.” We again disagree with these comments. In terms of performance, for example, ULPT with r=64 (7.9k parameters) achieves the best performance on both GLUE and SuperGLUE compared with all other methods (Table 1). When controlling for the same rank as DPT (55.6k parameters), ULPT significantly outperforms DPT on SuperGLUE (76.8 vs. 73.9).
Despite the improved task performance, we would like to point out that improving efficiency is also a major contribution to the deep learning literature. For example, the [ICML’25 review guideline](https://icml.cc/Conferences/2025/ReviewerInstructions) highlights time efficiency as a contribution. Clearly, our parameter-efficient method can also be a key contribution to the machine learning community. If saving parameters is not a significant contribution, the reviewer essentially asserts that most LoRA-like papers are below the bar for ICML, which is absurd. We urge the reviewer to read and follow the ICML’25 review guidelines when judging the merit of our paper. > “This saving is negligible.” and “Storage cost is very low, so you would need tens of millions of customizations before seeing any significant saving, which seems like a very hypothetical scenario.” We thank the reviewer for recognizing significant parameter savings in massive LLM customizations. This is exactly how LLMs are used today. For example, this [news report](https://www.reuters.com/technology/artificial-intelligence/openais-weekly-active-users-surpass-400-million-2025-02-20/) mentions that OpenAI has more than 400M weekly active users. Even if each user keeps one customized LLM, we have 400 million customized LLMs. It is hard to imagine that improving the efficiency of LLM customization is a hypothetical scenario. > “Prompt tuning already adds tokens to the input, leading to increased inference time and KV cache memory requirements… this loading operation needs to be performed every time a new customization is loaded, resulting in compounded time overheads (which is more costly than storage)” For the KV cache, we confirm that our approach does not add any overhead compared with prompt tuning (which is a lightweight and useful way of tuning LLMs). Our work is built on top of prompt tuning, and saves a large number of parameters while further improving task performance.
We further measure the decoding speed to alleviate the reviewer's concern. We follow the setups in Table 1 and use a rank of 2 with 100 prompt tokens. **Table 3: Decoding speed (tokens/second)** |Model| No customization| ULPT| |-|-|-| |Llama 1B|82.76 ± 0.33|82.71 ± 0.33| |Llama 3B|48.74 ± 0.25| 48.70 ± 0.22| We found no meaningful difference in decoding speed with or without the additional prompt tokens. > “I suggest the authors take a look at Tables 1 and 2 of ReFT. This is a suggestion for future versions of their paper” As we mainly follow the previous prompt tuning literature for the experimental settings, we thank the reviewer for their suggestions on our future work. Since ReFT was only published in December last year, we did not have enough time to adopt their setup for our ICML submission. We’ll discuss the paper in our revision and adopt the settings in future work.
Summary: This paper proposes a new low-dimensional parameterization for prompt tuning that could achieve better performance than the original prompt tuning with only 2% of the parameters. Claims And Evidence: The claims are in general clear and convincing. One issue regarding the claims is the introduction of shift embedding and the scale embedding. It is unclear why the introduction of these two could result in better performance and whether there are better parameterizations. Methods And Evaluation Criteria: The proposed method is evaluated on the GLUE fine-tuning tasks, compared to other fine-tuning methods. I believe these are standard criteria and do make sense. Theoretical Claims: Theorem 3 imposes a pretty strong assumption, namely the Polyak-Lojasiewicz inequality. This inequality essentially bounds the gap between the function value and the optimal value by the norm of the gradient, and serves as a substitute for the strong convexity assumption. Under this assumption, every local optimum is a global optimum; thus I believe that the theoretical claim should be correct but not very significant. I didn't check the proofs carefully but believe they are correct. Experimental Designs Or Analyses: The experiment design involves multiple fine-tuning tasks in GLUE and SuperGLUE, which is good. The not-so-good part is that the experiment is only conducted on the T5-base model, and it would be interesting to see the results on newer models such as Llama 3.2 or Qwen2. Supplementary Material: I didn't check the supplementary material carefully. Relation To Broader Scientific Literature: I think the work is clear about its relation to previous work in this research direction. Essential References Not Discussed: There is one previous work on an extreme parameter-efficient fine-tuning method, namely fine-tuning in the Fourier domain; see [1]. This is also a fine-tuning idea with a non-traditional parameterization to reduce memory to an extreme.
I think this method is worth comparing with both in theory and in experiments. References: [1] Gao, Ziqi, et al. "Parameter-Efficient Fine-Tuning with Discrete Fourier Transform." Forty-first International Conference on Machine Learning. Other Strengths And Weaknesses: The paper is clearly written and easy to follow. For the weakness, I think the biggest one is again the lack of discussion of the introduction of the shift and scale embeddings. I think it would be helpful if the authors could discuss the necessity of these two new variables theoretically. In particular, the Fourier domain parameterization in [1] seems to require fewer parameters than the method proposed in this paper. References: [1] Gao, Ziqi, et al. "Parameter-Efficient Fine-Tuning with Discrete Fourier Transform." Forty-first International Conference on Machine Learning. Other Comments Or Suggestions: I think there are some typos but I didn't check all of them carefully. For example, in the statement of Theorem 3, "Polyak–Lojasiewic" seems missing a "z" at the end. Questions For Authors: I refer to the "Essential References Not Discussed" and "Other Strengths And Weaknesses" sections. I would be happy to reevaluate this work if the authors could give more discussion of the introduction of the two new embeddings and also discuss and compare with the missing literature I mentioned. Code Of Conduct: Affirmed. Overall Recommendation: 3
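For reference, the Polyak–Łojasiewicz condition discussed in the review above can be written as follows; this is the standard statement (e.g., as in Karimi, Nutini & Schmidt, 2016), not necessarily the exact form used in the paper:

```latex
% f has minimum value f^*; it satisfies the PL inequality with constant
% \mu > 0 if
\frac{1}{2}\,\lVert \nabla f(x) \rVert^2 \;\ge\; \mu \bigl( f(x) - f^{*} \bigr)
\qquad \text{for all } x.
% If f is additionally L-smooth, gradient descent with step size 1/L
% converges linearly:
f(x_k) - f^{*} \;\le\; \Bigl(1 - \tfrac{\mu}{L}\Bigr)^{k} \bigl( f(x_0) - f^{*} \bigr),
% which is why the PL condition substitutes for strong convexity even
% though f need not be convex.
```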
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed feedback. We appreciate that the reviewer says “The claims are in general clear and convincing” and that “The experiment design involves multiple fine-tuning tasks”. Below we address each of the comments in detail. > “One issue regarding the claims is the introduction of shift embedding and the scale embedding.” and “it would be helpful if the authors could discuss the necessity of these two new variables theoretically” Thanks for raising this point. Our empirical analysis (Section 4.3) demonstrates that without scale and shift embeddings, the optimization process becomes significantly more difficult, particularly in the ultra-low-dimensional setting (e.g., 2-dimensional prompts, Figure 3). Additionally, Figure 4 reveals that the learned shift embeddings exhibit high similarities across different r configurations, further justifying our heuristic design of the shift and scale embeddings. Theoretically, our Theorem 3 ensures that these additional embeddings do not negatively impact the optimization process. We’ll provide further explanation in the revision. > “Theorem 3 imposes a pretty strong assumption, namely the Polyak-Lojasiewicz inequality… Under this assumption, every local optimum is a global optimum; thus I believe that the theoretical claim should be correct but not very significant.” We appreciate the reviewer’s insightful comments on our theoretical assumptions. While the Polyak-Lojasiewicz (PL) inequality seems to be a strong condition, recent studies such as [1] demonstrated that over-parameterization often induces optimization landscapes that approximately satisfy PL-like conditions. In real-world applications, modern language models are heavily overparameterized, and tend to satisfy the PL* condition (a variant of the PL condition) as shown in [1]. We acknowledge the reviewer's point that the condition may not always hold in practice, but this is the case for almost every theoretical analysis.
That being said, our theorem provides a meaningful insight into ULPT in practice (namely, random projection for embeddings does not add to optimization difficulty), which is novel and has not been stated before. [1] Loss landscapes and optimization in over-parameterized non-linear systems and neural networks, Liu et al. 2020. > “it would be interesting to see the results on newer models such as Llama 3.2 or Qwen2” and “the Fourier domain parameterization seems to require fewer parameters than the method proposed in this paper…I think this method is worth comparing with both in theory and in experiments.” We thank the reviewer for suggesting comparisons with Fourier-based methods and evaluations on newer models. We conducted additional experiments using the Llama 3.2 (1B and 3B) models on two generation datasets: GSM8K (math reasoning) and MBPP (code generation). These generation tasks also complement the 21 tasks in our main paper. Due to the space limit, we kindly refer the reviewer to **Table 1 of our rebuttal to Reviewer Ycju**. As shown, ULPT consistently outperforms FourierFT under controlled parameter budgets. For instance, when controlling parameters at 4.1K (ULPT r=2 vs. FourierFT n=128), our ULPT achieves higher performance on both GSM8K (39.7 vs. 35.8) and MBPP (26.1 vs. 21.5) for Llama 1B, and similar advantages are observed in the 3B setting (66.3 vs. 63.1 on GSM8K and 33.9 vs. 21.9 on MBPP). Moreover, ULPT remains competitive with or superior to other baselines such as LoRA, VeRA, and vanilla prompt tuning, which require significantly more learnable parameters. Theoretically, both FourierFT and ULPT leverage random matrices to reduce learnable parameters for fine-tuning, but they achieve this through fundamentally different mechanisms. FourierFT compresses weight updates by leveraging random spectral entries in the frequency domain, while ULPT operates in prompt space by parameterizing the embeddings with a random up-projection matrix.
The projection preserves the embedding distances essential for transformer attention (our Theorem 2). We will discuss these differences in more detail in our revision. > “I think there are some typos but I didn't check all of them carefully. For example, in the statement of Theorem 3, "Polyak–Lojasiewic" seems missing a "z" at the end.” Thanks for the catch! We’ll fix them in the revision. --- We hope that our clarifications and additional results address all the concerns. We greatly appreciate the reviewer’s willingness to reevaluate our manuscript! Please let us know if there are any further questions. Thanks! --- Rebuttal Comment 1.1: Comment: I thank the authors for the rebuttal, especially the comparison with Fourier fine-tuning. I'd like to increase my evaluation since it addresses most of my concerns.
Summary: This work proposes a change to prompt tuning where they first decompose the standard n x d parameters into a product of two matrices, (n x r) @ (r x d), where the second matrix is random and frozen, thus vastly reducing the number of learnable parameters. Additionally, they add new shift and scale learnable vectors of size d, which they find help optimization, and provide theoretical results showing their learned low-rank embedding vectors can maintain the same distance relations amongst themselves that the original vectors do. They test their approach on both GLUE and SuperGLUE tasks and find their method achieves stronger average results. Claims And Evidence: Yes, their method outperforms others on a wide variety of benchmarks like GLUE and SuperGLUE and is competitive in other more difficult settings like MRQA and other datasets. Additionally, they have a number of ablation studies that show each part of their system seems important and contributes to the final performance. Methods And Evaluation Criteria: Yes, they make sense. While benchmarks like GLUE and SuperGLUE are oversaturated, they include other more difficult benchmarks. Additionally, experiments with large-scale Bloomz models (up to 3B parameters) show that their method generalizes w.r.t. model type (decoder-only) and scale. Theoretical Claims: I did not check the correctness of the proofs Experimental Designs Or Analyses: Their experimental design makes sense. They also did extensive ablation studies on things like the rank of the learned parameters, the scale + shift parameters, and which parts of the decomposition are trainable. Some work like https://arxiv.org/abs/2205.12647 seems to suggest that prompt tuning methods tend to be weaker on tasks that require long generation. It would have been nice to see how their approach fared in this more challenging setting.
Supplementary Material: I did not review the supplementary material. Relation To Broader Scientific Literature: Their work appropriately references other work in the field, including things like citations to works that first proposed the decomposition of prompt parameters. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The paper and their method are both clear and straightforward. They could do a better job at explaining what Theorem 2 means practically; I assume that by maintaining distance relations it means that the downstream transformer's attention will be unaffected by the low-rank representation, but something like that could be more clearly stated. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 5
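The parameterization described in the summary of this review can be sketched in a few lines of numpy. This is a sketch under stated assumptions: the combination rule (elementwise scale, then shift) and all sizes are illustrative, not necessarily ULPT's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

n, d, r = 100, 768, 2                  # prompt length, model dim, ultra-low rank

# Trainable: ultra-low-dimensional prompt plus scale/shift vectors of size d.
Z     = rng.normal(size=(n, r))        # n*r trainable parameters
scale = np.ones(d)                     # d   trainable parameters
shift = np.zeros(d)                    # d   trainable parameters

# Frozen: random up-projection shared across prompt positions (never updated).
A = rng.normal(size=(r, d)) / np.sqrt(r)

def prompt_embeddings(Z, scale, shift):
    """Reconstruct the full n x d prompt embeddings (assumed combination rule)."""
    return (Z @ A) * scale + shift

P = prompt_embeddings(Z, scale, shift)

trainable = Z.size + scale.size + shift.size   # 200 + 768 + 768 = 1736
full      = n * d                              # 76800 for vanilla prompt tuning
```

For these illustrative sizes, the trainable parameter count drops from 76,800 to 1,736, which matches the order-of-magnitude savings the review describes.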
Rebuttal 1: Rebuttal: We thank the reviewer for their thorough evaluation and the “strong accept” recommendation! The reviewer fully recognizes the contributions of our work, as well as the comprehensive analysis and clear writing. > “It would have been nice to see how their approach fared in this more challenging setting.” We thank the reviewer for highlighting the importance of evaluating our approach on tasks involving long generation. We conducted additional experiments on the GSM8K and MBPP datasets for math reasoning and code generation, with maximum generation lengths of a few hundred tokens. We used one of the newest Llama models (3.2), and due to constraints on time and resources, we considered the 1B and 3B variants. **Table 1:** Results from additional experiments. We report accuracy on GSM8K and pass@1 on MBPP. The updated code is available in our anonymous GitHub repo (see footnote 1 in our manuscript for the link). | Method | Param (1B) ↓ | GSM8K (1B) ↑ | MBPP (1B) ↑ | Param (3B) ↓ | GSM8K (3B) ↑ | MBPP (3B) ↑ | |-|-|-|-|-|-|-| | ICL (4-shot) | - | 34.3 | 21.1 | - | 62.5 | 23.9 | | LoRA (r=1) | 106.5k | 38.5 | 26.7 | 286.7k | 62.9 | 32.1 | | LoRA (r=4) | 426.0k | 40.1 | 27.2 | 1.15M | 63.4 | 34.3 | | LoRA (r=8) | 852.0k | 40.2 | 24.7 | 2.29M | 62.2 | 37.8 | | VeRA (r=1) | 41.0k | 39.3 | 24.4 | 114.7k | 65.5 | 35.5 | | VeRA (r=4) | 41.1k | 39.6 | 27.8 | 114.9k | 65.0 | 34.4 | | VeRA (r=8) | 41.2k | 40.9 | 29.5 | 115.1k | 65.7 | 33.9 | | FourierFT (n=128) | 4.1k | 35.8 | 21.5 | 7.2k | 63.1 | 21.9 | | FourierFT (n=512) | 16.4k | 34.9 | 27.3 | 28.7k | 66.6 | 35.3 | | FourierFT (n=1024) | 32.8k | 36.6 | 25.9 | 57.3k | 65.5 | 35.4 | | PT | 20.5k | 40.2 | 24.7 | 30.7k | 65.3 | 33.1 | | ULPT (r=2) | 4.1k | 39.7 | 26.1 | 6.2k | 66.3 | 33.9 | | ULPT (r=64) | 4.7k | 42.4 | 28.7 | 6.8k | 65.6 | 34.3 | | ULPT (r=256) | 6.7k | 41.4 | 26.3 | 8.7k | 66.4 | 32.9 | These results in Table 1 show that ULPT remains competitive with or superior to other baselines
including LoRA and its recent variants (VeRA and FourierFT), as well as vanilla prompt tuning. In particular, LoRA and VeRA fail to work in the ultra-low parameter setting, as they require orders of magnitude more parameters than ours. FourierFT uses fewer parameters, but its performance is much worse than ours (e.g., 35.8 vs. 39.7 on GSM8K at the 1B scale with 4.1k parameters). > “They could do a better job at explaining what Theorem 2 means practically” Thanks for the suggestion. Practically, since transformers heavily rely on embedding distances during the forward pass to compute attention patterns, Theorem 2 shows that our randomly up-projected low-dimensional prompt embeddings approximately preserve these pairwise distances. This suggests that the model’s attention mechanism operates on embeddings that reflect the same relational structure as the full-dimensional prompts. We will clarify this in the revision. --- Once again, we thank the reviewer for their strong support and valuable feedback!
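The distance-preservation property discussed in the rebuttal above can be checked numerically. This is a sketch under stated assumptions: the `1/sqrt(d)` scaling is chosen so that expected norms are preserved under a Gaussian random up-projection, and all sizes are illustrative rather than taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
r, d, n = 8, 2048, 10                       # low dim, model dim, prompt vectors

Z = rng.normal(size=(n, r))                 # low-dimensional embeddings
A = rng.normal(size=(r, d)) / np.sqrt(d)    # random up-projection

P = Z @ A                                   # up-projected embeddings

# Compare all pairwise distances before vs. after up-projection.
errs = []
for i in range(n):
    for j in range(i + 1, n):
        lo = np.linalg.norm(Z[i] - Z[j])
        hi = np.linalg.norm(P[i] - P[j])
        errs.append(abs(hi - lo) / lo)

max_rel_err = max(errs)                     # small with high probability
```

With this scaling, each projected pairwise distance concentrates around the original with relative deviation on the order of sqrt(1/d), which is the Johnson-Lindenstrauss-style behavior the reviews attribute to Theorem 2.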
Approximate Differential Privacy of the $\ell_2$ Mechanism
Accept (poster)
Summary: This paper studies the $\ell_2$ mechanism for releasing a $d$-dimensional statistic with bounded $\ell_2$ sensitivity under approximate differential privacy. To release a $d$-dimensional statistic $T(x)$, the $\ell_2$ mechanism samples an output with density $f_X(y)\propto \exp(-\lVert y - T(x)\rVert_2 / \sigma)$ for suitable $\sigma$. This can be compared to the Gaussian mechanism, whose output density is $f_X(y)\propto \exp(-[\lVert y - T(x)\rVert_2 / \sigma]^2)$, i.e., where the norm comes in squared. The $\ell_2$ mechanism can be viewed as an instantiation of the $K$-norm mechanism (Hardt & Talwar, 2010), and so it is known to satisfy $\frac{1}{\sigma}$-DP. Its approximate DP properties, however, are not well understood, and exploring these is the main topic of the paper. To prove that the $\ell_2$ mechanism supports approximate DP, they use the fact that a mechanism $M$ is $(\varepsilon, \delta)$-DP iff $\Pr[\ell_{M, X, X'} \geq \varepsilon] - e^{\varepsilon}\Pr[\ell_{M, X', X} \leq -\varepsilon] \leq \delta$, where $\ell_{M, X, X'}$ denotes the privacy loss of $M$ on arbitrary neighboring datasets $X, X'$ (Balle & Wang, 2018). Upper bounding the first term and lower bounding the second on the left-hand side thus provides a means of computing a $\delta$ for a given $\varepsilon$. The authors show that both terms can be defined by the probability mass that $M(X)$ and $M(X')$ respectively assign to regions in $\mathbb{R}^d$ defined by specific spherical caps. They give algorithms for bounding both terms, which, when combined in Algorithm 3, allow checking whether an $(\varepsilon, \delta)$-guarantee holds for a given $d$ and $\sigma$. Empirically, they demonstrate that binary searching over $\sigma$ using Algorithm 3 allows for tight privacy analysis of the mechanism (Figure 4).
They also demonstrate that analytically computing the optimal $\sigma$ for the $\ell_2$ mechanism is feasible and does not scale with $d$ (Figure 5), even if it is slower than the corresponding computations for Laplace and Gaussian mechanisms for an $\ell_2$ guarantee. They further show that sampling the $\ell_2$ mechanism takes only roughly a factor of ~2 more time than the Laplace/Gaussian counterpart (Figure 6). Figure 1 makes the point that the $\ell_2$ mechanism can meaningfully improve on the mean squared error over the Gaussian mechanism for $d$ up to $100$ (for $\varepsilon=1, \delta=10^{-5}$). ## update after rebuttal The authors answered my questions adequately. I stand by my score. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes, I read through the proofs in the main part of the paper. I did not check every detail, but it seemed convincing to me. I did not check the appendix however. Experimental Designs Or Analyses: Yes, all the listed experiments look sound to me. I have minor questions regarding the parameter ranges listed later. Supplementary Material: Yes, I briefly went through the code repository, and it looks structured and well-commented. I did not run it. Relation To Broader Scientific Literature: The paper gives the first useful (in the sense of efficiently computable) approximate DP guarantee for the $\ell_2$ mechanism. This mechanism is already known as an instantiation of the $K$-norm mechanism (Hardt & Talwar, 2010), but the approximate DP analysis is novel. In (Ganesh & Zhao, 2021) they also used spherical caps to bound the privacy loss of "generalized Gaussians", but these are (1) different probability distributions and (2) they give looser bounds. Essential References Not Discussed: Not that I am aware of. Other Strengths And Weaknesses: Strengths: 1. Well-written paper with clearly stated contributions. 2. 
The privacy analysis of the $\ell_2$ mechanism appears to be tight (Figure 4), and the fact that it can (at least sometimes) beat Gaussian noise in mean squared error is interesting (and potentially practically useful). 3. As demonstrated by Figures 5 and 6, it also appears as if the $\ell_2$ mechanism is scalable with respect to time. Weaknesses: 1. Some parameter ranges for the experiments could need more motivation, e.g. why $d=100$ everywhere? 2. Connected to the preceding point, the improvement over the Gaussian/Laplace mechanism seems concentrated to moderately small $d$ for reasonable $(\varepsilon, \delta)$. Other Comments Or Suggestions: I think the paper could benefit from a more thorough investigation of when the $\ell_2$ mechanism improves over the Gaussian mechanism. Figure 1 (Left) shows what happens up to $d=100$ for a fixed choice of $(\varepsilon, \delta)$, but it would be interesting to see what happens for larger $d$ and other privacy regimes. Questions For Authors: I ask the following questions to better understand the results. I do not expect my score to change dramatically based on answers to these questions in isolation. Questions: 1. From the discussion in Section 4 regarding Figure 1, it seems as if the mean squared error of the $\ell_2$ mechanism approaches that of the Gaussian as $d$ increases for general settings of $(\varepsilon, \delta)$. Is this true in general, and if so, is there a simple argument for why that is? I think the improvement for small $d$ is interesting in its own right, but it would be useful to know if there is any benefit to using the $\ell_2$ mechanism for larger $d$. 2. Does the computation in Figure 5 meaningfully depend on $(\varepsilon, \delta)$? I could believe that $d$ has no influence, but it is not clear to me if the computation becomes prohibitive for certain regimes of $(\varepsilon, \delta)$. Code Of Conduct: Affirmed. Overall Recommendation: 4
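The $(\varepsilon, \delta)$ characterization quoted in the summary can be sanity-checked numerically. The sketch below is my own illustration (not the paper's algorithm), applied to the 1-d Gaussian mechanism rather than the $\ell_2$ mechanism, because there the privacy loss is itself Gaussian and $\delta(\varepsilon)$ has the closed form from Balle & Wang (2018): $\delta = \Phi(\Delta/2\sigma - \varepsilon\sigma/\Delta) - e^{\varepsilon}\,\Phi(-\Delta/2\sigma - \varepsilon\sigma/\Delta)$.

```python
import math
import numpy as np

# Monte Carlo estimate of delta = Pr[L_{X,X'} >= eps] - e^eps * Pr[L_{X',X} <= -eps]
# for the 1-d Gaussian mechanism, compared against the closed form.
rng = np.random.default_rng(0)
eps, sens, sigma, n = 1.0, 1.0, 1.5, 2_000_000

def log_density_ratio(y, mu_num, mu_den, s):
    # log of the N(mu_num, s^2) density over the N(mu_den, s^2) density at y
    return ((y - mu_den) ** 2 - (y - mu_num) ** 2) / (2 * s ** 2)

y = rng.normal(0.0, sigma, n)             # samples of M(X), with T(X) = 0
loss_xxp = log_density_ratio(y, 0.0, sens, sigma)
yp = rng.normal(sens, sigma, n)           # samples of M(X'), with T(X') = sens
loss_xpx = log_density_ratio(yp, sens, 0.0, sigma)

delta_mc = np.mean(loss_xxp >= eps) - math.exp(eps) * np.mean(loss_xpx <= -eps)

Phi = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))  # standard normal CDF
a, b = sens / (2 * sigma), eps * sigma / sens
delta_exact = Phi(a - b) - math.exp(eps) * Phi(-a - b)
print(delta_mc, delta_exact)  # the two estimates agree closely
```

The paper's contribution is to make the analogous computation tractable for the $\ell_2$ mechanism, where the privacy loss has no such simple closed form and the spherical-cap bounds are needed.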
Rebuttal 1: Rebuttal: Thanks for the review! > Some parameter ranges for the experiments could need more motivation, e.g. why $d=100$ everywhere? Our experiments focus on the $d \leq 100$ setting to highlight the range where the $\ell_2$ mechanism offers the largest improvement over both Laplace and Gaussian noise. As noted at the end of Section 4.2, the gap between $\ell_2$ and Gaussian noise shrinks to $<1$% at $d=350$ (and essentially vanishes for larger $d$). If desired, we can add more language highlighting this behavior. > Connected to the preceding point, the improvement over the Gaussian/Laplace mechanism seems concentrated to moderately small $d$ for reasonable $(\varepsilon, \delta)$. We agree that the $\ell_2$ mechanism's improvement is concentrated to moderate $d$ (e.g., $d < 500$, and most notably on $d < 100$). Regarding the privacy regime, we note that the curve in the left plot of Figure 1 looks essentially the same in higher ($(0.1, 10^{-7})$-DP) and lower ($(10,10^{-3})$-DP) privacy settings, as mentioned at the end of Section 4.2. > From the discussion in Section 4 regarding Figure 1, it seems as if the mean squared error of the $\ell_2$ mechanism approaches that of the Gaussian as $d$ increases for general settings of $(\varepsilon, \delta)$. Is this true in general, and if so, is there a simple argument for why that is? I think the improvement for small $d$ is interesting in its own right, but it would be useful to know if there is any benefit to using the mechanism for larger $d$. Our experiments demonstrate that the $\ell_2$ mechanism error converges to Gaussian mechanism error as $d$ increases, irrespective of the $(\varepsilon, \delta)$ regime. However, we do not have a formal proof that this holds. 
A possible intuitive explanation is that the $(\varepsilon, \delta)$-DP $\ell_2$ mechanism "spends its $\delta$" violating $\varepsilon$-DP close to the origin, while the Gaussian mechanism continues doing so arbitrarily far into its tails (and thus can incur arbitrarily high privacy loss). The result is that the $\ell_2$ density peaks at a smaller $\ell_2$ distance than the Gaussian density at the same $(\varepsilon, \delta)$-DP guarantee, but this difference shrinks as $d$ grows. This partially explains why the $\ell_2$ mechanism error approaches the Gaussian mechanism error as $d$ grows, though it does not explain why the chosen $\sigma$s necessary for DP lead to this behavior. > Does the computation in Figure 5 meaningfully depend on $(\varepsilon, \delta)$? I could believe that $d$ has no influence, but it is not clear to me if the computation becomes prohibitive for certain regimes of $(\varepsilon, \delta)$. We do not believe that the $\sigma$ computation in Figure 5 meaningfully depends on the exact privacy parameters within typical ranges. We tested this computation at more extreme privacy levels ($(0.1, 10^{-7})$-DP and $(10,10^{-3})$-DP) and did not encounter numerical issues. However, this might change as $e^\varepsilon$ and $\delta$ near the limits of conventional floating point accuracy.
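The $\sigma$ calibration discussed in this thread reduces to binary search against a DP checker (in the spirit of the paper's Algorithm 3). A generic sketch, with a hypothetical stand-in checker since the paper's actual checker bounds the $\ell_2$ mechanism's privacy loss tails:

```python
# Calibrate sigma by binary search against a monotone DP checker.
def calibrate_sigma(satisfies_dp, lo=1e-6, hi=1e6, iters=100):
    """Smallest sigma (up to tolerance) passing the check.

    Assumes satisfies_dp(sigma) is monotone: more noise is always safer.
    """
    assert satisfies_dp(hi), "even the largest sigma fails the check"
    for _ in range(iters):
        mid = (lo + hi) / 2
        if satisfies_dp(mid):
            hi = mid
        else:
            lo = mid
    return hi

# Hypothetical stand-in checker: for pure eps-DP with unit l2 sensitivity,
# the l2 mechanism satisfies (1/sigma)-DP, so the check is sigma >= 1/eps.
eps = 0.5
sigma = calibrate_sigma(lambda s: s >= 1.0 / eps)
print(sigma)  # ~= 2.0
```

Each iteration halves the search interval, so runtime is independent of $d$ apart from the cost of one call to the checker, consistent with the Figure 5 discussion.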
Summary: The paper studies the L2 mechanism with bounded L2 sensitivity in d dimensions, demonstrating improvements over the Laplace and Gaussian mechanisms under approximate differential privacy. It presents algorithms for computing approximation bounds for privacy loss random variables and introduces a parallel sampler for generating noise from the resulting distribution. ## update after rebuttal I will keep my score unchanged. Claims And Evidence: The approximation bounds for privacy loss random variables are novel, providing new results for achieving differential privacy in d dimensions. Methods And Evaluation Criteria: The experiments support the claims. The sampling is efficient. Theoretical Claims: I checked most of the proofs in the main paper, and they make sense to me. Experimental Designs Or Analyses: The experiments show that the proposed mechanism is tight and can be efficient. Supplementary Material: I didn't check the supplementary material. Relation To Broader Scientific Literature: The paper reduces the error of DP mechanisms in the $d$-dimensional case. Essential References Not Discussed: No. Other Strengths And Weaknesses: One missing aspect is the composition of the L2 mechanism. The Gaussian mechanism is widely used in DP-SGD due to the availability of numerical composition analysis. To better demonstrate the practicality of the proposed L2 mechanism, its composition must be studied. Other Comments Or Suggestions: No. Questions For Authors: In Figure 1, the L2 mechanism exhibits a similar error to the Gaussian mechanism when d > 100. In DP-SGD, where the dimensionality often exceeds 1000, does this imply that the L2 mechanism has comparable error to the Gaussian mechanism? If so, what is the advantage of using the L2 mechanism? It appears that the L2 mechanism achieves lower error for moderate d, but this benefit may not be significant in applications like DP-SGD. A related concern arises in Figure 6, where the sampling process for the L2 mechanism appears less stable. 
If d > 1000, will the sampling time increase compared to the Gaussian mechanism? If so, this could be a notable limitation, as the additional computational overhead may not be negligible. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for the review! > One missing aspect is the composition of the L2 mechanism. The Gaussian mechanism is widely used in DP-SGD due to the availability of numerical composition analysis. To better demonstrate the practicality of the proposed L2 mechanism, its composition must be studied. We agree that analyzing composition is the logical next step for studying the $\ell_2$ mechanism. A basic approach can use existing generic composition results for pure and approximate DP algorithms, but this is unlikely to obtain better error than the Gaussian mechanism beyond a very small number of compositions. Possible alternative approaches include: 1) Investigating advanced composition for mechanisms that satisfy simultaneous (and nontrivial) pure and approximate DP guarantees. For example, at $d=50$, at $\sigma \approx 0.5$ the $\ell_2$ mechanism is both $(1, 10^{-5})$-DP and $\approx 2$-DP. We are not aware of advanced composition bounds that can take advantage of both guarantees. 2) Analyzing the moment generating function of the privacy loss random variable. Such an analysis could provide better privacy accounting using CDP or RDP or FFT-based privacy accounting. As demonstrated in the paper, even deriving tail bounds for the privacy loss distribution is already involved, but it is possible that some kind of moment analysis can be done. We suggest that results in this direction are a good candidate for future work. > In Figure 1, the L2 mechanism exhibits a similar error to the Gaussian mechanism when d >100. In DP-SGD, where the dimensionality often exceeds 1000, does this imply that the L2 mechanism has comparable error to the Gaussian mechanism? If so, what is the advantage of using the L2 mechanism? It appears that the L2 mechanism achieves lower error for moderate d, but this benefit may not be significant in applications like DP-SGD. 
Yes, our experiments show that the $\ell_2$ mechanism's error converges to the Gaussian mechanism's error as $d$ grows, and at $d=1000$, the gap is essentially 0 (a short discussion of intuition for this effect appears in our response to Reviewer Vdrp). We therefore agree that the $\ell_2$ mechanism does not meaningfully improve over the Gaussian mechanism in very high-dimensional settings. However, we suggest that the moderate-$d$ setting covers many practical problems and is therefore still worth studying. If desired, we can add a short discussion highlighting this guidance for the dimension ranges where $\ell_2$ noise is most useful. > A related concern arises in Figure 6, where the sampling process for the L2 mechanism appears less stable. If d >1000, will the sampling time increase compared to the Gaussian mechanism? If so, this could be a notable limitation, as the additional computational overhead may not be negligible. The $\ell_2$ mechanism is a constant factor (in $d$) slower in our implementation. However, we note that both the $\ell_2$ and Gaussian mechanism sampling algorithms are parallelizable, and the parallelized runtime is independent of $d$. The experiments provided here do not attempt this parallelization, which is why their runtime increases with $d$. As described in Section 3.2, in a parallel setting, the $\ell_2$ sampler adds one more map and combine step over the Gaussian sampler. In a parallelized system, we expect that this cost is mild.
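The sampling cost discussed above can be made concrete with one standard way to draw noise with density proportional to $\exp(-\lVert z\rVert_2/\sigma)$ in $\mathbb{R}^d$ (the paper's Section 3.2 sampler may differ in its details): the radius $\lVert Z\rVert$ has density proportional to $r^{d-1}e^{-r/\sigma}$, i.e., a Gamma$(d, \sigma)$ distribution, and the direction is uniform on the unit sphere.

```python
import numpy as np

# Sample Z with density proportional to exp(-||z||_2 / sigma) in R^d:
# radius ~ Gamma(shape=d, scale=sigma), direction uniform on the sphere.
def sample_l2_noise(d, sigma, n, rng):
    radii = rng.gamma(shape=d, scale=sigma, size=n)          # ||Z||
    g = rng.normal(size=(n, d))
    directions = g / np.linalg.norm(g, axis=1, keepdims=True)
    return radii[:, None] * directions

rng = np.random.default_rng(0)
d, sigma = 50, 0.5
z = sample_l2_noise(d, sigma, 100_000, rng)
print(np.mean(np.linalg.norm(z, axis=1)))  # ~= d * sigma = 25
```

This decomposition also makes the parallelization point above concrete: both the Gaussian draws and the per-sample normalization are embarrassingly parallel, with the extra normalize-and-scale step being the one additional map over a plain Gaussian sampler.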
Summary: The authors consider a specific instantiation of the K-norm mechanism using an L2 norm. They establish conditions for achieving approximate DP as opposed to pure DP as was done in the original K-norm paper. Theory and experiments are provided. Claims And Evidence: Yes, the paper provides both theory and simulations to demonstrate the improvement. Methods And Evaluation Criteria: Yes, both theory and simulations are provided to demonstrate the work. Theoretical Claims: I checked some of the early lemmas and the results seemed sound. However, the communication of the theory I found to be unpleasant. I'm not sure I've ever seen so many lemmas without a single theorem. The bulk of the paper reads more like an appendix. Experimental Designs Or Analyses: Experiments seemed fine. Supplementary Material: No. Relation To Broader Scientific Literature: The related work section was quite lacking. 4 papers are mentioned in a field that has existed for 20 years. I understand that the authors are working with approximate DP, whereas a lot of related methods work on pure, but the authors should still do a better job placing their work within the context of the broader literature. Essential References Not Discussed: While the mechanism looks like it is wrapped up in an exponential type mechanism, it can also be viewed as an additive perturbation where $\tilde T(X) = T(X) + Z$ where $Z$ is distributed as an "l2" random variable. In that sense, there are a lot of options available for pure/approximate DP that can be discussed. In fact, the distribution used in the paper has been viewed as a multivariate version of the Laplace distribution (there are multiple extensions). Other Strengths And Weaknesses: In general the paper is a very narrow contribution as they study a very specific type of DP for a very specific mechanism that doesn't seem to be used much in the literature. The theory of the paper is also very unpleasant to read. 
Major theorems should be organized and presented as the key results. Especially novel lemmas certainly can as well, but the rest should be in the appendix. Other Comments Or Suggestions: None. Questions For Authors: None. Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: Thanks for the review! > I checked some of the early lemmas and the results seemed sound. However, the communication of the theory I found to be unpleasant. I'm not sure I've ever seen so many lemmas without a single theorem. The bulk of the paper reads more like an appendix … [t]he theory of the paper is also very unpleasant to read. Major theorems should be organized and presented as the key results. Especially novel lemmas certainly can as well, but the rest should be in the appendix. The paper's current presentation attempts to highlight the main conceptual ideas of the privacy analysis in the main body while deferring most of the calculations to the appendix. We chose this presentation because direct approximate DP analyses of additive noise mechanisms are rare in the existing literature, and we believe the form of the analysis is interesting. Since the overall analysis has a single end goal (an approximate DP guarantee), describing the constituent results as lemmas seemed appropriate. If it seems helpful, we can add an overall privacy theorem. > The related work section was quite lacking. 4 papers are mentioned in field that has existed for 20 years. I understand that the authors are working with approximate DP, where as a lot of related methods work on pure, but the authors should still do a better job placing their work within the context of the broader literature. To the best of our knowledge, this section discusses all of the existing work on the $\ell_2$ mechanism, and the Laplace and Analytical Gaussian Mechanisms discussed elsewhere are the most relevant baselines. Do you have other references in mind? > While the mechanism looks like it is wrapped up in an exponential type mechanism, it can also be viewed as an additive perturbation where $\tilde T(X) = T(X) + Z$ where $Z$ is distributed as an "l2" random variable. In that sense, there are a lot of options available for pure/approximate DP that can be discussed. 
The suggested additive noise perspective is also valid (and appears in the discussion of the $K$-norm mechanism in Lemma 2.4). We agree that other additive noise mechanisms exist, and as mentioned above, to the best of our knowledge the most common mechanisms in theory and practice for computing an $\ell_2$ sensitive statistic are the variants of Laplace and Gaussian noise featured in the paper (though additional references are welcome!). > In fact, the distribution used in the paper has been viewed as a multivariate version of the Laplace distribution (there are multiple extensions). Can you elaborate on this? The multivariate Laplace distribution depends on $\ell_1$ distance to the true statistic, but the $\ell_2$ mechanism depends on $\ell_2$ distance. In what sense is the $\ell_2$ mechanism a multivariate version of the Laplace distribution? > In general the paper is a very narrow contribution as they study a very specific type of DP for a very specific mechanism that doesn't seem to be used much in the literature. We agree that the $\ell_2$ mechanism is not currently common in the literature, in part because our paper is the first to prove a (nontrivial) approximate DP guarantee for it. As the results in the paper show, this approximate DP analysis enables the $\ell_2$ mechanism to obtain lower error than the Laplace and Gaussian baselines, which appear in most DP papers. Approximate DP is the most common notion of DP in the literature and in practice. For example, it is the primary definition used in usability studies of DP [1, 2] as well as industry deployments [3, 4, 5]. Can you elaborate on why it might be considered "a very specific type of DP"? [1] https://arxiv.org/abs/2302.11775 [2] https://arxiv.org/abs/2406.12103 [3] https://journalprivacyconfidentiality.org/index.php/jpc/article/view/782 [4] https://arxiv.org/abs/1909.01917 [5] https://arxiv.org/abs/2201.11603 --- Rebuttal Comment 1.1: Comment: I appreciate the comments from the authors. 
a) Ultimately it is the authors' call. But I would argue that the purpose of Lemmas is to decompose the steps of establishing a theorem into more manageable pieces. I think most of the mathematics community would agree with me. Certainly lemmas can be included in the main body of the paper to help tell the story, but I don't think they are currently acting in that way. b) Why would you only discuss the l2 mechanism when discussing the literature? I think the work should be placed more broadly within the literature on additive noise mechanisms. c) The multivariate Laplace is not a uniquely defined distribution. There are multiple ways to generalize the univariate Laplace distribution. One way is to "preserve the norm" within the density, which would basically yield an l2 or l1 type mechanism. Another way is to say that a multivariate Laplace must have Laplace marginals, in which case you define the characteristic function in a very specific way. It is an interesting (though not crucial) connection. d) So the authors claim that for approximate DP, the l2 mechanism adds less noise than the Gaussian -- which theorem shows that? --- Reply to Comment 1.1.1: Comment: Thanks for following up! > Ultimately it is the authors' call. But I would argue that the purpose of Lemmas is... We'll take another look at the organization of the proof exposition. If you have specific suggestions, please let us know. > Why would you only discuss the l2 mechanism when discussing the literature? I think the work should be placed more broadly within the literature on additive noise mechanisms. First, we note that the paper discusses both the Laplace and Gaussian baselines in the introduction and experiments. As the literature for DP-SGD, the binary tree mechanism, and the projection, factorization, and matrix mechanism (see Introduction for references) demonstrates, these are the baseline algorithms for privately computing an $\ell_2$ sensitive statistic. 
The Related Work also discusses generalized Gaussian mechanisms (though ultimately argues that their relevance is because of a similarity in one step of the analysis, rather than utility for the problem in question). We therefore suggest that the current presentation is reasonably sufficient context. There are a few more additive noise mechanisms that we considered discussing. In each case, we largely decided against including them because they are less relevant than the Laplace or Gaussian baselines. However, discussing them here may be useful context. 1) $K$-norm mechanism. The $K$-norm mechanism is particularly useful when applied to statistics whose sensitivity is not nicely characterized by an $\ell_p$ norm (see for example [1]). In contrast, we focus on an $\ell_2$-sensitive statistic, so discussing instances other than the $\ell_2$ mechanism doesn't seem relevant. Note that the paper does discuss the $K$-norm mechanism as a generalization of the $\ell_2$ mechanism. 2) Staircase mechanism [2]. This mechanism dominates the Laplace mechanism under pure DP. However, it is only noticeably better than the Laplace mechanism in the very low-privacy/large $\varepsilon$ setting (see Figure 2 in [2] – note also that this figure is for the 1-dimensional staircase mechanism; a $d$-dimensional staircase mechanism would need an even larger $\varepsilon$ to separate from the Laplace mechanism, because it needs a large $\varepsilon$ in each coordinate). 3) Bounded noise mechanisms [3]. These mechanisms were developed for computing a private statistic with bounded $\ell_\infty$ sensitivity, and the paper provides an $(\varepsilon, \delta)$-DP algorithm that in some settings beats the Gaussian mechanism for this problem. 
However, the primary mechanism studied in the paper has several drawbacks: to the best of our knowledge, no algorithm to sample the mechanism is known, and it only obtains lower error than the Gaussian mechanism when $k$ is very large (approximately $> 1000$) and a very high probability ($\gg 0.99$) $\ell_\infty$ error bound is desired (see Figure 1 in [3]). Since we focus on practical algorithms for $\ell_2$ sensitive statistics, this did not seem relevant. In summary, the paper discusses all of the additive noise mechanisms that, to the best of our knowledge, are most relevant to the problem at hand. If you have additional concrete examples in mind, please let us know! [1] https://arxiv.org/abs/2309.15790 [2] https://arxiv.org/abs/1212.1186 [3] https://arxiv.org/abs/2012.03817 > The multivariate Laplace is not a uniquely defined distribution... If we understand correctly, the observation here is that there is no canonical definition for the multivariate Laplace distribution, and a possible definition coincides with the $\ell_2$ mechanism. That seems reasonable. In our experience "multivariate Laplace" typically refers to the Laplace marginals (i.e., $\ell_1$ mechanism) interpretation in the context of DP, which is why we chose the name $\ell_2$ mechanism. > So the authors claim that for approximate DP, the l2 mechanism adds less noise than the Gaussian -- which theorem shows that? The evidence for our claim is empirical: we prove that our algorithm returns a $(\varepsilon, \delta)$-DP mechanism, and then show experimentally that the returned mechanisms are more accurate than the Analytical Gaussian Mechanism (AGM), particularly when $d$ is not too large. The current presentation tries to be explicit about this (e.g., the claim that it "empirically dominates…" in the Introduction). We agree that the best possible version of this paper would include a formal utility result. 
(Informally, the main obstacle is that the expression for the cap fraction given in Lemma 3.9 has no closed form. This cap fraction is what characterizes the high privacy loss region, so reasoning about the overall accuracy is analytically difficult; the AGM has a similar dependence on the standard normal CDF.) Nonetheless, we suggest that an efficient and provably DP algorithm that obtains clear and nontrivial empirical error improvements for something as widely used as $\ell_2$ sensitive statistics is interesting even without this formal result.
Representations Shape Weak-to-Strong Generalization: Theoretical Insights and Empirical Predictions
Accept (poster)
Summary: This paper studies weak-to-strong generalization, where a strong model is fine-tuned on a task using data labeled by a weaker supervisor model (it is known that, perhaps surprisingly, the strong model can outperform its weak supervisor). Specifically, the paper introduces estimators, depending only on internal representations of the strong and weak models on the input data distribution for the fine-tuning task, that correlate with the test set error Err w2s of the weakly supervised strong model. Claims And Evidence: Theoretical claims are precisely formulated with definitions and assumptions made clear, and proofs are included in the appendix (although I didn't read them). Experimental evidence supports the main claims. One comment: - Cor. 5.1-2: the *variance* of the labels does appear in these corollaries, so some discussion of the precise sense in which they are label agnostic would be helpful. Methods And Evaluation Criteria: Yes. Theoretical Claims: I did not read proofs, but I did read definitions, assumptions, statements, and claims carefully. I have a handful of mostly minor comments on the math. - L110, top left: might also be worth mentioning using L2 regression as the objective function, as opposed to e.g. cross-entropy. - L163, right: "w.r.t. a subspace V" in the italicized definition is strange because the subspace gets defined explicitly as part of the criterion. To me the following would make more sense: "representations of $h$ are $(\delta, \hat{\gamma}, \tilde{\gamma})$-decomposable for some $\delta$ ..." and then V gets defined as part of the existence criteria. - L175 Left: I don't agree with the "well concentrated" characterization. This condition seems to be about the empirical quantities $\tilde{\Sigma}, \hat{\Sigma}$ closely approximating the expectations. There's also some Einstein notation on L180-181; unless that's introduced, a summation symbol is warranted. - Conditions d and e seem closely related: I.e. 
assuming (e), and that $\hat{\mathcal{D}}, \tilde{\mathcal{D}}$ are both IID samples from the population $\mathcal{D}$, and some relationship between operator norms of covariate and kernel, that the estimate of the full population statistic one gets from estimating with $\hat{\mathcal{D}}, \tilde{\mathcal{D}}$ is also small? Okay, some work to be done here, but my point is if possible rearranging to: (e) first, and something should be said about (d) being "(e) but with hats and tildes" would be easier to grasp. - Thm 3.6: this is interesting, but as a reader, it's not clear to me how important it is for the flow of the paper. Some commentary on what the theorem says in natural language would be helpful. Experimental Designs Or Analyses: - hyperparameters $\alpha, \beta$: I wish there were additional discussion on this topic. How are these tuned? On what split of the data set? To maximize what (i.e. correlation with Err w2s)? I also wonder if there are some characteristics that would allow for choosing good $\alpha$ without hyperparameter tuning (it's effectively a cut-off on eigenvalues, there are lots of heuristics for that). In the appendix, I see that for the embedding model experiments, low $\alpha_w, \beta_w$ and higher $\alpha_s, \beta_s$ do better, but for the LLM experiments e.g. higher $\beta_w$ does better. There's also notable dataset dependence in the LLM experiments. - it would be interesting to see an experiment explicitly targeting the label agnostic property of this estimator of Err w2s. For example, does computing $\lVert P_s (I-P_w)\rVert$ on text from the Common Crawl correlate with Err w2s in the LLM experiments? Supplementary Material: I reviewed the section of the appendix on experiments. Relation To Broader Scientific Literature: The key contributions of this paper are motivated by human-AI alignment research. They may have broader implications for distillation fine-tuning in machine learning. 
As the introduction and related work outline, there are results of a similar flavor (decomposing Err w2s using one of various triangle inequalities and showing that an easier-to-estimate term that emerges correlates with Err w2s). As I understand it, the novelty of this paper lies in: - label agnostic-ness - estimates based on model representations From looking at the references mentioned in related work, I feel that this paper has a relatively comprehensive experiment suite. Essential References Not Discussed: Not that I'm aware of. Other Strengths And Weaknesses: - "As AI systems become increasingly capable of performing complex tasks beyond human comprehension" do you have an example of an AI system performing a task that's "beyond human comprehension"? I get that it's a first sentence, and motivational introductions like this are very common today. I just want to know if we are verifiably in the "beyond human comprehension" regime (and impressive task completion in the sense of very few humans could complete the task or a human couldn't complete the task nearly as quickly, etc. don't count, because in those cases, at least some humans could at least comprehend what the system is doing). Other Comments Or Suggestions: Overall, I felt that the balance of the main body of the paper leaned very heavily towards theory, leaving very little space for discussion of the experiments. Especially since some of the theoretical content is (useful, but not essential for the flow) examples, deferring some of it to the appendix to allow for a more detailed experiment section (including, for example, more discussion about hyperparameter tuning as I asked about above) would improve the paper (in my view). Questions For Authors: None beyond the above. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their positive feedback on our theoretical analysis, extensive experiments, novelty and broader impact. > Cor 5.1-2: why label agnostic As noted in L306 right, once we factor the label-dependent term out of the operator norm (becoming the label variance C), it can be treated as constant for a fixed dataset. When varying the models, the only factor that changes on RHS of 5.1 is $||P_s(I-P_w)||$, independent of labels. Thus the trend of RHS can be predicted w/o labels. Same for 5.2 > L110,163 Thanks for the suggestions. We’ll incorporate them in the revision. > L175 Left: L180 typos This is a typo. We didn’t intend to use Einstein notation. We’ll add the missing summation after $\frac{1}{n}$. > “Conditions d and e closely related” One possible way to make (d) and (e) more “unified” is to rephrase e as a variant of the cross-sample statement but with the size of one sample taken to infinity. We’ll note this in the revision. > Thm 3.6 We’ll include the following discussion. Thm 3.6 highlights the generality of Def 3.3, showing more examples can be constructed. Any representation composed of a part satisfying Def 3.3 with δ=0 (eg, from a very low-rank distribution) and a high-dimensional sub-Gaussian will, as a whole, satisfy Def 3.3. Eg, Example 3.4 can be extended by concatenating sub-Gaussian. > discussion on hyperparameters (1) How we tuned the hps: See Sec D.2 for detailed hp values. We select the hps that maximizes correlation with test Err_w2s. Our metric is computed on the w2s split, consistent with theory. (2) **Cross-model hp transfer:** We note that, although each model could technically require different hps, in experiments we let all weak models share hps for simplicity and still achieve strong results, suggesting that our approach is not very sensitive to hps. 
Further, we present **a new experiment** demonstrating that hps selected using one group of models (i.e., as a validation set) generalize to other models. We randomly split the weak models into two groups, select hps based on one group, and evaluate them on the other. We repeat this 20 times and report the results on 5 datasets: https://anonymous.4open.science/r/icml2025figures/table.png. Correlation remains high with low std, indicating that hps selected using a few models can reliably generalize to new ones. Additionally, we note that a small number of labeled data should suffice for hp tuning, as they are only used to measure test performance and not to compute our metric.

(3) **Intuitions for hp selection**

- **For β**: β captures the effect of regularization and should be set higher when stronger regularization is used. This could explain why the optimal β in Exp III is much higher than in II: in III, finetuning for one epoch only (following prior work) introduces very early stopping and thus strong regularization.
- **For α**: The choice of α depends on the underlying dimensional structure. While the relationship can be complex, one intuition is that larger models tend to require a higher α. For small models whose dimensionality is relatively low compared to the sample size, most components can concentrate well. There may be few or no non-principal ones, so a small α suffices to filter them out. In contrast, larger models have high-dimensional yet low-rank representations (Huh et al 2021), where only a few top components concentrate with finite data. There are more non-principal components, and moreover, their magnitudes can appear inflated in the finite sample due to the Marchenko–Pastur (MP) law, necessitating a larger α.
These intuitions align with the reviewer’s observation from Fig 6 of Exp II that the strong model requires a larger α than the weak ones.

> “explicitly targeting the label agnostic property; common crawl”

We note that all experiments demonstrate the label-agnostic property. In Figs 2-4, our label-agnostic estimator (x-axis) strongly correlates with Err_w2s (y-axis). For the second half of the comment, if the reviewer was asking whether our metric computed on Common Crawl (CC) could predict performance on arbitrary tasks, this is not feasible: label-agnostic ≠ task-agnostic. Predicting performance on task A still requires unlabeled data from A. It is not reasonable to expect that CC could be used to indicate performance on any task. If the question was instead about predicting performance on CC itself, the bottleneck is evaluation: since CC lacks labels, we can’t compute W2SG performance as ground truth to compare our metric against.

> beyond human comprehension regime

We will revise the sentence to reflect: (1) in the future, AI may surpass humans in certain tasks; (2) even today, AI can outperform *average* humans on certain tasks, yet W2SG shows that *average* humans can still help improve AI on those tasks. In particular, our results indicate which humans (or weaker LLMs) can best teach a strong AI, and interestingly the answer is not necessarily the strongest human or the strongest weak LLM.
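As a concrete illustration of the label-agnostic metric $\|P_s(I-P_w)\|_{\mathrm{op}}$ discussed in this rebuttal, here is a minimal numpy sketch. The top-$k$ SVD construction of the projections and the random matrices standing in for real model features are my assumptions for illustration, not the paper's exact pipeline.

```python
import numpy as np

def principal_projection(H, k):
    # Orthogonal projection onto the span of the top-k left singular
    # vectors of a (dim x n) representation matrix H.
    U, _, _ = np.linalg.svd(H, full_matrices=False)
    Uk = U[:, :k]
    return Uk @ Uk.T

rng = np.random.default_rng(0)
d, n = 64, 200
H_weak = rng.standard_normal((d, n))      # stand-in for weak-model features
H_strong = rng.standard_normal((d, n))    # stand-in for strong-model features

P_w = principal_projection(H_weak, k=10)
P_s = principal_projection(H_strong, k=10)

# Operator norm of P_s(I - P_w): how much of the strong model's principal
# subspace falls outside the weak model's. No labels are used anywhere.
metric = np.linalg.norm(P_s @ (np.eye(d) - P_w), ord=2)
print(f"metric = {metric:.3f}")           # always lies in [0, 1]
```

Since both factors are orthogonal projections, the product has operator norm at most 1, so the quantity is directly comparable across model pairs.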
Summary: This work provides a theoretical analysis of how a strong model can surpass its weak supervisor by studying the structure of their representations. The key insight (beyond prior analyses) is that even when a strong model perfectly fits the weak model’s predictions at train time, it surpasses its weak supervisor due to its better “principal representations,” which govern generalization. They quantify this, and use it to estimate the error of weak-to-strong models in a few empirical settings. Claims And Evidence: In their experiments, the authors argue that their representation-based metric captures weak-to-strong error beyond model size, i.e., it predicts weak-to-strong error in a more fine-grained manner than model size. Besides model size, it would be valuable to also consider grouping according to the error of the weak supervisor. For a given error level of the weak supervisor, does the representation-based metric predict weak-to-strong error? This would be a more convincing experiment that illustrates that the relative representation structures of the weak teacher and strong student matter, rather than just the quality of the student model. Without controlling for weak supervisor quality, it’s hard to know whether this is a confounder that causes a high correlation between weak-to-strong error and their representation-based metric. Methods And Evaluation Criteria: See "Claims and Evidence" Theoretical Claims: I read through the theoretical claims presented in the paper and they are clear and convincing. I did not check the proofs provided in the supplementary material. Experimental Designs Or Analyses: See "Claims and Evidence" Supplementary Material: I did not review the supplementary material. Relation To Broader Scientific Literature: The paper studies a topic of recent interest: the ability of a strong model to exceed the performance of its weak supervisor when trained on labels produced by this weak supervisor.
There are several existing theoretical analyses studying the same phenomenon. This paper specifically studies representation structures, and yields novel insights on benign overfitting. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The writing throughout the first part of Section 3 could be more clear. Lots of notations and intermediate results are presented without a lot of guiding intuitions towards the final claims. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for finding our theoretical claims clear and convincing, and for acknowledging the novel insights of our paper.

> “...relative representation structures of the weak teacher and strong student matter…Without controlling for weak supervisor quality, it’s hard to know whether this is a confounder …”

(a) **A new experiment**. We note there are cases where the weak teacher’s performance cannot indicate W2S performance. We conducted a new experiment in which we control the weak supervisor’s error to lie within a narrow range, and observe that **the weak supervisor’s error correlates poorly with the W2S error, while our metric shows a strong correlation**. Since the weak supervisors’ errors in our main experiments span a relatively large range, it is difficult to find a sufficient number of weak supervisors with similar errors. Therefore, we explicitly construct weak supervisors with similar error levels in the following way: we take a single checkpoint of a weak supervisor from Experiment I (the one with hidden size 256, pretrained for 5 epochs), and generate 20 modified versions of it by randomly masking out 100 coordinates of its features each time. We then run the weak-to-strong pipeline for each of these 20 weak supervisors. We compare the correlation between $Err_w$ and $Err_{w2s}$, and between $\|P_s(I - P_w)\|_{\mathrm{op}}$ and $Err_{w2s}$, across all three datasets. As shown in the table below, the weak supervisor’s error correlates poorly with the weak-to-strong error, while our metric maintains a strong correlation. This further demonstrates that the detailed relationship between the weak teacher and the strong student plays an important role in weak-to-strong generalization—beyond what can be explained by the weak supervisor’s error alone.
| | Lipop | FreeSolv | ESOL |
|----|----|----|----|
| Err_w | 0.24 | 0.29 | 0.13 |
| $\|\| P_s(I-P_w) \|\|_{op}$ | **0.62** | **0.65** | **0.61** |

(b) We illustrate our intuition with a simple example. Suppose a downstream task consists of 40% advanced linear algebra and 60% advanced calculus. We have two weak pretrained models: model A specializes in basic linear algebra, and model B in basic calculus. Assume fine-tuning mainly builds on existing knowledge rather than learning from scratch. Then, after fine-tuning, model A would likely achieve ~40% performance and model B ~60%, reflecting their alignment with the task. Now consider a strong student pretrained only on linear algebra. According to our main theory, model A—being aligned with linear algebra—should be a better supervisor for this student leading to better W2SG performance, even if its own performance is lower. Our proposed metric captures this alignment and should correlate with W2SG performance, whereas weak supervisor performance alone does not. This highlights a case where our metric offers meaningful insight beyond what weak model performance alone can explain.

(c) **Practical perspective**: Measuring the weak supervisor’s error requires access to labels. In contrast, our metric is label-agnostic. The fact that it achieves such a high correlation while using less information is already impressive.

> “The writing throughout the first part of Section 3 could be more clear. Lots of notations and intermediate results are presented without a lot of guiding intuitions towards the final claims.”

Thanks for the suggestion. Due to space constraints and the large number of results, we focused on conveying the most important messages and were unable to include more detailed explanations. We will add more intuitive explanations in the revised version.
If there are no further concerns, and given the reviewer’s largely positive feedback, we would greatly appreciate it if the reviewer would consider raising the score.
Summary: This paper provides a theoretical analysis for weak-to-strong generalization (W2SG) from a representation-based perspective. In particular, the authors consider finetuning over fixed representations with mild structural assumptions.
- It is shown that the overlap between the principal subspace of the strong (student) model's representation and the orthogonal complement of the weak (teacher) model's representation is a key quantity that governs W2SG.
- The theoretical framework is then leveraged to explain benign overfitting in W2SG: errors that do not align with the strong model’s principal subspace do not affect W2SG.
- The overlap between the two subspaces (the principal subspace of the strong model's representation and the orthogonal complement of the weak model's representation) provides a metric that theoretically predicts the W2SG performance. In practice, this metric demonstrates a strong correlation with the W2SG performance across various datasets and architectures.

Claims And Evidence: The main claims made in the paper are well-supported by analysis and experiments. The theoretical analysis is mostly reasonable. However, some statements seem to have minor issues and lack sufficient explanation (see "Theoretical Claims"). The empirical evidence is convincing. Methods And Evaluation Criteria: Yes, the proposed metric is well motivated by the analysis. Theoretical Claims: I couldn't verify all the proofs in the appendix, but from quickly going through Appendices A and B, I think the main theoretical results in the paper seem reasonable. However, I feel that some statements are not accurately made or well organized. - While the assumptions in Definition 3.3 are relatively mild from the analysis perspective, they are nevertheless dense and not quite well-motivated.
For example, only the notion of "kernel-wise isotropy" in Def 3.2 is explained, with the explanation focusing on its necessity for the analysis, but not on its intuition or practical implications. - Some assumptions in Def 3.3 look counterintuitive. For example, in "(b) Concentration on $\mathcal{V}$", shouldn't the concentration of correlation with labels, $\|\|\frac{1}{n} \Pi_V h(x_i) y_i - \mathbb{E}[\Pi_V h(x) y]\|\|$, accurately be $\|\|\frac{1}{n} \sum_{i=1}^n \Pi_V h(x_i) y_i - \mathbb{E}[\Pi_V h(x) y]\|\|$? Experimental Designs Or Analyses: I reviewed the experiments in the main text (Sec 5) and some of the details in Appendix E. The experimental setup is reasonable, sufficiently detailed, and well-organized. The empirical results align with the theoretical claims and provide convincing evidence for the proposed metric. Supplementary Material: Not applicable. Relation To Broader Scientific Literature: This paper provides a theoretical analysis for W2SG from a representation-based perspective. Toward the three contributions of this work: - The analysis-inspired metric is novel, intuitive, and well-motivated. - The explanation of benign overfitting in W2SG is intuitive and insightful. But I feel the difference from the analysis in (Wu & Sahai, 2024) is not well explained, especially after stating the results on benign overfitting in W2SG. It seems that both benign overfitting analyses share the same intuition, just using different ensemble models. If more sophisticated mechanisms are involved, it would be helpful to remark on the difference between the two analyses. - The empirical verification of the correlation between the proposed metric and W2SG performance is extensive and convincing. Essential References Not Discussed: To my knowledge, the paper discussed the essential references in the field. Other Strengths And Weaknesses: Strengths and weaknesses are discussed in previous sections. Other Comments Or Suggestions: Comments are raised in previous sections.
Questions For Authors: Major questions are raised in previous sections. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for finding our claims well-supported, our explanations insightful and intuitive, the theory and experiments extensive and convincing, and the proposed metric novel. We respond to the comments below. > the distinction between Sec 4 and Wu & Sahai (2024) The main differences are twofold: (1) Wu & Sahai (2024) show that benign overfitting **can happen**, but they do not extract **general insights about when and how it occurs**. In contrast, we identify a single key quantity driving benign overfitting in W2SG—namely, $||P_s(I - P_w)\frac{1}{\sqrt{n}} y||$ in Thm 4.1—which characterizes how much of the label aligns with the intersection between what is missed by the weak model's principal kernel and captured by the strong model’s principal kernel. When this quantity is small, the strong model can avoid repeating the weak model’s mistake, regardless of the extent of overfitting, thereby achieving error mitigation. This very mechanism is not revealed in Wu & Sahai. (2) Our Thm 4.1 is stated in a very general setting, whereas Wu & Sahai focus on a highly specific distribution with detailed assumptions, making it more of a toy example than a realistic scenario. E.g., there is no evidence that neural network representations follow exactly their assumed bi-level ensemble structure and that labels depend 1-sparsely on representations. In contrast, our assumptions (discussed on page 4) cover a wide range of realistic cases, supported by literature suggesting that neural network representations often exhibit such properties. > further explanation of assumptions in Def 3.3; and intuition or practical implications of condition (c) Due to space limitations, we only provided explanations for kernel-wise isotropy and small cross-sample inner products on $\mathcal{V}^\perp$, as we believe these two are the most involved, while the others are relatively natural and self-explanatory. 
Here, we provide further explanation for all the items, which we’ll include in the revised version.
- (a) is a basic condition that ensures reasonable magnitudes of representations and labels.
- (b) states that representations are well-concentrated in the subspace $\mathcal{V}$, both in terms of their covariance and their correlation with labels. This is why the representations on $\mathcal{V}$ are referred to as the principal representations—they are the part where the empirical distribution closely aligns with the underlying population distribution.
- (c) implies that kernels constructed using only the components in $\mathcal{V}^\perp$ exhibit a certain level of uniformity across all directions, with the degree of this uniformity controlled by $\delta$. In the paper, we discuss two extreme cases—one with very small $\delta$ and one with very large $\delta$—to aid understanding. Importantly, this assumption is not made solely for analytical convenience; it is also general and applicable to realistic settings. For example, high-dimensional sub-Gaussian noise satisfies this condition with a small $\delta$—a scenario highly relevant to deep neural networks with large internal dimensions—since these vectors tend to be orthogonal to each other in the high-dimensional limit. More concrete instances can be found in Examples 3.4 and 3.5 and Thm 3.6, with their significance and relevance discussed in the right column of page 4. Thus, (c) is a key condition that allows us to capture all these diverse scenarios. It is not just analytically useful, but also practically relevant to real-world scenarios.
- (d) holds either when representations on $\mathcal{V}^\perp$ are nearly orthogonal across samples or when their magnitudes are small.
- (e) means that the representations on $\mathcal{V}^\perp$ have small magnitudes in the population.
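The near-orthogonality behind conditions (c)/(d)—that high-dimensional sub-Gaussian vectors have small cross-sample inner products—is easy to check numerically. A quick sketch with Gaussian stand-ins (my illustration, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50                                   # number of samples
for d in (10, 100, 10_000):              # ambient dimension
    # n independent Gaussian vectors in R^d, normalized to unit length
    V = rng.standard_normal((n, d))
    V /= np.linalg.norm(V, axis=1, keepdims=True)
    G = V @ V.T                          # Gram matrix of pairwise inner products
    off = np.abs(G[~np.eye(n, dtype=bool)])
    print(f"d={d:>6}: max off-diagonal |<v_i, v_j>| = {off.max():.3f}")
```

The maximum off-diagonal inner product shrinks roughly like $\sqrt{\log n / d}$, so in high dimension the vectors are nearly orthogonal, matching the small-$\delta$ regime described above.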
> Typos in Def 3.3 (b) As the reviewer correctly pointed out, the expression $|| \frac{1}{\tilde{n}} \Pi_{\mathcal{V}} h(\tilde{x_i}) \tilde{y_i} - \mathbb{E}[ \Pi_{\mathcal{V}} h(x) y ]||$ contains a typo—we forgot to include the summation after $\frac{1}{\tilde{n}} $. We will fix it in the revised version. If there are no other concerns, and given that the reviewer’s feedback is largely positive, we would greatly appreciate it if the reviewer could consider raising the score. --- Rebuttal Comment 1.1: Comment: I appreciate the authors’ responses to my questions. I think this paper provides some valuable insights, but the presentation of some theoretical results could be improved. Overall, I will maintain my current evaluation.
Improving the Variance of Differentially Private Randomized Experiments through Clustering
Accept (poster)
Summary: This paper proposes a differentially private algorithm for causal effect estimation, which leverages cluster structure in the data in order to reduce the variance (i.e., improve utility) while maintaining the same privacy guarantee.

## update after rebuttal

I’m bumping up my score, after reading the rebuttal and the other reviews. I think the novelty and theoretical results are strong selling points of the paper, though I do still feel that the label DP setting limits its scope. I think the paper would benefit from a more careful exposition of label DP (including its limitations and what it does and does not protect) and also of the scenarios in which it is reasonable to assume that attributes are non-sensitive. Claims And Evidence: The claims made in the submission are supported by evidence. Methods And Evaluation Criteria: The methods make sense for the problem at hand. Theoretical Claims: I didn't check the correctness of any of the proofs. Experimental Designs Or Analyses: I didn't check that carefully, but the experimental designs looked sound to me. Supplementary Material: I didn't review the supplementary material. Relation To Broader Scientific Literature: The contributions of the paper are related to differentially private causal inference. Essential References Not Discussed: All relevant related works are discussed in the paper, as far as I can tell. Other Strengths And Weaknesses: Strengths -- - Novel premise: I appreciated the idea of leveraging cluster structure in order to improve utility. - I thought the theoretical results were quite nice — presented cleanly, technically sound (to the best of my knowledge) and interpretable. For example, Theorem 3.4 is a nice result that provides insight into what type of clusters can best reduce the variance. - The paper is written nicely and communicates its ideas effectively. Weaknesses -- - The proposed algorithm is very heavily inspired by one particular application (described in detail on the first page).
It’s not totally clear to me how well this approach would generalize to other applications. In particular, any other use case for CLUSTER-DP would require access to non-sensitive user attributes, which might not always be readily available. (The authors did address this point in Remark 2.1, but I’m not sure I’m wholly satisfied.) - The experimental results are mostly based on a numerical simulation; assuming I haven’t missed anything, the only non-simulated dataset is the Youtube dataset. To demonstrate the practicality of the algorithm, it would have been nice to see more experiments conducted on a broader array of realistic datasets. Other Comments Or Suggestions: I might have liked to see more justification of why label DP makes more sense as a privacy setting than standard DP. Questions For Authors: - Going back to Remark 2.1, let’s say that the cluster structure is sensitive and privatizing the clusters requires $\epsilon_1$ privacy budget. For the end-to-end process to have the same privacy guarantee $\epsilon = \epsilon_1 + \epsilon_2$ as the algorithm with non-sensitive cluster structure, we’d need to reduce the $\epsilon_2$ budget for privatizing the outcomes and thus re-introduce more variance. I am wondering if it would be possible to address this point? For example, are there allocations of $\epsilon_1$ and $\epsilon_2$ for which the end-to-end process would still have less variance than the non-sensitive cluster situation, where the privacy budget can be fully devoted to privatizing the outcomes (i.e., $\epsilon = \epsilon_2$)? - Another question that I have about Remark 2.1 is: wouldn't the end-to-end process mix DP (for privatizing the clusters) and label DP (for privatizing the outcomes)? Would the resulting privacy guarantee thus be possible to easily interpret? - The paper’s approach is to set a privacy level, then for certain well-behaved cluster structures be able to achieve better utility.
Rather than showing improved utility via bounding the variance of the estimator, I would be interested to know if the authors considered using something like Propose-Test-Release to reduce the noise scale (and maybe more directly improve the utility of the algorithm) for nicely-behaved clusters. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their careful review of our paper. We hope to have addressed their questions, and would be happy to clarify these points in a camera-ready version.

_Response to weaknesses:_ Although we focus the presentation on a motivating application in the advertising space (itself a very broad and important application, from which big tech companies derive much of their revenue), this framework can definitely be applied to other domains. We discuss a few here:
- Healthcare: to analyze the causal effect of medical interventions, treatment strategies, and healthcare policies while preserving patient privacy. Here, the clustering can be done based on non-sensitive information, e.g., age, race, or other demographic information.
- Finance: to study the causal impact of economic policies and investment strategies on financial outcomes, risk management, and market stability. Here, the clustering can be done based on the company's industry classification (e.g., NAICS), the size of the company, or other public data (revenue, profit, stock price, ...).
- Social sciences: to analyze the causal impact of social policies and societal factors on user behavior. Here, the clustering can be done based on demographic information as well as other publicly available data from users' social platforms.

_Response to Questions:_
- The variance of the full DP case will be larger than that of the partial DP case (non-sensitive clusters and private responses). Since $\epsilon_2 <\epsilon$, in the full DP case we need to add more noise when randomizing the responses. In addition, to preserve the privacy of the clusters, we need some randomization there as well. This affects the variance because it increases the cluster homogeneity quantities $\phi_0,\phi_1$, since these will now be defined w.r.t. noisy clusters and will naturally be larger than when they are defined w.r.t. the actual clusters.
- No, it wouldn’t mix.
In fact, two important properties of differential privacy (which are widely used in practice and design) are composition and post-processing. The first states that combining two DP mechanisms will remain DP (but their privacy losses will be added); see Sec 3.5 in [1]. The latter states that DP is immune to post-processing; see Prop 2.1 in [1]. - The propose-test-release (PTR) mechanism aims to reduce the noise added for privacy by working with the local sensitivity instead of the global sensitivity. Such an approach can be seamlessly integrated into our mechanism, specifically in Algorithm 1 (lines 228-229), when adding the Laplace noise: we can scale its noise using PTR and the local sensitivity of the empirical distribution of the cluster. However, we believe this would yield only minor benefits since the global sensitivity of the empirical distribution is already 1/cluster-size; PTR is most effective when the range of outputs is large. [1]: “The Algorithmic Foundations of Differential Privacy”, Cynthia Dwork, Aaron Roth
Summary: Authors give an algorithm they call Cluster-DP, which is a pure/approximate DP mechanism (label DP) for causal effect estimation. Its main insight is that you can reduce the variance of the estimates by leveraging known clustering structure in the data. At a high level, they add Laplace noise to the empirical response distributions within each cluster, do some clever truncation and renormalization, and get a data-dependent response distribution. They construct a response randomization matrix (another novel element) and then use its inverse to debias the privatized outcomes, estimating average treatment effects. The main contributions are theoretical guarantees for the algorithm (label differential privacy guarantees and a detailed bound on the estimator’s variance gap relative to a non-private baseline). Their analysis carefully shows how improved cluster homogeneity leads to reduced variance, thereby achieving a better privacy-variance trade-off. They have some empirical evaluations on both synthetic and real data which show the usefulness of their approach in low-epsilon regimes, which is unsurprising given the bounds they derive, but good validation. Claims And Evidence: Overall, I found that the authors made some novel and well-motivated claims, and managed to provide ample evidence (mainly theoretical) to support their Cluster-DP algorithm. The detailed derivations in the appendix were appreciated. They also have some empirical performance claims, which are less substantial but adequately validate their theory in my opinion. As my review is mainly positive, I leave most of it as questions for the authors, to help clarify their work and its presentation. I'd like to highlight that I found the debiasing step via the inversion of the response randomization matrix $Q_{c,a}$ to be really interesting and non-trivial.
However, its stability (especially when $\lambda$ is close to 1) is not formally explored (as far as I could tell) and might be a source of unnecessary variance if the matrix is near-singular (is that right?). Could the authors discuss this a bit further? Extrapolating from the $\lambda$ experiments seems to suggest this could be a problem, but maybe I'm over-interpreting these results. Methods And Evaluation Criteria: The authors do not provide an extensive empirical evaluation, only a standard evaluation with a synthetic dataset and on (real) youtube data. This is adequate, as the main results of the paper are theoretical. However, a more in-depth empirical evaluation of Cluster-DP would be great in future work. Theoretical Claims: I congratulate the authors on their careful formal statements and detailed proofs. I checked the proofs of Theorems A.5 (on the baseline uniform approach) and 3.1 (the Cluster DP mechanism). I did not find issues with either. Though I did not check it carefully, I skimmed the proof of Theorem 3.4 (for the main variance reduction claim), which is well broken up into propositions and lemmas. As a general statement on the theory in this paper, though many of the tools used are standard, they are carefully and effectively combined in non-trivial ways, and the theory is broken up nicely to construct the desired bounds. Experimental Designs Or Analyses: I appreciated the conditional bias plots and variance gap plots, and the qq-plots. The youtube data seems reasonable for this problem. I'd be curious for the authors to comment on the empirical efficiency of their algorithm? I was unable to carefully check their experiments as the code wasn't provided. Supplementary Material: The authors did not provide their code. This is not ideal, though as the primary contribution of the work is theoretical, I won't hold it against them in my score. Please consider including a link to a public repo for your algorithm if accepted.
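Since the code was not provided, here is a hypothetical numpy sketch of the noising step as I understand it from the summary above (Laplace noise on a cluster's empirical response distribution, truncation, renormalization). The noise scale $1/(\epsilon n)$ and the fallback for an all-zero vector are my assumptions, not necessarily the paper's Algorithm 1.

```python
import numpy as np

def privatize_cluster_distribution(counts, epsilon, rng):
    # counts: outcome histogram of one cluster (discrete outcome space)
    n = counts.sum()
    p_hat = counts / n                     # empirical response distribution
    # Laplace noise; sensitivity of p_hat under a one-label change is O(1/n)
    noisy = p_hat + rng.laplace(scale=1.0 / (epsilon * n), size=p_hat.shape)
    noisy = np.clip(noisy, 0.0, None)      # truncate negative entries
    if noisy.sum() == 0:                   # degenerate fallback: uniform
        return np.full_like(p_hat, 1.0 / p_hat.size)
    return noisy / noisy.sum()             # renormalize to a distribution

rng = np.random.default_rng(1)
counts = np.array([120.0, 60.0, 20.0])     # one cluster, K = 3 outcomes
q = privatize_cluster_distribution(counts, epsilon=1.0, rng=rng)
print(q)                                   # a probability vector near [0.6, 0.3, 0.1]
```

The clip-and-renormalize step is what makes the output a valid data-dependent response distribution, which is then fed into the randomized-response sampling.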
Relation To Broader Scientific Literature: The manuscript appears well-situated within the literature. In particular, the authors claim a "tighter analysis" [Esfandiari et al. 2022] than previous work and substantiate the claim. Essential References Not Discussed: Not aware of any. Other Strengths And Weaknesses: Strengths: I congratulate the authors on this work, which I think is original and clearly presented. That said, it's not clear to me that it is particularly "significant" in that I'm not as aware of the desire for label-DP privacy guarantees in clustering for downstream causal effect estimation (a little niche). So, in that sense, I would not consider this work to be award level, but I certainly believe it merits acceptance. Weaknesses: The authors do a good job motivating the work in the intro for a specific application. However, the intuition for their approach could be better communicated. I found myself drawing some simple pictures to try to get a sense of why the Cluster-DP algorithm is a good idea for variance reduction; the authors should provide better intuition (through plots or a carefully constructed distributional example) to help the reader grasp the approach. Other Comments Or Suggestions: Nitpicks (mostly notational): Maybe I missed something, but the vector $y$ is used to denote the entire response space and in expressions like $\mathbf{y}^\top$. Can you distinguish between the vector of outcomes and the set $\mathcal{Y}$ of possible outcomes in your notation? Some proofs switch between notations such as $\mathbf{y}_{0,c}^2$ and "\overline{y^2_{0,c}}" $= 1 / n_{0,c} \sum y_i^2 (0)$ or something, which made it hard for me to follow whether it was a sum or an average sometimes. Some of the $\bar{y}$ and $\overline{y_{0,c}}$ notational norms were more confusing than helpful. You left some of the text from the template at the very bottom of the appendix, please delete in camera ready. Questions For Authors: Q1. 
Theorem 3.1 presents a privacy bound that splits into a term derived from the Laplace mechanism and a term from the re-sampling process. Could you please discuss whether these bounds are tight in practice (I'm not sure your experiments really give us a sense, but maybe I'm missing something)? Are there settings or data regimes where you'd expect the analysis to be overly conservative? Q2. Your estimator relies on inverting the response randomization matrix $Q_{c,a}$. Under what conditions might this matrix become ill-conditioned, and how does that affect the variance of $\hat{\tau}_Q$? Is there a safeguard in your method for cases where the inversion is unstable? Maybe I'm misunderstanding something, I'd appreciate if the authors could help me here. Q3. My understanding of both the theory and the empirical results is that the variance reduction benefits rely on cluster homogeneity (authors parameterize this as $\phi_a$). Can you elaborate on how robust your method is to mis-specified or sub-optimal clustering? E.g., if clusters are more heterogeneous than assumed? This is sort of touched on by considering "Cluster-free DP," but I'm wondering if there's a smooth interpolation between that and the "Cluster-DP" version. Q4. This is a future work question: you gave a mechanism that, as far as I can tell, seems to work only for discrete outcome spaces (even though you expand it beyond binary). Is there hope for continuous outcomes? Can you discuss the foreseeable issues more carefully? You mention binning - the bin sizes could be parameterized, and done in such a way that adapts to the cluster structure. This seems natural, and may be worth discussing in future work. Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their careful and positive review of our paper. We would be happy to clean up the notational remarks made by the reviewer, and clarify the points below in a camera-ready version. We now address their questions: _Q1._ In our proof of Theorem 3.1, we use the composition theorem of differential privacy. In general, if mechanisms $M_i$ are $(\epsilon_i,\delta_i)$-DP, then the composition satisfies $(\sum_i \epsilon_i, \sum_i \delta_i)$-DP. We use these results with $M_1$ the Laplace mechanism and $M_2$ the re-sampling process. If we do not allow any slack in the failure probability $\delta$, then this cannot be tightened, i.e., one can find examples of mechanisms which violate $(\epsilon, \sum_i \delta_i)$-DP if $\epsilon < \sum_i \epsilon_i$; see [1]. But, as shown in [1], if we allow for a larger value of $\delta$, one can improve the privacy in terms of $\epsilon$. This tightness is for general mechanisms. For the specific composition of the Laplace mechanism and resampling, there may be some tighter bounds, but an analysis based on the composition property (as is our argument) cannot be tightened. Given that, we do not anticipate the bound to be excessively conservative, if conservative at all. [1] “The Composition Theorem for Differential Privacy”, Peter Kairouz et al., 2015. _Q2._ This is a good question. The matrix $Q_{c,a}$ is a scalar multiple of the identity matrix plus a low-rank matrix. As shown in the supplementary material (lines 1698-1704), its inverse is also a rank-one perturbation of the (scaled) identity matrix. In Lemma A.11 we bound the maximum eigenvalue of $Q^{-1}$, which is the inverse of the minimum eigenvalue of $Q$. Using this lemma, the minimum eigenvalue of $Q$ is at least $(1-\lambda)/(\lambda\sqrt{K}+1)$.
So the matrix is well-conditioned if $\lambda$ (the probability of resampling in the DP mechanism) is strictly less than 1, which is the case in our algorithm (by choosing $\sigma < \epsilon$ in the privacy bound of Thm 3.1). This is also reflected in the variance bound, Thm 3.4, where $(1-\lambda)^2$ appears in the denominator. _Q3._ Thank you for raising this question. Our privacy guarantees are entirely robust to the chosen clusters! The statements of Theorem 3.1 and its corollaries do not depend on the cluster structure and properties; only the variance gap established in Theorem 3.4 depends on the cardinalities and homogeneity of clusters. In practice, clusters being more heterogeneous than assumed would show up as increased variance, which is measurable, but this would not endanger the integrity of the privacy claims. We empirically evaluate the role of clustering quality in Experiment 2, shown in Figure 1.c, by varying $\beta$, which directly controls the cluster homogeneity (larger $\beta$ corresponding to more homogeneous clusters). As expected, the performance of our method (in terms of variance gains over the Cluster-Free algorithm) improves at larger $\beta$ and smaller $\lambda$ (the resampling probability). _Q4._ Absolutely. This is a great point, and definitely an exciting research direction for future work. As you noted, we propose binning as a solution, and implement it in the Numerical Experiments of Section 4, for both the fully synthetic and semi-synthetic data. We find the results satisfactory despite no particular tuning for the bin number and sizes. Using an adaptive grid is a natural approach to examine, which could yield further gains in the resulting privacy-variance tradeoff.
Some immediate issues that practitioners should watch out for are 1) our guarantees depend on the number of bins $K$, hence one cannot choose an infinite number of bins, and 2) a data-adaptive choice of bin sizes requires access to the dataset, which could leak information about the users' data. This requires calibrating the noise level to ensure the privacy guarantee is maintained. --- Rebuttal Comment 1.1: Comment: I appreciate your careful answers to my questions, thank you. In particular, thank you for the care you took in answering (Q2) - this might be worth sketching out for a reader somewhere in your paper body, but I think I understand now. I was surprised that the other reviewers were less positive. I will maintain my score, best of luck. --- Reply to Comment 1.1.1: Comment: Thank you for your positive comments on our work! We also responded in detail to other reviewers and were hoping that they would increase their scores, but unfortunately we didn't hear from them after our rebuttal.
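The well-conditioning claim in the Q2 answer above can be checked with a small numerical sketch. The specific randomized-response form of the matrix below ($Q = (1-\lambda)I + \lambda \mathbf{1}\tilde q^\top$) is our reading of the mechanism, not the paper's verbatim definition:

```python
import numpy as np

# Sketch of the Q2 discussion above. Assumed (not verbatim from the paper):
# the response matrix of a resampling mechanism that keeps the true label
# with prob. 1-lam and otherwise draws from a distribution q_tilde is
#   Q = (1 - lam) * I + lam * 1 q_tilde^T,
# i.e. a scaled identity plus a rank-one matrix.
K, lam = 10, 0.6
rng = np.random.default_rng(0)
q = rng.random(K)
q /= q.sum()                                   # q_tilde: a probability vector
ones = np.ones(K)
Q = (1 - lam) * np.eye(K) + lam * np.outer(ones, q)

# Sherman-Morrison: the inverse is again a rank-one perturbation of a
# scaled identity, as stated in the rebuttal.
Q_inv = (np.eye(K) - lam * np.outer(ones, q)) / (1 - lam)
assert np.allclose(Q @ Q_inv, np.eye(K))

# Eigenvalues of Q are 1-lam (multiplicity K-1) and 1 (eigenvector 1),
# so Q stays well-conditioned for any lam strictly below 1.
eigs = np.sort(np.linalg.eigvals(Q).real)
assert np.allclose(eigs[:-1], 1 - lam) and np.isclose(eigs[-1], 1.0)
```

Under this toy form, the minimum eigenvalue of $Q$ is exactly $1-\lambda$, consistent with (and tighter than) the lower bound $(1-\lambda)/(\lambda\sqrt{K}+1)$ quoted in the rebuttal.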
Summary: This paper introduces Cluster-DP, a differentially private mechanism designed to improve the privacy-variance trade-off in randomized experiments. The proposed method improves this trade-off compared to the traditional approach, which introduces noise to the sensitive variables directly. The paper provides theoretical privacy guarantees under label differential privacy and derives bounds on the variance of the causal effect estimator. Claims And Evidence: One of the paper's key contributions is that the proposed Cluster-DP approach improves the privacy-variance trade-off. Theorems 3.1 and 3.4 provide the theoretical guarantee of privacy and variance, respectively. In section 4, both simulated and real-world data are used to demonstrate that CLUSTER-DP achieves lower variance compared to baseline methods while preserving privacy guarantees. Methods And Evaluation Criteria: The CLUSTER-DP algorithm is well-designed to balance privacy and variance, leveraging non-sensitive cluster structures. The experiments show the comparisons to the baseline methods such as uniform-prior-dp. The evaluation criteria show the privacy-variance trade-off, which is the main focus of the paper. Theoretical Claims: I checked the proof of Theorem 3.1 and didn't spot any errors or issues in the arguments. I didn't look at the results that are used in the references, and there are some derivations I didn't understand. Experimental Designs Or Analyses: The paper uses both synthetic and real-world datasets to evaluate performance and demonstrate the proposed method's generalizability. I checked experiments with numerical and real-world data. I think the results and analyses reflect the paper's key contribution. Supplementary Material: There's no supplementary material in the submission. Relation To Broader Scientific Literature: The paper is closely related to prior work on differential privacy and causal inference.
It expands on existing methods by introducing clustering-based variance reduction. Essential References Not Discussed: I'm not aware of essential references that have not been discussed. Other Strengths And Weaknesses: - The paper is overall well-written, with clear motivation and rigorous derivations of the main results. - The proposed method is demonstrated with both synthetic and real-world datasets. - Although the choice of truncation parameter is discussed in the experiment section, it seems unclear how to choose the parameter in general. Other Comments Or Suggestions: (Please see the question below.) Questions For Authors: - Could you give some more detail about the derivation from l.1169 to l.1177? - Could you provide some intuition about the property $\tilde{q}_a(y|c) \geq \gamma$? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their careful and overall positive review. We address below their two questions: _Q1._ These derivations aim to establish the differential privacy guarantee of a mechanism $M_2$ which resamples labels at random with probability $\lambda$ from the true distribution or a perturbed distribution $\tilde q$. L. 1170 states the probabilities that the “privatized” outcome $\tilde y$ is equal to a given potential outcome $y$. If $y$ is the true potential outcome, then either it was sampled directly with probability $(1-\lambda)$ or it was sampled from $\tilde q$ with probability $\lambda$. Similarly, if $y$ is NOT the true potential outcome, then it must have been sampled from $\tilde q$ with probability $\lambda$. L. 1171-1173 identifies the necessary and sufficient conditions for proving that the mechanism $M_2$ is $(\epsilon, \delta)$-DP. We now detail the steps made on L. 1174-1177, starting with substituting the event probabilities identified on L. 1170. Step 1 (substituting event probabilities): $1 - \lambda + \lambda \tilde q(y) \leq e^{\tilde \epsilon} (\lambda \tilde q(y)) + \delta$ Step 2 (rearranging terms): $1 - \lambda + \lambda \tilde q(y) (1 - e^{\tilde \epsilon}) \leq \delta$ By the definition of $\delta := \max(0, 1 - \lambda + \lambda \gamma (1 - e^{\tilde \epsilon}))$, this condition is equivalent to showing two inequalities: (a) $1 - \lambda + \lambda \tilde q(y) (1 - e^{\tilde \epsilon}) \leq 0$ and (b) $1 - \lambda + \lambda \tilde q(y) (1 - e^{\tilde \epsilon}) \leq 1 - \lambda + \lambda \gamma (1 - e^{\tilde \epsilon})$ (b) always holds because $\tilde q(y) \geq \gamma$ and $1 - e^{\tilde \epsilon} \leq 0$ since $\tilde \epsilon \geq 0$. Therefore the initial condition on L. 1171 is equivalent to inequality (a), which can be rearranged to the form seen on L.
1177: $0 \leq \lambda (\gamma - \tilde q(y))(1 - e^{\tilde \epsilon})$ _Q2._ We enforce the property that $\tilde q_a(y|c) \geq \gamma$ as an initial step of Algorithm 1. This is necessary to obtain the differential privacy guarantee in Theorem 3.1. The intuition is as follows: if for a given cluster $c$, treatment assignment $a$, and potential outcome $y$, we have $\tilde q_a(y|c) = 0$, then observing $y$ as a “privatized” output for that cluster and treatment assignment would mean that $y$ was equal to the original non-privatized outcome of individual $i$. This would be a violation of the differential privacy principle, which aims to provide plausible deniability to individuals: seeing the output shouldn't allow you to be certain about any single individual's data. We thank the reviewer for their patience with what is admittedly a notation-heavy subject. We would be happy to include these clarifications in a camera-ready version.
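The Q1 derivation above can also be checked numerically. The sketch below is illustrative and uses our own notation; the mechanism and the definition of $\delta$ are taken from the rebuttal text:

```python
import math

# Numerical check of the Q1 derivation above (illustrative, our notation).
# Mechanism M2: release the true label with prob. 1-lam, otherwise sample
# from a truncated distribution q_tilde with q_tilde(y) >= gamma.
def delta_of(lam, gamma, eps):
    """delta := max(0, 1 - lam + lam * gamma * (1 - e^eps))."""
    return max(0.0, 1 - lam + lam * gamma * (1 - math.exp(eps)))

def dp_inequality_holds(lam, q_y, gamma, eps):
    # P[out = y | true = y] <= e^eps * P[out = y | true != y] + delta
    p_same = 1 - lam + lam * q_y          # kept, or resampled and hit y
    p_diff = lam * q_y                    # must have been resampled
    return p_same <= math.exp(eps) * p_diff + delta_of(lam, gamma, eps) + 1e-12

lam, gamma, eps = 0.8, 0.05, 1.0
# The inequality holds for every admissible q_tilde(y) in [gamma, 1],
# with equality exactly at q_tilde(y) = gamma (when delta > 0).
grid = [gamma + k * (1 - gamma) / 100 for k in range(101)]
assert all(dp_inequality_holds(lam, q_y, gamma, eps) for q_y in grid)
```

This matches the argument above: inequality (b) is tight at $\tilde q(y) = \gamma$ and slack elsewhere, so the chosen $\delta$ is the smallest value making the DP condition hold uniformly.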
Summary: The paper proposes CLUSTER-DP, a differentially private mechanism aimed at improving the variance of causal effect estimation in randomized experiments by utilizing clustering structures within data. Traditional differential privacy (DP) approaches introduce noise to protect privacy, resulting in increased estimator variance. To improve this privacy-variance trade-off, the authors introduce clustering to guide noise addition, defining a new measure of "cluster quality" (cluster homogeneity) that quantifies intra-cluster variability of outcomes. They prove that leveraging high-quality clusters (more homogeneous groups) substantially reduces variance penalties compared to unclustered or uniform-prior baselines. Claims And Evidence: In the motivating application on online advertising, the authors claim that the non-sensitive cluster information can be shared and utilized to improve the mechanism's privacy-variance trade-off. However, this is only correct when the non-sensitive information has no correlation with the private data. Although the DP guarantee (values of $\epsilon$, $\delta$) might not change if correlation exists, privacy is still compromised, as an attacker can infer the private information through the released clusters. This privacy risk has been quantified and analyzed by inferential privacy [1] and its follow-ups (e.g., [2-4]). The authors should discuss this. [1] Ghosh, Arpita, and Robert Kleinberg. "Inferential privacy guarantees for differentially private mechanisms." arXiv preprint arXiv:1603.01508 (2016). [2] Song, Shuang, Yizhen Wang, and Kamalika Chaudhuri. "Pufferfish privacy mechanisms for correlated data." Proceedings of the 2017 ACM International Conference on Management of Data. 2017. [3] Zhang, Wanrong, Olga Ohrimenko, and Rachel Cummings. "Attribute privacy: Framework and mechanisms." Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency. 2022. [4] Wang, Shuaiqi, et al.
"Inferentially-Private Private Information." arXiv preprint arXiv:2410.17095 (2024). Methods And Evaluation Criteria: Yes Theoretical Claims: The proofs seem correct. Experimental Designs Or Analyses: Experiments make sense Supplementary Material: Roughly go over the supplementary material Relation To Broader Scientific Literature: The paper is related to previous DP literatures Essential References Not Discussed: See 'Claims And Evidence' Other Strengths And Weaknesses: Strengths: - The paper is well-organized and easy to follow - Problem formulation and theoretical analysis are solid - Numerical experiments are conduct Please see 'Claims And Evidence' for main weakness Other Comments Or Suggestions: NA Questions For Authors: - Does the paper assume there is no correlation between the private and non-sensitive information? If not, how to model this correlation? - In Thm 3.1, $\epsilon$ is related to $1/\gamma$, where $\gamma\leq 1/K$. Can the authors provide intuitions on why $\epsilon$ increases with $K$? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We would like to thank the reviewer for taking the time to read our paper and for their thoughtful comments. We would be happy to include clarifications of the points below in a camera-ready version. *Correlation between non-sensitive and private data:* It seems that your comment is about the measure of differential privacy and its limitations. We would like to highlight a few points: - There is a discussion about this in chapter 1 (The Promise of Differential Privacy) of [1] with an example about a medical study that teaches that smoking may be correlated with cancer. Does this study compromise the smoker’s privacy? While the study reveals more information, under differential privacy this information is not deemed to be ‘leaked’. The rationale is that the impact on the smoker is the _same regardless of whether or not he was in the study_. It is the _conclusions reached_ in the study that affect the smoker, not his presence or absence in the data set. In other words, DP focuses on the privacy loss to an individual by her contribution to a dataset and therefore “by design” does not capture all of the privacy losses from correlations. - The notion of “inferential privacy” aims to consider privacy leakage due to correlation among individuals (it is identical to DP when individuals’ data are independent). So it is not addressing the case of correlation between sensitive and non-sensitive “features”, but rather correlation between “individuals/samples”. - In our setting, the features $x_i$ are only used to form the clusters, and then both the mechanism and analysis work with responses $y_i$ and cluster memberships $c_i$.
Our analysis is under a finite-sample setting; extending it to its super-population equivalent assumes the data $(y_i,c_i)$ are i.i.d., although the conditional distribution $p(y|c)$ would be different (think of users being assigned to clusters independently according to some distribution, so that the marginal distribution of responses is a mixture distribution). Notably, the responses $y_i$ are independent, and so we don’t think inferential privacy would handle this, as it pertains to settings with correlation among samples. - The concern about the correlation between $y$ (response) and $c$ (cluster) is exactly the situation with label DP (where labels are deemed private while features are non-sensitive) and semi-sensitive features [2,3], which we discussed in the paper. Furthermore, in Remark 2.1, we argue that using the composition property of DP, we can extend our work to settings where both clusters and responses are sensitive. This “full DP” setting will also address the privacy leakage from correlations. We would be happy to add a discussion around these points in the revision. _[1]: “The Algorithmic Foundations of Differential Privacy”, Cynthia Dwork, Aaron Roth_ _[2]: “Training Differentially Private Ad Prediction Models with Semi-Sensitive Features”, L. Chua et al._ _[3]: “Anonymous Learning via Look-Alike Clustering”, A. Javanmard et al._ *Response to Questions:* - Please see our response above (third bullet point). Considering the super-population regime, the correlation between clusters and responses is captured in the conditional distribution $p(y|c)$. We do not make any specific assumptions on it (which speaks to the generality of our framework), but it shows up in _Cluster homogeneity_ (Def 3.3). As discussed below the definition, in the population regime $\phi_a = \mathbb{E}[\mathrm{Var}(y(a) \mid c)]$. - Note that $\gamma$ is the truncation parameter (Algorithm 1), so that $\tilde{q}_a(y|c)\ge \gamma$.
This is the distribution used for randomization, and so $\frac{\tilde{q}_a(y|c)}{\tilde{q}_a(y’|c)} \le \frac{1}{\gamma}$. This ratio is at the heart of DP analysis, as it concerns the change in the outcome distribution when replacing one user. As $K$ (the number of possible outcomes) increases, the constraint $\gamma \le 1/K$ forces $\gamma$ to shrink, which makes this ratio grow, leading to larger $\epsilon$. A more high-level intuition is that when $K$ is small, an observed randomized response could be allocated to a larger fraction of users (fixing all other parameters), and so the reidentification risk will be smaller.
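The cluster-homogeneity quantity discussed in the response above, $\phi_a$ (the expected within-cluster variance of the potential outcome), can be sketched on synthetic data; the cluster/outcome model below is purely illustrative, not from the paper:

```python
import numpy as np

# Illustrative computation of cluster homogeneity phi_a = E[Var(y(a) | c)]
# on synthetic data; the outcome model below is ours, not the paper's.
rng = np.random.default_rng(1)
c = rng.integers(0, 5, size=2000)                 # cluster memberships
y = c + 0.3 * rng.standard_normal(2000)           # outcomes well explained by c

clusters, counts = np.unique(c, return_counts=True)
within_var = np.array([y[c == k].var() for k in clusters])
phi = np.average(within_var, weights=counts)      # expectation over clusters

# Homogeneous clusters: within-cluster variance is far below the total
# variance, the regime where the rebuttal expects the largest gains over
# the Cluster-Free algorithm.
assert phi < 0.5 * y.var()
```

With heterogeneous clusters (e.g., outcomes independent of `c`), `phi` approaches the total variance and the clustering-based variance gain vanishes, mirroring the Q3 discussion earlier in the thread.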
Generative Point Cloud Registration
Accept (poster)
Summary: This work introduces a novel method for point cloud registration that aims to generate geometry-consistent RGB image pairs from paired point sets. These generated RGB image pairs are then used to enhance the performance of point-based registration methods. The proposed approach incorporates two key innovations: a newly designed coupled conditional denoising technique and a prompt guidance mechanism, both of which are developed to enforce cross-view texture consistency. Extensive experiments conducted on the ScanNet and 3DMatch datasets demonstrate the effectiveness of the proposed method. Claims And Evidence: The claimed contributions are well-supported by the proposed method and experimental results. Methods And Evaluation Criteria: The newly designed coupled conditional denoising technique and prompt guidance mechanism are technically sound and interesting. The evaluation metrics are reasonable. Theoretical Claims: No theoretical proof in the paper. Experimental Designs Or Analyses: experimental designs or analyses are solid. Supplementary Material: More qualitative results are provided in the supplementary material. Relation To Broader Scientific Literature: The proposed method can serve as a plug-and-play module for various existing point-based registration models. Essential References Not Discussed: No. The literature review looks comprehensive. Other Strengths And Weaknesses: Strengths: The idea of using generated image pairs to enhance the registration performance of point-based models is highly innovative. Both quantitative and qualitative results demonstrate the effectiveness of the proposed method, showcasing its potential to improve registration accuracy. Weaknesses: 1. Point set registration encompasses both rigid and non-rigid transformations. However, this paper focuses exclusively on rigid transformations. This should be clearly stated early in the introduction to set appropriate expectations for readers. 2. 
In Table 2, the proposed method shows a noticeable improvement in mean errors but no significant enhancement in median error. This discrepancy warrants further explanation or analysis to clarify the underlying reasons. 3. The baseline models primarily consist of older methods published several years ago. To better highlight the advancements of the proposed method, it would be beneficial to include evaluations against more recent state-of-the-art models. 4. There is a concern that ControlNet might have been trained on datasets such as ScanNet or similar datasets, which could reduce the difficulty of the generation task. It would be valuable to verify whether the proposed method remains effective on more challenging datasets to ensure its generalizability. Other Comments Or Suggestions: No Questions For Authors: See the weakness section. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Q1: Clarification on rigid transformations.** **A1:** Thank you for your valuable suggestion. In the revised introduction, we will explicitly state that our work focuses exclusively on the rigid point cloud registration problem to set clear expectations for the readers. **Q2: The discrepancy between the noticeable improvement in mean errors and the lack of significant enhancement in median error in Table 2 requires further explanation.** **A2:** Thank you for the insightful comment. We clarify that this discrepancy arises because mean and median errors capture different aspects of performance. Specifically, the mean error reflects the overall precision and is sensitive to outliers. The noticeable improvement in mean error stems from a substantial reduction in large-error cases, indicating that our method can effectively handle challenging scenarios. In contrast, the median error reflects the model's performance on the majority of cases, which was already strong and thus shows minimal change. We will clarify this distinction in the revised manuscript to avoid confusion and better highlight the strengths of our approach. **Q3: Comparison with newer state-of-the-art.** **A3:** We appreciate the reviewer's valuable suggestion. Following the reviewer's recommendation, we have conducted an additional comparison with a recent state-of-the-art method, PARE-Net (ECCV 2024), and included the results in the table below. The new results show that our proposed Generative FCGF (SD) consistently outperforms PARE-Net, confirming our superior registration performance. We will include these comparisons in our revised version.

| Methods | Rot@5 | Rot@10 | Rot@45 | Mean | Med. | Trans@5 | Trans@10 | Trans@25 | Mean | Med. |
|---------|-------|--------|--------|------|------|---------|----------|----------|------|------|
| PARE-Net (ECCV'2024) | 75.6 | 82.1 | 86.5 | 21.0 | 2.1 | 40.6 | 63.5 | 77.4 | 47.4 | 6.4 |
| Generative FCGF | **94.3** | **96.7** | **98.1** | **4.5** | **1.4** | **54.3** | **81.5** | **93.1** | **12.5** | **4.7** |

**Q4: The concern is that ControlNet may have been trained on easier datasets like ScanNet, which could simplify the generation task. It's important to test the proposed method on more challenging datasets to confirm its generalizability.** **A4:** Thank you for the thoughtful comment. **(i)** We would like to clarify that, although the training dataset for the depth-conditioned ControlNet model has not been publicly released, the depth maps used in that model were generated by the MiDaS depth estimation model. By contrast, our experiments utilize real depth maps captured directly by depth sensors. Despite the significant domain gap between the MiDaS-estimated and sensor-captured depth maps, our method still achieves impressive generation quality, highlighting its robustness and generalization capability; **(ii)** To further validate the generalizability and robustness of our Match-ControlNet in more challenging scenarios, we conducted an additional experiment where we casually captured low-overlap photos of a cluttered, unconstrained indoor environment (i.e., the author's room) using a mobile phone. We then estimated their depth maps using DUSt3R for subsequent Match-ControlNet generation (without any model fine-tuning). The figure (https://anonymous.4open.science/r/rebuttal-688D/wild_vis.pdf) demonstrates that, even under these challenging in-the-wild conditions with a different depth-map source, our method still achieves impressive cross-view consistency and generation quality.
These results confirm the practical effectiveness and excellent generalizability of our approach. --- Rebuttal Comment 1.1: Comment: I thank the authors for their comprehensive and detailed responses to my questions. After reading their rebuttal, I believe the authors have addressed my concerns. I believe the manuscript now meets the acceptance standards, and I will maintain my score.
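The mean-versus-median point in A2 of this thread can be seen with a toy error distribution; the numbers below are made up for illustration and are not taken from Table 2:

```python
import numpy as np

# Toy illustration of A2 above: shrinking a few large-error cases lowers
# the mean error substantially while leaving the median unchanged.
# Numbers are illustrative, not taken from the paper.
errors_before = np.array([1.0] * 95 + [90.0] * 5)   # 5 hard failure cases
errors_after = np.array([1.0] * 95 + [10.0] * 5)    # failures mitigated

assert np.median(errors_before) == np.median(errors_after) == 1.0
assert errors_before.mean() > 3 * errors_after.mean()
```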
Summary: The paper proposes a new perspective on Point Cloud Registration: Generative Point Cloud Registration. Compared to traditional methods or purely geometry-based learning methods, the paper incorporates image generative models. The input is a point cloud pair with unknown pose, and the output is the transformation between them. Specifically, the proposed Match-ControlNet generates corresponding images for each point cloud, and then uses image features as additional information to assist existing point cloud matching methods in finding better correspondences. The authors design Match-ControlNet's condition images and prompts to ensure geometric and texture consistency. The authors add this method on top of multiple baselines and experimentally demonstrate that Generative Point Cloud Registration can improve the performance of baselines. ## update after rebuttal I carefully reviewed the materials provided by the author during the rebuttal phase. With these additional details, I believe the experimental integrity of the paper has improved. Initially, I had concerns about the completeness of the experimental section. However, after the author's explanations and provision of additional materials, my concerns have been adequately addressed, justifying a higher rating for the manuscript. Claims And Evidence: Yes, I believe introducing image information would be helpful for point cloud matching. Methods And Evaluation Criteria: I see methodological flaws in the proposed approach. While the authors introduced requirements like 2D-3D Geometric Consistency and Cross-view Texture Consistency, the method essentially processes a stack of images using ControlNet. This alone does not robustly ensure multi-view consistency. Achieving consistency across views fundamentally requires accurately modeling the joint distribution of multiple images—a key aspect seemingly overlooked here. 
Instead, the current method relies on fine-tuning ControlNet on general image diffusion models, yet fails to introduce explicit constraints to enforce consistency. As a result, I remain doubtful about the reliability of the generated multi-view outcomes. Furthermore, since the approach presented in the paper often serves as a workaround in various image diffusion-based applications, it exhibits certain instability. For instance, works like Zero123++ accomplish multi-view information fusion by implementing novel modules within attention mechanisms—elements conspicuously missing from this paper. Theoretical Claims: No Theoretical Claims Experimental Designs Or Analyses: 1. As I recall, ScanNet has images, and the experiments lack a comparison between the improvement over the baseline from using real multi-view consistent images as additional information and the improvement from using generated images. 2. Methods like DUSt3R, although not used for point cloud registration problems, can obtain the relative pose between two viewpoints from images alone. I'm curious how the proposed method would compare with these methods. In other words, when both images and point clouds are available, how important are the point cloud features for the registration task? 3. I think the authors should add information about how different image generation results affect the estimated pose for the same pair of point clouds, or provide the mean and variance of improvements over the baseline after multiple executions. Supplementary Material: I reviewed the supplementary materials, including More Quantitative Analysis and More Visualization Results of Match-ControlNet.
Relation To Broader Scientific Literature: - DUSt3R: Geometric 3D Vision Made Easy [CVPR 2024] - This paper discusses the problem of pose estimation using only images - Zero-1-to-3: Zero-shot One Image to 3D Object [ICCV 2023] - Zero123++: a Single Image to Consistent Multi-view Diffusion Base Model - The above two papers discuss how to better inject multi-view consistency into diffusion models Essential References Not Discussed: None Other Strengths And Weaknesses: This paper combines the generative capabilities of ControlNet with Point Cloud Registration in an application-oriented article. Its strength lies in providing a new perspective on the registration problem, using generative methods to compensate for the lack of image-point cloud pairs. Its weakness is that the paper reads more like an experimental report: it explains neither theoretically nor through rigorous experimental analysis why this approach enables ControlNet to achieve multi-view texture consistency. Other Comments Or Suggestions: I hope the authors can show more visualization results of point cloud overlap after matching, rather than just the images generated by ControlNet. Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Q1: Lack of joint distribution modeling of multi-view images?** **A1:** We respectfully clarify that our coupled denoising mechanism has implicitly modeled the joint distribution of multi-view images (i.e., cross-view images in our task). Formally, the likelihood over the cross-view image pair can be expressed as $\mathbb{E}[p_\theta({x}^{P}, {x}^Q)]$ (we omit conditional variables for simplicity). By concatenating the cross-view images into a unified variable ${x}^{PQ} = \mathrm{Cat}([{x}^P, {x}^Q])$, we can model the joint distribution of cross-view images via the likelihood over the concatenated representation: $\mathbb{E}[p_\theta({x}^{PQ})]$. As such, we can derive a variational lower bound on the data log-likelihood for optimization: $\mathcal{L} = \mathbb{E}_q\big[\log p_\theta(x_0^{PQ} \mid x_1^{PQ}) - \sum_{t>1} D_{\mathrm{KL}}\big(q(x_{t-1}^{PQ} \mid x_t^{PQ}, x_0^{PQ}) \,\|\, p_\theta(x_{t-1}^{PQ} \mid x_t^{PQ})\big) - D_{\mathrm{KL}}\big(q(x_T^{PQ} \mid x_0^{PQ}) \,\|\, p(x_T^{PQ})\big)\big]$. Through the reparameterization trick, maximizing this lower bound is equivalent to minimizing our training loss in Eq. 5. This derivation demonstrates that our coupled denoising implicitly models the joint distribution of cross-view images, thereby effectively enforcing cross-view consistency generation. We will clarify this in our revision. **Q2: The proposed method is a workaround and exhibits instability? Absence of explicit attention-based modules like Zero123++?** **A2:** We respectfully clarify that our method is not a workaround, but an efficient and elegant design that enables effective cross-view information fusion and consistency learning with minimal overhead. Unlike methods like Zero123++, which introduce additional complex attention modules for fusion, our attention design is native. We employ a novel coupled denoising mechanism that naturally extends the inherent intra-image self-attention to both inter- and intra-image attention, without any architectural modifications (see Eq. 4).
We note that this design effectively unlocks the zero-shot consistency generation ability of Stable Diffusion, acquired through extensive large-scale pretraining, thereby enhancing both generation quality and cross-view consistency. As evidenced by extensive experiments, our method consistently delivers stable results across diverse scenarios (see Table 5, 6, and Fig. 5). Moreover, we tested on low-overlap, in-the-wild data (from the author's room) captured by a mobile phone. The results (https://anonymous.4open.science/r/rebuttal-688D/wild_vis.pdf) further confirm its stability in unconstrained conditions (please see A4@P29t for details). **Q3: Real vs. generated images comparison.** **A3:** As shown in Table 3 (third block) in our manuscript, we compared ColorPCR using ground-truth images with our generative ColorPCR that utilizes generated images. Surprisingly, our generated images can even achieve superior performance across most metrics. We attribute this improvement to the fact that our generated images effectively mitigate calibration errors and lighting challenges inherent in real-world data, as illustrated in Fig. 6. **Q4: Comparison with image-only methods (e.g., DUSt3R).** **A4:** We would like to emphasize that our paper targets point cloud matching rather than image matching. Therefore, comparisons with image-only methods actually fall outside the scope of our research. Below, we report the score of the SOTA image-only method DUSt3R on 3DMatch. It shows that, lacking the geometric and scale information inherent in point clouds, DUSt3R demonstrates limited precision, highlighting the importance of point cloud features for accurate registration.

| Methods | Rot@5 | Rot@10 | Rot@45 | Mean | Med. | Trans@5 | Trans@10 | Trans@25 | Mean | Med. |
|---------|-------|--------|--------|------|------|---------|----------|----------|------|------|
| DUSt3R | 50.9 | 64.2 | **98.5** | 10.0 | 4.9 | 6.6 | 21.2 | 61.7 | 23.3 | 19.7 |
| Generative FCGF | **94.3** | **96.7** | 98.1 | **4.5** | **1.4** | **54.3** | **81.5** | **93.1** | **12.5** | **4.7** |

**Q5: Variance analysis across generated images.** **A5:** Thank you for your suggestion. We repeated the evaluations on 3DMatch using varying random seeds (123, 1234, 12345) for Match-ControlNet. The mean and variance of Generative FCGF are 93.7 ± 0.6 for Rot@5 and 54.1 ± 0.1 for Trans@5, which still exhibits a significant performance gain over the baseline, validating the robustness of our method. **Q6: More visualization results of point cloud overlap after matching.** **A6:** We provide more registration visualization results in the figure (https://anonymous.4open.science/r/rebuttal-688D/reg_vis.pdf), qualitatively showing our excellent precision. We will include them in our revised version. --- Rebuttal Comment 1.1: Comment: I appreciate the author's further explanation of the novelty of the method and the additional experimental results provided, but unfortunately, I was unable to open or verify any of the PDFs linked anonymously by the author. If others can offer a way to access these PDFs or if the author can provide clearer evidence regarding the supplementary results, I am willing to adjust my score based on those results. --- Reply to Comment 1.1.1: Comment: Thank you very much for your response and for your willingness to reconsider the score based on the supplementary results. We sincerely apologize for the inconvenience caused by the instability of the anonymous GitHub system. To access the supplementary PDFs, reviewers can click the “Download Repository” button on the linked anonymous webpage.
Alternatively, the direct download button link is: "https://anonymous.4open.science/api/repo/rebuttal-688D/zip", which contains all supplementary PDFs for review. Additionally, to further ensure accessibility, we have provided an anonymous Google Drive folder containing the same supplementary PDFs: "https://drive.google.com/drive/folders/1cVcv5Nw8eNUhgNaHsIf6LEJ8MBUmRa32". We truly appreciate your time and consideration. If there are still any access issues, please don’t hesitate to let us know.
Summary: This paper introduces a novel approach to point cloud registration by leveraging generative models to synthesize 2D images from 3D point clouds, enabling better feature extraction and matching for registration tasks. Traditional methods primarily rely on 3D feature matching, which often struggles in scenarios with low overlap, noise, and incomplete data. The authors propose Match-ControlNet, a framework that utilizes generative 2D diffusion models (such as Stable Diffusion) to improve geometric and texture consistency for robust point cloud alignment. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes Experimental Designs Or Analyses: Yes Supplementary Material: No other supplementary material was submitted. Relation To Broader Scientific Literature: The key contributions of Generative Point Cloud Registration are closely related to several areas in the broader scientific literature, particularly in 3D vision, generative modeling, and point cloud processing. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: $\cdot$ Novel Paradigm: The paper introduces a new approach to point cloud registration by leveraging generative 2D models to enhance 3D matching tasks, bridging a gap between 2D and 3D data processing. $\cdot$ Match-ControlNet: The proposed Match-ControlNet improves geometric and texture consistency between generated image pairs, which aids in better point cloud registration. $\cdot$ Enhanced Feature Fusion: The work integrates both zero-shot geometric-color feature fusion and XYZ-RGB fusion, providing additional visual cues for more accurate correspondence estimation. $\cdot$ Generalization and Plug-and-Play Nature: The framework can be integrated with various 3D registration methods without requiring significant modifications.
$\cdot$ Empirical Validation: Extensive experiments on benchmark datasets (3DMatch, ScanNet) demonstrate improved registration accuracy compared to existing methods. $\cdot$ Addressing Low-Overlap Issues: The proposed method shows strong performance in challenging cases with low overlap and noisy point clouds. Weaknesses: $\cdot$ Dependency on Generative Models: The approach heavily relies on the quality of the generated 2D images, which could introduce artifacts or inconsistencies in certain scenarios. $\cdot$ Computational Overhead: Generating 2D images using generative models such as Stable Diffusion and performing additional feature fusion may introduce extra computation costs. $\cdot$ Few-Shot Fine-tuning Requirement: While the method offers a zero-shot solution, performance improvements through fine-tuning indicate that additional labeled data might still be necessary for optimal results. $\cdot$ Potential Sensitivity to Viewpoint Selection: The quality of the generated images depends on the viewpoint chosen for rendering, which might impact registration performance in complex 3D scenes. $\cdot$ Limited Real-World Evaluation: The datasets used (3DMatch, ScanNet) are standard benchmarks, but real-world performance in applications like autonomous driving or robotics remains to be seen. Overall, the paper presents a promising direction for improving point cloud registration by incorporating generative models, though computational efficiency and generalization to diverse real-world scenarios might require further investigation. Other Comments Or Suggestions: None Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Q1: The approach heavily relies on the quality of the generated 2D images, which could introduce artifacts or inconsistencies in certain scenarios?** **A1:** The image generation quality is indeed crucial to the overall performance. Notably, our Match-ControlNet successfully unlocks the generalizable zero-shot consistency generation ability inherently learned through the large-scale pretraining of Stable Diffusion and ControlNet (see Sec. 3.2 and Sec. 3.3). This large-scale pretraining on consistency generation can effectively help mitigate potential artifacts and ensure strong consistency across a wide range of scenarios. On top of that, our coupled denoising, coupled prompt guidance, and few-shot consistency fine-tuning strategy further enhance generation reliability. These components are extensively validated through comprehensive real-world benchmark experiments, especially in challenging scenarios with low overlap, occlusions, and cluttered environments. **Q2: About computational overhead.** **A2:** While image generation and feature fusion introduce additional computational cost, they substantially enhance the quality of geometric descriptors, leading to significantly improved registration robustness. To better balance performance and efficiency, our future work will explore single-step denoising techniques and knowledge distillation mechanisms to accelerate both image generation and feature fusion. **Q3: Few-shot fine-tuning requirement.** **A3:** Our method already demonstrates impressive performance in zero-shot scenarios (as shown in Fig. 8). The few-shot fine-tuning, requiring only a minimal number of image pairs (~3K samples), further improves accuracy. We believe that this lightweight requirement is both practical and consistent with common industry practices. **Q4: Potential sensitivity to viewpoint selection?** **A4:** We thank the reviewer for raising this important point.
In practice, our method exhibits strong robustness to viewpoint selection. Notably, our extensive experiments on the ScanNet and 3DMatch benchmarks already cover a wide range of diverse and challenging viewpoint configurations. Across these varying viewpoint conditions, our method consistently achieves significant improvements in registration performance, demonstrating its reliability under viewpoint selection. We attribute this robustness to the powerful generalization ability of Match-ControlNet, which benefits from the large-scale pretraining of foundation models, as well as our coupled conditional denoising design. We will incorporate this discussion into the revised manuscript. **Q5: About real-world applicability in autonomous driving or robotics.** **A5:** We appreciate the reviewer’s concern. **(i)** While 3DMatch and ScanNet are standard benchmarks, it is important to note that they are constructed from real-world RGB-D scans and have been widely adopted in robotics and embodied AI research, especially ScanNet, which is a common benchmark for indoor robotic perception; **(ii)** Furthermore, these datasets capture realistic and challenging conditions, such as low overlap, occlusions, and cluttered layouts, which are highly representative of deployment scenarios in indoor robotics; **(iii)** Regarding autonomous driving scenarios, as shown in the figure (https://anonymous.4open.science/r/rebuttal-688D/outdoor_vis.pdf), our Match-ControlNet can generate cross-view consistent images from partial LiDAR scan-based sparse depth maps, demonstrating promising generation effectiveness in outdoor self-driving scenes (please refer to A1@synA for more details); **(iv)** Moreover, we tested our method on in-the-wild, low-overlap data (from the author's room) captured by a mobile phone. 
The resulting generations (https://anonymous.4open.science/r/rebuttal-688D/wild_vis.pdf) further confirm the robustness of our approach in the real-world, unconstrained environment (please refer to A4@P29t for more details). We will further expand the discussion on applications to autonomous driving and robotics in our revision. --- Rebuttal Comment 1.1: Comment: Thanks for your response. My concerns have been addressed. I keep my score. --- Reply to Comment 1.1.1: Comment: Thank you for your response. We're happy to hear that our clarification resolved your concerns, and we appreciate your time and effort in reviewing our paper.
Summary: This paper proposes a new 3D registration method, Generative Point Cloud Registration, which connects advanced 2D generative models with 3D matching tasks to improve registration performance. The key idea in this paper is to generate cross-view consistent image pairs that are well aligned with source and target point clouds, so as to achieve geometric-color feature fusion to promote robust matching. Experimental results show that it can be seamlessly integrated into various registration methods to enhance their performance. Claims And Evidence: Yes Methods And Evaluation Criteria: The proposed method is of great significance in the field of point cloud registration, and an innovative idea of point cloud registration is proposed. Theoretical Claims: There are no theoretical claims. Experimental Designs Or Analyses: The experimental setting was adequate and reasonable; the only deficiency is that no analysis was carried out on outdoor scenes, regardless of whether the results would be good or bad. Supplementary Material: Yes, I read the supplement. More results analysis. Relation To Broader Scientific Literature: This paper presents a new idea in the field of point cloud registration. The effect of point cloud registration is enhanced by a generative method. Essential References Not Discussed: This paper, which also approaches registration from the perspective of generative models, lacks a reference to FreeReg (ICLR 2024). [1] Wang H, Liu Y, Wang B, et al. Freereg: Image-to-point cloud registration leveraging pretrained diffusion models and monocular depth estimators[J]. arXiv preprint arXiv:2310.03420, 2023. Other Strengths And Weaknesses: Strengths: 1. The experimental results are excellent, demonstrating strong performance on both the 3DMatch and ScanNet datasets. 2. The writing of the paper is well-executed, and the content is easy to understand. 3. This paper has some innovation.
From the perspective of generation, it provides a new idea for the point cloud registration field. From another point of view, however, there are many methods that enhance point cloud registration through the texture and color information of 2D images, such as ColorPCR (CVPR24), PointMBF (CVPR24), PEAL (CVPR23). Weaknesses: 1. The applicability of the method in this paper is poor; I personally understand that the method cannot be applied to outdoor LiDAR scenes, because this paper generates 2D images from depth maps. In follow-up work, some experiments could be added to verify whether it is effective for outdoor scenes, and whether this problem can be solved should be considered in future work. 2. No code is provided for verification. Other Comments Or Suggestions: N/A Questions For Authors: I have a small question: as shown in Figure 5, how do you ensure that the same texture and color information is generated in the overlapping area when faced with low overlap? If this cannot be guaranteed, I think performance should not improve significantly, or may even decrease, at low overlap. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Q1: Discussion on applicability in outdoor LiDAR scenes.** **A1:** We sincerely appreciate this insightful comment. Our current Match-ControlNet indeed targets leveraging depth maps rather than LiDAR data for image generation. Compared to forward-facing depth maps, outdoor LiDAR point clouds provide omnidirectional (360-degree) scans that cannot be directly represented as conventional single-viewpoint depth maps due to their inherent multi-directional nature. In this rebuttal, we demonstrate the feasibility of partially projecting LiDAR points from predefined viewpoints to produce sparse depth maps for Match-ControlNet generation, yielding promising preliminary generation results as illustrated in the figures (https://anonymous.4open.science/r/rebuttal-688D/outdoor_vis.pdf). For future work, we plan to comprehensively address omnidirectional scans by employing multi-view depth maps or equirectangular images (a 2D image representation of LiDAR data) to adapt our Match-ControlNet to LiDAR point clouds. We will include the above discussion in our revised manuscript. **Q2: Consistency of texture information under low overlap?** **A2:** Our texture consistency under low overlap can be largely ensured by the following four aspects: **(i) Extensive pre-training knowledge:** Our Match-ControlNet effectively unlocks the intrinsic zero-shot texture consistency capabilities of foundation models, which are derived from large-scale data pre-training. We believe this extensive training provides a strong foundation for ensuring texture consistency across diverse real-world challenges, including low-overlap scenarios.
**(ii) Joint image-level and prompt-level texture consistency enhancement:** Our coupled conditional denoising mechanism and the coupled prompt guidance jointly promote texture consistency by incorporating both image-level texture consistency interaction and prompt-level texture consistency guidance, significantly enhancing the texture coherence in low-overlap settings; **(iii) Effective texture consistency fine-tuning mechanism:** We introduce a few-shot consistency fine-tuning strategy (see Sec. 3.4) that requires only a small number of samples to further enhance the generation robustness of Match-ControlNet, particularly in challenging scenarios; **(iv) Extensive validation under low-overlap conditions:** To validate the robustness of our method under low-overlap conditions, we significantly increased the viewpoint separation (from 20 to 50 degrees on ScanNet and from 20 to 40 degrees on 3DMatch) as detailed in Line 311 (right column) and Line 362 (left column). Both quantitative and qualitative results (Fig. 7, first image; Fig. 8, third column) demonstrate reliable texture consistency. Additionally, we tested our method on in-the-wild, low-overlap data captured by a mobile phone. The resulting generations (https://anonymous.4open.science/r/rebuttal-688D/wild_vis.pdf) further confirm the robustness of our approach (please refer to A4@P29t for more details). **Q3: Reference issue.** **A3:** We thank the reviewer for pointing out the missing FreeReg reference. We will include and thoroughly discuss this work in the revised manuscript. **Q4: Code availability.** **A4:** We will release our implementation publicly upon acceptance to facilitate reproducibility and enable broader verification by the community. --- Rebuttal Comment 1.1: Comment: Thanks for your response. My concerns have been addressed. I will raise the score appropriately. --- Reply to Comment 1.1.1: Comment: Thank you very much for your positive comments.
We’re glad to hear that our response helped address your concerns. We truly appreciate the time and effort you put into reviewing both our paper and our rebuttal, and we’re especially grateful for your willingness to reconsider the score.
ABNet: Adaptive explicit-Barrier Net for Safe and Scalable Robot Learning
Accept (poster)
Summary: This paper proposes ABNet, an adaptive explicit-barrier net for safe and scalable robot learning. ABNet is a combination of multiple safe control nets, such as BarrierNet and dMPC, as well as the proposed explicit-barrier net. The authors claim that ABNet has the potential to scale to a larger safe foundation model and show that ABNet is better than existing approaches in terms of robustness and safety guarantees. Claims And Evidence: Yes. From the experiments, we can see that ABNet outperforms the existing baselines. Methods And Evaluation Criteria: Yes. The explicit-barrier net explicitly computes the optimal control action as the output, which is different from the existing implicit approaches. The experiment setup aligns well with the purposes of the proposed methods. The experiments include 2D navigation and vision-based autonomous driving with obstacles, which are common examples for testing control safety. The authors compare their approach with 6 baselines, which should provide enough coverage. Theoretical Claims: I've checked the correctness in the main text but not in the appendix. The main text looks good to me. Experimental Designs Or Analyses: The experiments are fine for me. Supplementary Material: No Relation To Broader Scientific Literature: I feel this paper does have its value and contribution to the community, as the authors claimed, towards a large safe foundation model. The way of considering the combination of multiple safe approaches is interesting to the community. Essential References Not Discussed: The part on references/related works is okay, but it would be nice to have more recent papers included, like the ones published in 2023 and 2024. Other Strengths And Weaknesses: Okay, so here come the weaknesses. 1. Novelty. The explicit barrier can be seen as a novelty, but it is not significant at all. The way of linearly combining control outputs from different heads is not new.
And thus the novelty of the whole paper is concerning. 2. Regarding novelty, as the authors want to push this paper towards a " safe foundation model", I would suggest considering adding a self-attention layer for the outputs by different heads. In this way, even if some heads' output actions are not safe (due to learning errors, etc), the self-attention auto-weighting has the ability to correct the final output. Strengths: 1. The paper is easy to follow and well-written in general. 2. It is interesting to see the concept of the 'large safe foundation model'. 3. The experiments are well-designed and validate the effectiveness. Other Comments Or Suggestions: See above Questions For Authors: Are there existing works of 'safe foundation model'? If yes, has this paper discussed them? Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We really appreciate the reviewer for all the positive and helpful comments. We address the remaining comments below. (1) The part of references/related works is okay, but it would be nice to have more recent papers included, like the ones published in 2023 and 2024. **Response:** We will add more recent references as suggested by other reviewers, especially those in safe RL and CBFs for manipulation. **Weakness 1** Novelty. The explicit barrier can be seen as a novelty, but it is not significant at all. The way of linear combined control output by different heads is not new. And thus the novelty of the whole paper is concerning. **Response:** The proposed explicit barrier is significant in improving the computational efficiency of scalable training: as demonstrated in Fig. 3, we can significantly reduce the computation time with our method compared to the benchmark (dQP), from both the batching and training perspectives. Computational efficiency is very important in large safe foundation models, especially when those models are trained in a scalable way (as proposed in this paper). The linearly combined control output is simple, but also very effective. However, proving the properties (e.g., safety) of the combined control is non-trivial and challenging. The main contribution of our work is to show the safety of the combined control (Thms 3.1 and 3.2, and their proofs in Appendix B), which has not been done in the literature. We showed the existence of a new HOCBF constraint from the combined control, and thus the idea or approach of the safety proof is indeed novel. **Weakness 2** Regarding novelty, as the authors want to push this paper towards a "safe foundation model", I would suggest considering adding a self-attention layer for the outputs by different heads. In this way, even if some heads' output actions are not safe (due to learning errors, etc.), the self-attention auto-weighting has the ability to correct the final output.
**Response:** Thanks a lot for the constructive suggestion. Adding a self-attention layer for the outputs is indeed very interesting. However, proving the safety of the combined control is non-trivial and challenging with such self-attention layers. We will further explore this. One possibility is to consider the linear attention mechanism (e.g., [1] Transformers are rnns: Fast autoregressive transformers with linear attention). We will add the discussion in the revision. **Question 1** Are there existing works of 'safe foundation model'? If yes, has this paper discussed them? **Response:** We found some survey papers regarding the safety of foundation models (e.g., On the opportunities and risks of foundation models), and we will discuss them in our revision, especially from the perspective of the importance of safety in foundations models, which will make our work stronger. Please let us know if the reviewer has any other suggestions for references.
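As a small illustration of the combined-control safety discussed in the Weakness 1 response, the following NumPy sketch (a hedged example with made-up constraint and head values, not the paper's code) checks the basic convexity fact that such proofs build on: if every head's control satisfies a shared affine CBF-style constraint $a^\top u \ge b$, then any convex (softmax-weighted) combination of the heads satisfies it too:

```python
import numpy as np

rng = np.random.default_rng(1)
a, b = np.array([1.0, -0.5]), 0.2      # hypothetical affine safety constraint: a @ u >= b

# Three hypothetical head outputs, each made individually safe by
# projecting onto the half-space whenever the raw sample violates it.
heads = []
for _ in range(3):
    u = rng.normal(size=2)
    if a @ u < b:
        u = u + (b - a @ u) / (a @ a) * a   # projection onto {u : a @ u >= b}
    heads.append(u)

logits = rng.normal(size=3)
w = np.exp(logits) / np.exp(logits).sum()   # convex weights (softmax)
u_blend = sum(wi * ui for wi, ui in zip(w, heads))

# The convex combination stays in the safe half-space.
assert a @ u_blend >= b - 1e-9
```

The self-attention weighting suggested in Weakness 2 would amount to making `w` input-dependent; the same convexity argument goes through as long as the weights remain nonnegative and sum to one.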
Summary: The paper proposes to embed control barrier constraints into neural layers to enforce safety assurance on the network output. In contrast to the implicit formulation with differentiable optimization, the paper argues for a specific QP admitting an explicit solution form, so as to avoid inefficient batching through multi-threading. The explicit barrier layers are duplicated to construct multiple heads, with each accommodating specific safety features, and can be combined via a linear combination. The results show graceful scale-up with respect to the number of batches and network heads compared to the differentiable optimization counterpart. Imitation learning results on 2D robot navigation, manipulation and vision-based driving show improvement in deriving less conservative safe behaviors. Claims And Evidence: The main claims about the advantage of the explicit barrier QP include: 1. More efficient inference and batching for training. This is well corroborated by the results in Figure 3, in which the proposed approach clearly shows the benefit of avoiding differentiable optimization. 2. Explicit barrier nets improve the learning performance while conforming to safety constraints. This is demonstrated in the 2D robot navigation and two-link manipulation examples. 3. Multi-head explicit barrier nets are possible due to reduced computational costs, and the heads can be combined with each focusing on specific safety-relevant features. This is showcased in the vision-based driving example, which also proves the possibility of using unstructured context input z. Methods And Evaluation Criteria: The method is based on (Luenberger, 1997), which shows that a QP with only two constraints admits an explicit solution parameterization. To this end, the paper proposes to partition constraints into two sets and choose the two that are closest to activeness. The method makes a lot of sense in that explicit parameterization is faster and more amenable to vectorized evaluation.
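To make the two-constraint explicit solution concrete, here is a hedged sketch: an active-set enumeration for projecting a reference control onto two half-planes, i.e. minimizing $\tfrac12\|u-u_{\mathrm{ref}}\|^2$ subject to $Au \ge b$. This illustrates the general closed-form idea only; it is not the paper's exact parameterization from (Luenberger, 1997):

```python
import itertools
import numpy as np

def explicit_two_constraint_qp(u_ref, A, b, tol=1e-9):
    """Closed-form solve of  min 0.5*||u - u_ref||^2  s.t.  A @ u >= b,
    with A of shape (2, n): enumerate the four possible active sets and
    keep the best feasible candidate -- no iterative QP solver needed."""
    best, best_val = None, np.inf
    subsets = itertools.chain.from_iterable(
        itertools.combinations(range(2), k) for k in range(3))
    for active in subsets:
        if not active:
            u = u_ref.copy()
        else:
            Aa, ba = A[list(active)], b[list(active)]
            try:
                # Equality-constrained projection: u = u_ref + Aa^T lam with
                # Aa @ u = ba  =>  lam = (Aa @ Aa^T)^{-1} (ba - Aa @ u_ref)
                lam = np.linalg.solve(Aa @ Aa.T, ba - Aa @ u_ref)
            except np.linalg.LinAlgError:
                continue  # parallel / degenerate constraints
            u = u_ref + Aa.T @ lam
        if np.all(A @ u >= b - tol):          # keep only feasible candidates
            val = 0.5 * np.sum((u - u_ref) ** 2)
            if val < best_val:
                best, best_val = u, val
    return best
```

For example, with `u_ref = [0, 0]` and constraints `u[0] >= 1`, `u[1] >= -1`, only the first constraint is active and the returned solution is `[1, 0]`.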
The benchmark includes 2D robot navigation, 2-DOF robot arm and vision-based car driving. The experiments cover different dynamics and vision as contextual input. Theoretical Claims: The paper provides the proof on safety assurance of blended network output subject to control barrier constraints in (3). The correctness is briefly checked but not thorough as the reviewer is not familiar with the adaptive CBF cited from literature. Experimental Designs Or Analyses: The experiment design covers claims on computational efficiency (Figure 3), safety-assured learning (Table 1, 2, 3), multi-heads to attend different image features (Figure 10) and robustness against image corruption (Table 4). The design is well made and has a good coverage on all the claims made by the paper. Supplementary Material: The supplementary materials contain code to reproduce the three experiments. Relation To Broader Scientific Literature: The contribution is related to broader literature on using implicit function for layer design. The findings on effective learning of structured output are in line with existing works arguing for differentiable computing as building blocks for end-to-end learning. The idea of leveraging explicit solution parameterization can be inspiring to works beyond safe learning. Essential References Not Discussed: No extra references are needed. Other Strengths And Weaknesses: The paper needs a clarification on the scope of systems. The part on implicit-barrier (from line 125) suggests the target dynamics is of a general control affine form $\dot{x} = f(x) + g(x) u$ while an assumption is made on relative degree $m$. The assumption is not made in the neighbourhood of some $\bar{x}$ to ensure $g(\bar{x}) \neq 0$ for the relative degree $m$. In general, I think $g(x)$ may not guarantee the same relative degree everywhere in the state space. The experiments do not contain such a case as the dynamics are either fully-actuated or with a constant control matrix. 
It is hard to tell what happens if the dynamics violate this relative-degree assumption or when $m$ is misspecified for some states. The paper advocates scalable learning while the examples are still limited to low-dimensional systems and action spaces. I guess the scalability statement here is about the computational cost of applying differentiable optimization on problems of such a scale. However, I think the contribution can be much more significant if results can be attained on systems with higher DOFs. The paper seems to take visual observations as piece-wise constant input. This reads as a strong assumption by breaking the causality between state and generated observation, while it is understandable this can save the analysis of differentiating the observation model. Other Comments Or Suggestions: No comments in this regard. Questions For Authors: 1. Can the method be demonstrated to work on dynamics without a globally constant relative degree? How much may it invalidate the safety assurance? 2. The minimum trick to select "the most active" constraints seems general. What could be its implications for other applications? Do we need some care on how to partition the constraints when the problem scales up? I am wondering whether it can work for optimizing along a trajectory where the state may change the activeness of constraints and hence the constraints considered in the QPs. Will this create non-differentiability or a complex loss landscape for gradient-based optimization? 3. Differentiable optimization sometimes suffers from infeasible problem parameterization. Will that also be an issue here? 4. The vision-based driving example takes images as input and generates control from the ABNets (line 813-814). How is this done without the state input? Is the demonstrated safety formally assured, or is it an empirical verification in this specific task? ##update after the rebuttal I would like to thank the authors' responses and clarifications, especially on the relative degree.
Two points I would like to make after thinking over the rebuttals: - It is good to know that there are already potential "patches" for the cases beyond constant relative degree as in the experiments. I see some issues with resorting to these patches to counter the original criticism. Being able to define safety constraints to let the robot operate in a subspace with constant relative degree is not a direct resolution to the criticism on the rigour of the theorem statement. The original statement suggests a fixed relative degree, which implicitly assumes this applies to all control matrices. Providing solutions about what we could do if the (implicit) assumption didn't hold does not help the clarity and rigour of the problem scope. I am also unsure about the implications for empirical performance if state constraints are imposed to make the analysis valid in a subspace. Many underactuated tasks actually rely on going through uncontrollable states and exploiting passive dynamics. - The clarifications on the vision-based driving and state prediction model make sense. However, ignoring the causal relation between the observation model (state -> image) appears to break the analysis chain of the control loop. This could be fine for empirically driven results, but I feel it is a bit confusing for a work promoting provable control, as I thought the example was intended to show the proof also applies to systems with unstructured sensory observations. Overall, I think the paper should still have a good chance to be accepted, while I have to admit my confidence is not as strong as it was before the rebuttal, given the way the raised questions were approached. My recommendation remains unchanged. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for all the positive and constructive comments. We address the remaining comments below. (1) The paper needs a clarification on the scope of systems. The target dynamics is of a general control affine form while an assumption is made on relative degree m... **Response:** The proposed method can be applied to general nonlinear systems. The problem that the coefficient of the control in the CBF constraint becomes 0 at some state is the so-called ``singularity’’ problem (if it is zero for all states, then the relative degree should be increased by one until it is non-zero). This problem can be solved by defining a CBF that avoids those states (e.g., see [1] High-order barrier functions: Robustness, safety, and performance-critical control). When there are multiple controls, there may be the so-called mixed relative degree problem, and we can address this by defining auxiliary dynamics to make all the controls show up (e.g., see [2] Control barrier functions for systems with multiple control inputs). When we have non-affine control systems, we can also define auxiliary dynamics to ensure that the method can still work (e.g., see [3] On the forward invariance of neural ODEs). We will make this clear in the revision. (2) Limited to low-dimensional system and action space... **Response:** We totally agree with the reviewer. The scalability is mainly in terms of the safety of the robot, the size of the safe robot learning models, and the computational cost of the model. Our method can also work for high DOF scenarios (e.g., manipulation) by replacing the corresponding system dynamics with the high DOF dynamics (e.g., manipulator dynamics) in the model. We will discuss this in the revision. (3) The paper seems to take visual observations as piece-wise constant input, ... **Response:** We thank the reviewer for pointing this out. 
Although we are taking piece-wise constant input for visual observations, the proposed model works for continuous visual observation as well. This is indeed a very interesting direction that involves differentiating the image input (e.g., in the form of optical flow). We will add this discussion in the revision. **Question 1** **Response:** Our method works for dynamics without a globally constant relative degree as well. There are many approaches to deal with such problems, e.g., by defining a CBF that avoids those states (e.g., see [1] High-order barrier functions: Robustness, safety, and performance-critical control). In fact, we only need to care about the states at the boundary of the unsafe sets that may make the coefficient of the control in HOCBFs become zero, and safety can still be preserved in such cases. **Question 2** **Response:** The implication for other applications is that we always consider the most threatening factors in guaranteeing safety, and this may not necessarily just be related to the activeness of constraints, but also to the importance of those constraints. For example, autonomous vehicles should always follow traffic rules, and they should always prioritize the most important rule (e.g., ensuring collision avoidance with pedestrians) over less important rules (e.g., lane or road keeping). We do need to consider how to partition the constraints when the problem scales up, and this can be handled by leveraging other factors (such as ethics and local culture in driving), e.g., see [1] Liability, Ethics, and Culture-Aware Behavior Specification using Rulebooks and [2] Rule-based optimal control for autonomous driving. Since the model outputs controls in real time, we can definitely change the activeness of constraints in real time as well.
We implement the model in discrete time, so it will not cause any differentiability problems, since we can address the inter-sampling issue (safety in continuous time) using event-triggered approaches (e.g., [3] Event-triggered control for safety-critical systems with unknown dynamics). Another way to address this differentiability problem is to use the soft-min approach (as discussed right before equation (4)) to combine and consider all the constraints. **Question 3** **Response:** This is really a good point. Since we always consider the two most important constraints at each time step, and we can obtain the closed-form solution, we have not found any infeasibility issue so far (however, there may be other problems that are worth further exploring in future work). This provides a promising way to address the infeasibility problem, and we will add the discussion in our revision. **Question 4** **Response:** We also have a state net to predict the vehicle and obstacle states from image observations (as also demonstrated in the BNet). The demonstrated safety is based on the assumption that the state is reliably predicted. In cases where there are uncertainties in the prediction of the states, we can use robust CBFs in our framework. We will add the discussion in the revision.
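The soft-min combination mentioned in the reply can be sketched in a few lines. This is an illustrative log-sum-exp soft-min (the function name and constants are our own, not the authors' implementation); it is a smooth under-approximation of the hard minimum, so using it as a single combined barrier is conservative, i.e., safe:

```python
import numpy as np

def soft_min(h, k=10.0):
    """Smooth, differentiable under-approximation of min(h).

    Satisfies  soft_min(h) <= min(h) <= soft_min(h) + log(len(h))/k,
    so it tightens to the hard min as the sharpness k grows.
    """
    h = np.asarray(h, dtype=float)
    m = h.min()  # shift for numerical stability of the exponentials
    return m - np.log(np.exp(-k * (h - m)).sum()) / k

# Two barrier values, e.g., distance margins to two obstacles
h = [0.8, 0.25]
combined = soft_min(h, k=20.0)
assert combined <= min(h)                          # conservative
assert combined >= min(h) - np.log(len(h)) / 20.0  # tightness bound
```

Because the soft-min is differentiable everywhere, gradients flow through the combined constraint during training, avoiding the non-differentiability of switching between active constraints.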
Summary: This paper addresses a critical challenge in AI-enabled robotics—safe learning—by introducing the Adaptive explicit-Barrier Net (ABNet). The authors highlight the limitations of existing safe learning methods, including poor scalability, inefficiency, and instability under noisy inputs. ABNet overcomes these issues by explicitly incorporating safety barriers into a closed-form model, ensuring provable safety guarantees. A key innovation is its multi-head structure, allowing different model heads to learn safe control policies from distinct features, thereby improving training efficiency and stability without requiring a monolithic large model. The approach is validated across diverse robotic tasks, including 2D obstacle avoidance, safe manipulation, and vision-based autonomous driving, demonstrating superior robustness and safety compared to existing models. The paper’s contributions are significant in both theoretical and practical aspects, providing a promising direction for scaling safe learning toward foundation models for robotics. Claims And Evidence: I believe that some of the authors’ claims are not well-supported by the experiments, and the writing of this paper is not particularly clear, making it somewhat difficult to understand. Similar to Control Barrier Functions (CBF), this work primarily addresses the problem of safe robot learning. However, one major concern is the fundamental distinction between the problem studied in this paper and that of Safe Reinforcement Learning (Safe RL). Safe RL also frequently employs Barrier Functions to handle safety constraints, as seen in works such as: 1. Penalized Proximal Policy Optimization for Safe Reinforcement Learning 2. IPO: Interior-point Policy Optimization under Constraints Methods And Evaluation Criteria: Furthermore, the complexity of the experimental environments used in this paper is not well-articulated, making it difficult to assess the significance of the results. 
Could the authors provide further clarification on the environments used in the study? Theoretical Claims: Yes. I see the appendix and the main page. Experimental Designs Or Analyses: Yes, some concerns; see above. Supplementary Material: Yes, I see the appendix. Relation To Broader Scientific Literature: Additionally, does this method remain effective in environments requiring complex contact handling? Safe robot learning often considers constrained environments, such as those found in safety-gymnasium. The reviewer is interested in understanding how the proposed algorithm performs in more complex scenarios. The paper's structure may make it particularly difficult for readers to follow, especially given the lack of clarity in the Background and Related Work sections. Safe learning has been extensively explored in the context of Safe Reinforcement Learning, with optimization approaches based on Barrier Functions and Lagrangian formulations. Compared to the following works: 1. Reward Constrained Policy Optimization 2. Constrained Policy Optimization Essential References Not Discussed: Some related work is not discussed in the paper.
Questions For Authors: See above. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We appreciate the reviewer for all the helpful and constructive comments. We address all the concerns below. (1) Claims are not well-supported by experiments, writing is not particularly clear. **Response:** Our main claim is the safety guarantee of the model, and this is supported by the SAFETY or CRASH in Tables 1-4. The computational efficiency of our method is demonstrated in Fig. 3 with comparison to benchmarks. The performance improvement in scalable learning is shown in Figs. 4-6. We will improve the writing with a preliminary on CBFs and BNet in the revision. (2) Fundamental distinction with Safe RL and optimization approaches based on BFs and Lagrangian **Response:** There is indeed a fundamental distinction between our method and Safe RL. In safe RL, safety is usually taken as a component of the reward function, and thus it can only improve safety without guarantees, whereas our method can formally prove safety, as shown in Thms. 3.1 and 3.2. Safety guarantees are also demonstrated in Tables 1-4. We will add discussions on safe RL and include the references in the revision. (3) Complexity of environments not well-articulated. Could the authors provide clarification on the environments? **Response:** The complexity of the experimental environments is given in Appendix section C. In summary, we consider different nonlinear dynamics and constraints across three different tasks. We have one complex vision-based end-to-end autonomous driving experiment in which the model directly takes the front-view image as input and outputs the safe control. All the experiments are done in VISTA, a sim-to-real driving simulator that generates driving scenarios from real driving data [Amini et al., 2022]. VISTA allows us to train the model with guided policy learning. This learning method has been shown to work for model transfer to a full-scale real vehicle. This is also given in Appendix section C.4.
(4) Does this method remain effective in environments requiring complex contact handling? **Response:** Our method is still effective in environments requiring complex contact handling, in which case we just need to replace the dynamics with those of the manipulator. The safety may be different in contact scenarios (e.g., force constraint instead of collision avoidance). The CBF method has been widely used in manipulation in complex environments (e.g., Safe Multi-Robotic Arm Interaction via 3D Convex Shapes). Since our model is based on the CBF method, we can apply it to those complex scenarios when learning is involved. (5) Lack of clarity in the Background and Related Work. **Response:** We will add preliminaries on CBFs and BNet to improve the clarity, and discuss safe RL and other related works further in the Related Works section (now the related work is given in Sec. 5). (6) Some related work is not discussed in the paper. **Response:** We will add the references suggested by the reviewer and other reviewers (e.g., safe RL), and discuss them. (7) What are the key advantages of using Barrier Functions in the proposed approach? **Response:** The key advantage of using barrier functions in the proposed method is that we can formally train models that have safety guarantees in a scalable way, as shown in Thms. 3.1 and 3.2 and Tables 1-4. Another advantage is the high efficiency of our proposed method, as shown in Fig. 3. (8) Applicability constrained by known and differentiable constraint functions. Uncertainties, noise, and sensor errors may further degrade performance, potentially leading to failures. **Response:** There are many related works that learn differentiable constraint functions from demonstrations, such as [1] Learning control barrier functions from expert demonstrations and [2] Synthesis of control barrier functions using a supervised machine learning approach.
Therefore, we can combine existing literature with our ABNet to learn constraints in scenarios that the safety is not predefined. This has been made clear in the future work section 6. When there are uncertainties, noise or sensor errors, we can employ robust CBF methods in our model (e.g., Fault tolerant neural control barrier functions for robotic systems under sensor faults and attacks). We will make this clear in the revision. (9) large language models (LLMs) (safely aligned): preventing harmful content—requires defining constraint functions for “human safety,” How do the authors view this issue in the context of their approach? **Response:** Thanks a lot for pointing out the safety problem of LLMs, we have been actively working on this and on other models or problems where safety is not clearly defined. In LLMs, in order to avoid toxic language, we may learn a differentiable risk function according to the harmfulness of the content, and then we can take this risk function as a CBF in our approach such that we ensure that the harmfulness of generated content be below some level. We will discuss this in the revision. --- Rebuttal Comment 1.1: Comment: Thank you very much for the authors’ response. Over the past two days, I have revisited the manuscript and the rebuttal while also reviewing the comments from other reviewers. Thank you for the reviewer’s response. I have decided to keep my score unchanged. I sincerely appreciate the efforts the authors have made during the rebuttal period. --- Reply to Comment 1.1.1: Comment: We really appreciate the reviewer for the comment. Could you please let us know the reasons of keeping the score unchanged? Any point would help a lot to further improve our paper in the revision. Please let us know if you have any remaining concerns. Thank you, Authors.
Summary: The paper presents ABNet, a novel framework that utilizes attention mechanisms to handle diverse input patterns, while incorporating barrier functions to maintain the system state within a safety set, ensuring forward invariance. This approach aims to improve the scalability and robustness of robot learning by enabling each head of BarrierNet to focus on different aspects of the observation space, thereby facilitating the development of safe control policies in a variety of environments. Claims And Evidence: While the incorporation of attention mechanisms into BarrierNet represents a core contribution, the manuscript would benefit from a more thorough technical exposition regarding the non-trivial nature of this integration. The authors should elaborate on: (1) specific technical challenges encountered during this integration process, (2) innovative solutions developed to overcome these challenges, and (3) the distinctive advantages conferred by this particular implementation of attention mechanisms. Such technical insights would significantly enhance our understanding of the methodological novelty and provide clearer differentiation from conventional architectural adaptations. Methods And Evaluation Criteria: The experimental scenarios are relatively simplistic, consisting of static environments with limited dynamics, and lack sufficient qualitative analysis, such as video demonstrations comparing the method to baseline approaches. Theoretical Claims: The method does not appear to ensure optimal task performance. As stated in the paper, "we use NMPC to collect ground-truth controls (training labels) with corresponding states," implying that the upper limit of ABNet's task performance is constrained by the performance of NMPC (e.g., minimum time to reach a target). Additionally, optimality does not seem to be the primary focus in training. 
By employing imitation learning with barrier functions, safety appears to be prioritized, which could further impact task performance. Is there a mechanism to balance task performance and safety? Furthermore, the criteria for selecting the penalty functions in Equation 3 are not well explained. How are these values chosen, and how does the method balance conservatism and performance optimality? Experimental Designs Or Analyses: Figure 6 does not clearly highlight the performance differences between ABNet and methods like MPC or BNet. While Figure 6 includes results from MPC, Table 3 does not provide a corresponding quantitative comparison. For the method to be applicable to more complex, real-world scenarios—such as autonomous driving with dynamic obstacles like vehicles or pedestrians—consideration of the environment's external dynamics is essential. This raises concerns about the scalability of ABNet. The authors could consider testing in more advanced simulators, such as [1-2], to better showcase the method's adaptability and robustness in dynamic environments. [1] CARLA: An open urban driving simulator [2] Nuscenes: A multimodal dataset for autonomous driving Supplementary Material: All. Relation To Broader Scientific Literature: The proposed methodology may prove particularly suitable for safety-critical applications with well-characterized system dynamics. Essential References Not Discussed: [1] CARLA: An open urban driving simulator [2] Nuscenes: A multimodal dataset for autonomous driving Other Strengths And Weaknesses: 1. The approach explicitly incorporates barrier functions into neural network training, ensuring safety constraints are satisfied. 2. The modular architecture of ABNet with multiple attention heads allows scalable, incremental learning, which is promising for building complex, safe models in stages. 3. The method demonstrates robustness to noise, yielding lower variance in performance. 4. 
The method's practical application is constrained by its dependence on predefined, differentiable constraint functions. This requirement poses significant challenges in real-world implementations, where such functions are often either prohibitively complex to formulate or behave as essentially unknowable systems. Other Comments Or Suggestions: To my knowledge, large transformer models are increasingly applied in autonomous driving scenarios involving dynamic external agents, as referenced in [3]. This approach uses transformers as a backbone for safety planning based on sampled trajectories, ensuring collision-free paths while closely resembling NMPC planners or human driving behaviors. How do the authors view the online planning, and could barrier functions be integrated to enhance safety? [3] Planning-oriented Autonomous Driving Questions For Authors: Can the method handle complex contact scenarios and adapt to humanoid robots managing full-body dynamics with external force feedback? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewer for all the positive and constructive comments. (1) Specific technical challenges **Response:** There are two main technical challenges: (a) the training and testing efficiency of the scalable robot learning model; (b) a formal proof of the safety of the composed model in scalable robot learning. (2) Innovative solutions to overcome these challenges **Response:** For challenge (a), we proposed the explicit-barrier approach (in which a closed-form solution for the QP is given) to significantly improve the efficiency of the model, as shown in Fig. 3. For (b), we show the existence of a new HOCBF condition from the composed model, which has not been done in the literature ((21) of Appendix Sec. B). (3) Distinctive advantages **Response:** The most distinctive advantage of the method is the safety guarantee, as shown in Tables 1-4 (SAFETY). This is also the main contribution of our work. We also show the improvement of performance with an increasing number of explicit-barrier heads, as given in Figs. 4-6 and Table 3 (PASS). (4) Experiments are relatively simplistic and lack qualitative analysis. **Response:** We have two intuitive experiments that are easy to understand (from a dynamics and safety perspective). We also have one complex vision-based end-to-end experiment to show how our methods can work in realistic scenarios. We have shown qualitative analysis in Figs. 4, 8, 9, and 11. All the qualitative results are from videos, and we will attach them in the paper revision (the rebuttal only allows pictures). (5) Optimal performance: Is there a mechanism to balance performance and safety? **Response:** The objective of this paper is to show safety rather than optimality. However, as we solve dQPs, we can indeed ensure optimal task performance. For applications where safety is not critical, we can relax the first condition in (3) with a slack variable, and minimize it in the cost (2).
In this way, we can balance task performance and safety. We will add this in the revision. (6) Criteria for selecting penalty functions: how does the method balance conservatism and optimality? **Response:** The penalty functions in (3) are outputs of previous NNs or trainable parameters, and we do not need to select them by hand. Smaller penalty functions will make the robot more conservative. Our method finds optimal functions through training of the model to achieve a desired balance between conservatism and performance (given by the training data). We will make it clear in the revision. (7) Figure 6: performance differences between ABNet and methods like MPC or BNet. Table 3 does not provide a comparison with MPC. **Response:** There is a clear improvement of ABNet over BNet, as trajectories with BNet stop near the obstacle (shown by blue trajectories in Fig. 6), while those of ABNet can safely pass the obstacle. The MPC is the ground truth (it is computationally expensive and cannot be applied in real time), and thus it is not included in Table 3. We will make this clear in the revision. (8) Scalability to dynamic obstacles, and testing in more advanced simulators. **Response:** Our ABNet can be scaled to dynamic environments, as we just need to incorporate the dynamics and states of obstacles in (3). However, identifying dynamics and states of obstacles requires more powerful models, which do not rely on ABNet (not a limitation of ABNet). The ABNet mainly provides safety guarantees, and we can use existing literature in dynamics and state estimation to better augment ABNet in dynamic environments. We will explore this in future work, and make it clear in the revision. **Weakness:** Dependence on predefined, differentiable functions.
**Response:** There are many related works learning differentiable constraints from demonstrations, e.g., [1] Learning control barrier functions from expert demonstrations, [2] Synthesis of control barrier functions using a supervised machine learning approach. Therefore, we can combine existing literature with ABNet to learn constraints in cases where safety is not predefined. **Suggestion:** Large transformer models with safety planning. How do the authors view online planning, and could barrier functions be integrated to enhance safety? **Response:** Existing approaches with large models can improve safety; however, there are no guarantees. Our method can be integrated into them to formally guarantee safety. The transformer models can be integrated either upstream or downstream of ABNet (e.g., via linear attention such that we can still prove the safety of the model). **Question:** Can the method handle complex contact scenarios? **Response:** Our method can handle complex contact scenarios by replacing the dynamics with those of a humanoid. The safety may be different in contact scenarios (e.g., force constraint instead of collision avoidance). This would be another important application of our model, and we will discuss it in the revision. --- Rebuttal Comment 1.1: Comment: My concerns remain unresolved. In the experimental environment provided by the authors, all obstacles are static and two-dimensional. However, the authors still claim that MPC serves as a computationally expensive ground-truth method. I believe that in such a simple environment, MPC results should be included. The authors mention that the penalty function is the output of a neural network, which I find confusing. How is this penalty function trained? This aspect is not discussed in the paper. Does this imply that additional training data is required to learn the penalty function? --- Reply to Comment 1.1.1: Comment: We appreciate the reviewer for the further feedback. **1.
MPC results should be included.** **Response:** We have compared MPC with our ABNet in the vision-based end-to-end autonomous driving task; please see the table below. In summary, both methods can make the ego vehicle safely pass the obstacle (as also shown by the Safety metric, which should be $\geq 0$ for a safe control method). However, the MPC (with an interior-point method to solve the nonlinear programs) is much more computationally expensive than our proposed ABNet (0.872 s vs. 0.004 s at each time step). Since MPC involves solving nonlinear programs, the solver usually has to linearize or simplify the model internally to increase computational efficiency, which may make MPC lose its safety guarantees. Moreover, we found that MPC sometimes gives very poor solutions that are far from optimal (due to the complexity of the nonlinear dynamics and constraints). In contrast, our ABNet transforms the nonlinear optimization into differentiable QPs and then derives the closed-form solution with the proposed method. ABNet always gives well-behaved solutions with theoretical safety guarantees. We will add the results to the revision of the paper.

| Method | Crash | Pass | Safety | Computation time | Theoret. Guar. |
|---|---|---|---|---|---|
| MPC | 0% | 100% | 0.006 | 0.872 | $\times$ |
| ABNet | 0% | 100% | 1.455 | 0.004 | $\surd$ |

**2. The authors mention that the penalty function is the output of a neural network, which I find confusing. How is this penalty function trained? This aspect is not discussed in the paper. Does this imply that additional training data is required to learn the penalty function?** **Response:** Please note that the penalty function is not the output of the ABNet. The only output of the ABNet is the solution of the dQP/closed-form solution (i.e., the control of the robots). Instead, the penalty function is the input of the ABNet (**please note that this is clearly shown by the inputs $p_i$ and $p_{m,1}$ in Fig.
2**), and it is the output of the previous layer (e.g., LSTM in the vision-based driving case study). Therefore, we can just take penalty functions as some intermediate variables (or parameters) within the neural network (just like other trainable parameters of a neural network), and we do not need any additional data to train the penalty function. However, we do require the penalty function to be positive in order to ensure the safety in the ABNet, and we have used a scaled sigmoid function to ensure the penalty function to be $> 0$. In summary, the penalty function is just like other trainable parameters in a neural network, and it is optimized by the loss of the output of the ABNet (controls of a robot) using error backpropagation. **We train the ABNet like a normal neural network using error backpropagation to optimize all the parameters (including the penalty function)**.
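The two mechanisms in this reply, a scaled sigmoid keeping the learned penalty positive and a closed-form solution of a single-constraint CBF-QP, can be sketched as follows. This is an illustrative numpy toy under our own assumptions (function names, the toy dynamics terms, and the numbers are hypothetical, not the authors' ABNet implementation):

```python
import numpy as np

def scaled_sigmoid(z, scale=5.0):
    """Map an unconstrained network output z to a penalty p in (0, scale)."""
    return scale / (1.0 + np.exp(-z))

def cbf_qp_closed_form(u_ref, Lf_h, Lg_h, h, p):
    """Closed-form solution of  min ||u - u_ref||^2
    subject to the single CBF constraint  Lf_h + Lg_h @ u + p * h >= 0."""
    a = np.asarray(Lg_h, dtype=float)
    slack = Lf_h + a @ u_ref + p * h   # constraint value at the reference
    if slack >= 0:                     # reference control is already safe
        return np.asarray(u_ref, dtype=float)
    # Otherwise project u_ref onto the constraint boundary (minimal correction)
    return u_ref - slack / (a @ a) * a

# Penalty from an (untrained) scalar "network output" z = 0.0
p = scaled_sigmoid(0.0)                # 2.5, strictly positive by construction
u = cbf_qp_closed_form(u_ref=np.array([1.0, 0.0]),
                       Lf_h=-2.0, Lg_h=np.array([1.0, 1.0]), h=0.1, p=p)
# The corrected control sits exactly on the constraint boundary:
assert -2.0 + np.array([1.0, 1.0]) @ u + p * 0.1 >= -1e-9
```

Because both the sigmoid and the projection are differentiable almost everywhere, gradients flow from the control loss back through `p`, which is how a penalty parameterized this way can be trained by ordinary backpropagation without extra labels.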
Policy Gradient with Tree Expansion
Accept (poster)
Summary: The paper introduces SoftTreeMax, a new approach that combines policy gradient (PG) with tree search. The goal is to address the inherent high gradient variance in traditional PG algorithms. The authors present theoretical analysis showing that the gradient variance of SoftTreeMax decays with the depth of the tree expansion, and that this decay rate is influenced by the second eigenvalue of the transition matrix induced by the tree expansion policy. On the empirical side, they utilize a GPU-based simulator to efficiently manage the exponential computational cost of expanding the tree. Empirical validation is done through experiments on the Atari benchmark suite, where the proposed approach demonstrated lower gradient variance and better performance compared to Proximal Policy Optimization (PPO). ## after rebuttal The rebuttal did not significantly change my view of the paper. I will maintain my score. Claims And Evidence: Overall, the claims are clear. The variance bound of C-SoftTreeMax in Theorem 4.4 depends on the number of states S, which is typically large (or even infinite). It is unclear whether this dependency is unavoidable in the worst case, as no lower bound is given, and a comparison between the bound and the empirical variance is not provided. For E-SoftTreeMax, the bound in Theorem 4.7 does not depend on S, and an empirical comparison is included. Thus, this claim is more convincing. Theorem 4.8 provides a bound on the bias of the gradient estimator in cases where the dynamics model is inaccurate. This bound scales linearly with $S$ and $d$. Again, it is unclear whether this dependency is unavoidable, and no empirical comparison is provided. Assuming this dependency is accurate, the variance decreases with $d$ while the bias increases. This tradeoff on overall performance is not examined, as the experiments are conducted with a precise forward model (which does not exist in most practical applications). 
Finally, in the conducted experiment on the Atari environment, when a precise forward model does exist, the paper demonstrates that indeed the variance reduces with $d$ and the overall performance is better than PPO's. Methods And Evaluation Criteria: The theoretical results provide bounds on the variance as well as on the bias of the estimator as a function of the tree depth $d$, which definitely makes sense. The comparison to PPO also makes sense, as this is one of the most popular PG algorithms. Since the method aims to combine PG with tree search, it would make sense to compare it to a benchmark that is based on MCTS (such as AlphaZero or something similar). Theoretical Claims: I read the proof outline in the main text and did not find anything that raised my suspicion. I have also read the proof of Lemma 4.1 in the appendix. Experimental Designs Or Analyses: The conducted experiments seem valid. As mentioned earlier, some additional experiments could be beneficial, such as a comparison to a tree search algorithm. Additionally, regarding the comment in footnote 1, I think it would be beneficial to compare the two suggested algorithms in a non-deterministic benchmark. Supplementary Material: Lemma 4.1, and the experiments in Figures 4 and 5 Relation To Broader Scientific Literature: The paper addresses the well-known issue of high gradient variance in policy gradient algorithms. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The paper presents a novel approach for reducing gradient variance. A major limitation is that the algorithm requires computing expectations over future states, which can be very challenging in stochastic environments with a large number of states. Other Comments Or Suggestions: N/A Questions For Authors: * How can one handle a stochastic environment with an infinite number of states? * Does it make sense to change the behavioral policy over time?
Specifically, does it make sense to use the current policy (or a regularized version of it) as the tree-expansion policy? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your detailed and thoughtful review. **Dependency of bound on $S$ and lower bound** RL analyses on tabular MDPs often include $S$ terms, usually stemming from the triangle inequality applied to a summation over states. These dependencies can sometimes be replaced by structural assumptions on the transition matrix (like branching). This is a common and important theoretical-practical gap. While the theory suggests vacuous bounds for large $S$, in practice we see exponential variance reduction with $d$ for the huge state space in Atari. On a related note, a lower bound on the variance is given in Appendix A.5 (it does not include $S$). **Bound on E-SoftTreeMax does not contain $S$** There is a dependence on $S, A$ also in the case of E-SoftTreeMax. However, notice that the bound in Thm 4.7 is presented in O-notation while the C-SoftTreeMax bound is expressed explicitly. The critical theoretical contribution in both cases is the identification of the exponential decay rate with respect to tree depth. Regarding the missing numerical variance measurements for C-SoftTreeMax: we conducted similar variance analyses for both SoftTreeMax variants, but included only the E-SoftTreeMax results in the paper to avoid redundancy. The numerical results for C-SoftTreeMax show similar behavior and correspondence to the theory. We will clearly state this in the revision. The primary reason we included the detailed variance measurements for E-SoftTreeMax was to verify our theoretical hypothesis from Thm 4.7 that the parameter $\alpha$ is related to $\lambda_2$. Our numerical experiments confirmed this in the general case by showing that variance decays at a rate determined by the mixing properties of the baseline policy. We will add a brief discussion of C-SoftTreeMax's empirical variance characteristics in the revised manuscript to ensure completeness.
**Comparison to MCTS** Following your comment, we added comparisons with a strong baseline to our experiments: EfficientZero [Ye et al., 2021]. We chose it because it is a highly sample-efficient version of MuZero that is also open source. MuZero is one of the best known RL algorithms to date. The results are given here: https://ibb.co/zHG3KP8m, showing that SoftTreeMax surpasses EfficientZero in all the games we tested except one. We will add full experiments in the final version. **Question 1: How can one handle a stochastic environment with an infinite number of states?** For stochastic environments with infinite state spaces, we propose three complementary approaches: 1. We can use Monte Carlo sampling to expand the tree. This approach maintains a parametric distribution over actions (potentially dependent on $\theta$) and samples from it to build the tree. This method can be viewed as a tree adaptation of Model Predictive Path Integral control (MPPI) with a value function. 2. Function approximation for leaf states: For infinite state spaces, we already use neural networks to approximate the value function at leaf states. This approach naturally extends to continuous state spaces. 3. Theoretical extension: The key concepts in our analysis can be adapted to infinite state spaces by: 3a. Replacing transition matrices with kernels in the continuous case. 3b. Exploiting that $\pi_b$'s transition kernel is a non-expansive operator. 3c. Leveraging that the eigenvalue 1 gets canceled for the policy gradient. These properties can be shown to hold for decision models with infinite state and action spaces, though we leave the complete theoretical development for future work. **Question 2: Does it make sense to change the behavioral policy over time? Specifically, does it make sense to use the current policy as the tree-expansion policy?** Yes, adapting the behavioral policy over time is a promising direction. 
As training progresses and the policy improves, using it as the tree expansion policy could lead to exploring more relevant parts of the state space. This creates a bootstrapping effect similar to what is done in MCTS algorithms like AlphaZero, where the current policy guides the search. Still, care must be taken, as trained policies often become deterministic, leading to unfavorable variance properties in semi-deterministic environments (when $P^{\pi_b}$ approaches a permutation matrix, $\lambda_2(P^{\pi_b})$ could potentially approach 1, eliminating the variance reduction benefit). Also, we cannot use the SoftTreeMax policy directly to expand the tree, since evaluating it already requires a tree expansion, which would lead to nested expansions. A reasonable compromise might be to learn a proxy policy as a mixture of the current policy and a uniform policy, with the mixing weight potentially decreasing over time to balance exploration and exploitation while maintaining favorable variance properties. Thank you for raising this excellent point, which provides fertile ground for future work. We will address this in the revised paper.
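[Editor's note: the two concerns above, a near-deterministic policy pushing $|\lambda_2(P^{\pi_b})|$ toward 1 and the uniform-mixture remedy, can be checked directly on a toy example. The sketch below is ours, not the authors' code, using a 5-state cyclic permutation: the deterministic dynamics have $|\lambda_2|=1$, while mixing in uniform dynamics with weight $\varepsilon$ yields $|\lambda_2|=1-\varepsilon$.]

```python
# Sketch (illustrative): a permutation transition matrix has |lambda_2| = 1
# (no variance reduction), while mixing in uniform dynamics with weight eps
# restores |lambda_2| = 1 - eps < 1.
import numpy as np

S = 5
perm = np.eye(S)[[1, 2, 3, 4, 0]]      # cyclic permutation: deterministic dynamics
unif = np.full((S, S), 1.0 / S)        # uniform-policy dynamics: fully mixing

def second_eig(P):
    """Second-largest eigenvalue modulus of a transition matrix."""
    return sorted(np.abs(np.linalg.eigvals(P)))[-2]

eps = 0.2
mixed = (1 - eps) * perm + eps * unif  # proxy: mixture with a uniform policy
```

Here `second_eig(perm)` is 1 (the permutation's eigenvalues are 5th roots of unity), while `second_eig(mixed)` is $1-\varepsilon = 0.8$, so even a small uniform component keeps the variance-reduction rate strictly below 1.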
Summary: The authors propose a generalization of the softmax parametrization for policy gradient methods that utilizes the breadth-first search tree of future states. This type of parametrization combines planning with policy gradient methods to reduce the latter's variance. The authors prove that, given some assumptions on the planning policy, the variance of the new gradient decreases exponentially with the depth of planning, allowing a trade-off between computational efficiency and gradient variance. Finally, the authors propose a GPU implementation of their algorithm, which shows improvement over the PPO baseline in terms of sample complexity. ## After rebuttal I was happy to see the MCTS baseline. However, I still believe that the assumption on the presence of GPU simulators is very strong, so I decided to maintain my score. Claims And Evidence: **Claim 1.** The gradient variance of SoftTreeMax decays exponentially with a rate depending on the spectral properties of the transition matrix induced by the tree search policy. This claim has been proven theoretically (Theorems 4.4 and 4.7) and demonstrated empirically in an experiment on a randomly generated MDP. **Claim 2.** In the case of an approximate forward model, the gradient bias is proportional to the approximation error. This claim has been proven theoretically; however, there is no empirical evidence, since all the experiments use the exact forward model. **Claim 3.** The algorithm is implementable on GPU and outperforms PPO in terms of both gradient variance reduction and achieving higher reward. This claim was confirmed on 8 Atari games, implemented in a parallelizable GPU version of the environment. Methods And Evaluation Criteria: The proposed method and the evaluation methodology make sense to me.
However, given the focus on improving the algorithm's planning capabilities, the inclusion of a planning benchmark (e.g., the game of Go or any other strategic game) could be important in assessing the final improvement. Theoretical Claims: Theoretical statements make sense to me, although I have not carefully checked them in the appendix. Experimental Designs Or Analyses: - The experimental design lacks comparisons with other model-based approaches. The proposed algorithm makes a relatively strong assumption about the interaction model with the MDP: the presence of a simulator or an approximate simulator to perform a BFS over the next states. Given this assumption, I believe a comparison with MCTS-based methods (with a comparable tree search budget) should be present. - The lack of planning benchmarks. Since the method is assumed to improve planning capabilities, I expect to see a comparison on some planning benchmark, such as Go or any other strategic game (at least Connect-4). Supplementary Material: I skimmed the proofs for C-SoftTreeMax and read the experimental details in detail. Relation To Broader Scientific Literature: The improvement of the planning capabilities of policy-gradient methods is relevant to the current scientific literature. Essential References Not Discussed: The paper lacks a discussion of other types of planning without deep search, such as k-step lookahead during advantage estimation (e.g., Tree Backup (Precup et al., 2000), Retrace (Munos et al., 2016), and their application in Muesli (Hessel et al., 2021)). In particular, I found that the method resembles the idea of n-step Tree Backup (e.g., the Sutton & Barto book), but from the perspective of policy parameterization, not Q-value estimation. Precup, Doina, Richard S. Sutton, and Satinder Singh. "Eligibility traces for off-policy policy evaluation." ICML. Vol. 2000. 2000. Munos, R., Stepleton, T., Harutyunyan, A., & Bellemare, M. (2016). Safe and efficient off-policy reinforcement learning.
Advances in neural information processing systems, 29. Hessel, M., Danihelka, I., Viola, F., Guez, A., Schmitt, S., Sifre, L., ... & Van Hasselt, H. (2021, July). Muesli: Combining improvements in policy optimization. In International conference on machine learning (pp. 4214-4226). PMLR. Other Strengths And Weaknesses: Overall, I enjoy the idea of integrating planning directly into the policy's parametrization and the theoretical guarantees for variance reduction given the depth of the search. However, as a limitation acknowledged by the authors, I also found that the assumption of the presence of a GPU simulator is very strong. Other Comments Or Suggestions: - Questions For Authors: - Does the number of online interactions include the samples generated during the tree search? - For $d=0$, the method corresponds to the usual softmax parametrization (lines 147-157). Why, in your experiments, can increasing $d$ to 1 or 2 harm performance? In this case, the method obtains more information about the environment, which should help in training. I also found the case of $d = 1,2$ the most interesting from a practical perspective, since it should not introduce significant tree-search overhead but should still decrease the variance (across all the experiments, the relative variance reduction between $d=0$ and $d=1$ is the most significant). Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your detailed and thoughtful review. We appreciate the careful reading of our work and the constructive feedback. Below, we address your concerns: **Additional MCTS baselines** Following your comment, we added comparisons with a strong baseline to our experiments: EfficientZero [Ye et al., 2021]. We chose it because it is a highly sample-efficient version of MuZero that is also open source. MuZero is one of the best known RL algorithms to date. The results that were completed during the rebuttal are given here: https://ibb.co/zHG3KP8m, showing that SoftTreeMax surpasses EfficientZero in all the games we tested except one. We will add full experiments in the final version. **Question 1: Does the number of online interactions include the samples generated during the tree search?** No, the number of online interactions does not include the samples generated during tree search. We distinguish between two types of interactions: 1. Online environment interactions: These are actual interactions with the environment during training (e.g., when collecting trajectories for PPO updates). 2. Planning interactions (tree search): These are simulated forward passes using our GPU-based simulator to expand the tree. As we explained in our experiments, the computational complexity of planning interactions is substantially lower than online interactions, especially with our GPU batching mechanism for tree expansion. This is why we track them separately, and our sample complexity plots show only the online interactions. For a complete view of computational efficiency, we also provide wallclock time comparisons in Figure 4, which accounts for the total computation including tree expansion. As seen there, despite the additional computation for tree search, SoftTreeMax still demonstrates favorable performance per unit time for appropriate choices of depth. 
**Question 2: Why can increasing d to 1 or 2 harm performance in some experiments?** This is an excellent observation. While theoretically more information should help, this illustrates the basic bias-variance tradeoff in the SoftTreeMax policy: At small depths (d=1,2), we introduce a structural bias by incorporating tree search, but the variance reduction benefit hasn't fully kicked in yet. The theoretical variance reduction is exponential in d, so the effect becomes more pronounced at larger depths. We will address this valuable insight in the revised paper. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their response, and I would like to keep my score.
Summary: This paper extends softmax policy gradient methods by integrating planning through tree expansion. The authors introduce two implementation variants—C-SoftTreeMax and E-SoftTreeMax—which differ in whether the expectation is computed inside or outside the exponent. They analyze the policy gradient variance for both approaches in the tabular case and demonstrate that the variance upper bound decreases with increasing tree depth at a rate determined by the second-largest eigenvalue of the behavior policy. Additionally, the paper examines the gradient bias introduced when employing an approximate model. The theoretical findings are further validated through experiments on several Atari games, where GPU-based simulation was leveraged to mitigate the computational overhead introduced by tree expansion. Claims And Evidence: The main issue is in the bound on the policy gradient bias when using an approximate model. The proof does not entirely check out. In the proof of Lemma A.5, it is stated that $||\hat{M}-M||=O(\beta d \epsilon)$. However, it is unclear why $\gamma^{-d}$ from $C_{s,d}$ does not appear in the bound. Moreover, if the gap between $\hat{M}$ and $M$ is large, i.e., not approaching 0, the subsequent proof may not hold. Another reservation I have is regarding the difference between the variance upper bounds of C-SoftTreeMax and E-SoftTreeMax. Specifically, the former scales with $S^2$ and $A^2$, while the latter contains neither $S$ nor $A$. I'm not sure why there is such a discrepancy. Related to the bounds, while the variance of E-SoftTreeMax is verified in the numerical experiment illustrated in Fig. 1, no verification is provided for the result of C-SoftTreeMax. Methods And Evaluation Criteria: The methods and evaluation mostly make sense to me.
However, I have concerns about the assumption in E-SoftTreeMax that $r(s, a)$ equals $r(s)$, as this does not seem to hold in either Atari or MuJoCo—despite the paper's claim that this is typical for these environments. In Atari, for example, the reward is determined by transitions between consecutive states (i.e., $r(s, s')$). MuJoCo's reward functions generally include a dedicated component determined by the action. Theoretical Claims: I carefully checked the proofs of all results. There seems to be an issue with the proofs of Lemma A.5 and Theorem A.6, as I discussed in the “Claims And Evidence” section. In addition, the mathematical analysis would benefit from improved clarity and proofreading. For instance, the definitions of $\theta$ and $\Theta$ are ambiguous. Sec 2.1 describes $\theta$ as a mapping from S x A to R. However, the beginning of Sec 2 describes $\Theta$ as a vector in $R^S$, a vector representation of $\theta(s)$, which seems to imply that the latter is a scalar. It is not clear how $\theta(s)$ is defined, or what its dimensionality is. Also please note the typos listed under “Other Comments Or Suggestions”. Experimental Designs Or Analyses: Yes, I checked the validity of both the numerical and Atari experiments and found no major issues overall. However, there are two minor issues: 1. variance bounds: the missing numerical result for C-SoftTreeMax, which was brought up earlier. 2. behavior policy: In Sec 8, the paper claims to have “explained how to choose the expansion policy to minimize the gradient variance”. However, more precisely, I think what was explained was instead what property we want the induced transition matrix to have. The procedure for how to “choose” a behavior policy is not clearly explained. In fact, in the deep RL experiment on Atari, only one type of policy (uniform) was used as the behavior policy. It is not clear what level of impact the behavior policy has on the learning performance.
Supplementary Material: I did not run the code but took a look into it. Relation To Broader Scientific Literature: It extends the existing RL literature on two fronts: 1) it integrates planning with model-free policy gradient methods; 2) in the context of planning with a local simulator [Yin et al. 2022], most works were concerned with action-value methods. This paper provides some empirical evidence supporting the benefits of combining policy gradient methods with planning. D. Yin, B. Hao, Y. Abbasi-Yadkori, N. Lazić, and C. Szepesvári. Efficient local planning with linear function approximation. The 33rd International Conference on Algorithmic Learning Theory, 2022. Essential References Not Discussed: No Other Strengths And Weaknesses: N.A. Other Comments Or Suggestions: I recommend changing the title to “Softmax” policy gradient with tree expansion, since the main results are specific to the softmax parameterization. List of typos: page 1, the last sentence of the 2nd last paragraph, “we prove that the with …” page 2, Sec 2, the RHS of the definition of $\mu_\pi$ should be $\mu_\pi^\top$ instead of $P^\pi$ page 2, Sec 2.1, in the variance expression of a random vector $X$, $^\top$ should be on the second parenthesis. page 24, Sec B.3, the symbol inside the parenthesis in “In Gopher, however, for large depths ()” Questions For Authors: 1. The bounds on the variance seem loose, in that, when the temperature $\beta$ is infinite, the bound is infinite while the policy is deterministic, which should yield 0 variance in a deterministic MDP. Could you clarify whether this is an artifact or whether there is another reason? 2. Could you clarify the potential issue with the proofs of Lemma A.5 and Theorem A.6? A detailed discussion of the bound on $||\hat{M}-M||$ would be particularly helpful. 3. Could you provide intuition regarding the differences between the variance upper bounds of C-SoftTreeMax and E-SoftTreeMax? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful review. We appreciate the careful reading of our proofs and the constructive feedback. We address your concerns below. **Proof issues in Lemma A.5 and Theorem A.6** You're correct that $\gamma^{-d}$ from $C_{s,d}$ should appear in this bound. This oversight makes the proper bound $O(\beta d \gamma^{-d} \epsilon)$. When the approximation error $\epsilon$ is small enough and $\beta$ is properly controlled, this term still approaches 0, as required for the subsequent proof. We will correct this in the revised version. **Variance bounds discrepancy** E-SoftTreeMax also depends on $S,A$, but Theorem 4.7 uses O-notation while C-SoftTreeMax shows explicit constants. Both establish the crucial exponential decay rate with tree depth. This exponential decay explains why deeper trees provide more stable policy gradient estimates and better overall performance, regardless of specific constants or dependencies. The key contribution is establishing this relationship between tree depth and variance reduction through the eigenstructure of the MDP's transition dynamics. We'll highlight this insight more clearly. **Assumption that r(s,a) equals r(s)** This simplification was made for the theoretical analysis of E-SoftTreeMax. Our practical implementation handles general reward structures correctly in all experiments. **Definition of $\theta$ and $\Theta$** Thank you for highlighting this inconsistency. In Section 2.1, we use $\theta$ conventionally for policy parameters, but later use $\Theta \in \mathbb{R}^S$ and $\theta(s)$ as a scalar state function. We'll replace the notation in Section 2.1 with a different symbol, reserving $\theta$ and $\Theta$ exclusively for our SoftTreeMax formulation to ensure consistency throughout the paper. **Verification for C-SoftTreeMax** We analyzed both variants but included only E-SoftTreeMax results (Figure 1) to avoid redundancy. C-SoftTreeMax shows similar behavior and correspondence to theory.
We focused on E-SoftTreeMax to verify that the parameter $\alpha$ relates to the transition matrix's second eigenvalue in the general case, which our experiments confirmed. We'll add a brief discussion of C-SoftTreeMax's empirical variance characteristics for completeness. **Behavior policy choice** Our theory identifies properties of ideal behavior policies but doesn't provide concrete procedures for complex domains. In the Atari experiments, we used uniform policies for simplicity. Preliminary experiments with other policies showed similar trends but weren't comprehensive enough to include. We'll clarify our claims and add a discussion about the practical impact of behavior policy choice. **Minor comments and title suggestion** We appreciate your writing suggestions and will consider revising the title to better reflect our focus. **Question 1 regarding loose variance bounds** This is indeed an artifact of our analysis approach. Our bound becomes loose in this regime because we use worst-case inequalities at multiple steps of the proof that don't capture the special structure that emerges in deterministic settings. The $\beta^2$ term appears because we bound the gradient norm without accounting for how the policy structure changes as $\beta$ increases. For the practical parameter ranges used in training, our bound provides useful insights about variance reduction with tree depth. In the revised paper, we'll clarify this limitation and discuss how the bound could be refined to better handle the deterministic limit case. **Question 2 on the potential issue with the proof** Please see our response above regarding the missing $\gamma^{-d}$ term.
**Question 3 on variance bounds intuition** C-SoftTreeMax has a more decomposable gradient structure (Lemma 4.3): $$\nabla_\theta\log \pi_{d,\theta}^C = \beta\left[I_{A} - 1_A (\pi_{d,\theta}^C)^\top \right]P_s \left(P^{\pi_b}\right)^{d-1}.$$ This allows direct spectral decomposition. When the projection matrix interacts with the decomposed $(P^{\pi_b})^{d-1}$, the stationary distribution term gets canceled, leaving terms scaled by $\lambda_i^{d-1}$, with $|\lambda_2|$ dominating. E-SoftTreeMax's analysis is more complex as its gradient involves matrix products and ratios with exponentiated rewards, requiring sophisticated techniques to establish the convergence rate $\alpha$. This explains why we provide an exact variance bound for C-SoftTreeMax, while for E-SoftTreeMax we give an asymptotic bound where $\alpha = |\lambda_2(P^{\pi_b})|$ only under certain conditions. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response, which has addressed most of my concerns. I've increased my score accordingly.
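[Editor's note: the spectral-cancellation argument for the C-SoftTreeMax gradient given in the rebuttal above can be checked numerically. In this sketch (ours, not the authors' code), a fixed probability vector $\pi$ stands in for $\pi_{d,\theta}^C$; the cancellation $[I_A - 1_A\pi^\top]1_A = 0$ holds for any distribution, so the gradient-shaped matrix $\beta[I_A - 1_A\pi^\top]P_s(P^{\pi_b})^{d-1}$ shrinks at rate $|\lambda_2|^{d-1}$.]

```python
# Sketch (illustrative): the rank-one stationary part of (P^{pi_b})^{d-1}
# is annihilated by the projection I_A - 1_A pi^T, so the gradient-shaped
# matrix decays with depth d at rate |lambda_2|^{d-1}.
import numpy as np

rng = np.random.default_rng(1)
S, A, beta = 6, 3, 1.0
Ppb = rng.random((S, S))
Ppb /= Ppb.sum(axis=1, keepdims=True)      # P^{pi_b}: behavior-policy dynamics
Ps = rng.random((A, S))
Ps /= Ps.sum(axis=1, keepdims=True)        # P_s: next-state distribution per action
pi = rng.random(A)
pi /= pi.sum()                             # stand-in for pi_{d,theta}^C (any distribution)
proj = np.eye(A) - np.outer(np.ones(A), pi)  # I_A - 1_A pi^T

# Norm of the gradient-shaped matrix for increasing depth d.
norms = [np.linalg.norm(beta * proj @ Ps @ np.linalg.matrix_power(Ppb, d - 1))
         for d in (1, 5, 10)]
```

With the fixed seed, the norm drops sharply as $d$ grows, since $P_s(P^{\pi_b})^{d-1}$ approaches the rank-one matrix $1_A\mu^\top$ that the projection maps to zero.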
Summary: The paper introduces the SoftTreeMax algorithm, a softmax policy gradient algorithm extended with planning. The main idea of the extension is that estimating the gradient from longer paths reduces its variance. Two variants are considered, depending on whether the expectation is inside or outside of the exponent. The main theoretical results are bounds on the variance for the two variants considered. A bound is also provided for the bias introduced in the case of an approximate forward model. Empirical evidence on several Atari benchmarks shows that the proposed algorithms perform favorably compared to standard PPO in terms of cumulative reward and variance. Claims And Evidence: The claims are supported both theoretically and empirically. Methods And Evaluation Criteria: The evaluation is sensible. Theoretical Claims: I checked the proofs briefly. Experimental Designs Or Analyses: The experimental setting is fine. Additional baselines that use planning in the training phase (e.g., using MCTS to generate traces) would have been useful. Supplementary Material: I parsed through the proofs and looked at the additional experimental data. Relation To Broader Scientific Literature: The bounds on the variance are valuable. Essential References Not Discussed: The most relevant literature is mentioned, but a deeper comparison with policy gradient algorithms that are combined with MCTS would have been useful. Other Strengths And Weaknesses: The search/planning part of the algorithm is awkward. It attempts exhaustive search, but applies intensive pruning to keep the tree narrow (to achieve linear complexity scaling with depth). It is really unclear why MCTS, or at least Monte Carlo sampling, was not used; either would also have linear dependence, and both are extensively studied in RL. I assume the variance would have been more difficult to analyze theoretically in the case of MCTS.
Other Comments Or Suggestions: One of the advantages of PPO is that it is model-free. Usually, one still needs a simulator for training, but there are fewer constraints on the simulator compared to the setting here, which needs a forward model (e.g., we need to be able to reset the simulator to a certain state). If planning is possible, using MCTS to generate traces improves the training (even with an approximate forward model, as in MuZero). Therefore, I would suggest adding baselines that use MCTS in the training phase (perhaps off-policy variants). Questions For Authors: The variance scales favorably with depth, but a small eigenvalue is crucial to this dependence. The assumption on the transition matrix is common in theoretical analyses, but often does not hold in practical applications. Do you expect an increase in variance with depth in such scenarios? A discussion of the topic could be useful. The variance is reduced with search depth; however, the bias increases with an approximate forward model. Any solutions for the tradeoff? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and for recognizing the value of our theoretical and empirical contributions. We appreciate your feedback and address your questions below. **MCTS vs. our search approach** Thank you for this important point of clarification. SoftTreeMax and MCTS represent fundamentally different approaches with distinct goals. While both involve planning, our work specifically focuses on how to directly integrate forward modeling into the policy gradient framework itself. The key innovation of SoftTreeMax is that it incorporates planning directly into policy parameterization, creating a differentiable policy that naturally reduces gradient variance. Unlike MCTS, which operates as a separate planning algorithm that produces improved rollouts, our approach transforms the underlying architecture of policy gradient methods. This enables us to maintain the theoretical elegance of policy gradient approaches while addressing their fundamental variance issues. Our analytical results show how this integration affects gradient properties in ways that would not be directly applicable to MCTS-based methods. We believe this approach opens new avenues for research at the intersection of planning and policy-based methods. That said, we acknowledge the value of comparing against and potentially combining with MCTS-based approaches in future work. **Additional MCTS baselines** Following your comment, we added comparisons with a strong baseline to our experiments: EfficientZero [Ye et al., 2021]. We chose it because it is a highly sample-efficient version of MuZero that is also open source. MuZero is one of the best known RL algorithms to date. The results that were completed during the rebuttal are given here: https://ibb.co/zHG3KP8m, showing that SoftTreeMax surpasses EfficientZero in all the games we tested except one. We will add full experiments in the final version. 
**Variance scaling with eigenvalue assumptions** You've identified an important theoretical-practical gap. While our theory depends on properties of transition matrices that may not always hold in practice, our empirical results consistently show variance reduction across various environments. In cases where the eigenvalue assumptions are violated, we would expect diminished (but still positive) variance reduction benefits rather than variance increases. Figure 3 in our paper demonstrates this empirically - the variance consistently decreases with depth across different games, even though the theoretical assumptions likely vary between environments. **Bias-variance tradeoff with approximate models** The bias-variance tradeoff we observed follows this pattern: - The closer $\pi_b$ is to $\pi_{d,t}$ at time t, the lower the bias - The closer $\pi_b$ is to having similar rows in $P^{\pi_b}$, the lower the variance For practical solutions to this tradeoff, we recommend: 1. Adaptive depth selection based on model certainty 2. Progressive increase in depth as model accuracy improves during training 3. Ensemble methods to better estimate model uncertainty Our Theorem 4.8 shows that the gradient bias diminishes with the approximation error while retaining similar variance reduction properties. Even with approximation errors, the variance continues to decay exponentially, just at a rate dictated by $\hat{P}^{\pi_b}$ instead of $P^{\pi_b}$. We will expand this discussion in the revised paper to provide clearer guidance on managing this tradeoff in practical implementations. --- Rebuttal Comment 1.1: Comment: Regarding MCTS, I think the authors missed my point. SoftTreeMax has a planning component that helps to reduce the variance. My argument was that the planning component uses an awkward pruning technique, and that using MCTS as the planning component could have been more powerful.
Theoretically it would probably be more difficult to tackle, but I would expect that it would be a stronger algorithm. I am on the fence about the variance reduction in practical applications. There are domains where search pathology has been shown; those might apply here as well. I am not sure, though, what kinds of domains are more suitable for the proposed approach.
Hyper-Transforming Latent Diffusion Models
Accept (poster)
Summary: This work introduces a novel "LDMI" framework which empowers latent diffusion models to generate Implicit Neural Representations (INRs). The proposed Hyper-Transformer Decoder enables the space of INR parameters to be learned in a flexible and probabilistic manner. Empirical tests are conducted on a range of image and shape reconstruction tasks. Claims And Evidence: Most claims within the paper are well supported by the experimental evidence. Perhaps the statement that "our work establishes [....] generative modelling with unconstrained resolution" is a bit lacking in evidence, since output resolutions seem to match training resolutions in the experiments. Methods And Evaluation Criteria: A good variety of datasets is explored, and for the case of ImageNet and CelebA, standard metrics such as PSNR and FID are appropriately used. It would certainly be preferable to also see some quantitative metrics for ERA5 and the ShapeNet chairs, however - few robust scientific conclusions can be drawn by eye. Theoretical Claims: No; while some theoretical background is given, the key results here are empirical. Experimental Designs Or Analyses: In the CelebA experiment in Figure 4, the baselines are trained on a different dataset to the LDMI. While the claim is made that this makes the task easier for the baselines, this lack of a controlled experiment prevents rigorous conclusions from being drawn from the differences in performance. Supplementary Material: Appendix A largely consists of a pedagogical background on diffusion models. Appendix B is more useful, as it covers the training methodology in more detail. Relation To Broader Scientific Literature: This work sits in the popular field of diffusion models and hypernetworks, and provides a good review of relevant prior works in the area. Essential References Not Discussed: To the best of my knowledge, no essential references are missing.
Somewhat more tangentially, there are some complementary works on performing functional inference with diffusion models which may be of interest, such as: Neural Diffusion Processes, Dutordoir et al., ICML 2023 All-in-one simulation-based inference, Gloeckler et al., ICML 2024 Other Strengths And Weaknesses: It ought to be viable to perform inference at a higher resolution than was used during training, as this is one of the key advantages of possessing an INR, but I don't see explicit examples; perhaps some examples could be added to the supplementary material? Other Comments Or Suggestions: I would recommend making it explicitly clear that pixel values correspond to integrals over a finite region, so they are not ideally suited to this application, but serve more as a useful testbed. Questions For Authors: Relating to the above point, I feel this paper would be stronger if it first spent a bit of time clarifying the key problem/challenge it wishes to tackle, before outlining the solution. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback and encouraging remarks. Your comments helped us significantly improve the manuscript. Below, we address all the concerns raised. ## On Our Claims Regarding Unconstrained Resolution We agree that validating our model’s ability to generalize to unseen coordinates is essential. Following your suggestion, we added new experiments on super-resolution. During [reconstruction](https://anonymous.4open.science/api/repo/LDMI_pre-7F42/file/experiments/figures/super_recs_celebahq256.png?v=5cc02656), test images at the training resolution are encoded, decoded into INRs, and evaluated on denser grids. For [sampling](https://anonymous.4open.science/api/repo/LDMI_pre-7F42/file/experiments/figures/super_samples.png?v=485b880b), latent codes are drawn from the diffusion prior and decoded similarly. These experiments confirm that $\texttt{LDMI}$ captures continuous signals and generalizes to higher resolutions—highlighting a key advantage of INR-based generation. ## New quantitative results for ERA5 and ShapeNet To better support our claims of modality generalization, we now report the PSNR measured on the ERA5 and Chairs datasets:

| Method | Chairs | ERA5 |
| -------- | -------- | -------- |
| Functa | 29.2 | 34.9 |
| VAMoH | 38.4 | 39.0 |
| $\texttt{LDMI}$ | **38.8** | **44.6** |

We omit GASP (not applicable to reconstruction due to its GAN-based framework). $\texttt{LDMI}$ generalizes without adaptation and outperforms comparable models. ## On CelebA and CelebA-HQ You are correct—the original submission trained on CelebA, while baselines used CelebA-HQ. We now explicitly distinguish them and retrain $\texttt{LDMI}$ on CelebA-HQ at $64 \times 64$ for a fair comparison. To further demonstrate scalability, we train $\texttt{LDMI}$ on CelebA-HQ at $256\times 256$, achieving strong results not addressed by prior work. ## On related work: NDPs and SimFormer Thank you for pointing out these complementary works.
We agree that approaches such as Neural Diffusion Processes and SimFormer are interesting and relevant within the broader context of function-space modeling with diffusion processes. In the following, we highlight the main differences.

**NDPs** leverage a diffusion model to define distributions over function values at given coordinates. Their architecture explicitly enforces exchangeability and permutation invariance via a bi-dimensional attention mechanism, and their sampling mechanism mimics Gaussian processes and related meta-learning methods such as Neural Processes. **Importantly**, the function itself is not represented via a neural network whose parameters are generated or learned—rather, the model learns to denoise function values directly, conditioned on inputs.

**Simformer** is designed for simulation-based inference (SBI), where the goal is to infer unknown parameters of stochastic simulators from observations. It treats both data and parameters as random variables and learns a diffusion model over the joint distribution $p(\boldsymbol{x}, \boldsymbol{\theta})$, allowing for flexible sampling of any conditional (e.g., posterior, likelihood, marginals). Parameters may include function-valued (infinite-dimensional) components, but they are not represented as INRs—rather, they are input variables within the inference pipeline. Simformer excels at amortized Bayesian inference with unstructured or missing data and flexible conditioning.

We added these references and discussion to the Related Work section of our revised manuscript.

## On the interpretation of pixel values as integrals

We agree that pixels approximate integrals over regions. Following [SIREN, NeRF, Functa], we adopt the standard Dirac delta approximation by treating pixel values as samples at center coordinates. We’ve made this assumption explicit in the revised paper.

## On the core challenge we address

Thank you for raising this point.
We have clarified the motivation in the revised manuscript. Our goal is to overcome the scalability bottlenecks of MLP-based hypernetworks in generative modeling of functions via INRs. While INRs excel at representing continuous signals, prior methods (e.g., Functa, GASP, VAMoH) require hypernetworks with tens of millions of parameters—even for small 3-layer INRs—making scaling impractical. To address this, we propose the $\texttt{HD}$ decoder, a Transformer-based hypernetwork that maps latent samples to INR weights. Its parameter count remains fixed as INR size grows, with only sequence length increasing. We further optimize efficiency with a grouping strategy, enabling generation of large INRs at low cost. Integrating this design into latent diffusion models allows $\texttt{LDMI}$ to model more complex signals and generalize across resolutions efficiently.

---

Rebuttal Comment 1.1: Comment: I appreciate the thoughtful response, and have updated my score accordingly.

---

Reply to Comment 1.1.1: Comment: Dear Reviewer XroW,

Thank you very much for your thoughtful engagement during the rebuttal phase and for updating your score. We truly appreciate the time and care you took to evaluate our responses. Your feedback helped us improve the paper, and we're very happy to hear that it is now in a form you consider ready for acceptance.

Best regards,
The authors.
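The scalability argument in the rebuttal above (a decoder whose parameter count stays fixed while only the token sequence grows with the INR) can be illustrated with a back-of-the-envelope calculation. The layer sizes and cost formulas below are illustrative assumptions, not the paper's actual architecture:

```python
# Rough parameter counts (illustrative only) for two hypernetwork designs.

def mlp_hypernet_params(d_latent: int, n_inr_params: int) -> int:
    # One linear layer mapping a latent vector to every INR weight:
    # parameter count grows linearly with the size of the generated INR.
    return d_latent * n_inr_params + n_inr_params

def transformer_hypernet_params(d_model: int, n_layers: int) -> int:
    # Per-layer cost of attention (4 * d^2) plus a 4x-expansion MLP (8 * d^2).
    # Independent of INR size: a larger INR only means a longer token sequence.
    return n_layers * (4 * d_model**2 + 8 * d_model**2)

small_inr, large_inr = 50_000, 330_000   # weight counts quoted in the rebuttal
print(mlp_hypernet_params(256, small_inr))      # grows with the INR size
print(mlp_hypernet_params(256, large_inr))      # about 6.6x larger
print(transformer_hypernet_params(256, 6))      # identical for either INR
```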
Summary: This paper proposes a new framework for INR generation (LDMI) which combines latent diffusion models and a transformer-based hypernetwork for learning the distributions over INR parameters. The hypernetwork transforms the latent variables through a transformer encoder and decoder and generates the INR parameters. Beyond the vanilla end-to-end training pipeline of an LDMI, it also enables using a pre-trained LDM and a hyper-transformer for efficient transfer learning without full retraining.

Claims And Evidence: The authors claim that their method is effective in generation tasks as well as hyper-transforming tasks. However, the results from Table 1 show that on CelebA, LDMI doesn't achieve a PSNR score comparable to Functa, and its FID score is much higher than GASP's. Also, on ImageNet the PSNR is not comparable with Spatial Functa, although the FID score does outperform its rival.

Methods And Evaluation Criteria: The datasets they evaluate on are reasonable, including multiple standard image datasets. The metrics they use are standard (PSNR, FID). However, they do not provide evidence that the model can achieve comparable performance with other baselines with either fewer parameters or fewer training samples, etc.

Theoretical Claims: I have not spotted errors in the proofs of any theoretical claims.

Experimental Designs Or Analyses: They show qualitative results of their method on generation, reconstruction, hyper-transforming, and data completion tasks. However, the experimental results are not sufficient to show the effectiveness of their approach.

Supplementary Material: Yes. The authors provide more background knowledge (e.g., on diffusion models), a detailed training algorithm, and hyperparameters used in the experiments.

Relation To Broader Scientific Literature: This paper is connected to latent diffusion models, transformer-based hypernetworks, and the implicit neural representation (INR) literature.
Essential References Not Discussed: Ruiz, Nataniel, et al. "HyperDreamBooth: Hypernetworks for fast personalization of text-to-image models." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. The above-mentioned paper uses a hypernetwork approach to personalize text-to-image models, which should be relevant to this paper.

Other Strengths And Weaknesses:
Strength: The idea is well explained.
Weakness: The quantitative results of the paper are not strong enough to show the effectiveness of the proposed approach.

Other Comments Or Suggestions: No

Questions For Authors: In the hyper-training setting, which frozen LDM do you use? I don't think it's specified in the paper. You also mention that it has only been trained for a limited number of iterations and achieves the qualitative results shown in Figure 3b; do you have results regarding metrics vs. training time to show the effectiveness of your approach?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback and for highlighting relevant connections to related work. Below, we address each of the raised concerns in detail.

## On the relation to HyperDreamBooth

We appreciate your suggestion to consider HyperDreamBooth, which we have now cited and discussed in the revised manuscript. While the goals of our work differ, there are indeed interesting architectural parallels. Both approaches leverage transformer-based hypernetworks to generate weights, motivated by scalability and efficiency. HyperDreamBooth focuses on personalizing pre-trained text-to-image diffusion models by generating low-rank residuals (via LoRA) from a single image—enabling rapid adaptation to new subjects. In contrast, $\texttt{LDMI}$ introduces a generative model over the space of implicit neural representations (INRs), which represent continuous functions across diverse modalities (e.g., images, 3D occupancy fields, and climate data). Rather than modulating a pre-trained model, our Hyper-Transformer Decoder generates the full parameter set of an INR from latent samples, enabling resolution-agnostic, function-level generation. While both methods rely on Transformer-based hypernetworks, $\texttt{LDMI}$ operates in a distinct regime of generative modeling over functions.

## On the competitiveness of our method and strengthened evaluation

Thank you for raising this point. In response, we have significantly strengthened our empirical results to more clearly demonstrate the advantages of our approach. First, we added results on higher-complexity datasets such as CelebA-HQ $(256 \times 256)$, which fall outside the scope of prior baselines.
Our model demonstrates strong performance in both [generation](https://anonymous.4open.science/api/repo/LDMI_pre-7F42/file/experiments/figures/super_samples.png?v=485b880b) and [reconstruction](https://anonymous.4open.science/api/repo/LDMI_pre-7F42/file/experiments/figures/super_recs_celebahq256.png?v=5cc02656) at multiple resolutions—without retraining—showcasing the benefits of INR-based generation. Second, we updated Table 1 with new results and included the number of hypernetwork parameters to highlight $\texttt{LDMI}$’s key strength: scalability. For example:

- GASP/VAMoH require 25.7M parameters to generate ~50K INR weights.
- $\texttt{LDMI}$ uses 8.06M parameters to generate ~330K INR weights for a deeper, 5-layer INR.

Despite having significantly fewer parameters, $\texttt{LDMI}$ delivers superior performance. For details, please refer to our [response to Reviewer ER44](https://openreview.net/forum?id=yhgcRwJ9Dn&noteId=ldvx3foEnj), where the updated table is provided. To further contextualize this comparison, we highlight some key aspects of the baselines:

- Functa relies on **test-time optimization**, fitting each test INR with access to ground-truth data. While this explains its high PSNR, it departs from our amortized inference setting and limits fair comparison. We include it for completeness given the task similarity.
- GASP **cannot perform reconstructions** due to its GAN-based design, limiting its use in conditional tasks.

In summary, $\texttt{LDMI}$ offers a compelling trade-off between quality, scalability, and generalization, supported by stronger experiments in the revised version.

## On the hyper-training setting and training efficiency

Thank you for this question. In our hyper-transforming setup, we use the publicly available pre-trained LDM from [Rombach et al., 2022], specifically the LDM-VQ-4 variant, trained on CelebA-HQ at $(256 \times 256)$ resolution, and the LDM-VQ-8, trained on ImageNet.
We freeze both the encoder and the diffusion-based prior, replacing only the decoder with our $\texttt{HD}$ module. While Section 3.5 of the original submission specified the frozen components, we have now revised the text to make this distinction more explicit. This setup provides a key advantage: significantly faster training. By freezing most of the architecture and training only the decoder, we reduce the number of learnable parameters and avoid the need for a full two-stage training pipeline—common in diffusion models—where the autoencoder and the prior must be learned sequentially. Our hyper-transforming approach thus offers a lightweight, modular alternative that efficiently adapts pre-trained models to new functional decoding regimes.
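The training setup described above (freeze the pre-trained encoder and diffusion prior, train only the swapped-in decoder) can be sketched schematically. The component sizes below are invented for illustration and are not the authors' actual parameter counts:

```python
# Schematic bookkeeping (invented sizes) for the hyper-transforming setup:
# only the replacement hyper-decoder contributes trainable parameters.
components = {
    "encoder":         {"params": 30_000_000,  "trainable": False},  # frozen, pre-trained
    "diffusion_prior": {"params": 270_000_000, "trainable": False},  # frozen, pre-trained
    "hyper_decoder":   {"params": 8_000_000,   "trainable": True},   # trained from scratch
}

trainable = sum(c["params"] for c in components.values() if c["trainable"])
total = sum(c["params"] for c in components.values())
print(f"training {trainable / total:.1%} of all parameters")
```

In a PyTorch implementation this would correspond to setting `requires_grad = False` on the frozen modules' parameters before constructing the optimizer.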
Summary: The authors propose a novel method for generating the parameters of implicit neural representations (INRs) representing real data. They use a latent diffusion framework, which first trains a VAE to learn a rich latent representation of data, then trains a diffusion generative model on the learned representations to generate new samples. The main innovation is the introduction of a hyper-transformer decoder, which uses an encoder-decoder transformer network to translate from encoded latent representations to INR parameters. Besides training the VAE and diffusion from scratch, the authors also introduce a hyper-transforming paradigm, in which a pre-trained encoder and diffusion model are frozen and only the hyper-decoder weights are updated. Experiments demonstrate that the proposed method performs comparably to baselines.

#### Updates after rebuttal period

The authors have made sincere efforts to address the points I raised, and the new changes and additions to the manuscript strengthen the writing and clarify the mathematical details. In light of these revisions, I will raise my original review score.

Claims And Evidence: The claims are backed up.

Methods And Evaluation Criteria: The methods appear sound.

Theoretical Claims: There are no theoretical claims.

Experimental Designs Or Analyses: Experiments appear sound.

Supplementary Material: I did not review the supplementary material.

Relation To Broader Scientific Literature: The field of diffusion models, and LDMs in particular, has been growing as the de facto most popular method of generative modeling. While normally restricted to fixed grids of pixels, the introduction of implicit neural representations to this modeling paradigm represents an important shift toward a more flexible representation of the inputs and outputs.

Essential References Not Discussed: N/A.

Other Strengths And Weaknesses:

## Strengths

- The paper is well-written.
There is a clear flow of ideas from background to motivation which sets the stage for the explanation of the main innovations and the results. In addition, the authors take care to explain most of their design choices and the mathematical objects that comprise these designs to clarify their purpose (with a few mistakes and/or errors, as explained in the weaknesses section).
- The hyper-transformer decoder is a well-motivated way to convert latent representations to INR parameters. Specifically, the use of learnable template weights and biases is a clever way to reduce computational and memory complexity.
- The hyper-transforming training paradigm allows the re-use of powerful, existing models trained on large datasets. This can drastically reduce the computational requirements of the method compared to training from scratch.

## Weaknesses

- Since this is a novel idea, the authors should make the notation as easily understandable as possible - especially for the hyper decoder (HD). However, the repeated usage of, e.g., $\mathbf{W}$ for weight matrices with only subscripts and superscripts to differentiate between the types of weights makes understanding the HD more difficult. This is not a large weakness, as I believe the authors present their method quite clearly otherwise, but may be something to consider for future revisions.
- Similar to the point above, the dimensions of the weights are inconsistent. For example, in line 264 the authors state that the dimensions of $\bar{W}^i$ are $d_{out} \times G$ and Figure 2 shows each $\bar{W}^i$ having sequence length $G$, implying that sequence length corresponds to the number of columns. However, in line 272, $\bar{W}^b$ are stated to have dimensions $d_{in} \times d_{out}$ but the sequence length given in Figure 2 is $d_{in}$, which corresponds to the number of rows instead of the columns. This makes it difficult to reconcile the interactions between the weights and hidden states.
- The right-hand side of Eq (7) is a (normalized) dot product of two vectors, which should result in a scalar value and not in a column vector as desired. If the authors mean to use the dot operator ($\cdot$) as an element-wise product between the two vectors, they should clarify this to avoid confusion.
- The results seem to show that the proposed method is not much better, if at all, than competing baselines. Table 1 indicates that other methods achieve either superior reconstruction accuracy or superior generative ability. The ImageNet samples in Figure 3b appear unnatural and unconverged.

Other Comments Or Suggestions:
- Line 238, right column: "$d_{i}n$" should be "$d_{in}$"

Questions For Authors:
- How did you choose the specific normalization of the output weight matrix as shown in Eq (7)? I.e., why did you choose to normalize the output by the L2 norm of the product of the two input columns?
- In the hyper-transforming training setup, how do you pass the coordinate inputs to the encoder? Pre-trained VAE models used in latent diffusion generally do not take this information as part of their inputs.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful and constructive feedback, and for recognizing the clarity, motivation, and contributions of our work. Your comments helped improve the paper significantly. Below, we address each of the concerns raised in your review.

## On the notation used for weights

We appreciate your comments on the novelty of our weight generation strategy and agree that clearer notation helps with readability. In the revised version, we have clarified the notation as follows:

- The bar in $\bar{\mathbf{W}}$ denotes globally shared, learnable parameters (e.g., $\bar{\mathbf{W}}^\text{i}$ and $\bar{\mathbf{W}}^\text{b}$).
- The superscript $i$ in $\bar{\mathbf{W}}^\text{i}$ refers to the “*input*” to the Transformer Decoder. The superscript $\text{o}$ in $\mathbf{W}^\text{o}$ refers to its “*output*”, which transforms these global input weights by attending to the latent tokens.
- The subscript $l$ in $\mathbf{W}_l$ identifies the target INR layer. We omit this when not needed for clarity.
- The superscript $\text{b}$ in $\bar{\mathbf{W}}^\text{b}$ stands for the “*base*” weights, which serve as the template modulated by $\mathbf{W}^\text{o}$ via our grouping and reconstruction strategy.

These clarifications are now reflected in the manuscript and Figure 2.

## On the correctness of Equation 7 and the weight dimensions

We highly appreciate that you pointed this out; you are completely right. We have corrected the following two issues:

- We corrected the typo in line 272 to have $\bar{\mathbf{W}}^\text{b} \in \mathbb{R}^{d_{\text{out}}\times d_{\text{in}}}$.
- We now use the Hadamard or element-wise product $\odot$ for the weight reconstruction.
In Equation (7), each $w_{\lfloor c / k\rfloor}^{\mathrm{o}}$ (we're skipping bold notation here due to rendering issues) is an output token of the Transformer, and also the $\lfloor c / k\rfloor$-th column of the grouped weight matrix $\mathbf{W}^{\text{o}} \in \mathbb{R}^{d_{\text{out}}\times G}$. The length of the Transformer decoder sequences is thus $L\times G$, which is the total number of grouped columns. We can set the embedding dimension of the Transformer to $d_{\text{out}}$ or project the tokens using one feed-forward layer, depending on the case. Additionally, we corrected the typo in line 238 to have $d_\text{in}$. We hope that after these corrections, the concern is fully addressed.

## On the competitiveness of our method

Our key strength is scalability: we outperform baselines while using fewer hypernetwork parameters. For instance:

- GASP/VAMoH: 25.7M params → 50K INR weights
- $\texttt{LDMI}$: 8.06M params → 330K INR weights (5-layer network)

We also extended Table 1 and added experiments on resolution generalization at [generation](https://anonymous.4open.science/api/repo/LDMI_pre-7F42/file/experiments/figures/super_samples.png?v=485b880b) and reconstruction, including [CelebA-HQ $(256 \times 256)$](https://anonymous.4open.science/api/repo/LDMI_pre-7F42/file/experiments/figures/super_recs_celebahq256.png?v=5cc02656), which are not tackled by prior work. We refer to our [response to Reviewer ER44](https://openreview.net/forum?id=yhgcRwJ9Dn&noteId=ldvx3foEnj), where we include the updated Table. To clarify the context behind other baselines:

- Functa uses test-time optimization, tuning each test sample with ground truth.
- GASP cannot perform reconstructions due to its GAN-based design.

In sum, $\texttt{LDMI}$ offers a strong balance of quality, scalability, and generalization.

## On the choice of the weight reconstruction method

Thank you for this insightful question.
Initially, we adopted a normalization-based reconstruction (inspired by Trans-INR), which worked well for MLP-based INRs. However, this approach led to instability when generating SIREN weights, particularly due to vanishing gradients—hindering generalization in tasks like super-resolution. To address this, we introduced a novel scaling-based reconstruction:
$$
\left(1+w_{\lfloor c / k\rfloor}^{\text{o}}\right) \odot \bar{w}_c^{\text{b}}.
$$
This avoids collapse when $\boldsymbol{w}_{\lfloor c / k\rfloor}^{\mathrm{o}} \approx 0$ and provides stable training for high-frequency signals—leading to the strong results in our revision.

## On coordinate input handling in the hyper-transforming setup

Thank you for raising this question. Our goal is to model data as samples from a stochastic process, using our $\texttt{HD}$ decoder to map latents into functions. In the hyper-transforming setup, we use convolutional encoders (often pre-trained) that implicitly capture coordinate information from structured grids:

- For image and ERA5 data: ResNet encoders
- For 3D occupancy fields: 3D convolutional encoders

The decoder then produces INR weights, enabling evaluation at arbitrary coordinates and decoupling resolution from representation.

---

Rebuttal Comment 1.1: Comment: Many thanks to the authors for their thorough rebuttal and for addressing my questions. I will keep my review and my score the same as before, as I still believe that the paper needs more re-writing before it is ready for publication.

---

Reply to Comment 1.1.1: Comment: Dear Reviewer LB9r,

Thank you again for your thoughtful review and for acknowledging the clarity and contributions of our work. However, we were disappointed to see your score remain unchanged, especially given that your final comment (*“the paper needs more re-writing”*) introduced a concern not mentioned in your original review.
This new claim also stands in contrast with your initial assessment, in which you wrote:

> *“The paper is well-written. There is a clear flow of ideas from background to motivation which sets the stage for the explanation of the main innovations and the results.”*

Throughout the rebuttal, we made substantial efforts to address every point you raised:

- We clarified Equation (7), corrected all identified typos, and clearly explained the use of element-wise operations in the reconstruction.
- We revised the manuscript to improve the clarity and consistency of the weight notation, as you suggested.
- We strengthened our empirical results with new datasets, higher-resolution experiments, and clearer comparisons—demonstrating the scalability and competitiveness of our approach.
- We explained our change from normalization to scaling in the weight reconstruction, which significantly improved model stability with SIREN-based INRs.
- We responded in detail to your question on coordinate inputs in the hyper-transforming setup, clarifying the role of convolutional encoders.

We took your feedback seriously and used it to significantly improve the manuscript. Considering the scope of the improvements made in direct response to your comments, we were surprised that neither feedback nor an updated evaluation was provided. If there are remaining issues beyond those already raised and addressed, we would sincerely welcome the opportunity to respond to them.

Best regards,
The authors.
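The stability argument behind the scaling-based reconstruction discussed in this thread can be checked numerically. The vectors below are toy values, not the model's actual weights:

```python
import math

w_b = [0.5, -1.0, 2.0]        # shared "base" column (toy values)
w_o = [1e-9, 1e-9, 1e-9]      # generated token collapsing toward zero

# Normalized-product rule: keeps only a direction, with an unstable scale.
prod = [o * b for o, b in zip(w_o, w_b)]
norm = math.sqrt(sum(p * p for p in prod))
normalized = [p / norm for p in prod]

# Scaling rule from the rebuttal: (1 + w_o) elementwise-times w_b falls back
# to the base weights as w_o approaches 0, instead of collapsing.
scaled = [(1.0 + o) * b for o, b in zip(w_o, w_b)]

print(scaled)   # approximately equal to w_b
```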
Summary: This paper introduces a new generative framework called Latent Diffusion Models of Implicit Neural Representations (LDMI), which integrates Implicit Neural Representations (INRs) into transformer-based latent diffusion models. The key component is a Hyper-Transformer Decoder (HD) that replaces traditional MLP-based hypernetworks and addresses their limitations in scalability and efficiency. This module generates INR parameters from latent variables. Experiments have shown the effectiveness of the proposed framework.

Claims And Evidence: About the claim of scalability of hyper-transformers: the current version of this paper lacks clear experimental evidence of the scalability of hyper-transformers.

Methods And Evaluation Criteria: Yes. The methods and evaluation criteria make sense.

Theoretical Claims: Yes. I have checked the theory of diffusion models and hyper-nets discussed in this paper.

Experimental Designs Or Analyses: Yes, I checked the experimental design. A key problem with this paper is the lack of comparison to the advanced diffusion models [A, B, C] or a baseline of diffusion models with MLP-based hypernetworks. This comparison is essential to the claim of scalability of hyper-transformers.

[A] Dhariwal P, Nichol A. Diffusion models beat GANs on image synthesis. Advances in Neural Information Processing Systems, 2021, 34: 8780-8794.
[B] Bao F, Nie S, Xue K, et al. All are worth words: A ViT backbone for diffusion models. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023: 22669-22679.
[C] Peebles W, Xie S. Scalable diffusion models with transformers. Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023: 4195-4205.

Supplementary Material: Yes. I have reviewed the supplementary material.
Relation To Broader Scientific Literature: This paper is a new method that introduces hypernets into diffusion models for image generation and upgrades MLP-based hypernets into hyper-transformers.

Essential References Not Discussed: The paper lacks a discussion of the following works on diffusion transformers:

[A] Bao F, Nie S, Xue K, et al. All are worth words: A ViT backbone for diffusion models. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023: 22669-22679.
[B] Peebles W, Xie S. Scalable diffusion models with transformers. Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023: 4195-4205.

Other Strengths And Weaknesses:
Strengths:
- The motivation of using hyper-transformers to address the scalability limitation of traditional hypernets is clear.
- This paper is well-written and easy to follow.

Weaknesses:
- The claim of scalability has not been verified. In line 60, the authors claim that the proposed hyper-transformer decoder can solve the scalability problem of hypernetworks. However, the authors only verify that the hyper-transformer can work on ImageNet in Table 1. There is no evidence of whether a larger hyper-transformer achieves better performance than MLP-based hypernets of a similar size.
- Table 1 only compares the method to Spatial Functa. The authors have not compared their method to the advanced diffusion models [A, B, C] or a baseline of diffusion models with MLP-based hypernetworks.

[A] Dhariwal P, Nichol A. Diffusion models beat GANs on image synthesis. Advances in Neural Information Processing Systems, 2021, 34: 8780-8794.
[B] Bao F, Nie S, Xue K, et al. All are worth words: A ViT backbone for diffusion models. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023: 22669-22679.
[C] Peebles W, Xie S. Scalable diffusion models with transformers. Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023: 4195-4205.

Other Comments Or Suggestions: Please refer to my questions in the “weakness” section.

Questions For Authors: Please refer to my questions in the “weakness” section.

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed and thoughtful feedback, as well as for recognizing the novelty and clarity of our work. Below, we address the key concerns raised.

## On the scope and nature of the contribution

$\texttt{LDMI}$ is not intended to compete with standard diffusion models that operate in pixel space. These models approximate $p(\boldsymbol{y})$ over discrete grids. In contrast, we model $p(\boldsymbol{y}|\boldsymbol{x})$ as a stochastic process, where $\boldsymbol{y}$ is a function represented via INR parameters $\theta$. This defines a substantially richer and more complex generative space—orthogonal to that of the suggested references. To our knowledge, $\texttt{LDMI}$ is the first to use Transformers as decoders to generate INR weights from latent samples in this setting. That said, we appreciate the reviewer’s suggestion and agree that including and discussing these works helps contextualize our contributions. We have added the suggested pixel-based diffusion models [A, B, C] to the Related Work section and explicitly discussed how our problem setting fundamentally differs.

## On baseline comparisons

To provide fair comparisons in the INR setting, we benchmark against the closest functional baselines. In addition, to emphasize the differences with respect to the provided references, we have added super-resolution experiments (e.g., [samples](https://anonymous.4open.science/api/repo/LDMI_pre-7F42/file/experiments/figures/super_samples.png?v=485b880b) and [reconstructions](https://anonymous.4open.science/api/repo/LDMI_pre-7F42/file/experiments/figures/super_recs_celebahq256.png?v=5cc0265)) to show that LDMI generalizes across resolutions without retraining—a key property enabled by its function-based design.
## On scalability of the $\texttt{HD}$ Decoder

While our original submission already compared $\texttt{LDMI}$ against MLP-based hypernetworks (used in all baselines), we have now further strengthened our scalability analysis. The updated Table 1 includes hypernetwork sizes:

| **Model** | **PSNR (dB) ↑** | **FID ↓** | **HN Params ↓** |
|-----------|-----------------|-----------|-----------------|
| **CelebA-HQ (64 × 64)** | | | |
| GASP | N/A | **7.42** | 25.7M |
| Functa | $\mathbf{\leq 30.7}$ | 40.40 | N/A |
| VAMoH | 23.17 | 66.27 | 25.7M |
| **$\texttt{LDMI}$** | 27.72 | 11.08 | **8.06M** |
| | | | |
| **ImageNet (256 × 256)** | | | |
| Spatial Functa | $\mathbf{\leq 38.4}$ | $\geq 8.5$ | N/A |
| **$\texttt{LDMI}$** | 20.69 | **6.94** | 102.78M |

GASP and VAMoH use ~25.7M parameters to generate 50K weights for shallow 3-layer INRs. In contrast, our $\texttt{HD}$ decoder uses **only 8.06M parameters** to generate 330K weights for a deeper 5-layer INR. Despite using 70% fewer parameters, $\texttt{LDMI}$ performs better, scaling to complex signals like [CelebA-HQ $(256 \times 256)$](https://anonymous.4open.science/api/repo/LDMI_pre-7F42/file/experiments/figures/super_recs_celebahq256.png?v=5cc0265)—not tackled by any baseline. To show superiority against latent diffusion with MLPs, as suggested, the table below compares $\texttt{LDMI}$ using either an MLP or our $\texttt{HD}$ on CelebA-HQ:

| Method | HN Params | PSNR (dB) |
| ------ | --------- | --------- |
| $\texttt{LDMI}$-MLP | 17.53 | 24.93 |
| $\texttt{LDMI}$-$\texttt{HD}$ | **8.06M** | **27.72** |

## On the competitiveness of our method

We believe a few clarifications are helpful for contextualizing the results in Table 1. Functa’s high PSNR arises from **test-time optimization**: it fits a separate modulation vector per test image using ground truth, unlike our amortized inference approach.
This undermines a fair comparison, but we include it for completeness and clarify this in the revised manuscript. While GASP reports lower FID on CelebA, it **cannot perform reconstructions** due to its GAN-based design, limiting its use in conditional tasks. Considering these factors, LDMI provides a balanced solution—competitive in both sampling and reconstruction—while being more scalable and efficient.

## Discussion on Diffusion Transformers

We appreciate the suggested references and have added them to the Related Work section. These approaches apply Transformers as denoising backbones in pixel-space diffusion. Our use is fundamentally different: we apply Transformers as hypernetworks to generate INR weights, enabling function-level generation beyond the limitations of discrete grids.
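For reference, the PSNR figures traded back and forth in this discussion follow the standard definition 10 * log10(MAX^2 / MSE). The helper below is a generic sketch of that formula, not either paper's evaluation code:

```python
import math

def psnr(reference, candidate, max_val=1.0):
    # Peak signal-to-noise ratio in dB; higher means a closer reconstruction.
    mse = sum((r - c) ** 2 for r, c in zip(reference, candidate)) / len(reference)
    if mse == 0:
        return float("inf")   # identical signals
    return 10.0 * math.log10(max_val ** 2 / mse)

ref = [0.1, 0.5, 0.9, 0.3]
print(psnr(ref, [0.1, 0.5, 0.8, 0.3]))   # small error -> high PSNR
print(psnr(ref, [0.9, 0.1, 0.1, 0.9]))   # large error -> low PSNR
```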
KernelBench: Can LLMs Write Efficient GPU Kernels?
Accept (poster)
Summary: The major contributions of this paper are as follows:
- This paper introduced a benchmark framework to evaluate how well a modern LLM can write efficient GPU kernels. The core of this benchmark framework consists of 250 tasks with 3 levels of granularity: single primitive, sequence of ops, and the overall model.
- Using the benchmark, this paper evaluated modern LLMs. The conclusion is that there is still large room for improvement, as modern LLMs face challenges in both correctness ratio and efficiency. Experiments show that repeated or iterative sampling may help, but still yields unsatisfactory kernel performance.

Claims And Evidence: The evaluation in this paper well supports the claim that modern LLMs underperform.

Methods And Evaluation Criteria: The design of the benchmark successfully covers basic aspects including throughput and correctness, while describing the tasks with plain text + PyTorch reference code + sample inputs. This design is simple yet effective. In several aspects, the paper should have explored deeper.
- One major concern is letting the LLM generate kernels without hardware information. In real-world scenarios, hardware specs like the number of registers, shared memory, CUDA compatibility, etc. would critically affect the performance of a kernel.
- In the tasks designed by this paper, it's important for the LLM to decide when and how to select a subset of ops to fuse (write a kernel for), e.g., fusing the attention block, but not the entire transformer block or even the entire model. This paper does not show much information on this, and it's unclear whether the low performance of LLMs, especially in task 3, stems from writing low-performance models because of poor CUDA optimizations or simply from writing kernels for blocks that shouldn't be fused.

Theoretical Claims: Benchmark paper, no theoretical claims.

Experimental Designs Or Analyses: The evaluation of correctness and efficiency is well designed.
One concern mentioned above is the lack of evaluation on whether the decisions about which subsets of ops to fuse are correct. Moreover, it would be beneficial to compare with some SoTA auto-fusion methods. Supplementary Material: The content in the appendix supports the main content well. Relation To Broader Scientific Literature: This benchmark would facilitate research on automatic GPU code generation, which is underexplored but attracting increasing attention. Essential References Not Discussed: There's a line of research on automatic kernel generation solving the same problem but using compiler-based methods like AStitch (ASPLOS'22), Welder (OSDI'23), ROLLER (OSDI'22), or older foundational works like TVM (OSDI'18). Other Strengths And Weaknesses: Strengths: A benchmark measuring the basics of LLM GPU code generation. Weaknesses: See the methods and experiments sections. Other Comments Or Suggestions: - It would be good for the paper to provide more details on the designed tasks, especially for levels 2 & 3, so readers can understand the coverage of these tasks. Questions For Authors: - In the benchmark design, how does the LLM make decisions about which part of the computation to generate a kernel for? Do models tend to generate a single kernel for all computation? - Would it be more beneficial to use a DAG representation instead of a PyTorch reference code representation? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Updated Paper: https://storage.googleapis.com/anonymous-files/kernelbench.pdf We thank you for appreciating KernelBench's design and suggesting further improvements. As you noted, automatic GPU code generation is an underexplored area with many interesting research questions; KernelBench facilitates research in this direction as the first benchmark and environment for kernel development, with “simple-yet-effective” task definition and “well-designed evaluation”. In fact, we have already seen enthusiasm from the community, with multiple projects tackling KernelBench through agentic optimization and post-training. **Providing Model with Hardware Information** We totally agree that specifying hardware information is important, as GPU kernels are inherently platform-dependent. In fact, this was already studied in our original submission. In Section 5.2.2, we provided the model with exactly the kind of GPU hardware specifications (see Appendix C.5) that the reviewer described, and found that current models rarely conduct optimization correctly for the underlying hardware when provided with such information. **Clarifying Design Choice to Test Kernel Fusion** In KernelBench the model has full flexibility to decide what subset of operators in the PyTorch reference to optimize and fuse. We believe this is one of the crucial abilities when a model is given distinct or new architectures in real-world settings. KernelBench's 3-level categorization helps disentangle fusion decisions from kernel generation. Level 1 problems (single operators) only test the model's ability to write optimized kernels; Level 2 and 3 problems are designed to additionally evaluate the model's ability to identify and leverage fusion opportunities; Appendix K provides a detailed task breakdown.
**Fusion Patterns in Model Generated Code** To answer your questions about **fusion patterns** in generated kernels, we manually inspected the kernels generated by the best performing model, DeepSeek R1, and provided new analysis in Appendix L. We focus on level 2 problems, which are composed of one main loop (e.g. conv, matmul) and 2-5 epilogue operations (non-linearities, reductions, etc). We observe model-generated code always attempts to generate 1-2 fused kernels per problem. As shown in Figure 19, the fused kernels tend to contain more than half the operators in the program. To explicitly answer your question, only 18% of programs fuse all operators into a single kernel. Regarding your question on the quality of fusion decisions and whether they cause low performance, we analyzed the generated kernels that were slower than PyTorch Eager (as shown in Table 17) and draw two observations: 1) main loop operators (e.g. Conv) were not fused with epilogue operators; 2) the model's attempts to fuse main loop operators (e.g. GEMM + other ops) were not faster than launching highly-optimized cuBLAS kernels. Also refer to **“Analysis of Performance Degradation Cases”** in the response to Reviewer ufH2 for a related study. **Comparison with SOTA Compiler-Based Approach** Thank you for bringing up relevant compiler-based approaches (AStitch, Welder, ROLLER, TVM), which we have added and elaborate on in our updated related works. To directly address your concern, we compare fusion decisions in model-generated kernels with auto-fusion compilers. Since AStitch, Welder, and ROLLER could not be run on KernelBench due to format incompatibilities or outdated support for KernelBench's PyTorch 2.5 / CUDA 12.4 (Appendix B), we focus the comparison on the widely-adopted torch.compile, which achieves SoTA performance (better than TVM, see Table 3 in PyTorch 2 [1], ASPLOS '24) and employs an auto-fusion policy over TorchInductor's define-by-run IR.
We show both fusion decisions of R1 and torch.compile in Table 17. Torch.compile often creates sophisticated fusion patterns by breaking Convolutions or GroupNorm into smaller multi-pass kernels that compute partial results and statistics in parallel — behavior that R1-kernels rarely exhibit. **DAG Representation** Per your suggestion, we conduct experiments on using a DAG representation (ONNX, torch.fx graph) “instead” of a PyTorch Reference, which might help highlight fusion opportunities. We explored this in Appendix M.2 and found that DAG representations cause output mismatch issues on problems that succeed with PyTorch representations – see response to reviewer ufH2 for details. We appreciate the comprehensive comments and hope that our additional experiments, analysis of results, and discussion addressed your concerns. We hope you find the paper significantly improved and consider reflecting this in your final score. [1] PyTorch 2: Faster Machine Learning Through Dynamic Python Bytecode Transformation and Graph Compilation. ASPLOS '24. https://doi.org/10.1145/3620665.3640366 --- Rebuttal Comment 1.1: Comment: Thanks for the detailed study on fusion and failure patterns, I'll update my score to 3.
Summary: This paper introduces KernelBench, a benchmarking framework designed specifically to evaluate the correctness and performance of GPU CUDA kernels generated by large language models (LLMs). KernelBench compiles a representative set of PyTorch code snippets, categorizing them into three distinct complexity levels based on their granularity. By systematically assessing the correctness and performance gains achieved by various LLMs when converting these PyTorch snippets into CUDA kernels, KernelBench provides a comprehensive evaluation of the kernel generation capabilities across multiple prominent LLMs and hardware platforms. Claims And Evidence: While the majority of the claims in this paper are well-supported by strong evidence, there are areas where the arguments and evidence could be further strengthened: - This paper claims to propose a benchmark for evaluating the performance of LLMs in generating CUDA kernels. While the evaluation and analysis are indeed quite detailed, the results show numerous instances of performance degradation or no change. Providing a more comprehensive analysis of cases where LLM-generated kernels result in performance degradation would offer a more balanced and realistic portrayal of LLMs' capabilities in this domain. Methods And Evaluation Criteria: **Methods and Potential Issues:** - This paper introduces KernelBench to evaluate the kernel generation capabilities of LLMs. However, the design of KernelBench primarily focuses on assessing the translation of PyTorch code into CUDA kernels, which essentially evaluates code translation abilities. Including a broader range of code or natural language to CUDA kernel translations would significantly enhance its value. - KernelBench appears to be more suited for generating raw CUDA C++ code. In reality, LLMs might have the potential to leverage other general-purpose tools (such as Triton, CUTLASS) for kernel generation.
Exploring these possibilities in the paper would make the results more comprehensive. **Evaluation Criteria and Potential Issues:** - This paper employs the formula in Line 213 (the $fast_p$ metric) as the standard for evaluating LLM-generated GPU kernels. This formula integrates both correctness and performance dimensions to comprehensively assess the capabilities of LLMs. However, the metric does not account for differences in task complexity. For example, Level 1 tasks (single operations) and Level 3 tasks (full architectures) vary significantly in difficulty, but the $fast_p$ metric does not weight or adjust for these differences. Theoretical Claims: The paper did not make any theoretical claims or proofs. Experimental Designs Or Analyses: I have reviewed the experimental designs and analyses for their soundness and validity, and I found no major issues or concerns to address. Supplementary Material: I have reviewed the supplementary materials, including ablation experiments related to different prompts and hardware configurations, as well as the specific code content generated by the LLMs. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: **Strengths:** - The paper introduces a comprehensive benchmark, KernelBench, designed to evaluate the ability and potential of Language Models (LMs) in generating efficient GPU CUDA kernels. - The authors propose a novel metric, $fast_p$, which combines both correctness and performance to evaluate the quality of generated kernels. This dual-focus approach offers a more holistic and nuanced assessment compared to traditional metrics that rely solely on correctness or performance. The analysis of mainstream large models using this metric provides valuable insights into their capabilities in generating CUDA kernels. - The paper conducts a wide range of experiments across multiple dimensions, including prompt content, hardware types, and operator categories.
**Weaknesses:** - The paper focuses exclusively on evaluating LMs' ability to generate raw CUDA C++ code. However, it does not explore the potential of LMs to leverage high-performance libraries such as CUTLASS or Triton, which could significantly enhance kernel performance. Investigating these tools could reveal the upper bounds of LLM-generated kernel performance and provide a more comprehensive evaluation. - While the paper examines the impact of different GPU parameters, it does not delve into architecture-specific optimizations. Modern GPUs, such as those based on Ampere or Hopper architectures, offer unique features like asynchronous memory access and warp specialization. Incorporating experiments that utilize these architecture-specific optimizations could unlock further performance gains and provide a more complete picture of LLM capabilities. - The paper does not address the design of different data types, which are closely related to Tensor Core utilization in CUDA. The benchmark also lacks tasks that specifically target Tensor Core code. Given the importance of Tensor Cores for mixed-precision models (e.g., FP16, FP8) commonly used in SOTA LLMs, incorporating Tensor Core-specific designs would significantly enhance the relevance and value of the benchmark. Other Comments Or Suggestions: The optimization prompts for different levels of tasks may need to be differentiated. For instance, for Level 1 tasks (operator-level generation), it might be necessary to guide the LLM to generate lower-level CUDA code, essentially rewriting the entire operator. On the other hand, for Level 3 tasks (graph-level generation), it may not be necessary for the LLM to generate CUDA code for all operators. Instead, it might be more effective to guide the LLM to rewrite parts of specific PyTorch functions within the network to achieve optimization. Questions For Authors: See Weaknesses and Comments. Code Of Conduct: Affirmed. Overall Recommendation: 3
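Since the review above centers on the $fast_p$ metric, here is a minimal sketch of it as described in the paper: the fraction of tasks whose generated kernel is both correct and more than p-times faster than the PyTorch baseline. The task records below are hypothetical, not drawn from KernelBench itself.

```python
def fast_p(results, p):
    """results: list of (correct: bool, speedup: float), one per task.

    Returns the fraction of tasks whose kernel is correct AND achieves
    a speedup strictly greater than the threshold p.
    """
    if not results:
        return 0.0
    hits = sum(1 for correct, speedup in results if correct and speedup > p)
    return hits / len(results)

# Hypothetical per-task outcomes: (correct?, speedup over PyTorch baseline)
tasks = [(True, 1.4), (True, 0.7), (False, 2.0), (True, 1.1)]

# fast_{p=0} reduces to the plain correctness ratio, since any measured
# speedup of a correct kernel is positive.
assert fast_p(tasks, 0) == 0.75  # 3 of 4 kernels are correct
assert fast_p(tasks, 1) == 0.5   # 2 of 4 are correct and beat the baseline
```

Setting p=0 recovers pure correctness, while larger p values tighten the performance bar, which is how the paper reports a single family of numbers covering both dimensions.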
Rebuttal 1: Rebuttal: Updated Paper: https://storage.googleapis.com/anonymous-files/kernelbench.pdf We sincerely thank you for the detailed and insightful review! We are truly encouraged by the positive feedback, particularly that the work is seen as "well-supported by strong evidence" and a "comprehensive benchmark." We especially appreciate the recognition of our "novel metric, fast_p," as offering a "more holistic and nuanced assessment," and that you found value in our "wide range of experiments across multiple dimensions." We were glad to read that you felt our work "provides valuable insights into [LLMs'] capabilities in generating CUDA kernels", which are valuable for AI and HPC communities as we explore automating kernel optimization. These strengths highlight core goals we aimed for, and we're glad they resonated. **Analysis of Performance Degradation Cases** Per your suggestion, in addition to our existing error analysis (4.2) and case study of the fastest kernels (Appendix D), we added a new “Case study: Performance Degradation” (Appendix N), which specifically examines instances where generated kernels underperformed compared to the baseline. Here are our findings: 1. LLM implementations of core ops (matmul, conv) underperform highly-optimized proprietary (e.g. cuDNN) kernels in PyTorch. 2. The LLM correctly identifies fusion patterns, but fused operations (often matmuls) are not efficiently implemented, outweighing benefits from reduced memory access. 3. The LLM blocks better PyTorch native fusion capabilities by generating a custom kernel for a minor task that prevents optimizing across a larger sequence of operations. **Alternative Input Specification** Per your suggestion, we explored in Appendix M using 1) Natural Language, 2) DAGs of program operators as input specification.
On a representative Level 2 problem where the model succeeded with the PyTorch representation, it failed with compilation and logical issues on the natural language representation due to ambiguity about exact behaviors, even when provided with verbose dimension details. DAG representations capture the program execution much better and hint the model to conduct fusions, but going from DAG to kernel directly can lead to logical errors resulting in output mismatch. **Using Libraries & DSLs** The reviewer raises the point of whether generating code using frameworks like Triton/CUTLASS would be helpful. To address your feedback, we extended KernelBench with a Triton task specification and evaluation backend. As shown in Appendix O Table 20, we found models perform worse when using Triton, both in terms of correctness and performance: fast_1 for DeepSeek R1 drops from 12%, 36%, 2% to 6%, 13%, 2% across the 3 levels respectively. Qualitatively, we found models generate many Triton-related errors, likely due to Triton being a rarer source of training data than CUDA, highlighting potential challenges for using domain-specific libraries. We reiterate that KernelBench's goal is to propose a new benchmark with thorough baseline evaluation as a first step, rather than to solve kernel generation. **Level-Specific Prompting and Scoring** We made a deliberate choice not to explicitly weight fast_p by task complexity (e.g., Level 1 vs. 3). We report the levels separately, and harder tasks are expected to yield lower scores naturally, reflecting greater challenges. This approach is common in other coding benchmarks (LiveCodeBench) that include easy/medium/hard problems without score normalization by difficulty. Regarding level-specific prompting, our baseline evaluations intentionally used general prompts, rather than the suggested task-specific ones, to evaluate each model's fundamental ability to independently discover and select optimization strategies across task complexities without explicit steering.
**Architecture-Specific Optimizations** Based on your suggestions, we added experiments for eliciting architecture-specific optimizations: Tensor Cores and asynchronous memory transfers (Appendix G.3) on Ampere GPUs, in addition to the existing experiments in Section 5.2. We provided DeepSeek-R1 with examples using wmma and memcpy_async instructions, on simple KernelBench matrix multiply problems in FP16 (compatible with Tensor Cores). We observed that the model attempted to apply yet struggled to utilize those advanced instructions. Among the 5/17 correct kernels that use WMMAs, successfully leveraging Tensor Cores did not lead to better performance over PyTorch. No kernels used memcpy_async correctly. This highlights that successfully utilizing hardware features remains challenging for models, and KernelBench provides a playground for the community to develop methods that address this limitation. Once again, we thank you for your valuable time and feedback. We hope that our additional experiments, analysis of results, and discussion addressed your concerns. We hope you find the paper significantly improved and consider reflecting this in your final score. --- Rebuttal Comment 1.1: Comment: Thank you for your efforts and clarification. Overall, I am leaning toward acceptance and will keep my score unchanged.
Summary: This paper proposes KernelBench, a new benchmark for evaluating LLMs' performance in writing correct and fast kernels. Specifically, KernelBench gathers three different levels of tasks, covering individual operations, sequences of operations, and end-to-end architectures, and introduces a novel fast_p metric to model correctness and efficiency at the same time. KernelBench shows that most frontier models do not perform well in writing kernels, among which state-of-the-art reasoning models perform the best. KernelBench further shows that leveraging feedback is important for reducing execution errors and discovering faster solutions. Claims And Evidence: Most claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: The evaluation metric fast_p makes sense for KernelBench. However, apart from fast_p, which takes both correctness and speedup into consideration, I would recommend adding two separate metrics for correctness and speedup for clearer demonstration. For the evaluation approach, KernelBench does not handle the issue of cross-platform variation. While the paper claims that it "does not provide ground truth kernels for the tasks since we imagine users benchmarking on a variety of hardware platforms (including new platforms)", it is possible that, in terms of speedup, one model performs best on one platform while another model performs best on a different platform -- this would make the evaluation results (even the model rankings) very hard to reproduce. Theoretical Claims: There is no theoretical claim in this paper. Experimental Designs Or Analyses: There is one weakness regarding the analysis: - There is a lack of comparison to existing programming benchmarks like HumanEval/MBPP and LiveCodeBench.
While it is reasonable that some models are expected to have various ranks across different benchmarks as KernelBench has a different focus, the overall ranking should largely align well with the mainstream benchmarks. Supplementary Material: I did not review the supplementary material. Relation To Broader Scientific Literature: This paper shows that leveraging feedback is important for reducing execution errors and discovering faster solutions, which aligns well with many existing works that focus on more general code generation tasks [1,2]. [1] Chen, Xinyun, et al. "Teaching large language models to self-debug." arXiv preprint arXiv:2304.05128 (2023). [2] Xia, Chunqiu Steven, and Lingming Zhang. "Keep the Conversation Going: Fixing 162 out of 337 bugs for $0.42 each using ChatGPT." arXiv preprint arXiv:2304.00385 (2023). Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Updated Paper: https://storage.googleapis.com/anonymous-files/kernelbench.pdf We thank reviewer AoMw for your review! We are glad the reviewer appreciates our "fast_p evaluation metric for KernelBench" and finds that our "claims are supported by clear and convincing evidence." Below, we address your comments regarding the evaluation metrics, platform dependency, and comparison to other benchmarks. **Clarity of fast_p vs. Separate Correctness and Speedup Metrics** "Adding two separate metrics for correctness and speedup" is a great suggestion for added clarity. In our revision (see link above), we included an extended table presenting correctness (equivalent to fast_{p=0}) and geo-mean speedup as separate metrics (Appendix I, Table 16), providing a disaggregated view for these specific aspects. The geometric mean of speedups only includes the correct generations, as fast but incorrect code is not helpful. Thanks for acknowledging that the "evaluation metric fast_p makes sense for KernelBench"! We also reiterate that for kernel generation, speedup and correctness are tightly coupled, motivating our choice to design fast_p. To provide more context on this, we've also added a section discussing our metric design explorations (Appendix I). We hope the combination of the original fast_p and these new separate metrics offers more clarity. **Platform Dependency and Reproducibility** We agree entirely about the platform-specific nature of hardware performance tuning. Thus, it is very important to compare results when controlling for both the input program and the underlying hardware. For instance, our evaluations (as noted in paper Section 4.4 and Appendix G.1) across several hardware platforms, including L40S, A100, and H100 GPUs, revealed reasonable consistency in the kernel generations at Level 1 but more pronounced variation in Level 2.
Other than this hardware evaluation study, most of our experiments in the paper are done on an Nvidia L40S, and we expect the results to be reproducible on this type of GPU. **Comparison to Existing Programming Benchmarks** To address your comment, we've added a new experimental section (see Appendix J) where we compare model performance on KernelBench (KB) with LiveCodeBench (LCB). (We chose LCB as HumanEval performance is quite saturated for current models). We present the relative rankings of models across these benchmarks. Our results show, perhaps unsurprisingly, that models performing well on general coding benchmarks tend to also perform better on KernelBench, but variability in rankings (e.g. o1 ranks 1st in LCB and 2nd in KB Level 1, and R1 ranks 2nd in LCB but 1st in KB Level 1) across different levels of KernelBench suggests that additional skills are required for high performance in kernel-specific tasks – intuitively, this aligns with the major differences between GPU programming and standard programming problems found in popular coding benchmarks. We would also like to highlight that KernelBench is not merely another code generation benchmark; it adds the critical dimensions of performance optimization and hardware awareness, testing a model's ability to generate not only correct code, but also efficient code, which presents distinct challenges. **Relation to Broader Literature of Leveraging Feedback** We definitely see the connection with existing works on leveraging feedback too (we also added citations of works listed by you here)! We believe KernelBench takes this concept into a particularly challenging and impactful domain. Optimizing hardware kernels (notoriously difficult even for human experts) offers tangible real-world benefits (cost and energy savings for AI!), making it a high-stakes environment given the ubiquity and importance of AI systems today.
We intentionally designed KernelBench to facilitate precisely iterative, feedback-driven improvement. By providing rich, actionable feedback signals—clear correctness checks (pass/fail), compilation status, precise runtime measurements, and speedup relative to a baseline—KernelBench creates an environment where AI systems can directly learn from their attempts and refine their solutions. The goal of our baseline results using feedback is to thoroughly characterize the degree to which we can solve KernelBench. We find that despite using these techniques, the best model gets fast_p=1 of only 18% on level 3, showing there's a lot more progress to be made on this benchmark. In this sense, we see KernelBench as a stepping stone for pushing forward research in automated kernel engineering, providing a crucial contribution to the community as a standard evaluation environment. In light of these clarifications as well as new experimental results and modifications to the paper to address your comments, we would really appreciate it if you would re-examine our paper and consider raising your score.
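The disaggregated metrics this rebuttal describes (correctness ratio, equivalent to fast_{p=0}, plus a geometric-mean speedup computed over correct generations only, since fast but incorrect code is not helpful) can be sketched in a few lines. The run records below are illustrative, not KernelBench results.

```python
import math

def correctness(results):
    """Fraction of tasks with a correct generation (fast_{p=0})."""
    return sum(1 for correct, _ in results if correct) / len(results)

def geomean_speedup(results):
    """Geometric mean of speedups over correct generations only."""
    speedups = [s for correct, s in results if correct]  # drop incorrect runs
    if not speedups:
        return float("nan")
    return math.exp(sum(math.log(s) for s in speedups) / len(speedups))

# Illustrative runs: (correct?, speedup); the fast-but-wrong kernel is excluded
runs = [(True, 2.0), (True, 0.5), (False, 8.0)]
assert correctness(runs) == 2 / 3
assert abs(geomean_speedup(runs) - 1.0) < 1e-12  # sqrt(2.0 * 0.5) = 1.0
```

The geometric mean is the natural average for speedup ratios, since it is symmetric between a 2x speedup and a 2x slowdown, which an arithmetic mean is not.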
AdaPTS: Adapting Univariate Foundation Models to Probabilistic Multivariate Time Series Forecasting
Accept (poster)
Summary: The paper introduces AdaPTS, a framework to adapt pre-trained univariate time series models for probabilistic multivariate forecasting. The authors use adapters to project multivariate inputs into a latent space where a frozen pre-trained model is applied independently to each channel. To enforce invertibility, the authors adopt several types of autoencoders. They provide a theoretical analysis of linear adapters and extend the framework to probabilistic forecasting using Bayesian inference. Empirical results on synthetic and real-world datasets demonstrate that AdaPTS improves accuracy and uncertainty quantification on top of the pre-trained Moment model. Claims And Evidence: > Claim in L147: no requirement of fine-tuning due to feature-level transformations From Figure 1, AdaPTS still needs to train learnable transformations separately on time series with different numbers of variates. Methods And Evaluation Criteria: * Evaluations in Table 1 are not comprehensive: (1) while the authors claimed that the proposed method "can be plugged in any foundation model", only one type of pre-trained model is adapted in Table 1. (2) Some baseline datasets are omitted, such as ECL, Traffic, and other ETT subsets. (3) Prediction lengths of {96, 196, 336, 720} are commonly evaluated in previous works, but the forecasting horizons in this paper only include {96, 196}. * Lack of probabilistic metrics (e.g., MASE and WQL) in evaluation. It is unclear whether AdaPTS has actually empowered the pre-trained model with the capability of probabilistic forecasting, and how it performs in that regard. * The evaluation does not explicitly demonstrate that the pre-trained model can benefit from multivariate modeling. Supporting evidence of this concern is the variable performance when applying different autoencoders in Table 1. * Lack of comparison with related works like UP2ME: Univariate Pre-training to Multivariate Fine-tuning as a General-purpose Framework for Multivariate Time Series Analysis.
Theoretical Claims: While the authors aim to enforce invertibility in adapters (Definition 3.1), Assumption 3.3 uses a linear parametrization with a bias term that has no explicit inverse. Also, these derivations based on linear weighting do not contribute much to the paper, because the proposed method does not adopt a simple linear layer. Does the proposed method explicitly maintain invertibility in the learned autoencoders? Experimental Designs Or Analyses: See Methods And Evaluation Criteria. Supplementary Material: I have read the complete results of the experiments. Relation To Broader Scientific Literature: The paper focuses on adapting pre-trained time series models. The authors position their contributions in the usage of adapters to leverage pre-trained FMs for multivariate and probabilistic tasks. Essential References Not Discussed: The authors have adequately discussed related works. Other Strengths And Weaknesses: * Strength: The paper discusses an important problem in adapting univariate pre-trained models for multivariate probabilistic forecasting. * Weaknesses: (1) The proposed method may lack novelty, since the authors adopt existing AEs without further adaptations. (2) I am not clear about the advantage of the proposed method compared to previous works. For example, UP2ME and LoRA can also enhance existing pre-trained models, and both of them, like AdaPTS, still require task-specific fine-tuning. The resulting model may not be applicable to zero-shot forecasting. (3) No evaluation is provided to demonstrate the claimed efficiency of the proposed method. (4) The performance in Table 1 varies considerably when using different AEs. Other Comments Or Suggestions: See above. Questions For Authors: * Results in Table 1 show that some adapters degrade performance. Could the authors provide a more detailed analysis of why this happens? * The calibration results indicate that longer-horizon forecasts tend to underestimate uncertainty.
Are there any strategies the authors could suggest to improve calibration for longer horizons? * The paper focuses on Moment as the foundation model. Have the authors considered applying AdaPTS to other univariate FMs? Code Of Conduct: Affirmed. Overall Recommendation: 1
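To make the invertibility discussion in this review concrete, here is a minimal, hypothetical sketch of the adapter pattern described in the summary: an invertible feature-space map W encodes the multivariate input into latent channels, a frozen univariate forecaster runs on each channel independently, and W's inverse decodes the forecasts back. The last-value "foundation model" and the hand-picked 2x2 W below are stand-ins for illustration, not AdaPTS components.

```python
def matvec(W, x):
    """Apply a matrix (list of rows) to a vector."""
    return [sum(w * v for w, v in zip(row, x)) for row in W]

def forecast_univariate(series):
    """Stand-in for a frozen univariate FM: naive last-value forecast."""
    return series[-1]

def adapted_forecast(X, W, W_inv):
    """X: list of time steps, each a D-dim observation."""
    latent = [matvec(W, x) for x in X]          # encode features per step
    channels = list(zip(*latent))               # split into latent channels
    z_hat = [forecast_univariate(list(ch)) for ch in channels]  # FM per channel
    return matvec(W_inv, z_hat)                 # decode back to D variates

W = [[1.0, 1.0], [0.0, 1.0]]       # invertible adapter (cf. Assumption 3.2)
W_inv = [[1.0, -1.0], [0.0, 1.0]]  # its explicit inverse
X = [[1.0, 2.0], [3.0, 4.0]]

# Sanity check: with a last-value FM and an exactly invertible W, the
# adapted forecast reduces to the last observation, as it should.
assert adapted_forecast(X, W, W_inv) == [3.0, 4.0]
```

The reviewer's question maps directly onto this sketch: with a linear W the inverse is explicit, whereas with autoencoder adapters the decoder only approximates W's inverse, so invertibility is learned rather than guaranteed.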
Rebuttal 1: Rebuttal: We would like to thank Reviewer 4t4s for their detailed feedback and constructive comments. We now address the concerns raised in their review: > Claim in L147: no requirement of fine-tuning due to feature-level transformations The claim in line 147 regarding "no requirement of fine-tuning" pertains specifically to the *pre-trained weights* of the foundation model. As shown in Figure 1b, the weights of the foundation model are kept frozen, while only the lightweight adapter (which is significantly smaller in parameter count than the FM) is trained. > Evaluations in Table 1 are not comprehensive: We agree with the reviewer that the evaluation in Table 1 could be expanded. Our choice of the Moment model for validation was motivated by its widespread use in the literature. However, we are actively expanding the range of foundation models considered, with **Moirai** already incorporated into our framework. Additionally, the datasets and forecasting horizons chosen for the evaluation were not selected arbitrarily but rather reflect a careful selection that highlights the most relevant aspects of our framework. We do recognize the importance of a more comprehensive evaluation and will consider incorporating additional datasets and horizons in future work. > Lack of probabilistic metrics Our current evaluation focuses on comparing AdaPTS to the vanilla Moment model, which provides point forecasts. As probabilistic metrics are ill-defined for deterministic predictors, we chose to report the MSE in Table 1. Nevertheless, we use calibration metrics such as the ECE and reliability diagrams (Figures 5 and 6) to evaluate the probabilistic aspect of our approach. We plan to include more probabilistic foundation models, like **Moirai**, and will incorporate the reviewer's suggestion to report probabilistic metrics for these models. > UP2ME We thank the reviewer for bringing up UP2ME.
While we agree that UP2ME is a relevant work, we believe there are significant differences in our approaches. UP2ME focuses on pre-training a model on univariate time series from a given task, then fine-tuning on multivariate data from the same task. AdaPTS, in contrast, aims to provide a plug-and-play adapter that can be applied to any univariate foundation model without fine-tuning. > Assumption 3.3 In the linear case analysis, the invertibility condition (Assumption 3.2) is imposed on the adapter matrix $W_\varphi$, while Assumption 3.3 pertains to the linear parameterization of the foundation model's predictor, which does not need to be invertible. For autoencoders, the inverse transformation is learned through gradient-based optimization rather than being explicitly imposed. We discuss potential extensions, such as Normalizing Flows, which could be explored to create inherently non-linear and invertible adapters. The primary goal of the linear analysis was to demonstrate that our approach offers solutions better than the identity baseline (the vanilla FM), as evidenced by Proposition 3.4 and Figure 2. > (1) The proposed method may lack novelty We refer the reviewer to our response to reviewer Qhdv. > LoRA can also enhance existing pre-trained models We would like to note that there may be some confusion between us and the reviewer in terms of terminology: by an adapter, we mean a block that operates along the feature dimension and precedes the frozen foundation model, with the goal of modeling channel interdependence and potentially reducing the dimension. In contrast, LoRA fine-tunes the **Foundation Model's weights**, so it doesn't solve the problem of adapting the model to the multivariate setting. Thus, AdaPTS and LoRA are rather complementary and can be used together, if necessary.
In our case, we focus on the frozen foundation model, but performing experiments where the FM's weights are fine-tuned together with an adapter is a good direction for future work.

> some adapters degrade performance

The differences in performance between adapters are due to variations in their architecture (linear vs non-linear, deep vs shallow) and stochastic strategy (VI vs dropout vs deterministic). This makes their performance sensitive to the specific task, the data characteristics, and the convergence of the optimization algorithm during training.

> longer-horizon forecasts tend to underestimate uncertainty

Calibration is a critical aspect of forecasting, especially for longer horizons. Techniques like temperature scaling on a held-out calibration set can be effective in improving calibration and mitigating the underestimation of uncertainty at longer horizons.

> applying AdaPTS to other univariate FMs

We have integrated **Moirai** into our framework and are considering additional foundation models upon acceptance of the paper. We believe these clarifications address the reviewer's concerns, and we would be grateful if the reviewer could reconsider their score in light of the additional insights provided.
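Temperature scaling, mentioned in the rebuttal above as a possible remedy for long-horizon miscalibration, reduces to fitting one scalar on held-out data when forecasts are Gaussian. A minimal illustrative sketch (not part of the paper's pipeline; `fit_temperature` is a hypothetical helper):

```python
import numpy as np

def fit_temperature(mu, sigma, y, grid=np.linspace(0.25, 4.0, 400)):
    # Fit a single scalar tau on a held-out calibration set so that the
    # Gaussian predictive distribution N(mu, (tau * sigma)^2) has minimal
    # negative log-likelihood; tau > 1 widens underconfident intervals.
    def nll(tau):
        var = (tau * sigma) ** 2
        return np.mean(0.5 * np.log(2 * np.pi * var) + (y - mu) ** 2 / (2 * var))
    return min(grid, key=nll)
```

On synthetic data whose true noise is twice the predicted scale, the fitted temperature lands near 2, widening the intervals accordingly.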
Summary: The paper presents AdaPTS, a novel framework for adapting pre-trained univariate foundation models (FMs) to probabilistic multivariate time series forecasting. AdaPTS introduces adapters—feature-space transformations that project multivariate series into latent spaces, where predictions are made independently by the frozen FM. Results are inverted back via decoders. This approach improves forecasting accuracy, uncertainty quantification, and robustness across benchmarks. The paper demonstrates strong empirical results, showing AdaPTS consistently outperforms baseline methods, provides meaningful uncertainty estimates, and effectively reduces dimensionality. Conceptually, AdaPTS bridges representation learning with Bayesian inference, enhancing FM adaptability and interpretability. Claims And Evidence: The paper provides clear evidence supporting its claims about AdaPTS’s improved forecasting accuracy and uncertainty quantification, demonstrated across multiple datasets and adapter configurations. Calibration results indicate room for improvement in uncertainty estimation at longer horizons, weakening the robustness claim. Methods And Evaluation Criteria: The proposed AdaPTS methods and evaluation criteria are well-aligned with the problem of adapting univariate foundation models for probabilistic multivariate time series forecasting. The set of real-world datasets is the standard benchmark in the forecasting literature. Theoretical Claims: Yes. The authors have provided proofs for prop 3.4 and 4.1 for linear and VAE adapters. They both followed standard linear algebra and variational inference principles. Experimental Designs Or Analyses: Yes, the experimental designs are sound. However, there are not many details on the reproducibility of the results, where the provided link to the code does not work. Supplementary Material: Reviewed all parts of supplementary materials. 
They include extensions and details of the experiments and theory from the main text that support the claims. Relation To Broader Scientific Literature: The proposed method directly builds upon recent advancements in foundation models like Moment and Chronos, designed primarily for univariate forecasting. It extends ideas from the literature on adapters used in other domains, such as PCA-based adapters (Feofanov et al., 2024; Benechehab et al., 2025), to enable multivariate forecasting. It also integrates concepts from probabilistic representation learning, leveraging Bayesian neural network ideas (Gal & Ghahramani, 2016) and variational autoencoders (Kingma & Welling, 2013) to quantify uncertainty. Essential References Not Discussed: None Other Strengths And Weaknesses: Weaknesses: (1) the authors didn't specify how the feature interactions can be handled when adapting the univariate forecasting model to a multivariate model; (2) there is not much technical innovation in the proposed method: it basically added an encoder and decoder before and after frozen foundation models followed by applying techniques such as variational inference on obtained embeddings for uncertainty intervals. Other Comments Or Suggestions: Please address the comments in the weakness section. Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 1
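The literature discussion above cites dropout as approximate variational inference (Gal & Ghahramani, 2016). A minimal sketch of how a dropout-style adapter could yield predictive uncertainty at test time, with a stand-in `predictor` in place of the real adapter-plus-FM stack (all names here are hypothetical, not from the paper's code):

```python
import numpy as np

def mc_dropout_forecast(x, predictor, p=0.1, n_samples=100, seed=0):
    # Keep dropout active at inference time: each stochastic pass applies an
    # independent inverted-dropout mask to the input, and the sample mean /
    # standard deviation of the outputs form a predictive distribution.
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_samples):
        mask = rng.random(x.shape) >= p
        preds.append(predictor(x * mask / (1.0 - p)))
    preds = np.stack(preds)
    return preds.mean(axis=0), preds.std(axis=0)
```

With dropout disabled (p = 0) all passes agree and the predictive spread collapses to zero; any p > 0 induces a nonzero spread.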
Rebuttal 1: Rebuttal: We appreciate the reviewer's detailed and insightful feedback. We are particularly grateful for the recognition of our work's strong empirical results and the effectiveness of AdaPTS in improving forecasting accuracy and uncertainty quantification. We would like to clarify and address the raised concerns:

> However, there are not many details on the reproducibility of the results, where the provided link to the code does not work.

As stated in the reproducibility section of our paper, the code will be made publicly available upon acceptance. We used the placeholder URL ("URL hidden for review") as a means of maintaining anonymity during the review process, and we apologize for any confusion due to this practice. However, we have provided relevant implementation details in Appendix C.2 to ensure transparency and facilitate reproducibility.

> (1) the authors didn't specify how the feature interactions can be handled when adapting the univariate forecasting model to a multivariate model

As defined in our paper (Definition 3.1), feature-space transformations ($\varphi$) play a crucial role in capturing channel dependencies. Applying $\varphi$ to a multivariate time series projects the data into a latent space where each component is a nonlinear function of the original features (when using a nonlinear encoder). For instance, PCA (a baseline in our study) transforms the data into a space of linearly uncorrelated components, demonstrating how our approach inherently manages multivariate dependencies. This mechanism is central to AdaPTS and a key aspect of our contribution to multivariate time-series forecasting.

> (2) there is not much technical innovation in the proposed method: it basically added an encoder and decoder before and after frozen foundation models followed by applying techniques such as variational inference on obtained embeddings for uncertainty intervals.
While we acknowledge that autoencoders and variational inference are established techniques, our contribution is methodological and practical. The novelty of AdaPTS lies in the formalization of the probabilistic multivariate time-series adaptation problem, the design choices behind our framework, and the comprehensive analysis of its effectiveness. Multivariate and probabilistic forecasting remain central challenges in time-series research, and our work provides a principled and practical approach that can be applied to real-world forecasting tasks. We believe this adds significant value to the time series community. Given our clarifications, we hope the reviewer recognizes the merit of our contributions and considers adjusting their evaluation accordingly. We appreciate the constructive feedback and welcome further discussion on how AdaPTS enhances the field of foundation models for time-series forecasting.
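The PCA example invoked in the rebuttal above can be made concrete. Below is a minimal sketch of the encode / per-channel forecast / decode pipeline, with PCA as the feature-space transformation and a naive persistence model standing in for the frozen univariate foundation model (function names are illustrative, not from the paper's code):

```python
import numpy as np

def last_value_fm(z, horizon):
    # Stand-in "foundation model": naive persistence forecast per channel.
    return np.repeat(z[-1], horizon)

def pca_adapter_forecast(X, d_latent, horizon, univariate_fm):
    # Encode: center the (T, D) history and project onto d_latent principal
    # directions; forecast each latent channel independently with the
    # univariate model; decode: map latent forecasts back to feature space.
    mean = X.mean(axis=0)
    Xc = X - mean
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # rows = PCA directions
    W = Vt[:d_latent]                                  # (d_latent, D)
    Z = Xc @ W.T                                       # latent series, (T, d_latent)
    Z_hat = np.stack([univariate_fm(Z[:, j], horizon)
                      for j in range(d_latent)], axis=1)
    return Z_hat @ W + mean                            # forecasts, (horizon, D)
```

Any univariate forecaster can be dropped in for `last_value_fm`; the PCA encoder/decoder pair is the simplest instance of an invertible feature-space transformation.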
Summary: The paper introduces AdaPTS, a framework designed to adapt pretrained univariate time series Foundation Models (FMs) to multivariate probabilistic forecasting tasks. The core challenge addressed is the inherent limitation of existing FMs (e.g., Moment, Chronos), which are typically trained on univariate data and struggle with multivariate dependencies and uncertainty quantification. AdaPTS proposes probabilistic adapters (learned feature-space transformations) that project multivariate inputs into a latent space compatible with univariate FMs, process each dimension independently via the frozen FM, and invert the transformation to produce probabilistic forecasts in the original feature space. --- Key conceptual contributions include: (1) an Adapter Framework, defined as invertible transformations $\varphi: \mathbb{R}^{D} \to \mathbb{R}^{D'}$ that map multivariate time series into a latent space where univariate FMs operate. The framework enforces invertibility to ensure predictions can be mapped back to the original space. Two adapter families are explored: Deterministic adapters (e.g., linear autoencoders, deep nonlinear autoencoders), and Probabilistic adapters (e.g., $β$-VAE, dropout as approximate variational inference), which introduce stochasticity into the latent space to capture uncertainty. --- (2) Methodological Insights, including: (2.1) Decoder Dominance: In deterministic adapters, the decoder contributes more critically to performance than the encoder (ablation study in Fig. 7), (2.2) Hyperparameter Trade-offs: For $β$-VAE, higher $β$ values improve disentanglement and calibration but require careful tuning of the likelihood noise scale ($σ$), and (2.3) Scalability: The framework supports zero-shot adaptation (no FM fine-tuning) and generalizes to variable feature dimensions, making it practical for real-world deployment.
--- AdaPTS establishes a principled, modular framework for adapting univariate FMs to multivariate settings while enabling uncertainty quantification. By decoupling feature-space transformations from FM parameters, it offers a scalable solution for real-world applications requiring probabilistic forecasts. The integration of Bayesian principles with deep learning architectures positions the work at the intersection of representation learning and time series analysis, with empirical validation across diverse domains underscoring its practical utility. Claims And Evidence: The work presents clear, convincing evidence for its primary claims, with rigorously designed experiments and theoretically grounded adapter designs. The few underperforming cases (e.g., ExchangeRate) are contextualized but merit deeper investigation. I am providing detailed assessments of the evidence quality for each major contribution as follows: --- (1) Multivariate Foundation Models (FMs) Adaptation via Adapters: The paper provides strong empirical evidence for the core claim that adapters improve multivariate forecasting accuracy. Experiments across four real-world datasets demonstrate consistent MSE improvements over the baseline Moment model in 5/8 tasks (Table 1), with gains of up to 15% on Illness ($H=24$). The synthetic linear FM experiment (Fig. 2) and the nonlinear FM validation (Fig. 8) further corroborate the framework’s ability to learn superior transformations compared to identity/PCA baselines. However, the underperformance on ExchangeRate ($H=192$) remains unexplained beyond dataset-specific characteristics, raising questions about generalizability to high-volatility financial time series. While the authors attribute this to domain shifts, deeper analysis (e.g., feature correlation patterns and noise profiles) would strengthen causal interpretation. 
--- (2) Theoretical Foundations: The closed-form derivation for linear adapters (Proposition 3.4) is mathematically sound under stated assumptions, and the synthetic linear FM experiment (Fig. 2) validates the theory's practical relevance. However, the analysis does not extend to nonlinear FMs like Moment, where adapters are optimized via gradient descent rather than closed-form solutions. The authors implicitly assume that the linear case provides sufficient intuition for nonlinear regimes, but formal approximation guarantees or Lipschitz continuity analyses would bridge this gap. The Bayesian treatment of probabilistic adapters (Proposition 4.1) follows standard variational inference principles, though the absence of posterior contraction rates or PAC-Bayes bounds limits theoretical novelty. --- (3) Uncertainty Quantification: Calibration results (Fig. 5) show that probabilistic adapters produce reasonably calibrated forecasts for short horizons but exhibit overconfidence as $H$ increases. While this aligns with known challenges in long-horizon forecasting, the evaluation is restricted to LinearVAE on ETTh1. A broader analysis across adapter types and datasets (e.g., Weather, Illness) would better substantiate the claim. Furthermore, the lack of comparative baselines (e.g., MC dropout applied directly to Moment) makes it unclear whether the calibration improvements stem from the adapter architecture or merely the introduction of stochasticity. --- (4) Dimensionality Reduction: The claim that adapters enable cost-effective inference is supported by experiments showing that VAE achieves optimal performance with 2 latent dimensions on Illness (Fig. 3), retaining 95.6% explained variance. However, the paper does not quantify computational savings (e.g., FLOPs reduction vs. accuracy trade-offs) or compare against alternative compression techniques (e.g., pruning, quantization). This weakens the practical impact argument for resource-constrained deployments.
--- (5) Latent Space Interpretability: The visualization of latent representations (Fig. 4) provides qualitative evidence that VAE adapters mitigate distribution shifts by enforcing isotropic Gaussian latent spaces. However, the absence of quantitative metrics (e.g., Maximum Mean Discrepancy between train/test embeddings) limits the strength of this claim. Additionally, the analysis does not explore whether the structured latent space improves robustness to adversarial perturbations or out-of-distribution inputs. --- (6) Ablation Studies: The hyperparameter analysis (Fig. 6) and component ablation (Fig. 7) are methodical, revealing key insights, such as Decoder dominance in deterministic adapters suggests that feature recombination is more critical than encoding for forecasting, and Higher $β$ in $β$-VAE improves disentanglement and calibration but requires careful tuning of $σ$. While informative, the ablation scope is narrow. For instance, the study does not explore the impact of adapter depth/width in nonlinear architectures or the role of pretraining data diversity in FM compatibility. --- It is also worth mentioning that the authors transparently acknowledge limitations, including (1) Restriction to Moment (The framework’s compatibility with other FMs (e.g., Chronos, Moirai) remains unverified), (2) Calibration gaps (No post-hoc recalibration or adaptive noise scaling is attempted), and (3) Normalizing Flows (Optimization challenges with invertible flows are noted but not resolved). Methods And Evaluation Criteria: The methodological and evaluation framework of AdaPTS demonstrates substantial technical merit in addressing multivariate probabilistic forecasting via univariate foundation models (FMs), though certain design choices invite deeper scrutiny. I am providing a critical analysis of its alignment with the problem's requirements. 
--- Methodological Appropriateness: The core innovation - probabilistic adapters as invertible latent-space transformations - directly tackles the dimensionality mismatch between univariate FMs and multivariate inputs. By decoupling feature-space projections ($φ$) from FM inference, AdaPTS achieves three critical objectives: (1) Zero-shot compatibility: Avoids FM fine-tuning, preserving pretrained representations while adapting to new tasks - a necessity given the computational impracticality of retraining large FMs. (2) Uncertainty propagation: Stochastic adapters (e.g., LinearVAE) inject Bayesian principles into deterministic FMs like Moment, enabling probabilistic forecasts without architectural changes. (3) Dimensionality reduction: Learned latent spaces (e.g., VAE with $D' = 2$ on Illness) reduce inference costs while preserving performance, addressing real-world deployment constraints. The theoretical analysis for linear adapters (Proposition 3.4) is mathematically rigorous under stated assumptions, with synthetic experiments (Fig. 2) validating closed-form solutions outperforming identity/PCA baselines. However, the extension to nonlinear FMs relies on gradient-based optimization without formal guarantees (e.g., Lipschitz continuity or approximation bounds), leaving a gap between linear theory and nonlinear practice. While common in deep learning, explicit analysis of how nonlinear adapter architectures interact with FM inductive biases (e.g., Moment’s transformer backbone) would strengthen claims of generalizability. The Bayesian treatment of adapters via variational inference (Proposition 4.1) aligns with best practices but introduces limitations such as $β$-VAE’s isotropic priors oversimplifying latent dependencies, potentially limiting cross-channel interaction modelling (evidenced by degraded performance on ExchangeRate), and Calibration gaps for long horizons (Fig. 
5) suggest overconfidence, yet the absence of comparisons against ensemble methods or deep kernel learning obscures whether improvements stem from adapter architecture or stochasticity alone. --- Evaluation Criteria Strengths and Gaps: The benchmark selection (ETTh1, Illness, Weather, ExchangeRate) spans energy, healthcare, and finance, ensuring domain diversity. However, key limitations emerge: (1) Temporal distribution shifts: While latent-space visualizations (Fig. 4) qualitatively demonstrate mitigated shift, quantitative metrics (e.g., Maximum Mean Discrepancy) are absent, weakening robustness claims. (2) Volatility analysis: ExchangeRate's underperformance (Table 1) is attributed to volatility but lacks analysis of heteroskedasticity or leverage effects, which are critical in financial data. A volatility-aware adapter ablation would clarify failure modes. (3) Metric limitations: MSE/MAE focuses on point forecasts, neglecting probabilistic metrics like CRPS or sharpness. Reliability diagrams (Fig. 5) assess calibration but omit quantile coverage statistics, limiting uncertainty quantification depth. Baseline comparisons against PCA and identity adapters are informative but incomplete. State-of-the-art multivariate transformers (e.g., Crossformer) and FM-specific adaptations (e.g., Moirai) are noted but not benchmarked, leaving open questions about relative performance. For instance, Crossformer's explicit cross-channel attention could outperform AdaPTS's latent-space projections on highly interdependent features. --- Empirical Validation and Practical Impact: Experiments demonstrate consistent improvements over Moment in 5/8 tasks (Table 1), with VAE adapters reducing Illness MSE by 15% (2.902 → 2.461). However, the framework's computational overhead is unquantified: while dimensionality reduction (Fig.
3) suggests efficiency gains (e.g., optimal performance with 2 latent dimensions on Illness), wall-clock time or FLOPs comparisons against native multivariate FMs are omitted. This weakens claims about deployability, as adapters introduce additional training/inference costs despite latent compression. --- Theoretical and Practical Trade-offs: The linear adapter analysis assumes full-rank $W_\varphi$ and linear FMs, but real-world FMs (e.g., Moment's transformer) are nonlinear. A perturbation analysis (e.g., Lipschitz constants for $\varphi$/FM compositions) or approximation error bounds would bridge this gap. Similarly, the decoder's dominance in deterministic adapters (Fig. 7) highlights feature recombination as critical, but the absence of adversarial robustness tests (e.g., input perturbation sensitivity) limits insights into representation stability. Theoretical Claims: From my understanding, the theoretical analysis in AdaPTS focuses on two primary propositions: (1) Proposition 3.4 (optimal linear adapter derivation) and (2) Proposition 4.1 (VAE adapter training objective). Now, I am assessing their correctness, assumptions, and limitations, which are aligned with ICML's standard guidelines. Proposition 3.4: Optimal Linear Adapter: The authors claim that for linear foundation models and linear adapters under full-rank assumptions, the closed-form optimal solution is $W_{\varphi}^{*} = (B^{\top}A)^{+}B^{\top}B$, where $A = Y - W_{FM}^{\top}X$ and $B = b_{FM}1^{\top}$. After checking the derivation steps and assumptions provided by the authors, I found them reasonable, but I still think this claim has some limitations. One of them is numerical stability. The regularization term $\lambda I$ added to $B^{\top}A$ (Remark 3.5) is not explicitly justified in the proof but is standard practice to avoid singular matrices. The next limitation is generalizability.
The closed-form solution assumes a linear Foundation Model, but real-world foundation models (e.g., Moment's transformer) are nonlinear. The authors acknowledge this gap but do not provide approximation bounds or Lipschitz continuity arguments for nonlinear extensions. In conclusion, I think the proof is mathematically correct under stated assumptions but lacks theoretical guarantees for nonlinear Foundation models. It'd be nice to hear back from the authors about these limitations. --- Proposition 4.1: VAE Adapter Training Objective: The authors claim that the training objective for VAE adapters maximizes an ELBO-like lower bound. While they provide easy-to-understand derivation steps and assumptions, I think this claim suffers from two limitations. The first is identifiability: their proof does not address whether the latent representation $Z$ uniquely identifies the input $X$, a known challenge in VAEs. The second is approximation quality: the variational posterior $q_{\phi}(Z \mid X)$ is assumed to be flexible enough to approximate the true posterior, but no PAC-Bayes or approximation error bounds are provided. To conclude, the ELBO derivation is technically correct but lacks novelty and fails to address critical Bayesian challenges (e.g., posterior contraction rates). Please note that my concern here is not novelty (while it is essential), but I'd like to hear back from the authors about this limitation, and it would be nice to address it further. Experimental Designs Or Analyses: The experimental design and analysis in AdaPTS exhibit careful construction with notable strengths. I would like to raise a couple of methodological issues that I think should be addressed further. --- First, the evaluation framework lacks comprehensive uncertainty quantification metrics, which undermines the paper's probabilistic claims. While reliability diagrams (Fig.
5) provide qualitative insights into calibration, the analysis is limited to LinearVAE on ETTh1 with no quantitative metrics like CRPS or proper scoring rules. For a paper emphasizing probabilistic forecasting, this represents a significant gap—particularly as calibration deteriorates for longer horizons without proposed remediation strategies. An outstanding experimental design would systematically evaluate calibration across all adapter variants and datasets, benchmark against alternative uncertainty quantification methods (e.g., ensemble approaches), and quantify miscalibration using established metrics. --- Second, the baseline comparison framework insufficiently contextualizes AdaPTS within the multivariate forecasting landscape. The experimental design evaluates against vanilla Moment and PCA adapters but omits comparisons with state-of-the-art multivariate models (e.g., Crossformer, TSMixer) that explicitly model cross-channel dependencies. This limitation is particularly notable for ExchangeRate, where adapters underperform the baseline—suggesting that certain multivariate dependencies require specialized architectures. Without these comparisons, it remains unclear whether adapter-based approaches offer advantages over dedicated multivariate architectures beyond computational convenience. --- Third, despite claims of "cost-effective inference through dimensionality reduction" (Fig. 3), the experimental design lacks rigorous efficiency analysis. While results demonstrate performance preservation with reduced dimensions (e.g., VAE achieving optimal performance with 2 latent dimensions on Illness), no quantitative measurements of computational savings (FLOPs, wall-clock time) or adapter overhead are provided. This omission prevents objective assessment of the practical utility claims, especially since adapter training introduces additional computational costs that may offset inference savings. 
A more complete analysis would quantify these trade-offs across different adapter architectures and deployment scenarios. Supplementary Material: Yes, I thoroughly reviewed the supplementary material, which consists of Appendices A through D. Appendix A contains detailed proofs for the two main theoretical propositions (3.4 on optimal linear adapters and 4.1 on VAE training objectives), showing the mathematical derivations that support the main paper's claims. Appendix B discusses Normalizing Flows as potential adapters and the optimization challenges they present. Appendix C provides comprehensive experimental details, including dataset characteristics (C.1) and implementation specifics like preprocessing steps, training parameters, and hyperparameter optimization (C.2). Finally, Appendix D presents additional experimental results, including Moment's application to synthetic data (D.1) and Mean Absolute Error metrics (D.2) that complement the MSE results in the main paper. The supplementary material provides valuable technical depth, particularly regarding the theoretical foundations and experimental reproducibility. Relation To Broader Scientific Literature: First, the framework's approach to time series foundation model adaptation through invertible feature-space transformations builds upon but significantly extends prior adapter methodologies from adjacent fields. While previous work demonstrated the utility of simple transformations like PCA for classification tasks and model-based reinforcement learning, these approaches yielded limited improvements for forecasting tasks. AdaPTS diverges from these precedents by enforcing an invertibility constraint that enables bidirectional transformation between input and latent spaces, a crucial innovation for forecasting. 
This represents a fundamentally different architectural approach compared to alternatives like Moirai, which handles multivariate inputs through flattened channels but suffers from quadratic memory complexity as dimensionality increases. The paper's systematic comparison between learned transformations and static ones (e.g., PCA) demonstrates the limitations of prior art in capturing the complex, task-dependent relationships in multivariate forecasting. --- Second, the theoretical foundations developed for adapters connect previously disparate research threads in representation learning and uncertainty quantification. The closed-form solution for linear adapters (Proposition 3.4) provides a mathematical foundation that explains when and why adapters can outperform identity mappings—addressing a key gap in the empirical findings. More significantly, the paper's Bayesian treatment of adapters extends the partially stochastic Bayesian neural network framework to the specific constraints of foundation model adaptation. While prior work established dropout as an approximate variational inference, AdaPTS uniquely applies this principle to the adapter component specifically, enabling uncertainty quantification without modifying the underlying foundation model architecture. This creates a new bridge between deterministic foundation models like Moment and the probabilistic forecasting literature that traditionally required specialized probabilistic architectures. --- Third, the empirical validation offers insights that challenge assumptions in dimensionality reduction for time series. While the standard approach in multivariate time series processing often preserves original dimensionality, AdaPTS demonstrates that performance can be maintained or even improved with dramatically reduced latent dimensions (e.g., VAE achieving optimal performance with just 2 latent dimensions on Illness data that originally had 7 features). 
This finding substantially extends prior work on time series representation learning by showing that learned nonlinear projections can capture cross-channel dependencies more efficiently than traditional methods. Furthermore, the observed decoder dominance in deterministic adapters (Fig. 7) challenges the symmetrical encoder-decoder paradigm prevalent in representation learning literature, suggesting that feature recombination plays a disproportionately important role in multivariate forecasting—a phenomenon not previously documented in time series foundation model research. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Regarding originality, the paper makes a genuinely novel contribution by reframing the multivariate forecasting problem through the lens of feature-space transformations. While adapter architectures have been explored in natural language processing and computer vision, the paper's invertibility constraint introduces a fundamentally different approach tailored to time series data. Most remarkably, the theoretical formulation of adapter optimality conditions (Proposition 3.4) extends beyond empirical validation to provide mathematical intuition for why and when adapters outperform the direct application of foundation models. This theoretical-empirical synergy distinguishes the work from purely engineering-driven approaches in the time series literature. The reconceptualization of dropout as approximate variational inference, specifically within the adapter framework, further demonstrates the creative adaptation of existing techniques to the unique constraints of time series foundation models. --- The paper's significance, while substantial, is somewhat constrained by the experimental scope. The focus on Moment as the sole foundation model leaves open questions about generalizability across the rapidly evolving landscape of time series foundation models. 
This limitation is particularly relevant given the architectural diversity among recent models like Chronos (tokenization-based) and Moirai (mixture-of-experts). Additionally, while the paper convincingly demonstrates improved forecasting accuracy on standard benchmarks, it misses an opportunity to explore challenging real-world scenarios where distributional shifts are more severe or where domain-specific characteristics (such as financial volatility clustering in ExchangeRate) dramatically affect performance. Such extensions would significantly strengthen the practical impact claims. --- Clarity represents both a strength and weakness. The mathematical formalism is precise and well-structured, with propositions clearly stated and adequately proven. The visualization of latent representations (Figure 4) effectively communicates the distribution shift mitigation benefits of probabilistic adapters. However, the paper occasionally suffers from terminology inconsistencies—notably in the interchangeable use of "channels," "features," and "components"—which, while footnoted, creates unnecessary cognitive load. The experimental section would benefit from a clearer exposition of hyperparameter sensitivity, particularly regarding how adapter dimensionality affects computational savings. A quantitative analysis of inference time or FLOP reductions would provide concrete evidence for the "cost-effective adaptation" claims. Finally, the discussion of failure cases (especially on ExchangeRate) remains somewhat superficial, missing an opportunity for deeper insights into adapter limitations. --- In conclusion, I think this work represents a significant conceptual advancement in time series foundation model adaptation, with strong theoretical underpinnings and promising empirical results that could substantially impact how practitioners deploy these models in real-world multivariate forecasting tasks. 
Other Comments Or Suggestions: N/A Questions For Authors: I have a couple of questions for the authors and I am eager to hear back from them during the rebuttals. I would be happy to change my score if the answers are convincing. Here are the questions: --- (1) Foundation Model Generalizability: Your experiments exclusively use Moment as the base FM. How would AdaPTS perform with architectures like Chronos (tokenization-based) or Moirai (mixture-of-experts)? An empirical validation across diverse FM architectures would strengthen the generalizability claims, while also revealing any architecture-specific limitations. I'd like to see the authors' thoughts on it in general. --- (2) ExchangeRate Performance Degradation: The VAE adapter shows significantly worse performance than the baseline Moment on the ExchangeRate dataset (0.455 vs. 0.130 MSE). Could you elaborate on the specific characteristics of financial time series that challenge your method, particularly regarding heteroskedasticity or leverage effects? --- (3) Computational Efficiency Quantification: While you demonstrate dimensionality reduction benefits (e.g., optimal performance with 2 latent dimensions on Illness), no wall-clock time or FLOPs comparisons are provided. Could you quantify the actual computational savings, including adapter overhead? Concrete efficiency metrics would transform the "cost-effective inference" claim from theoretical to practical, substantially strengthening the real-world applicability argument. --- (4) Theoretical Guarantees for Nonlinear Adapters: Proposition 3.4 provides a closed-form solution for linear adapters with linear FMs, but the extension to nonlinear settings relies solely on empirical validation. Have you explored approximation error bounds or Lipschitz continuity properties for nonlinear adapter compositions with nonlinear FMs?
--- (5) Multivariate Architecture Comparisons: Your baselines focus on adapter variants rather than comparing against dedicated multivariate forecasting architectures (e.g., Crossformer, TSMixer). How does AdaPTS compare to state-of-the-art multivariate models that explicitly model cross-channel dependencies? Such comparisons would clarify whether adapter-based approaches offer advantages beyond computational convenience—particularly for datasets like ExchangeRate where adapters underperform. --- (6) Calibration Remediation Strategies: Figure 5 demonstrates increasing calibration degradation for longer horizons. What post-hoc recalibration methods or architectural modifications did you consider to address this systematic overconfidence? Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We deeply appreciate the reviewer's thoughtful and constructive feedback. The recognition of the theoretical novelty and empirical strengths of our approach is greatly appreciated. In this rebuttal, we address the reviewer's specific concerns and questions in detail to further clarify our methodology, experimental choices, and future directions. > (1) Foundation Model Generalizability We agree that testing AdaPTS across diverse FMs is important for validating generalizability. While our experiments currently focus on Moment, we have already integrated **Moirai** into our framework, which we plan to include in the camera-ready version. AdaPTS is designed to be "plug-and-play" with different FMs, and we anticipate similar success with tokenization-based models like Chronos (aside from some technical details, such as differentiability of the foundation model with respect to its input, which is not guaranteed given the discrete nature of the tokenization operation). Further experimental results with additional FMs will be included to strengthen this claim. > (2) ExchangeRate Performance Degradation The degradation on the ExchangeRate dataset, as mentioned by Reviewer 267f, stems from the fact that simple heuristics already yield a strong baseline on this dataset, and that modeling cross-channel dependencies may be of little use there. We acknowledge this limitation and will explore more suitable datasets and benchmarks, where the strengths of our framework will be more readily apparent. > (3) Computational Efficiency Quantification We appreciate your comment regarding computational efficiency. While we demonstrate dimensionality reduction benefits, we have not precisely quantified inference times or FLOPs. However, as the inference complexity is linear in the number of channels, the inference time gain is a multiplicative factor given the reduced number of channels. 
Furthermore, in the response to reviewer 267f, we provide the orders of magnitude of the adapter parameter count, and the time it takes to train on the ETTh1 dataset as an example. > (4) Theoretical Guarantees for Nonlinear Adapters While Proposition 3.4 provides a closed-form solution for linear adapters, we acknowledge that the extension to nonlinear settings has been primarily based on empirical validation. Exploring approximation error bounds or Lipschitz continuity for nonlinear adapters is an exciting future direction, as it would offer a more rigorous understanding of their behavior. However, we believe the linear case analysis provides enough insights to motivate our framework, proving that an optimal solution to the adapter optimization problem exists, beyond the identity baseline. > (5) Multivariate Architecture Comparisons We agree that comparing AdaPTS with other specialized multivariate architectures, like Crossformer and TSMixer, would provide valuable context. However, these baselines result from a different task-specific forecasting paradigm, which we believe is not directly relevant to the problem we raise, namely adapting foundation models. > (6) Calibration Remediation Strategies We appreciate the reviewer's attention to calibration degradation for longer forecasting horizons. Indeed, calibration is crucial for the reliability of probabilistic forecasting, and we observed that AdaPTS may experience a degradation in calibration for longer horizons, as shown in Figure 5. To address this issue, several strategies exist in the literature, including the application of temperature scaling or isotonic regression on a held-out calibration set, which are commonly used to improve calibration in such settings. We thank the reviewer again for their valuable feedback, and we hope these clarifications answer their questions. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for thoughtfully and thoroughly addressing my questions and concerns. 
I found your responses convincing, and I’d be glad to see your work accepted. I'm also curious to see how this line of research evolves in the future. I'm happy to raise my score to a 4, and I wish the authors all the best moving forward.
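The post-hoc recalibration strategies mentioned in the rebuttal above (temperature scaling on a held-out calibration set) are straightforward for Gaussian forecasts. The sketch below is an illustration only, not the paper's implementation: the synthetic overconfident forecasts and the simple grid search over the temperature are assumptions made for the example.

```python
import numpy as np

def coverage(y, mu, sigma, z=1.96):
    """Fraction of targets inside the central 95% Gaussian interval."""
    return np.mean(np.abs(y - mu) <= z * sigma)

def temperature_scale(y_cal, mu_cal, sigma_cal, grid=np.linspace(0.5, 5.0, 200)):
    """Pick a scalar temperature t so that the empirical 95% coverage of
    intervals built from t * sigma is closest to the nominal 0.95."""
    covs = np.array([coverage(y_cal, mu_cal, t * sigma_cal) for t in grid])
    return grid[int(np.argmin(np.abs(covs - 0.95)))]

# Illustrative synthetic forecasts that are overconfident (sigma too small).
rng = np.random.default_rng(0)
y = rng.normal(0.0, 2.0, size=5000)      # true noise std = 2
mu = np.zeros_like(y)                    # forecast means
sigma = np.full_like(y, 1.0)             # forecast stds (overconfident)

t = temperature_scale(y, mu, sigma)      # should land near 2
```

One scalar can be fitted per forecasting horizon, which directly targets the horizon-dependent overconfidence shown in Figure 5; isotonic regression on PIT values is the heavier-duty alternative the rebuttal mentions.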
Summary: The paper introduces a variational-autoencoder-style encoder and decoder around a foundational model to enable it to perform forecasting in probabilistic and multivariate settings. Claims And Evidence: The claim of the paper is that any univariate time-series foundational model can be adapted to perform the much harder problem of multivariate probabilistic forecasting. The paper shows some improvements on a few benchmarks with the MOMENT model, which doesn't fully validate the claim. Methods And Evaluation Criteria: The evaluation setup is limited to only one foundational model. The datasets used are also very limited. Theoretical Claims: The theoretical justification for adding the constant diagonal matrix to the encoder weight matrix is, while trivial, valid and sound. Experimental Designs Or Analyses: The forecasting setup makes sense, but the authors only use a single foundational model for forecasting accuracy evaluation. The benchmarks are very limited. Supplementary Material: The proofs look sound. Relation To Broader Scientific Literature: Due to the lack of sufficient experimental validation of the method, the significance of this work on furthering research on foundational time-series models is unclear. Essential References Not Discussed: Relevant papers are discussed. Other Strengths And Weaknesses: Weaknesses: 1. Lack of enough baselines 2. Lack of base foundational models used (only MOMENT is studied) 3. Many popular forecasting benchmarks are not used. Strengths 1. The method is simple and looks valid Other Comments Or Suggestions: See weaknesses and other comments above Questions For Authors: See weaknesses above Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: We thank Reviewer AXte for their feedback. We appreciate the acknowledgment of our method's simplicity and validity, as well as the soundness of our theoretical justification. We would like to address the concerns raised in the review: > Lack of enough baselines Our primary objective is to enhance the forecasting capabilities of a vanilla foundational model. Therefore, it serves as the most relevant baseline for comparison. Additionally, we have explored non-learning-based adaptation approaches such as PCA, which is included in the paper, as well as SVD decomposition and random projections, which we did not include as they yield results comparable to PCA. > Lack of base foundational models used (only MOMENT is studied) We acknowledge the limitation of evaluating our method on a single foundational model (MOMENT), which we explicitly state in the limitations section of the paper. In response to this concern, we have since experimented with **Moirai** and observed promising results. If the paper is accepted, we will include these findings in the camera-ready version, along with potentially other foundation models as suggested by the other Reviewers (267f). > Many popular forecasting benchmarks are not used. Our paper introduces a framework that enables (i) probabilistic forecasting, (ii) channel mixing, and (iii) dimensionality reduction. We believe that no existing benchmark clearly highlights each of these aspects, so we chose the datasets from the perspective of application diversity (electricity, medicine, weather, and finance). We plan to include more datasets in our study, but identifying the best benchmark for the multivariate setting is an important direction for future work. Meanwhile, on the considered datasets, we validated our claims, demonstrating a methodology to maintain or surpass the original foundational model's performance while using a reduced number of channels and providing uncertainty estimates. 
In summary, we believe our method provides a meaningful contribution to the field by demonstrating how any univariate time-series foundational model can be extended to tackle more complex multivariate probabilistic forecasting tasks. Based on our clarifications, we hope the reviewer acknowledges the value of our contributions and revises their evaluation accordingly.
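The non-learning PCA adapter discussed in the rebuttal above (project the channels onto principal components, forecast each component with the frozen univariate FM, then invert the projection) can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the authors' code: the `univariate_forecast` stub is a hypothetical stand-in for a real frozen FM such as MOMENT.

```python
import numpy as np

def fit_pca(X, k):
    """X: (time, channels). Return the channel mean and top-k principal directions."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]                 # components W: (k, channels)

def pca_adapter_forecast(X, horizon, k, univariate_forecast):
    """Forecast a multivariate series by applying a univariate model to each
    principal component, then mapping the forecasts back to channel space."""
    mu, W = fit_pca(X, k)
    Z = (X - mu) @ W.T                # latent series: (time, k)
    Z_hat = np.stack([univariate_forecast(Z[:, j], horizon)
                      for j in range(k)], axis=1)
    return Z_hat @ W + mu             # back to (horizon, channels)

# Hypothetical stand-in for a frozen univariate FM: repeat the last value.
naive_fm = lambda z, h: np.full(h, z[-1])

X = np.random.default_rng(0).normal(size=(128, 7))   # 7-channel toy series
Y_hat = pca_adapter_forecast(X, horizon=24, k=2, univariate_forecast=naive_fm)
```

The design point of such a baseline is the one the rebuttal makes: only `k` calls to the univariate FM are needed per forecast instead of one per channel, so any learned adapter should at least beat this fixed linear projection.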
Summary: The paper proposed AdaPTS, an adapter for univariate time series foundation models, which makes them multivariate and lets them produce probabilistic predictions. The authors first provide a theoretical framework for adapters for time series foundation models, and discuss many adapters (encoder-decoder combinations) which satisfy these properties. The authors demonstrate the performance of their adapters on a few multivariate time series forecasting datasets using the MOMENT time series foundation model. Claims And Evidence: The paper has two claims. The authors aim to leverage **existing pre-trained univariate FMs** to enable **probabilistic** forecasting for **multivariate time series**. Overall, I believe that the authors present some evidence to support their claims. They evaluate their methods on 4 multivariate long-horizon forecasting datasets, and augment Moment to produce multivariate probabilistic forecasts. I strongly suggest that the authors demonstrate the following to strengthen the paper: 1. **Multivariate Baselines:** Beyond PCA (which is an excellent choice for a strong baseline), the paper does not compare to some other existing ways of imbuing multivariate context to time series foundation models. Please see [1, 2, 3] for some ways to imbue multivariate context to TSFMs. Some of these methods may be used as baselines. 2. **More TSFMs**: The authors do mention this as a limitation. I believe that demonstrating their approach on another TSFM, different in design from Moment, would make the experiments stronger. I would recommend TTMs (the only TSFM not based on the Transformer architecture), or TimesFM (decoder-only, while Moment is encoder-only). 3. **Datasets:** (1) I would encourage the authors to compare their methods on more datasets. The long-horizon forecasting benchmark that the authors used has more datasets, for example ETTh2, ETTm1, ETTm2, Traffic and Electricity. 
But even beyond these datasets, the GIFT-Eval benchmark has a lot of multivariate time series useful for forecasting. (2) Also, these existing datasets have known issues. For example, a very strong baseline on the Exchange Rate dataset is predicting the last time step. Also, some studies, including [1], have shown that these time series datasets do not significantly benefit from modeling cross-channel dependencies. ### References 1. Żukowska, Nina, et al. "Towards Long-Context Time Series Foundation Models With A Handful Of Additional Parameters." NeurIPS 2024 Workshop on Fine-Tuning in Modern Machine Learning: Principles and Scalability. 2. Liu, Mingzhu, Angela H. Chen, and George H. Chen. "Generalized Prompt Tuning: Adapting Frozen Univariate Time Series Foundation Models for Multivariate Healthcare Time Series." arXiv preprint arXiv:2411.12824 (2024). 3. Lee, Seunghan, Taeyoung Park, and Kibok Lee. "Partial Channel Dependence with Channel Masks for Time Series Foundation Models." arXiv preprint arXiv:2410.23222 (2024). Methods And Evaluation Criteria: The proposed methods and the evaluation criteria make sense. Please see comments in the previous section. Theoretical Claims: I did not rigorously verify the proofs or correctness of the claims, but they look reasonable and correct to me. Experimental Designs Or Analyses: The experimental design and analysis are sound. Please see comments in the Claims And Evidence section. Supplementary Material: I reviewed the appendix briefly to look at details of the experimental setup. Please see comments in the Claims And Evidence section. Relation To Broader Scientific Literature: The paper uses univariate foundation models, and adapts them to model multivariate data and produce probabilistic forecasts. The contributions are simple and easy to understand. Essential References Not Discussed: I have mentioned some key references in the Claims and Evidence section. Other Strengths And Weaknesses: Strengths: The paper solves an important problem. 
It is well motivated, well-written, and backed with theoretical insights. Weaknesses: I think the paper needs a few more experiments to highlight the impact of their contributions. Other Comments Or Suggestions: I do not have any other comments or suggestions. Questions For Authors: 1. I wonder how flexible is the parametric distribution of the probabilistic predictions. The authors use a Gaussian Likelihood, but I wonder if you can use mixtures of multiple distributions just like MOIRAI and/or LagLlama do. 2. How does your method compare with conformal prediction to generate prediction intervals? 3. How efficient are the adapters in terms of parameters and runtime? I want to understand how computationally feasible is this adaptation process. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate the thoughtful feedback provided by Reviewer 267f and would like to address the concerns raised in their review. > Multivariate Baselines: Beyond PCA, the paper does not compare to some other existing ways of imbuing multivariate context to time series foundation models. We acknowledge the reviewer's suggestion to include additional baselines beyond PCA. We chose PCA as it aligns closely with our framework's design. We have also experimented with other non-learning-based adapters such as **SVD** decomposition and **Random Projections**, which yielded similar or worse performance compared to PCA. We thank the reviewer for the provided references and will incorporate these suggested baselines in the updated version of the paper. > More TSFMs: The authors do mention this as a limitation. I believe that demonstrating their approach on another TSFM, different in design from Moment would make the experiments stronger. I would recommend TTMs (only TSFM not based on the Transformer architecture), or TimesFM (decoder-only, while Moment is encoder-only). We thank the reviewer for suggesting additional TSFMs to test within our framework as we recognize the importance of demonstrating the benefit of our approach on diverse TSFMs. We have already integrated **MOIRAI** and tested it with our adapters. In addition, we plan to include more foundation models in the future, including TTMs and TimesFM suggested by the reviewer, to better validate our methodology. > Datasets: (1) I would encourage the authors to compare their methods on more datasets. The long horizon forecasting benchmark that the authors used, has more datasets, for example ETTh2, ETTm1, ETTm2, Traffic and Electricity. But even beyond these datasets, the GIFT-Eval benchmark has a lot of multivariate time series useful for forecasting. (2) Also, these existing datasets have known issues. 
For example, a very strong baseline on the Exchange Rate dataset is predicting the last time step. Also, some studies including [1] have shown that these time series datasets do not significantly benefit from modeling cross-channel dependencies. We thank the reviewer for the suggested additional datasets, and we plan to use them to strengthen our experimental evidence. Generally speaking, we believe that selecting the appropriate time series benchmark has been a broader issue that affects the whole time series forecasting community. In the multivariate case, it is even more challenging as no study has been performed to identify datasets with reasonable channel interdependence. In addition, we have experimentally found (Section 3.3) that adapters can still be useful **even in the case when channels are independent**, since they add an additional level of complexity to the architecture. Identifying an appropriate benchmark for channel-interdependence studies is an important direction for future work. > I wonder how flexible is the parametric distribution of the probabilistic predictions. The authors use a Gaussian Likelihood, but I wonder if you can use mixtures of multiple distributions just like MOIRAI and/or LagLlama do. The reviewer raised an important point regarding the expressivity of the fitted probability distributions. In theory, previous work has established the universal approximation property for any conditional density, given sufficient stochasticity introduced early in non-linear models. We aim to mimic this setup in our adapters by incorporating stochastic units in the encoder through Variational Inference (VI) or Dropout as approximate VI. In practice, however, we agree with the reviewer that the Gaussian likelihood may not capture complex, multimodal distributions. Alternatives such as Flow matching or parametric mixtures could be more suitable solutions, and we will explore these in future work. 
> How does your method compare with conformal prediction to generate prediction intervals? We haven't yet investigated conformal prediction for AdaPTS, though we see its potential as a parallel approach to our Bayesian adapters for probabilistic forecasting. We consider it to be an interesting avenue for future research. > How efficient are the adapters in terms of parameters and runtime? I want to understand how computationally feasible is this adaptation process. To address the reviewer's query about computational efficiency, we provide the following details: - **Parameter Counts**: Our adapters introduce a minimal number of additional parameters, for instance, in the ETTh1 dataset, the optimal VAE adapter has 2,659 parameters (1 hidden layer with 64 units). This is significantly less than the full 37.9M params of the Moment small foundation model. - **Training Time**: As an example, the training time for this VAE adapter on the ETTh1 dataset (13k timesteps, 7 features) with a single V100 (32GB VRAM) GPU is **25min**. We appreciate the reviewer's insights and will incorporate the suggested improvements to strengthen our paper. --- Rebuttal Comment 1.1: Comment: Dear Authors, Thank you so much for your response. I enjoyed reading your paper, and will improve my score, with the understanding that you would include insights from some of the experiments that you mentioned in your rebuttal.
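As context for the conformal-prediction question answered above, a split-conformal baseline needs only a few lines: calibrate an interval half-width from held-out absolute residuals of the point forecaster. The residual distributions below are synthetic, illustrative assumptions, not results from the paper.

```python
import numpy as np

def split_conformal_interval(res_cal, y_pred, alpha=0.1):
    """Split conformal prediction: calibrate a half-width q from held-out
    absolute residuals so that y_pred +/- q has >= 1 - alpha marginal coverage."""
    n = len(res_cal)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)   # finite-sample correction
    q = np.quantile(np.abs(res_cal), level)
    return y_pred - q, y_pred + q

# Illustrative synthetic setup: i.i.d. forecast residuals around the point forecast.
rng = np.random.default_rng(0)
res_cal = rng.normal(0, 1, size=1000)       # held-out calibration residuals
y_true = rng.normal(0, 1, size=5000)        # test-time deviations from y_pred
lo, hi = split_conformal_interval(res_cal, np.zeros(5000), alpha=0.1)
cov = np.mean((y_true >= lo) & (y_true <= hi))   # empirical coverage, about 0.9
```

Unlike the Bayesian adapters, this gives distribution-free marginal coverage but a constant-width interval; combining conformal calibration with the adapter's predictive variance (e.g., normalized residuals) would be the natural hybrid.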
Improved Regret Analysis in Gaussian Process Bandits: Optimality for Noiseless Reward, RKHS norm, and Non-Stationary Variance
Accept (oral)
Summary: This paper provides refined regret analyses of maximum variance reduction (MVR)-type algorithms in Gaussian Process (GP) bandits. It first establishes a general upper bound on the maximum posterior variance for the MVR algorithm (Lemma 3.1), and applies it to obtain upper bounds on the cumulative regret and the simple regret in the noiseless setting (Section 4), the bounded RKHS norm setting (Section 5), and the time-varying noise variance setting (Section 6). Across all three settings, the resulting regret bounds improve the existing upper bounds previously established in the literature. Claims And Evidence: All claims are made through theoretical analysis with solid proofs. Although I am not doubting the results, I believe the statement of Corollary 3.2 should be made more explicit regarding the hidden constants. It was stated that those hidden constants have dependence on $\mathcal{X}$, but it is unclear what the dependence is, even in their proofs in Appendix C.2. For example, is the constant finite even if $\mathcal{X}$ is continuous or unbounded? I understand the authors' main goal is to analyze the dependence of regret on $T$, $B$, or $V_T$, but the hidden constants should be stated explicitly somewhere in the paper. Methods And Evaluation Criteria: This paper suggests a novel analysis, not a novel algorithm. The MVR algorithm and its variant, the PE algorithm, are investigated in this paper; these are fairly straightforward and sensible algorithms for GP bandits. As performance metrics, this paper investigated the cumulative regret and the simple regret, which are also standard in the literature. Theoretical Claims: I had a look into the proof of Lemma 3.1, Corollary 3.2, Theorem 4.1 and Theorem 4.2. I couldn’t find a critical error, although the proof of Corollary 3.2 lacks details. Experimental Designs Or Analyses: This paper does not include numerical experiments. 
Supplementary Material: This paper does not have supplementary material to review. Relation To Broader Scientific Literature: The main contribution of this paper is that improved regret upper bounds were established for GP bandits. At the core of the analysis, Lemma 3.1 plays an important role, suggesting improved posterior variance upper bounds for MVR. Although these results only apply to MVR algorithms, it is notable that the resulting regret bounds improve or match the state-of-the-art results, including those for other algorithms. I believe that this work will become one of the key papers in the GP bandit area. I am not so sure about this paper's contribution beyond GP bandits. As mentioned in the paper, this work is partly related to heteroscedastic linear bandits, but I do not see a clear implication. Essential References Not Discussed: None Other Strengths And Weaknesses: Although the main contribution of this paper is theoretical analysis, I believe that it also has practical implications for the implementation of MVR algorithms -- the analysis provides a guideline on how to choose the algorithm parameters $\beta$ or $\lambda$ given the target time horizon $T$ or RKHS norm $B$. However, the analysis only reveals the optimal asymptotic dependence of those algorithm parameters on the problem constants; it would be great if explicit suggestions were given. Other Comments Or Suggestions: - It would also be great if the paper included numerical experiments showing that MVR algorithms with the suggested parameter choices actually perform very well. - I am not so sure whether "heteroscedastic" is the right term to describe the non-stationary noise variance setting. I would believe "heteroscedastic" describes the situation where the noise variance depends on the query point. If the algorithm adaptively chooses the query points utilizing the heteroscedastic noise variance, it may violate Assumption 2.2 saying that the noise sequence should be mutually independent. 
I believe the best description would be just "time-varying noise variance". Questions For Authors: In Section 5, the paper analyzes the algorithms with $\lambda$ chosen to be different from the actual noise variance $\rho$, unlike the other sections. Is this something necessary? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for taking the time to review our paper and for the overall positive feedback. We will carefully incorporate the reviewer's feedback in the revision. Below is our answer to the reviewer's question. **In Section 5, the paper analyzes the algorithms with $\lambda$ chosen to be different from the actual noise variance, unlike the other sections. Is this something necessary?** The setting $\lambda = \Theta(B^{-2})$ is necessary to obtain improved dependence against the RKHS norm upper bound $B$. Roughly speaking, our result suggests that we should set smaller (larger) noise variance parameters if the underlying function could be complex (simple). This is intuitively reasonable from the frequentist point of view since the noise variance parameter of GP corresponds to the regularization parameter of the kernel ridge estimate. Namely, it is reasonable to set a suitable regularization parameter depending on the underlying function's complexity to obtain a good kernel ridge estimator and corresponding GP.
Summary: The paper studies the classic problem of Bayesian optimization under the frequentist setting (where the target function lies in the RKHS of a known kernel). It derives a novel bound on the maximum variance after $T$ observations (Lemma 3.1). This bound has several consequences and applies to various Bayesian optimization problems. In the noiseless setting, it establishes near-optimal cumulative regret guarantees, addressing a COLT open problem. In the noisy setting, it improves regret bounds with respect to the RKHS norm of the target function. The results are also extended to the heteroscedastic noise setting. Claims And Evidence: The proofs are provided and seem correct to me. Methods And Evaluation Criteria: The paper is of theoretical nature. Theoretical Claims: I checked the proof of Lemma 3.1 in detail. Other results are sound. Experimental Designs Or Analyses: Not applicable! Supplementary Material: I reviewed some proofs in detail and skimmed through the rest. Relation To Broader Scientific Literature: The paper improves regret bounds in a well-studied problem and addresses a COLT open problem. Essential References Not Discussed: The bounds on information gain given in Section 2 seem to be from the following paper, but the reference is not provided: On information gain and regret bounds in gaussian process bandits, Vakili, S. and Khezeli, K. and Picheny, V., AISTATS 2021. Other Strengths And Weaknesses: The paper is clearly written, with rigorous mathematics, and addresses regret bounds in BO under the frequentist setting, improving upon existing results. The literature on BO with heteroscedastic noise, such as Makarova et al. (2021), does not seem to be well discussed. Other Comments Or Suggestions: Typos in the introduction: "fileds", "Establised". Check for more typos! Questions For Authors: 1. 
Are there additional technical challenges that should be emphasized, or do the results follow directly from combining Lemma 3.1 with the existing analysis for BO algorithms MVR and PE? 2. Can the authors specifically state the bounds on information gain that they use? 3. Is Footnote 1 on page 5 directly related to the results of this paper? Does Lemma 3.1 have implications for other algorithms such as GP-UCB? Can similar bounds to those in Lemma 3.1 be established for the sum of variances rather than the maximum variance? 4. Can the authors provide more technical details on the comparison with Flynn and Reeb (2024)? While Lemma 3.3 appears to be a crucial step in both papers, what makes the results of this paper stronger? Is the improvement related to bounding maximum variance instead of the sum of variances? Ethical Review Concerns: Not applicable! Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for taking the time to review our paper and for the overall positive feedback. Below are our answers to the reviewer's questions. **1. Are there additional technical challenges that should be emphasized, or ...** We clarify the technical challenges and novelty of the results in Sections 4-6 below. - The results in Section 4 directly follow from Lemma 3.1 with the existing analysis; therefore, once we obtain Lemma 3.1, there is no technical challenge. - The novelty of the results in Section 5 is the observation that the combination of the proper setting $\lambda^2 = \Theta(B^{-2})$ and Lemma 3.1 results in the optimal dependence on the RKHS norm $B$ in the regret. However, the proof itself is straightforward and simple once we find the proper scaling of the noise variance setting $\lambda^2 = \Theta(B^{-2})$. - The analysis in Section 6 contains non-trivial technical challenges in the proof, while we omit the discussion in the paper due to space limitations. Specifically, the time-varying variance proxies make the naive application of statement 2 in Lemma 3.1 difficult. One naive way is to set $\tilde{\lambda}_t^2$ as the true variance proxy $\rho_t^2$; however, this strategy breaks the condition related to the maximum information gain in Lemma 3.1 when $\rho_t^2$ becomes too small. Another idea is to introduce a lower threshold on the variance proxy so that the condition of Lemma 3.1 always holds, but this strategy does not lead to the nearly-optimal dependence in the regret since the upper bound of the information gain is quantified only by the minimum of the elements in the noise variance parameter matrix. Roughly speaking, the difficulty is that the existing theory of the information gain cannot capture the total noise level. For example, even if the cumulative variance proxy is low, the information gain upper bound becomes very large when even one of the variance proxies is very small. 
To overcome the above issues, we set $\tilde{\lambda}_t^2$ based on carefully chosen thresholds depending on the cumulative variance $V_T$ (Line 1221 for the SE kernel, Line 1268 for the Matern kernel). We believe that our paper's solution to the above-described issues is highly novel. **2. Can the authors specifically state the bounds on information gain that they use?** We use the result of Vakili et al. (2021) (Corollary 1). As the reviewer kindly pointed out, we will add the citation in the revision. **3. Can similar bounds to those in Lemma 3.1 be established for the sum of variances rather than the maximum variance?** To our current understanding, we cannot generalize our result to the sum of the posterior variances or to the maximum variance of algorithms other than MVR, because the effect of $\mathcal{T}^c$ does not seem negligible (see also the answer to the next question for details). The only thing we can obtain is an algorithm-independent "minimum" posterior variance upper bound, as in Eq. (4). Please also see our answer #2 in the rebuttal for Reviewer 7Bi6. **4. Can the authors provide more technical details on the comparison with Flynn and Reeb (2024)?** The reason why we can obtain a stronger result than that of Flynn and Reeb (2024) is that the MVR algorithm does not have to care about the regret incurred from the time subset $\mathcal{T}^c$, which is defined in Lemma 3.3. Flynn and Reeb (2024) use the elliptical potential count lemma for the analysis of GP-UCB; however, the resulting cumulative regret upper bound suffers from an approximately $O(\min\\{\gamma_{\gamma_T(\lambda^2)}(\lambda^2), \gamma_T(\lambda^2)\\} + \sqrt{\lambda^2 T \gamma_T(\lambda^2)})$ regret. While the second term has the favorable dependence on the noise variance parameter (as with our upper bound in Eq. (4)), the existence of the first term, which comes from $\mathcal{T}^c$ and the elliptical potential count lemma (Lemma 3.3), becomes problematic. 
Specifically, if we desire optimal regret in our paper's settings (such as the noiseless setting), we have to decrease the noise variance parameter rapidly so that the second term $\sqrt{\lambda^2 T \gamma_T(\lambda^2)}$ matches the lower bound. This can be done by setting the decreasing noise variance parameter $\lambda^2$ such that the maximum information gain $\gamma_T(\lambda^2)$ increases almost linearly with $T$; however, in this case, the resulting regret becomes far away from the desired result due to the effect of the first term $O(\min\\{\gamma_{\gamma_T(\lambda^2)}(\lambda^2), \gamma_T(\lambda^2)\\})$. On the other hand, as described in the proof sketch of Lemma 3.1, we can obtain the maximum posterior variance bound of MVR without the effect $|\mathcal{T}^c|$ by bounding the maximum variance with the average only on the favorable subset $\mathcal{T} = [T]\setminus \mathcal{T}^c$ (Line 226-230 in the right column). Therefore, we can obtain stronger results than those in Flynn and Reeb (2024) by bypassing the above-described problematic behavior of their analysis with MVR.
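For readers less familiar with the algorithm discussed throughout this exchange, MVR can be sketched directly from the standard GP posterior variance formula $\sigma_t^2(x) = k(x,x) - k_t(x)^\top (K_t + \lambda^2 I)^{-1} k_t(x)$: at each round, query the point of maximum posterior variance. The RBF kernel, lengthscale, and candidate grid below are illustrative assumptions, not choices from the paper.

```python
import numpy as np

def rbf(A, B, ell=0.2):
    """Squared-exponential kernel matrix between 1-D point sets A and B."""
    return np.exp(-((A[:, None] - B[None, :]) ** 2) / (2 * ell ** 2))

def posterior_var(X_obs, X_cand, lam2=1e-2, ell=0.2):
    """GP posterior variance at candidates. It depends only on the queried
    inputs, never on the observed values, so MVR is a pure-exploration rule."""
    if len(X_obs) == 0:
        return np.ones(len(X_cand))          # prior variance k(x, x) = 1
    K = rbf(X_obs, X_obs, ell) + lam2 * np.eye(len(X_obs))
    Kc = rbf(X_obs, X_cand, ell)             # shape (n_obs, n_cand)
    return 1.0 - np.sum(Kc * np.linalg.solve(K, Kc), axis=0)

# MVR: repeatedly query the point of maximum posterior variance.
cand = np.linspace(0.0, 1.0, 200)
obs = np.empty(0)
for _ in range(15):
    obs = np.append(obs, cand[np.argmax(posterior_var(obs, cand))])
# The maximum posterior variance over the domain shrinks as MVR spreads its
# queries; Lemma 3.1 in the paper quantifies how fast this quantity decays.
```

Note how the noise variance parameter `lam2` enters only as the ridge term, which is exactly the GP-noise-as-regularizer correspondence invoked in the earlier rebuttal about the choice $\lambda^2 = \Theta(B^{-2})$.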
Summary: The paper develops a novel bound on the posterior variance of the Gaussian process. This bound is used to obtain tighter noise-free simple/cumulative regret bounds for Bayesian optimization algorithms. Furthermore, it facilitates establishing novel regret bounds for MVR/PE algorithms in both the stationary and nonstationary noise settings.

Claims And Evidence: The claims made in this paper seem to be valid and convincing.

Methods And Evaluation Criteria: No numerical experiments.

Theoretical Claims: I went through most of the appendix to check the validity of the manuscript. It seems to be solid work.

Experimental Designs Or Analyses: No numerical experiments.

Supplementary Material: Yes, I went over the proofs.

Relation To Broader Scientific Literature: This paper provides a novel bound on the posterior variance of the Gaussian process, which can be utilized in many downstream tasks involving Gaussian processes.

Essential References Not Discussed: The manuscript contains a broad range of relevant literature, including the most up-to-date work.

Other Strengths And Weaknesses: No significant weaknesses. The manuscript contains several contributions advancing the theory of Bayesian optimization: 1) it establishes an upper bound on the posterior variance, 2) it establishes competitive regret bounds for noise-free Bayesian optimization, and 3) it tightens PE/MVR regret bounds under both stationary and nonstationary noise. I believe these contributions are non-trivial and, accordingly, the manuscript deserves to be accepted.

Other Comments Or Suggestions: 1. While I was able to follow the manuscript after spending quite a bit of time, I initially had a hard time following the notation distinguishing the stationary and non-stationary bounds. In Lemma 3.1, it would help readers if the authors added a remark clarifying the notation and the time indices $t$ and $T$. 2. The paper briefly describes the two metrics, simple regret and cumulative regret, in Section 2.
It is worthwhile to distinguish the roles of and relationship between these regrets. For instance, simple regret represents the convergence rate, while the cumulative regret serves as a metric to assess the holistic performance of the algorithm, not just the final output.

Questions For Authors: Is the condition $\mathcal{X} = [0,1]^d$ necessary for Corollaries 6.1 and 6.2? If I recall correctly, Bull's result only assumes $\mathcal{X}$ to be a compact subset (perhaps satisfying the interior cone condition) of the Euclidean space.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for taking the time to review our paper and for the overall positive feedback. We will carefully incorporate your suggestions about the clarity into the revision. Below is our answer to the reviewer's question.

**Is the condition $\mathcal{X} = [0, 1]^d$ necessary for Corollaries 6.1 and 6.2? If I recall correctly, Bull's result only assumes $\mathcal{X}$ to be a compact subset (perhaps satisfying the interior cone condition) of the Euclidean space.**

Although Bull's lower bound for the noiseless setting is applicable to any sufficiently regular compact subset, the lower bound for the noisy setting in Scarlett et al. (2017) explicitly assumes $\mathcal{X} = [0, 1]^d$. We added this condition to be consistent with the setting in Scarlett et al. (2017). We also believe that the result of Scarlett et al. (2017) can be generalized to any compact subset whose packing number behaves as that of $[0, 1]^d$; however, since we are not aware of any formal proof in existing work, we decided to add the condition $\mathcal{X} = [0, 1]^d$ explicitly.

- Scarlett, Jonathan, Ilija Bogunovic, and Volkan Cevher. "Lower bounds on regret for noisy Gaussian process bandit optimization." Conference on Learning Theory. PMLR, 2017.
Summary: This paper presents improved theoretical guarantees for Gaussian Process (GP) bandit algorithms, with a particular focus on reducing regret under three key scenarios: the noiseless setting, dependence on the RKHS norm of the underlying reward function, and non-stationary noise variance. The main contribution is a new, tighter upper bound on the posterior variance of GP models used in MVR and PE algorithms. This result refines existing analyses by considering a subset of time indices in which the variance reduction is well-controlled, rather than uniformly averaging over all steps. As a result, the paper shows nearly optimal cumulative and simple regret bounds for standard kernels in various settings. The paper also provides regret bounds under time-varying noise, filling a gap in existing literature. Overall, the analysis is clean, and the technical results are novel and broadly applicable.

## update after rebuttal

The authors have provided clear and satisfactory responses to most of the questions raised in the initial review. As a result, I will maintain my original score.

Claims And Evidence: The theoretical claims in the paper are generally well-supported. The main claim regarding the improved posterior variance upper bound (Lemma 3.1) is carefully proven, with clear intuition and appropriate use of mutual information and elliptical potential arguments. The paper also provides detailed corollaries (Corollary 3.2) that concretely instantiate the regret implications under SE and Matérn kernels. The claims about achieving optimal dependence on the RKHS norm are also backed by theorems with well-structured proofs and appropriate comparison to prior bounds. However, the paper does not discuss the polynomial kernel case, which was treated in prior work (e.g., Srinivas et al., 2010). Including this would help verify consistency with linear bandit results and strengthen the generality of the framework.
Methods And Evaluation Criteria: The methods are theoretically grounded and appropriate for the goal of tightening regret bounds under the GP bandit setting.

Theoretical Claims: I checked the derivation of Lemma 3.1 and its application to the noiseless setting and RKHS-norm-dependent regret bounds. The idea of using a selected index subset $\mathcal{T}$ for tighter bounding is novel and clean. The use of the elliptical potential count lemma and its adaptation are sound. The regret bounds in Theorems 4.1, 4.2, and 5.1 seem correct under the stated assumptions. I did not identify any obvious mistakes.

Experimental Designs Or Analyses: There are no experiments in this paper, which is acceptable given its theoretical nature.

Supplementary Material: I reviewed Appendix C (proof of Lemma 3.1 and Corollary 3.2) and Appendix D (proofs for Theorems 4.1 and 4.2). The arguments are rigorous and follow standard lines from mutual information analysis in GP bandits. I also briefly reviewed Appendix J, which motivates the non-stationary variance setting with examples. These sections are well-written and helpful.

Relation To Broader Scientific Literature: The work builds upon classical results in GP bandits and recent advances in MVR and PE. It contributes to tightening regret bounds with respect to noise and RKHS norm, which is important in both theoretical and practical settings.

Essential References Not Discussed: References are sufficient.

Other Strengths And Weaknesses:
- Theoretical novelty in the posterior variance bounding technique.
- Clear improvement over existing results in noiseless and RKHS-dependent settings.
- Addresses an underexplored but practically relevant scenario of heteroscedastic noise.

Other Comments Or Suggestions:
- Consider adding a "Notation Table" near the end of Section 2 for ease of reference.
- Discuss the behavior under polynomial kernels, as it connects to linear bandits and prior results.

Questions For Authors: 1.
Have you considered evaluating your bounds under the polynomial kernel? Given its finite-dimensional nature, the information gain is logarithmic, and your results should match linear bandit regret bounds (I guess). This would be a valuable consistency check.

2. Is it possible to extend your special time-index argument to randomized selection methods such as GP-TS? Could this lead to improved analyses for such algorithms?

- Chowdhury, Sayak Ray, and Aditya Gopalan. "On kernelized multi-armed bandits." International Conference on Machine Learning. PMLR, 2017.

3. To ensure I understood the key technical improvement correctly: Is the main reason for the improved regret bound in Lemma 3.1 the fact that the authors consider a carefully chosen subset of time indices $\mathcal{T}=\\{\frac{T}{2} \geq 3 \gamma_T(\tilde{\lambda}_T^2 I_T)\\}$, instead of averaging over all $[T]$? If so, this selective averaging strategy appears crucial; could the authors clarify how general this idea is, and whether it can be extended to other types of GP bandit algorithms beyond MVR?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for taking the time to review our paper and for the overall positive feedback. We will carefully incorporate your comments into the revision. Below are the answers to your questions.

**1. Have you considered evaluating your bounds under the polynomial kernel? Given its finite-dimensional nature, the information gain is logarithmic, and your results should match linear bandit regret bounds (I guess). This would be a valuable consistency check.**

Thank you for your suggestion. As for the non-stationary variance setting, in Lines 402-424, we discuss the connection between the existing linear bandit method and our work. However, so far, we are not aware of existing linear bandit literature that focuses on the noiseless setting or the RKHS norm upper bound (the L2-norm upper bound of the linear parameter). Therefore, we will add these discussions in the revision after carefully investigating the related work on linear bandits.

**2. Is it possible to extend your special time-index argument to randomized selection methods such as GP-TS? Could this lead to improved analyses for such algorithms?**

At least, maximum posterior variance upper bounds (e.g., Eq. (4)) are only applicable to the MVR algorithm. We believe that the extension to other well-known algorithms is important future work. On the other hand, we can also obtain a more limited result: an algorithm-independent "minimum" posterior variance upper bound as in Eq. (4). This can be easily confirmed by replacing the L.H.S. of the inequality in Lines 229-231 with the minimum posterior variance. This upper bound is meaningless for the analysis of the cumulative regret; however, it is useful for analyzing the simple regret of other algorithms, or for stopping-time analysis in other problem settings (e.g., level-set estimation (Gotovos et al., 2013)).

- Gotovos, Alkis, et al. "Active learning for level set estimation."
Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence. 2013.

**3. To ensure I understood the key technical improvement correctly: Is the main reason for the improved regret bound in Lemma 3.1 the fact that the authors consider a carefully chosen subset of time indices? This selective averaging strategy appears crucial; could the authors clarify how general this idea is, and whether it can be extended to other types of GP bandit algorithms beyond MVR?**

Since the subset selection $\mathcal{T}$ itself is conducted by the elliptical potential count lemma (Lemma 3.3), which is an algorithm-independent lemma, at least Eq. (9) always holds for any algorithm. On the other hand, dealing with the regret incurred from the remaining index set $\mathcal{T}^c = [T] \setminus \mathcal{T}$ is difficult for algorithms other than MVR. So far, the only possible claim for algorithms other than MVR is the upper bound given on Lines 237-242 in the left column. As discussed in Lines 234-250 in the left column, although this gives a certain degree of improved dependence on the noise variance parameter for any algorithm, the decreasing speed of the noise variance is more restricted and is not sufficient to claim near-optimality in our analysis.
PatchPilot: A Cost-Efficient Software Engineering Agent with Early Attempts on Formal Verification
Accept (poster)
Summary: This paper presents an improvement to the "Agentless" approach to solving SWE-bench tasks. They manage to solve 3-5% more problems on SWE-bench Lite/Verified while using up to 20% *less* money, with Claude 3.5. They provide detailed analysis and ablations.

Claims And Evidence: Yes, there is a wide array of experiments that were run in this work to support the claims.

Methods And Evaluation Criteria: Yes, SWE-bench is the leading method to test software engineering agents, and obtaining *higher* SWE-bench scores at *lower* cost is a sign of a substantial improvement.

Theoretical Claims: n/a

Experimental Designs Or Analyses: Yes, as detailed in the SWE-bench guide, the only major issues that can happen with SWE-bench evaluations are either submissions using the 'hints' column, which should not be used, or submissions using knowledge about fail2pass or pass2pass tests before submitting. I saw no evidence of either of these things being done in this work.

Supplementary Material: No.

Relation To Broader Scientific Literature: There are two lines of work on how to solve SWE-bench-style tasks: SWE-agent-like models and Agentless-like models. The first usually get higher scores but are very expensive; the Agentless models usually do slightly worse but are very cheap. This paper is a clear contribution to the second type of models, which are very important.

Essential References Not Discussed: None.

Other Strengths And Weaknesses: I think this paper is very strong overall: great results, easy to read, good analysis, very interesting discussion section.

Other Comments Or Suggestions: I would maybe not call it 'human-based planning'. Kinda makes it seem like you have a human-in-the-loop "Devin"-like system where the human is planning and the agent is executing. Can you find a phrase for this that doesn't have the word 'human' maybe?

Questions For Authors: Is your system's wall-time better than Agentless?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for the constructive and positive comments.

## 1. Extra Clarification on Terminology: "Human-Based Planning"

We thank the reviewer for pointing this out. We agree that the term "human-based planning" may be misleading, as it could imply a human-in-the-loop system. To avoid this confusion, we will revise it to "Rule-based planning", which more accurately reflects our intent—i.e., that the plan is rules pre-specified by developers.

## 2. Wall-Time Comparison with Agentless

We followed the reviewer’s suggestions and conducted an experiment to compare the wall-time of our system with Agentless and OpenHands on 100 cases (the selection of these cases is stated in the response-1 to reviewer Lckx). Since all tools support multiprocessing, we controlled the number of processes to be the same for all methods (8) to ensure a fair comparison. We measured the average time per instance and ran the experiment three times. The results are as follows.

| Method | Round 1 resolved | Round 1 time | Round 2 resolved | Round 2 time | Round 3 resolved | Round 3 time | Average time |
|---|---|---|---|---|---|---|---|
| PatchPilot | 38/100 | 85 min | 38/100 | 86 min | 39/100 | 89 min | 87 min |
| OpenHands | 31/100 | 86 min | 35/100 | 85 min | 39/100 | 99 min | 90 min |
| Agentless | 35/100 | 102 min | 33/100 | 103 min | 34/100 | 100 min | 102 min |

The results show that PatchPilot is faster than Agentless and OpenHands. It also shows that PatchPilot is more effective than the baseline approaches.
Summary: The paper proposes PatchPilot, an agentic patching framework designed to address the trade-offs among patching efficacy, stability, and cost. It introduces a novel human-based planning workflow, incorporating 6 components, with special emphasis on refinement as a unique contribution.

Claims And Evidence: Their claims are supported by extensive empirical evidence and rigorous evaluation of SWE-bench performance.

Methods And Evaluation Criteria: Yes.

Theoretical Claims: Not many theoretical claims.

Experimental Designs Or Analyses: Authors could enhance the analysis by including more intuitive visualizations, such as detailed plots illustrating trade-offs in stability and cost, and by explaining clearly why it surpasses Agentless.

Supplementary Material: N/A

Relation To Broader Scientific Literature: N/A

Essential References Not Discussed: Should add more related work.

Other Strengths And Weaknesses: Strengths: See above. Weaknesses: The performance is good, but the authors should provide more evidence to support their claims, as their method uses fewer tokens yet achieves good results. However, the experimental analysis does not fully substantiate their claims and lacks several important insights.

Other Comments Or Suggestions: Could you elaborate on how PatchPilot specifically addresses or mitigates the potential instability caused by iterative refinement? The manuscript could provide a deeper discussion of the scenarios where PatchPilot may fail or perform sub-optimally.

Questions For Authors: See above.

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thanks for the constructive and positive comments.

## 1. Better experiment analysis

We will follow the reviewer’s suggestion and include more visualizations to show the comparison between our method and baselines in effectiveness, stability, and cost. To better demonstrate the advantage of PatchPilot over Agentless, we reran these two methods on 100 cases three times (the selection of these cases is stated in response-1 to reviewer Lckx) and reported their resolved rate, average cost, and run time.

| | Round 1 | Round 2 | Round 3 | Average resolved | Average cost | Average time |
|--------|------------------|------------------|------------------|--------------|-------------|-----------|
| PatchPilot | 38/100 ($98.23) | 38/100 ($99.60) | 39/100 ($97.88) | 38.33 | 98.57 | 87 min |
| Agentless | 35/100 ($117.75) | 33/100 ($120.31) | 34/100 ($117.02) | 34 | 118.36 | 102 min |

The results show that PatchPilot consistently outperforms Agentless in all three metrics.

## 2. Mitigating Potential Instability Introduced by Iterative Refinement

Thanks for pointing this out. We incorporate two strategies to improve stability during refinement.

**(1) Batch-based Refinement with Selection Mechanism:** Instead of generating a single patch per iteration by directly modifying the previous one, PatchPilot generates multiple patch candidates in each batch based on the previous best patch. It then evaluates all candidates—together with the previous best patch—using both the PoC and functionality tests and selects the best-performing one. If all current candidates underperform compared to the previous best patch, the previous best patch is retained. This mechanism significantly reduces the risk of accumulated generation noise and protects against overwriting correct earlier solutions (validated by our hyper-parameter experiment: when we reduce the batch size from 4 to 1, the overall performance drops notably).
**(2) Early Stopping:** As soon as a patch passes all tests, iterative refinement is terminated immediately, and the patch is returned. This prevents unnecessary iterations without useful feedback, avoiding potential degradation from further blind refinement.

## 3. Failure Cases of PatchPilot

We will include more analysis of failure cases. In summary, some of the failure cases are because the issue descriptions are of low quality, such as incomplete or incorrect descriptions of the issues. To resolve these cases, we need to rewrite high-quality issue descriptions. Some other cases are shallow patches, where PatchPilot only patches the superficial logic instead of addressing the root cause and does not consider corner cases of the issue, or over-patching, where patches inadvertently affect more scenarios than necessary, causing unexpected behavior. These cases may be further resolved by fine-tuning specific models that can better understand the program logic of the target repos.
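The batch-based refinement with early stopping described in this rebuttal can be sketched as follows; `generate_batch` and `score` are hypothetical stand-ins for patch generation and the PoC/functionality-test evaluation (the actual evaluation is richer than a single score):

```python
def refine(initial_patch, generate_batch, score, batch_size=4, max_rounds=3):
    """Batch-based refinement with a selection mechanism and early stopping.

    Keeps the best patch seen so far (never overwriting it with a worse
    candidate) and stops as soon as a patch passes all tests (score == 1.0).
    """
    best, best_score = initial_patch, score(initial_patch)
    for _ in range(max_rounds):
        if best_score == 1.0:  # early stopping: all PoC/functionality tests pass
            break
        for cand in generate_batch(best, batch_size):
            s = score(cand)
            if s > best_score:  # strict improvement only; otherwise keep previous best
                best, best_score = cand, s
    return best
```

The strict-improvement comparison is what implements the "retain the previous best patch" behavior: a batch of uniformly worse candidates leaves the current best untouched.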
Summary: This paper proposes PatchPilot, an agentic framework for autonomous software patching. It relies on human-based planning and consists of five workflow components: reproduction, localization, generation, validation, and refinement. The overall workflow as well as each component (except for the final refinement step) closely resembles the prior work Agentless. The key differences are:

- During generation, PatchPilot explicitly prompts the model to output a multi-step patching plan.
- Instead of regenerating patches, PatchPilot refines existing ones based on validation feedback.
- Prompt engineering: The model is guided to generate diverse patches by explicitly prompting for both simple and complex solutions.

Experiments conducted on SWE-bench (Lite and Verified) show that PatchPilot outperforms SOTA open-source methods while having the lowest cost. Results also show that PatchPilot is more stable than the SOTA agent-based planning method OpenHands.

Claims And Evidence: The paper presents an incremental improvement rather than a novel contribution, integrating existing tools and iterative refinement workflows from prior agentic work into the Agentless workflow without adequately acknowledging previous research.

**Overall Design**

The approach closely follows the Agentless workflow, with the first four stages - reproduction, localization, generation, and validation - remaining largely unchanged. The primary modifications, such as search tools, are also borrowed from prior work and are not novel. The paper claims that refinement is a component unique to PatchPilot, but this is not the case. Early LLM-based automated program repair work, such as ChatRepair [a], has already demonstrated that test execution feedback can refine previous patches more efficiently than regenerating them from scratch. Multi-agent frameworks like MarsCode [b] and SpecRover [c] similarly refine the generated patch based on test execution results.
Also, existing agentic tools like OpenHands [d] enable LLMs to autonomously refine the PoC reproduction scripts and patch during their interactions - an approach commonly seen in their repair trajectories.

**Other Technical Designs Inspired by Prior Work**

The other technical designs also closely resemble prior work from the same research area; however, the authors do not acknowledge these contributions and instead present them as if they were their own novel designs.

- Search tools: Similar tools have been proposed in AutoCodeRover [e] and CodeR [f], yet the authors did not acknowledge these prior contributions and present them as if they were their own novel design.
- Coverage information: AutoCodeRover [e] optionally leverages coverage analysis, such as Spectrum-based Fault Localization, for context retrieval.
- Separating planning and generation: The separation of planning has been explored in prior work, such as AppMap Navie [g].
- Patch validation strategy: The strategy is nearly identical to Agentless, except for prioritizing patches that pass PoC tests over functionality tests, whereas Agentless prioritizes patches that pass functionality tests over PoC tests. However, the authors do not clarify that this strategy is a modification of existing work.

[a] Keep the Conversation Going: Fixing 162 out of 337 bugs for $0.42 each using ChatGPT. (ISSTA 2024)
[b] MarsCode Agent: AI-native Automated Bug Fixing. (ArXiv 2024)
[c] SpecRover: Code Intent Extraction via LLMs. (ICSE 2025)
[d] OpenHands: An Open Platform for AI Software Developers as Generalist Agents. (ArXiv 2024)
[e] AutoCodeRover: Autonomous Program Improvement. (ISSTA 2024)
[f] CodeR: Issue resolving with multi-agent and task graphs.
(ArXiv 2024)
[g] AppMap Navie: https://appmap.io/blog/2024/06/20/appmap-navie-swe-bench-leader/

Methods And Evaluation Criteria: There are many hyperparameters that have not been examined through ablation studies, such as the iteration limit for refinement, the number of retrieved files, and the number of plans and patches for a single instance. It is also unclear how sensitive the technique's performance and cost are to these parameters. Moreover, some key hyperparameters, such as the sampling temperature, are not even clearly specified.

Update after rebuttal:
* The ablation study shows that prompt engineering like the diverse-prompting design may be a main driver of improvement. I am not fully convinced by the main claim of this paper.

Theoretical Claims: N/A

Experimental Designs Or Analyses: In Section 4.2, the authors compare PatchPilot with OpenHands on a subset of 45 problems, repeating the experiments three times and reporting standard deviation. However, I wonder if this is statistically significant enough to support a strong conclusion.

Supplementary Material: This paper does not include any separate supplementary material like source code. I have reviewed the paper and its entire appendix.

Relation To Broader Scientific Literature: Overall, the contribution is insignificant compared to prior literature:
- The proposed technique mainly integrates agentic designs (such as tool use and iterative refine loops) into an agentless framework, offering little novelty.
- Observations on the cost-effectiveness and stability of human-based planning versus agent-based planning have been made before, as have insights on the effectiveness of refining existing patches compared to generating new ones.

Essential References Not Discussed: Essential related works are cited but not adequately discussed in the paper.
(see above)

Other Strengths And Weaknesses: Other strengths:
- Outperforms SOTA open-source methods while having the lowest cost.

Other Comments Or Suggestions:
- “SWE-Bench” should be “SWE-bench”.

Questions For Authors: 1. How much improvement does prompt engineering contribute (specifically, by using three different prompts to generate simple, complex, and standard fixes)? The ablation study did not isolate this design.

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thanks for the constructive comments.

## 1. Novelty and Differences from Existing Tools

As discussed in Section 2, [a-g] are all agent-based, differing from our workflow (we included [b-g] and will add [a]).

- Search tools: We acknowledge that AutoCodeRover and CodeR also have search tools, and we are inspired by their designs. The key differences are in the search mechanisms, where we use fuzzy search instead of exact search. As the searched strings may differ from the original code segments, the exact search misses some matches.
- Coverage: AutoCodeRover's SBFL method requires post-patching test cases, which are unavailable in SWE-bench problem settings. PatchPilot generates coverage using only PoCs.
- Separating planning and generation: We respectfully point out that AppMap Navie is a commercial product without research papers. Although it mentions having generation plans, the technical details are not available. Our separation design was inspired by our early observations of shallow patches. Besides, the detailed design can be very different. For example, we proposed diversity prompts and step-by-step generation. We believe it is reasonable to claim the novelty and originality of our design of the generation component.
- Patch validation vs. Agentless: Agentless generates assertions for testing cases and validates based on assertions. It is hard to generate correct and comprehensive assertions. PatchPilot generates only PoCs and uses LLMs to validate based on PoC outputs. We applied both methods to 100 patches (50 from PatchPilot, 50 from Agentless). Our method identified 43 correct patches; Agentless identified 40.
- PatchPilot differs from Agentless in the workflow (**refinement**) and applies optimizations to each component. Although the components' high-level ideas are similar, the specific designs can vary a lot and affect final performance, as shown in the comparison with Agentless.
- About refinement: Agent-based tools refine the latest patch by prompting the LLM with prior conversation history. This simple method has a few limitations and cannot be applied to PatchPilot: 1) It generates only one patch in the first round and iteratively refines it, which lacks diversity. If the initial patch contains errors, it may be hard to fix. This is partially validated in our hyper-parameter experiments with a batch size of 1, showing a performance drop. 2) It cannot revert to earlier patches if a refinement worsens the result, while PatchPilot enables such a capability. 3) By including the refinement history in prompts, it risks context overflow and potential LLM confusion from prior bad patches. PatchPilot instead uses only the best previous patch and its validation results.

Given these major differences, we believe it is reasonable to claim novelty for our refinement. While existing tools may share similar high-level ideas, our design is fundamentally different and more advanced, giving better performance and stability. We believe this brings essential novelty and contribution.

## 2. Extra Ablation Studies for Hyperparameters

We conducted experiments to evaluate hyper-parameter sensitivity. PatchPilot includes the following hyper-parameters (default values in brackets): number of retrieved files (5), total patches generated (12), batch size per round (4), diversity prompts (3), and generation temperature (0.8). Each experiment varies one hyper-parameter while keeping the others fixed. We used 60 cases randomly selected from SWE-bench-Lite and Claude-3.5-Sonnet.

- Num. of retrieved files in localization: We changed it from 5 to 3 and 7. Varying it introduces subtle differences, but retrieving more files will have a higher cost as the LLM will have more conversations and longer contexts.
- Batch Size: Changing the batch size from 4 to 2/6 introduces performance drops. For 2, the diversity is not enough; for 6, the refinement round (2) is not enough. A moderate batch size balances the refinement rounds and diversity. - Diversity Prompts: Instead of using three prompts together, we ran the experiment with each prompt separately. Mixing them gives the best performance as it enables the highest diversity. - We varied the temperature ($\tau$) from 0.8 to 1 and 0.6. The results are similar. We set it high to encourage diversity. | Variants|Number|Resolved|Cost ($)| |---------|---------|-----------|----------| | Default| - |28/60|58.27| |Retrieved files| 3|24/60|50.38| | | 7|27/60|64.23| |#. patches|1|20/60|30.96| | | 4|26/60|39.95| | |8|27/60|51.02| | |16|28/60|62.88| | Batch| 2|24/60|62.93| | |6|25/60|55.77| |Prompt|Standard|26/60|57.67| | | Big| 25/60| 61.17| | | Small| 22/60| 55.32| |$\tau$| 0.6|27/60| 58.21| | | 1.0|28/60|58.02| ## 3. Stability Evaluation Our stability experiment is in **response-1 to reviewer Lckx.** --- Rebuttal Comment 1.1: Comment: Thanks for the response. I remain unconvinced by the claim of novelty of this paper. For example, I wouldn’t consider fuzzy search to be “fundamentally different and more advanced” from exact search. Please also kindly note that fuzzy match has already been used in existing work like MISAI [a], which has a research paper. Similarly, I would not consider the patch validation or refinement design to be “fundamentally different and more advanced” compared to prior agentic-based work. > AutoCodeRover's SBFL method requires post-patching test cases, which are unavailable in SWE-bench problem settings. PatchPilot generates coverage using only PoCs. Please note that the follow-up work of AutoCodeRover, CodeR [b], overcomes this limitation by using self-generated PoCs to get coverage information (instead of PoC, they call it “reproduced test cases” generated by a Reproducer agent). 
Despite this slight modification, the method is still acknowledged as SBFL. PatchPilot’s strategy is just a simplified version. > Separating planning and generation: We respectfully point out that AppMap Navie is a commercial product without research papers. Although it mentions having generation plans, the technical details are not available. Please kindly note that AppMap Navie was open-sourced in June 2024 [c][d], and its technical blog [e] has been cited and discussed by papers such as SpecRover [f] and Agentless. From the SpecRover paper: “Navie uses a retrieval-augmented generation (RAG) based approach to construct the code context, and performs an explicit planning step before generating code changes.” If the step-by-step planning is a major novel contribution of this paper, an ablation study should be provided to support the claim, and the authors should, at the very least, cite this line of work (AppMap Navie, CodePlan [g], etc) and discuss the differences. > diversity prompts The simple idea of using diverse prompts - i.e., three different hand-crafted prompts to generate standard, minimal, and maximal edits - is indeed a novel and effective design of PatchPilot, as evidenced by the ablation study. I already acknowledged this in my original review. Prior work such as MASAI [a] uses similar prompt engineering (i.e., prompting the LLM for minimal rewrites), but only applies a single style of prompting, not multiple. However, the proposed prompt engineering is somewhat ad hoc. For instance, PatchPilot’s minimal-patch prompt includes: * A restriction to “modify one file only”, while some issues require multi-file edits. The exact prompt reads: > “One File Only: Choose one file to modify, and ensure all changes are limited to that file”. * Fix only the specific input mentioned in the issue, potentially preventing general fixes. 
The exact prompt reads:

> "If the issue mentions a specific input or argument that triggers the bug, ensure your solution only fixes the behavior for that input".

While the direction of using prompt engineering to increase patch diversity is interesting and promising, I do not find the current approach to be systematic or novel enough to justify acceptance.

Also, according to the new ablation results, this three-prompt design yields an approximate improvement of (28-26)/26 = 7.7% relative to standard prompting. The default PatchPilot solves 136 problems, outperforming Agentless (which uses standard prompting and solves 123 problems) with a relative improvement of (136-123)/123 = 10.6%. This might suggest that:

* Without this prompt engineering, PatchPilot could solve ~126 problems (136 / 1.077), vs. Agentless's 123.
* Conversely, Agentless with prompt diversity might be capable of solving ~132.5 problems (123 * 1.077).

Disclaimer: I understand that the ablation result may be imprecise due to the small sample size. I use the 7.7% number here for illustrative purposes only.

In other words, although this paper claims a number of novel contributions - including refinement - I am not fully convinced, as prompt engineering like this three-prompt design may be the main driver of improvement rather than any fundamentally new methodology.

In summary, the authors' response does not sufficiently address my concern. As reviewer Lckx also notes, the novelty is low. I continue to lean towards rejection.

References:
[a] MASAI: Modular Architecture for Software-engineering AI Agents
[b] CodeR: Issue Resolving with Multi-agent and Task Graphs. (arXiv 2024)
[c] https://github.com/SWE-bench/experiments/issues/28
[d] https://github.com/getappmap/SWE-bench
[e] AppMap Navie: https://appmap.io/blog/2024/06/20/appmap-navie-swe-bench-leader/
[f] SpecRover: Code Intent Extraction via LLMs. (ICSE 2025)
[g] CodePlan: Repository-level Coding using LLMs and Planning.
(FSE 2024)

---

Reply to Comment 1.1.1:

Comment: We thank the reviewer for the additional questions.

First, we respectfully point out that PatchPilot achieved the *highest resolved rate* on the SWE-bench-Lite and SWE-bench-Verified benchmarks among all open-sourced tools when we submitted it. It is the *most stable and cost-efficient* among the top-performing tools. Given the inherently stochastic nature of LLMs and the high cost of their APIs, achieving a high resolved rate, stability, and low cost together is notably challenging and requires non-trivial effort.

Second, we emphasize that although some existing works have a similar high-level idea to ours, the technical designs are different. For example, existing agent-based planning work has refinement, which is not directly applicable to our system given the different workflow. Besides, the actual refinement mechanisms are different, as our refinement needs to be highly coupled and integrated with our other components. We believe this does not dilute our technical contributions, given the differences in actual designs and the improvements of our tool in performance, stability, and cost-efficiency. Given their relatively low performance, simply following their designs would not lead to a performance improvement.

Third, we respectfully point out that by saying "our design is fundamentally different and more advanced" we mean that adding all individual designs together makes our method fundamentally different, and our performance makes it more advanced. We do not claim that *every individual design* is fundamentally different. For example, we did not claim fuzzy search as fundamentally different in the rebuttal. Sorry for the confusion on this.

Fourth, about fuzzy search, MASAI's fuzzy search is used in patch generation to find the original code that needs to be replaced by the patch, while we use it in localization, which is not the same. About coverage information, thanks for pointing out CodeR.
Our coverage filter is simpler, but it is faster and gives reasonable localization performance.

Regarding generation, AppMap Navie differs from our approach. 1) Input to planning: beyond localization results and issue descriptions (which were used in AppMap), PatchPilot also leverages runtime feedback from both PoCs and functionality tests. 2) Prompting strategy: as acknowledged by the reviewer, PatchPilot employs diverse prompting strategies to generate multiple plan variants. 3) Plan-to-patch execution: PatchPilot performs step-by-step patch generation, where each plan step corresponds to one edit, with syntax and linting checks run before generating the next. AppMap Navie directly feeds the plan as part of the prompts for generation. To quantify the impact of these differences, we constructed a variant in which all three design choices were replaced with those of AppMap Navie. On a subset of 150 cases from SWE-bench-Lite, the original PatchPilot vs. this variant: 69 vs. 64, showing the importance of our designs. CodePlan targets different problems from ours.

We note that CodeR, MASAI, and AppMap are not accepted papers and are not ranked among the top. We did not include a detailed comparison with their individual components and mainly compared against OpenHands. We believe this is a reasonable practice. We apologize for not comparing their designs in detail and will add this to our paper.

About the diversity prompts, we respectfully point out that they are one contribution, but not the main one. Our ablation study shows that each component contributes a 3–4% performance gain. We agree that minimal-patch prompts cannot handle all cases. Their goal is to increase diversity rather than to handle all cases. We also have standard and comprehensive prompts. We respectfully point out that the way of calculating the hypothetical improvement of diversity prompts may not be accurate.
First, it relies on the assumption that the cases contributing to performance improvements from each design are evenly distributed across the dataset, which may not hold, as the performance improvements typically arise from joint effects. Second, our strategy cannot be directly applied to Agentless, as it does not generate patch plans. Following the reviewer's idea, we changed "generate plan" to "generate patches" and applied it to Agentless. We ran the original Agentless and this variant on the 150 cases above. The original Agentless vs. this variant: 61 vs. 63. (1) It shows marginal improvement on Agentless. Our design improving an SOTA method does not dilute our contribution. (2) PatchPilot still outperforms this improved Agentless, showing the importance of our other designs. (3) We reran the hyper-parameter testing on the 150 cases above, and the resolved cases are 66 (only standard), 64 (only comprehensive), and 59 (only minimal), while the combined prompts achieved 69. This shows that the percentage cannot be directly extrapolated to more cases, given that the cases affecting the results may not be evenly distributed.

We are at the reviewer's disposal for any further questions.
Summary: In this paper, the authors describe PatchPilot, a novel human-based planning workflow for solving GitHub issues. The innovations include generating reproduction tests to help locate the root cause, a planning and generation task division for patch generation, and a refinement loop to iteratively improve a patch. Empirical evaluations on SWE-Bench showcase PatchPilot's efficiency and stability in resolving GitHub issues compared to baseline methods.

Claims And Evidence: The claims are well-supported by clear evidence. The authors provided overall performance evaluations on the SWE-Bench Lite and Verified sets. Table 1 shows that PatchPilot strikes a balance between resolution rate and cost. Section 4.3 provided useful information on each component.

Methods And Evaluation Criteria: SWE-Bench is a standard benchmark dataset for measuring software agents, so it makes sense that the authors chose it to validate their work.

Theoretical Claims: N/A

Experimental Designs Or Analyses: Most experimental designs are sound. I have one comment on Section 4.2: why did the authors present the PatchPilot vs. OpenHands comparison results with GPT-4o instead of Claude 3.5 Sonnet? Considering both methods are reported with Claude 3.5 Sonnet in Table 1, this mismatch should have some explanation.

Supplementary Material: Yes, Sections A and D.

Relation To Broader Scientific Literature: Key novelties are: using reproduction tests to help localization; separating planning and generation for patch generation; and patch refinement. Some versions of the first two are present in other agent-based works such as OpenHands. One could argue OpenHands' iterative approach has the refinement element too, as a subset of history is retained in prompts. So the significance of these novelties is not particularly strong.

Essential References Not Discussed: N/A

Other Strengths And Weaknesses: See sections above.

Other Comments Or Suggestions: N/A

Questions For Authors: 1.
Can the authors provide an example plan generated for the patch generation step, and a corresponding example of the LLM following the generated plan? 2. As the PoC is used to help locate the root cause, its validity is crucial. Besides including instructions in the prompt, did the authors employ other mechanisms to verify its validity? 3. Majority voting is mentioned in Section 4.3; can the authors provide more information on how the majority voting is carried out?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for the positive and constructive comments.

## 1. Stability Comparison

We originally chose GPT-4o due to budget limits. We reran the stability comparison. First, we changed GPT-4o to Claude-3.5-Sonnet. Second, in response to Reviewer HrpY's concern about small sample size, we increased the testing cases from 45 to 100 from SWE-bench-Lite. These cases were selected from three categories based on these methods' results in one previous run: 30 are commonly resolved by all tools, 15 are solved independently by each of the three tools, and 55 are commonly unresolved cases (hard ones). The cases in each category are randomly selected. Third, we added Agentless based on Reviewer vPVX's suggestion. We calculated the standard deviation (STD) of the resolved rate and cost. To show statistical significance, we conducted Bartlett's test to compare the variances of PatchPilot and the comparison baselines. Our null hypothesis is $H_0: \sigma^2_p = \sigma^2_b$, where $\sigma^2_p$ and $\sigma^2_b$ are the variances of PatchPilot and the baseline, respectively. The results are as follows (in the $p$-value column, the first number compares resolved rates and the second compares costs).

| | Round 1 | Round 2 | Round 3 | Average Time | STD | $p$-value |
|------------|------------------|------------------|------------------|--------------|-------------|-----------|
| PatchPilot | 38/100 ($98.23) | 38/100 ($99.60) | 39/100 ($97.88) | 87 min | 0.58 ($0.91) | - |
| OpenHands | 31/100 ($156.58) | 35/100 ($160.38) | 39/100 ($169.58) | 90 min | 4.00 ($6.68) | 0.044/0.040 |
| Agentless | 35/100 ($117.75) | 33/100 ($120.31) | 34/100 ($117.02) | 102 min | 1.00 ($1.73) | 0.50/0.43 |

The result confirms that PatchPilot has a much lower STD than OpenHands in the resolved rate. Besides, the $p$-value is smaller than 0.05, meaning we can reject the null hypothesis.
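For transparency, the STD and $p$-value numbers above can be reproduced from the per-round resolved counts alone. Below is a minimal, stdlib-only Python sketch (our own illustrative implementation of the standard two-group Bartlett statistic, not PatchPilot code), using the chi-square(1) tail $P(X > t) = \mathrm{erfc}(\sqrt{t/2})$:

```python
import math
from statistics import stdev, variance

def bartlett_two_groups(a, b):
    """Bartlett's test for equality of variances between two samples.

    For k = 2 groups, the statistic is chi-square distributed with
    one degree of freedom under the null hypothesis.
    """
    k = 2
    n1, n2 = len(a), len(b)
    N = n1 + n2
    s1, s2 = variance(a), variance(b)               # sample variances (ddof=1)
    sp = ((n1 - 1) * s1 + (n2 - 1) * s2) / (N - k)  # pooled variance
    num = (N - k) * math.log(sp) - (n1 - 1) * math.log(s1) - (n2 - 1) * math.log(s2)
    corr = 1 + (1 / (3 * (k - 1))) * (1 / (n1 - 1) + 1 / (n2 - 1) - 1 / (N - k))
    t = num / corr
    # chi-square(1) survival function: P(X > t) = erfc(sqrt(t / 2))
    return t, math.erfc(math.sqrt(t / 2))

patchpilot = [38, 38, 39]  # resolved counts per round, from the table above
openhands = [31, 35, 39]

print(round(stdev(patchpilot), 2))  # STD of PatchPilot's resolved counts: 0.58
t, p = bartlett_two_groups(patchpilot, openhands)
print(round(p, 3))                  # ~0.044, matching the reported p-value
```

Running this on the resolved counts gives an STD of 0.58 for PatchPilot and 4.00 for OpenHands, with a Bartlett $p \approx 0.044$, consistent with the table.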
Given that the STD of PatchPilot is smaller than that of OpenHands, this shows that PatchPilot is more stable than OpenHands with statistical significance. The resolved rates of all methods are lower than those in Table 1 because we included more hard cases among the 100 cases. The cost shows a similar trend. OpenHands is relatively more cost-effective when using Claude-3.5 compared to GPT-4o in Section 4.2, as a stronger model can reduce the number of OpenHands iterations. The result also shows the stability of Agentless, validating our claim that human-based planning is more stable than agent-based planning. Finally, PatchPilot consistently achieves lower cost and higher performance than Agentless.

## 2. Illustrative Example of Patch Planning and Generation

We will include more examples. Here, we show one example from django__django-11133. Django is a Python web framework. In this issue, the user reported that HttpResponse failed to handle memoryview objects.

**PatchPilot generated a patch plan with one step:**

<STEP> Add an explicit memoryview check in the content setter </STEP>
<Actions> In the file django/http/response.py, locate the content setter of HttpResponse. Modify the code so that it first checks if the provided value is an instance of memoryview. If it is, immediately return bytes(value). Otherwise, proceed with the existing check for iterables. </Actions>

**Following the plan, PatchPilot generates a correct patch:**

```diff
  # - Python attempts str conversion first
  # - when self._charset != 'utf-8' it re-encodes the content
- if isinstance(value, bytes):
+ if isinstance(value, (bytes, memoryview)):
      return bytes(value)
```

## 3. PoC Validation Beyond LLM Checking

We first convert the issue description into a structured format, extracting the expected behavior and the wrong behavior. We also include the following rule-based checks.
**Rule 1:** Discard PoCs that cover fewer than 3 files, as LLMs often create fake buggy functions inside the PoC instead of invoking real project code. **Rule 2:** If a specific exception is mentioned in the issue, we check if the same exception appears in stderr during PoC execution, confirming it triggers the described error. To evaluate PoC quality, we manually reviewed 100 cases. Our validation filtered out 27 PoCs (20 by LLM, 5 by Rule 1, and 2 by Rule 2). Of the remaining 73 PoCs, 92% are correct. These results confirm the effectiveness of our patch validation and the high quality of generated PoCs. ## 4. Majority Voting Strategy in Ablation Study In the first two variants, we use majority voting since patch validation is not feasible without PoCs or functionality tests. We generate multiple patches per case and proceed as follows: Normalize each patch by stripping whitespace, line breaks, and comments; count the frequency of each normalized patch; and select the most frequent one. If there is a tie, we prompt an LLM to choose the best patch. ## 5. Novelty of Refinement in PatchPilot We discuss the novelty of our refinement in **response-1 to reviewer HrpY.**
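The normalize-count-vote procedure described in point 4 can be sketched as follows (a minimal illustration; `normalize` here is a simplified stand-in for the actual normalization, and the LLM tie-breaker is omitted):

```python
from collections import Counter

def normalize(patch: str) -> str:
    """Strip comments, blank lines, and surrounding whitespace so that
    cosmetically different but equivalent patches compare equal."""
    lines = []
    for line in patch.splitlines():
        line = line.split("#")[0].rstrip()  # drop trailing comments (simplified)
        if line.strip():
            lines.append(line.strip())
    return "\n".join(lines)

def majority_vote(patches: list[str]) -> str:
    """Return the most frequent patch after normalization."""
    counts = Counter(normalize(p) for p in patches)
    winner, _ = counts.most_common(1)[0]
    # In the real pipeline, ties would be broken by prompting an LLM.
    return winner

patches = [
    "x = 1  # fix",
    "x = 1",
    "x = 2",
]
print(majority_vote(patches))  # "x = 1"
```

The first two candidates normalize to the same string, so they pool their votes and win over the third.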
Dynamical phases of short-term memory mechanisms in RNNs
Accept (poster)
Summary: The paper investigates the strategies that recurrent neural networks (RNNs) use to maintain short-term memories via sequential firing. The authors trained low-rank and full-rank RNNs on delay-response tasks and identified two distinct mechanisms: slow-point (SP) manifolds and limit cycles. They found that introducing a post-response period significantly biases the strategy toward limit cycles. Additionally, they derive a scaling law relating the critical learning rate to the delay period length.

Claims And Evidence: Their major claim was supported mostly by their results on artificial RNNs, but the biological relevance of these mechanisms remains unclear, especially the usage of limit cycles.

Methods And Evaluation Criteria: Yes.

Theoretical Claims: I did not verify every detail of their mathematical arguments; I reviewed the toy model derivations, and they appear to be correct.

Experimental Designs Or Analyses: Yes. The experiments include both low-rank and full-rank RNNs, as well as large-scale training with over 35,000 networks, ensuring robust findings within artificial RNNs.

Supplementary Material: Yes, I reviewed the setting of the toy model and the derivation of critical learning rates.

Relation To Broader Scientific Literature: The paper is well-situated within the computational neuroscience and machine learning literature, particularly in the study of RNN dynamics. The discussion of memory maintenance mechanisms in artificial networks is insightful. However, the connection to empirical neuroscience is underdeveloped.

Essential References Not Discussed: Not to my knowledge.

Other Strengths And Weaknesses:
- Strengths:
  - Rigorous mathematical derivation.
  - Large-scale empirical validation with 35,000+ trained RNNs.
  - Astute in recognizing that traditionally trained RNNs can differ from brains performing tasks, owing to the post-action period.
- Weaknesses:
  - Weak connection to biological plausibility and experimental neuroscience.
- Limited discussion of hyperparameter sensitivity and generalizability. Other Comments Or Suggestions: No Questions For Authors: 1. What do you think would be the biological correspondence of the learning rate in your model? 2. In Figure 5, the network needs to maintain feature-specific information rather than a featureless memory. How do SP manifolds or limit cycles encode and transmit feature representations? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive feedback, as well as their recognition of our methodological rigor and insightful contributions to understanding short-term memory mechanisms in artificial networks. Please find our responses below:

**Q1** While learning rate is an abstract optimization hyperparameter, prior literature indicates that it can be influenced by both external and internal biological factors. See below for a brief discussion that we will add:

> Interestingly, our finding that longer delays require smaller learning rates parallels observations in neuroscience, where tasks with longer temporal gaps between cues and outcomes often pose greater credit assignment challenges. During cue-reward delays, dopamine activity has been suggested as a mechanism for solving credit assignment through eligibility traces. For instance, synaptic-level eligibility traces have been evidenced by dopamine's role in potentiating synaptic changes within precise temporal windows (Shindou et al., 2019; Yagishita et al., 2014). Additionally, dopamine may facilitate credit assignment through activity ramps that increase as subjects approach rewards (Fiorillo et al., 2003; Mikkael et al., 2022; Krausz et al., 2023). More explicitly, a recent study (Coddington et al., 2023) showed that dopamine can directly modulate learning rates to support effective decision-making. Similarly, our results align with studies showing that the duration of the inter-trial interval (analogous to our post-reaction time) can also affect learning rates. Specifically, another neuromodulator, serotonin, has been found to modulate learning rates following long inter-trial intervals (Iigaya et al., 2018).

Thank you for this exciting connection, which significantly improves the impact of our work for the neuroscience community.

**Q2** We appreciate this insightful question.
Our task was intentionally designed to isolate memory maintenance mechanisms, independent of representational content, to expose the underlying structure of the solution space (e.g., phase transitions). In tasks like delayed cue discrimination (Fig. 5), inputs determine the initial condition of the latent state, which then evolves along either a slow manifold or a limit cycle. In both scenarios, the identity of the cue is encoded in the trajectory, e.g., distinct slow-point manifolds. How feature representations are integrated into these mechanisms is a compelling direction for future work. For instance, one could introduce a contextual variable that modulates the delay period for the same stimulus. This would allow investigation into whether the RNN reuses its existing slow point or limit cycle structures, or develops new attractors for each context. We view questions like these as natural next steps—and our publicly available dataset of trained RNNs is well-suited for such follow-up studies. We will add a paragraph to the outlook about this interesting future direction. **Weakness: hyperparameter sensitivity and generalizability** In the revised draft, we include a new experiment in which we double the neuronal time constant $\tau$ (one of the hyperparameters in the RNN architecture). The results remain virtually unchanged, with the same phase diagram emerging. We will report these findings in a supplementary figure. We also note that a recent work (Park et al. 2025; ICLR 2025) has empirically found a similar phase diagram of algorithmic strategies, though in a completely different context. 
While testing all hyperparameters is computationally infeasible given the scale of our experiments, we now include the following paragraph in the Discussion to explicitly state this limitation and outline future directions:

> In this work, primarily constrained by the immense compute required (about a month of standard GPU time), we fixed architectural parameters and activation functions, which is a limitation. Future work could explore the effects of different activation functions, network size (although recent results suggest that larger networks may reduce efficient learning rates; see Dinc et al., 2025), self-attention-like mechanisms, and short-term synaptic plasticity on the emergence of different memory strategies. We conjecture that, since the toy models highlight the fundamental constraints associated with the latent dynamical systems learned within these architectures, the phase diagrams we observe in this work may be present universally across models (see also Park et al., 2025).

**Generalizability and relevance of the toy models** We agree that this is central to the broader impact of our work and should have been stated more clearly. Please also see our response to Reviewer CYoj.

**Final remarks** Please let us know if any further clarifications are needed. We sincerely appreciate the reviewer's feedback and support. We agree that stronger alignment with neuroscience is important and have made the necessary revisions.
Summary: This paper analyzes the emergent mechanisms of short-term memory maintenance in task-optimized recurrent neural networks. The paper presents an analysis of a toy model and performs large-scale experiments to show that similar features emerge in actual task-optimized networks. Claims And Evidence: The theory and numerical experiments are each separately well-executed and interesting, but their connection is difficult to pin down. Typically, theoretical analysis of RNNs is designed to be directly comparable with the outcomes of numerical experiments. However, in this work, the theory seems to target a different model entirely. While the experiments exhibit some qualitative similarities to the theoretical toy model, they feel somewhat disjointed, making it challenging to establish a clear link between theory and practice. Methods And Evaluation Criteria: Yes, somewhat (see above). Theoretical Claims: Yes, all of them. Experimental Designs Or Analyses: The RNN experiments make sense, although I did not look at the code in detail. Supplementary Material: Yes, all of it. Relation To Broader Scientific Literature: This paper would be of interest to computational neuroscientists, as well as deep learning researchers working on interpretability. Essential References Not Discussed: None. Other Strengths And Weaknesses: The work is not particularly novel, as studying attractors as short-term memory mechanisms in RNNs is one of the oldest problems in computational neuroscience. I appreciate the attempt at theory, although as I explain above the connection between the numerical experiments and the theory developed is unclear to me. Other Comments Or Suggestions: For completeness, the introduction should acknowledge that many neuroscientists propose a synaptic basis for working memory, which relies on mechanisms distinct from those presented in the authors' work. 
For example: - https://www.cell.com/trends/trends/cognitive-sciences/fulltext/S1364-6613(15)00102-3 - https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1010776 - https://www.nature.com/articles/s44271-023-00027-8?fromPaywallRec=false Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful comments and for recognizing that both the theoretical and empirical components are well-executed. We understand the main concerns to be (1) the clarity of the connection between theory and experiments, and (2) the perceived novelty of our contributions. We have revised the manuscript accordingly and address both below.

**(1)** Due to space constraints, we had to condense this connection in the original submission. Here, we clarify the rationale and why the toy models are *directly* relevant to the RNNs.

**In short**, consider the models in Eq. (S1) and (S15) of the main text. These approximate the latent trajectories of a rank-one and a rank-two RNN around a local minimum, respectively. The scaling laws, derived from these models, accurately predict the transition boundaries observed in large-scale RNN training (new result, $\beta = 4.05 \pm 0.1$, $R^2 = 0.99$) when RNNs are learning those particular solutions in their latent subspaces. It is worth noting that we empirically tested the scaling laws in the (realistic) full-rank RNNs, whereas the toy model theoretically describes the latent trajectories in the low-rank RNN. The theoretical connection is an open question in this field and a current hot topic (Valente et al., 2022). However, Valente et al. (2022) have empirically shown that trajectories of full-rank RNNs trained on behavioral tasks can often be reproduced by their low-rank counterparts that are sufficient to solve the task. In our case, a rank-one RNN is sufficient to solve the task with a slow-point manifold, so it is not surprising that the toy model predictions explain (qualitatively) the existence of, and (quantitatively) the boundary scalings of, the phase diagram.
*Longer rationale justifying, e.g., why the slow-point model is a realistic approximation:*

- RNNs are universal approximators of dynamical systems, and the rank of their recurrent weight matrix determines the dimensionality of the latent dynamics they can implement (Beiran et al., 2020).
- Recent work has shown that even complex tasks can be solved with low-rank RNNs (Dubreuil et al., 2022), and that full-rank RNNs are often effectively low-rank in practice (Valente et al., 2022). In our empirical study, we similarly observe that both low- and full-rank RNNs converge to low-dimensional solutions, which take the form of either slow-point manifolds or limit cycles.
- The slow-point toy model is the **normal form** (for reference, see Nonlinear Dynamics and Chaos by Strogatz) for a saddle-node bifurcation, which approximates one-dimensional dynamical systems around their local minimum (slow-point formation), including those implemented by the rank-one RNNs in Eq. (2).
- Finally, learning the weights in rank-one RNNs implementing a slow point corresponds to varying the parameter of this normal form, which gives rise to the phase diagrams derived in Fig 4C (and observed in full-rank RNNs in Fig 6).

We will briefly incorporate this explanation into the revised manuscript, with the rigorous mathematical details in the supplementary.

**(2)** We agree with the reviewer that *studying attractors as short-term memory mechanisms in RNNs is one of the oldest problems in computational neuroscience*. However, our contribution is not the problem itself; rather, our novelty lies in findings and mechanisms that were previously not known in this long-standing field. To be specific, here are a few of the novel findings in our work:

- To our knowledge, this is the first study to systematically analyze how latent mechanisms emerge during learning (e.g., such a description does not exist for the STSP-based models, also cited by the reviewer).
- We identify a sharp transition between slow-point and limit-cycle solutions depending on task delay and learning rate. To our knowledge, we are the first to show that there is a phase diagram of strategies (w.r.t. optimization and task parameters).
- Unlike prior work, which often focuses on complicated working memory tasks (e.g., saccades in the references cited by the referee), our custom task design focuses on short-term memory and balances simplicity and expressiveness, allowing us to analyze dynamical structures at scale.
- We show that small changes (e.g., a post-response period) can qualitatively shift the memory strategy learned.
- Our large-scale dataset (35,000+ trained RNNs, the first dataset of its size reported, and to be made public) will be released to support further studies on training dynamics, robustness, and learning strategies.

**Misc** We thank the reviewer for pointing out the omission of synaptic models of working memory, which we now explicitly cite. Please also see our response to Reviewer pHmu. We also see that you asked for an ethics review. How could we address your concerns?

**Final remarks** We hope these clarifications address the reviewer's concerns and are grateful for the feedback, which significantly improved the paper's presentation and scope.
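As a concrete reference for the normal-form argument in point (1) of this rebuttal, the saddle-node normal form and its bottleneck (slow-point) passage time are textbook results (e.g., Strogatz, Nonlinear Dynamics and Chaos); the notation below is illustrative and need not match the paper's Eq. (S1):

```latex
\dot{x} = r + x^{2}.
% For r < 0 there are two fixed points at x = \pm\sqrt{-r}; at r = 0 they
% merge into a single slow point, and for small r > 0 the trajectory lingers
% in the resulting bottleneck, with passage time
T = \int_{-\infty}^{\infty} \frac{dx}{r + x^{2}} = \frac{\pi}{\sqrt{r}},
% which diverges as r \to 0^{+}.
```

Because the passage time diverges as the bifurcation is approached, tuning a single parameter toward it can implement arbitrarily long delays, which is the sense in which the normal form captures the slow-point memory mechanism.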
Summary: This paper studies computational RNN models of a classic neuroscience working memory task, the delayed response task, along with two very simplified and tractable dynamical system models capable of learning the task through adaptation of a scalar parameter. The paper studies the role that changes in the delay time, the response time, and an optional post-response period (where no output from the given model is expected) play in how the models studied learn to solve the task. The main results of the paper are that RNNs tend to learn two different solutions to the task, one making use of a slow point and the other a limit cycle, and that the solution learned depends on the learning rate, the length of the delay and response periods, and the presence of a post-response period. It is observed that for longer delays and higher learning rates a limit cycle tends to be learned instead of a slow-point solution. Interestingly, the loss function of a toy dynamical system model (the normal form of the saddle-node bifurcation) of learned slow-point dynamics requires a smaller learning rate or a shorter delay period to be learned, while a toy dynamical system for the limit-cycle dynamics (sine function) can be learned with higher learning rates and longer delay periods. This relationship of better scaling w.r.t. learning rate and delay period for limit-cycle solutions is proposed to underlie the phenomena observed in RNNs. Interestingly, the observed scaling in trained RNNs on the task without the post-response period is roughly what was predicted by the toy model. Lastly, for the numerical experiments, a very large sample of RNNs was trained, and the authors plan to make these publicly available to facilitate future work.
The authors argue that their analysis of the learning of delay periods sheds light on the difficulty of learning long time dependencies in machine learning, and that it will help inform future neuroscience research by demonstrating how task parameters can have a significant effect on learning. ## Update After Rebuttal I am satisfied by most of the authors’ clarifications and believe the paper is relevant to neuroscience and RNN-related learning, and have thus updated my score to a 4. The remaining issue for me is that I still believe the authors could have done a better job connecting the toy models and the low-rank RNNs mathematically; this is why I have not rated the paper a strong accept. Lastly, please disregard my mistaken replacement of $\dot{x}$ with $x$ in one of the comments. Claims And Evidence: In the reviewer’s view the claims seem well supported. Methods And Evaluation Criteria: Yes, they seem appropriate. Theoretical Claims: I checked the proofs in section S1.1 and they appear sound. Experimental Designs Or Analyses: I did not check any of the code. The experimental designs seem appropriate for the questions that are being asked. Supplementary Material: I checked the math in S1.1 and it seemed sound. Relation To Broader Scientific Literature: There are several recent papers studying the “curse of memory” (difficulty learning long timescales) in RNNs, the effect that long-memory tasks have on learning, and how to ameliorate it, that the reviewer believes could be relevant to cite: - Approximation and Optimization Theory for Linear Continuous-Time Recurrent Neural Networks – Li et al. (2022) JMLR - Recurrent neural networks: vanishing and exploding gradients are not the end of the story – Zucchet & Orvieto (2024) NeurIPS - Generalized teacher forcing for learning chaotic dynamics – Hess et al.
(2023) PMLR The reviewer also believes it could be useful to discuss models of working memory that rely on dynamics in variables other than firing rate; see, for example, the reference in Q1 of the “Questions For Authors” section. Essential References Not Discussed: The reviewer is reasonably well versed in current neuroscience models of working memory (Working models of working memory – Barak & Tsodyks (2014) Current Op. in Neurobiology). The reviewer is currently engaged in research on learning in the neuroscience-related firing-rate RNNs studied in the paper, and is therefore quite familiar with relevant literature, including: - Flexible multitask computation in recurrent networks utilizes shared dynamical motifs – Driscoll et al. (2024) Nature Neuroscience - Generating Coherent Patterns of Activity from Chaotic Neural Networks – Sussillo & Abbott (2009) Neuron - A neuronal least-action principle for real-time learning in cortical circuits – Senn et al. (2024) eLife - Partial observation can induce mechanistic mismatches in data-constrained models of neural dynamics – Qian (2024) bioRxiv, along with the papers mentioned above Other Strengths And Weaknesses: ## Strengths The paper does a fantastic job distilling a difficult problem down to a model that is simple enough for mathematical analysis but still seems to capture certain key aspects of the problem. The insights into two different strategies an RNN can use to solve a delay task, and how task structure and learning rate can influence these, are interesting and, in the reviewer’s view, worthy contributions.
## Weaknesses The main weaknesses, in the reviewer’s view, are: (1) that the simple dynamical systems studied are a substantial departure from an RNN and little justification is provided for the choice of simplified systems (Question 2, below); (2) that machine learning connections are primarily discussed in the discussion section when the paper seems more relevant to computational neuroscience (Question 4, below); (3) the lack of discussion of certain alternative models of working memory (Question 1, below). Note: the reason for the score is that the reviewer would like to see their comments and questions addressed before being able to recommend the paper for acceptance. If these are addressed satisfactorily the reviewer will certainly increase their score. Other Comments Or Suggestions: - In the abstract: the reviewer finds “limit cycles providing temporally localized approximations” a little confusing, and wonders if clarity could be increased for this sentence. - In the abstract, the authors mention: “we derive theoretical scaling laws for critical learning rates as a function of the delay period length, beyond which no learning is possible.” It could be worth specifying that this derivation is in simplified dynamical system models rather than directly from RNNs. - Line 38, RHS: “studied” => studying - Line 45, RHS: “periodicity” => periodic - Line 73, LHS: I suggest: “Using interpretable dynamical system models stripped down to their most essential components for solving a delayed activation task” => “Using low-rank RNNs”. More concise and avoids confusion with analytical dynamical system models also studied in the paper - Line 91, RHS: I suggest changing $W/W^{in}$ to $\{W, W^{in}\}$ to avoid notational confusion with division - Equation 3: “$x$” => $\dot{x}$ - Line 359, RHS: exponents 2 and 3 should be -2 and -3. - Line 663: “an halt” => a halt - Eq S13: $T_{delay} >> T_{delay}$ => $T_{delay} >> T_{resp}$ Questions For Authors: 1.
To the reviewer’s understanding there is an alternative hypothesis for encoding working memory: that of short term plasticity (e.g. Working models of working memory – Barak & Tsodyks, 2014; see section on “Short term synaptic plasticity). Could the authors add mention of this hypothesis to the paper, or provide a compelling argument as to why it is irrelevant? 2. While compelling empirical evidence is provided for the relevance of the toy dynamical system models studied to learning in RNNs, the toy models are not rigorously connected to RNNs–at least beyond the sentence “It is worth noting that increasing the dimensionality of the dynamical system can allow more efficient solutions to the system, but the toy models we discuss can be thought as approximate bounds on what can be achieved.” Could the authors provide some deeper insight into this limitation, and the differences they expect between the behaviour of the toy models and of RNNs? This could be a useful limitation to include in the discussion. 3. The authors suggest that a key impact of their paper is the number of models the paper makes available. The reviewer wonders at the utility of providing 35,000 models all trained on a simple delay task. Could the authors propose different examples of ways this dataset could be used? The reviewer would also be curious about the environmental impacts of training this many models. 4. From the reviewer’s perspective this paper is primarily focused on learned computational mechanisms for solving a continuous-time delayed response task and therefore more relevant for neuroscience than machine learning. However, the authors spend the vast majority of the discussion talking about machine learning. The reviewer believes the paper would be more valuable if it spent more time discussing neuroscience applications, relevance to experimentalists, and potential hypotheses-to-test that the paper might generate; in particular, ways of distinguishing slow-point vs. 
limit cycle mechanisms experimentally. Would the authors be able to provide more discussion time on these neuroscience implications? 5. In biological networks one typically has modulatory signals that can increase or reduce network excitability. Do the authors have ideas for if and how a network with a limit cycle and excitability modulation (that can shut off the limit cycle after the response period) might be distinguished from the slow point mechanism, and how such a modulated limit cycle might compare with the two studied mechanisms? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful comments and for highlighting our approach's strengths—especially our effort to simplify a complex problem into an interpretable framework. Their suggestions have greatly shaped our revisions. Below, we address all specific concerns and weaknesses. **Q1** That is an excellent point. As the reviewer notes, models based on dynamic synaptic variables like STSP represent a fundamentally distinct mechanism from the activity-based attractor models we focus on. In an earlier draft, we mentioned both classes—such as Mongillo et al. (2008)—and then narrowed the focus to activity-based models (Fig. 1). Based on the reviewer’s feedback, we restored and expanded the neuroscience context. In addition to the intro, the discussion now highlights that STSP mechanisms may allow networks to bypass the learning constraints we identify for fixed-weight attractor models (e.g., Masse et al., 2019). The Conclusion links to recent AI models like Mamba, which integrate memory via local mechanisms that may functionally resemble STSP. **Q2** This point is addressed in our reply to Reviewer CYoj. **Q3** A key motivation for releasing all trained models is to enable reuse without retraining. We now clarify that the dataset required ~1 month of GPU time. Due to space constraints, we briefly outline three research directions this dataset enables (with more to be added in the Outlook): - Curriculum learning: Training pre-trained models for longer delays with reduced learning rates may enable faster convergence and offer insight into curriculum/transfer learning. The toy models could support new curriculum strategies with theoretical guarantees—relevant for experimentalists (including co-authors) who find it difficult to train mice on seconds-long delay STM tasks. - Model initialization: Exploring how weight initialization affects convergence to slow-point vs. limit-cycle strategies.
For example, initializing near an existing solution (e.g., SP) may bias learning toward that strategy and potentially shift phase boundaries (Fig. 5). - Robustness: Comparing robustness by perturbing neurons (e.g., mimicking optogenetics) may reveal key differences (oscillations vs. decay) between slow-point (SP) and limit-cycle (LC) strategies. We also refer the reviewer to our response to Reviewer 59aj’s second question for more on the dataset’s utility. **Q4** Testable predictions. Our work predicts that both SP and LC dynamics can generate sequential neural activity, but differ in structure: SP dynamics converge to a slow-point attractor by trial end, whereas LC dynamics continue producing trial-type-specific sequences beyond reward. This distinction is testable by analyzing neural activity after task completion. While Rajan et al. (2016) explored sequences in RNNs, we link them to distinct attractor types. For instance, Fig. 4 predicts that extending the post-response period biases dynamics toward LC (oscillatory) or SP (ramping) solutions. Although this period can’t be fully removed in experiments, it can be disrupted—e.g., via bulk optogenetic inhibition of task-relevant regions, providing a plausible way to test this prediction. Relation to recent studies: We will cite the suggested works and place greater emphasis on neuroscience connections (previously omitted due to space). The reviewer’s interest—as a neuroscientist—strongly encouraged this expansion. **Q5** We thank the reviewer for raising this thought-provoking possibility of a third dynamic regime—one in which a network operates in a limit-cycle (LC) mode during the task but undergoes a modulatory shift in excitability toward the end of the trial that disrupts the remaining cycle, effectively mimicking a slow-point (SP) solution in the end. This is indeed how we would expect LC solutions to be utilized in practice.
Distinguishing such a modulated LC from a true SP is an exciting experimental challenge and could reveal important aspects of synaptic learning rules and neuromodulatory control. We outline three levels of experimental inquiry to test this hypothesis in vivo (briefly): - Exp 1: The assumption of functional reorganization can be tested by combining constant-power bulk optogenetic activation with calcium imaging at different trial stages. - Exp 2: Genetically encoded fluorescent sensors (e.g., dLight, GRAB-ACh) can be used to track neuromodulatory signals during task performance. A signal peaking in the post-response window would support the proposed mechanism. - Exp 3: To establish causal roles, one could block or genetically delete the relevant receptors and observe changes in post-trial neural dynamics or task performance. **Final remarks** Thank you for the style suggestions and for pointing out the typos. We incorporated all suggested edits and reviewed the manuscript for clarity, consistency, and polish. Please let us know if any points need clarification. We sincerely appreciate the reviewer’s enthusiasm and attention to detail. --- Rebuttal Comment 1.1: Comment: Thank you very much for the comprehensive response! I have a few more clarifying questions (numbering does not correspond exactly to the previous question numbers): 1. regarding your response to reviewer CYoj on the connection between toy model and theory: I agree that modelling RNNs with low-rank RNNs is entirely justified. My question about connecting toy model and theory is not about connecting RNN and low-rank RNN, but rather about drawing a connection between the low-rank RNN and the toy models used. For example, would it be possible to demonstrate how the saddle node normal form could be derived from a rank 1 RNN, and how the parameters of the RNN might relate to the parameter $r$?
For the $\dot{x} = \sin(2\pi r t)$ model, perhaps you could provide similar intuition for how it would relate to the 2D low rank RNN, and how its $r$ parameter might relate? 2. Thank you very much for the extra details on GPU use for the dataset! As mentioned in my original review, given the month-long compute time for the project, it would be great to have a bit more information on the environmental impact. For example, it could be beneficial to include (at least in the supplementary) an estimate of both GPU hours used and GPU type, along with an approximate estimate of carbon emissions. This should not be too difficult, as emissions calculators are available online--see e.g.: https://mlco2.github.io/impact/ 3. In your response to **Q4** above you suggest that LC and SP could be distinguished by bulk optogenetic inhibition. How would the effect of opto inhibition on SP differ from its effect on LC? 4. To clarify, in your response to **Q4** above you mention "*Fig. 4 predicts that extending the post-response period biases dynamics toward LC (oscillatory) or SP (ramping) solutions*". Do you not mean that it biases towards LC and away from SP solutions? 5. In your response to **Q4** above you mention "*This distinction is testable by analyzing neural activity after task completion.*" Wouldn't a mechanism like the one I imagined in **Q5** from my review make it difficult to distinguish LC and SP based on activity after task completion? 6. Thank you very much for the detailed response to **Q5**. Will you mention this as a potential confound for distinguishing LC and SP in the main text? --- Reply to Comment 1.1.1: Comment: Thank you for the additional questions. Please find our answers below: 1) Please allow us to elaborate with step by step mathematical derivations: - A low-rank RNN *universally* approximates a flow map, $\dot \kappa (t) = G(\kappa(t))$, when we train its parameters during learning. 
- Any flow map (consider the 1-d case), unless the second derivative vanishes, can be approximated by a Taylor series around its minimum (e.g., where the slow point's locus is), where the first derivative vanishes due to the extremum conditions. Hence, the flow map becomes $\dot \kappa(t) = a + b \kappa(t)^2 + O(\kappa^3)$. - Here, we can simply rescale the time (using the symmetry there) to fix $b=1$ without loss of generality, achieving the normal form. This is a standard calculation in the dynamical systems literature and would amount to rescaling $n$ and $m$ in our RNN such that their product $W = mn^T$ remains invariant (i.e., an inherent symmetry of the system). - Then, for a given set of $n$ and $m$ variables that support this slow-point manifold, the latent dynamical system approximated by the rank-one RNN would approximate this function such that $\dot \kappa(t) = - \kappa(t) + n^T \tanh(m\kappa(t)) \approx r + \kappa^2$. - Now, since there is an attractor around some $\kappa \approx \kappa^*$, the latent activities will be in the neighborhood of the local minimum (again, by definition of an attractor) most of the time, hence the time evolution of the RNN will be well approximated by this local approximation (also shown in our Fig. S1). - During learning, the small changes in parameter $n$ (a similar argument can be made for changes in $m$) would then be approximated as: $(n+\Delta n)^T \tanh(m\kappa) = n^T \tanh(m\kappa) + \Delta n^T \tanh(m\kappa)$. Since $\kappa(t) \approx \kappa^*$ most of the time (which becomes exact as $T \to \infty$, as the slow point becomes a fixed point), the second term can be approximated by $\Delta n^T \tanh(m\kappa^*)$. This is a linear function of $\Delta n$, i.e., the change in the encoding weights. - Hence, the gradient that governs $r$ in the toy model $\dot \kappa = \kappa^2 + r$ also governs the infinitesimal linear changes; $\Delta n$ corresponds to $r$ in the toy model!
But then, this is already enough to make our case, since changes in $\Delta n$ will be subject to the same scaling laws as $r$ in our toy model in the limit $T \to \infty$. Now, for the limit cycle, we can follow exactly the same arguments, but we have to assume that the latent dynamical system is two dimensional. Moreover, we would need to use the form in Eq. (S15), not (S16), as the former is the dynamical system and not the latter, though the steps for the derivations are analogous (where we now approximately fix the radius $\|\kappa\|_2 = R^*$). In the end, the math comes down to $\dot \kappa_1 = - 2 \pi r \kappa_2$, where changes in $r$ would correspond to $\Delta m$. 2) Thank you for this great website. We used computers with NVIDIA RTX 3090 GPUs, which amounts to about 110 kg of CO2 emissions, 440 km driven by an ICE car, 55 kg of coal burned, or about 2 tree seedlings sequestering carbon for 10 years. We will acknowledge this and cite the website in the acknowledgements. This is an important point, thank you! 3) Bulk optogenetic inhibition during trials would disrupt the neural activities in both models outside of the steady state. For a limit cycle, the return to steady state would take the form of continuing the sequential activity, whereas slow-point manifolds that define sequential activity through closeness to the locus would "restart" the sequence. This would provide first evidence, though the experiments in Q5 would need to be conducted. Bulk optogenetics post-trial could allow us to test whether we can bias the solutions as predicted by theory, since when bulk opto is applied during learning, animals could not reliably use the post-trial window for learning. 4) Correct, this was a typo. 5) Exactly, this is why the reviewer's intuition was correct and the experiment in Q5 is a necessary next step.
On the other hand, if animals were forced to withhold their responses for a short window after their behavior and before the reward is delivered, one could also expect to see echoes of a limit cycle (which is how we plan to start these experiments with mice). Notably, if limit cycles are observed after trial completion, that is definitive evidence; whereas during the trial, limit cycles, slow points, and the intermediary mechanism described by the reviewer would behave similarly. 6) Yes, we will include these in the discussion, which is in the main text. Specifically, the discussion will now be more tailored to neuroscientists. We believe this was our final response due to the rules. As an experimental neuroscience lab, we are confident we can present these points clearly and accessibly in the final version. We are grateful for the reviewer’s engagement and hope our responses support an improved evaluation!
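For readers following the thread, the reduction sketched in the reply above can be condensed into a few lines. This is a LaTeX restatement of the rebuttal's own steps, using the same notation ($\kappa^*$ is the slow point's locus, $r$ the toy-model parameter); the $\approx$ signs mark the Taylor and attractor-neighborhood approximations:

```latex
% Condensed restatement of the rank-one-RNN -> saddle-node argument above.
\begin{align}
  \dot\kappa(t) &= -\kappa(t) + n^{\top}\tanh\!\big(m\,\kappa(t)\big) =: G(\kappa(t)), \\
  G(\kappa) &\approx G(\kappa^{*}) + \tfrac{1}{2}\,G''(\kappa^{*})\,(\kappa - \kappa^{*})^{2}
    \quad \text{(Taylor at the extremum, } G'(\kappa^{*}) = 0\text{)}, \\
  \dot\kappa &= a + b\,\kappa^{2} \;\longrightarrow\; \dot\kappa = r + \kappa^{2}
    \quad \text{(rescale time so that } b = 1\text{)}, \\
  (n + \Delta n)^{\top}\tanh(m\kappa) &\approx n^{\top}\tanh(m\kappa)
    + \Delta n^{\top}\tanh(m\kappa^{*})
    \quad \text{(linear in } \Delta n\text{)},
\end{align}
```

so the learning-induced change $\Delta n$ plays the role of $r$ in the toy model and inherits its scaling laws.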
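To make the claimed delay scaling tangible, here is a minimal numerical sketch (our illustration, not code from the paper or rebuttal) of the saddle-node normal form $\dot{x} = r + x^2$ discussed in this thread: the passage time through the slow bottleneck near $x = 0$ diverges like $\pi/\sqrt{r}$ as $r \to 0^+$, which is the mechanism that lets a slow point implement a long delay. The forward-Euler scheme and the $\pm 10$ cutoffs are arbitrary choices for illustration.

```python
import math

def passage_time(r, x0=-10.0, x_end=10.0, dt=1e-4):
    """Forward-Euler integrate x' = r + x^2 and return the time taken
    to traverse the slow bottleneck near x = 0 (the "slow point")."""
    x, t = x0, 0.0
    while x < x_end:
        x += dt * (r + x * x)
        t += dt
    return t

# The bottleneck time grows like pi / sqrt(r) as r -> 0+, so a small r
# buys a long, tunable delay between stimulus and response.
for r in (0.04, 0.01):
    print(f"r={r}: simulated {passage_time(r):.1f}, theory ~{math.pi / math.sqrt(r):.1f}")
```

Shrinking $r$ by a factor of four roughly doubles the delay, consistent with the inverse-square-root law.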
Plan-and-Act: Improving Planning of Agents for Long-Horizon Tasks
Accept (poster)
Summary: The paper proposes a Plan-and-Act methodology for long-horizon web tasks. The basic premise of the Plan-and-Act method is that it decomposes long-horizon planning into two modules: planning and executing. The planner module creates a long-horizon plan and the executor executes actions relevant to completing the tasks in this plan. The paper also introduces a dataset augmentation and synthesis process to fine-tune LLMs in the planning phase. The model achieves a ~10% increase in success rates compared to previous SOTA on the WebArena-Lite benchmark. Claims And Evidence: The claims made in the paper are supported well by the experiments. That being said, I am always a bit skeptical about the reproducibility of papers that rely on LLMs for evaluation. The experimental process for reproducing the tables is not provided in the paper (seeds, temperature, etc.). This makes it slightly harder to judge the effectiveness of the paper, as I am not sure if only the best results obtained with the LLMs were included in the paper. I would urge the authors to give more information on the reproducibility of the tables in the paper and any experimental parameters so that it is easier for the reader to reproduce them. Methods And Evaluation Criteria: I feel that the method introduced in this work has been introduced in a few other research papers before as well (see the Essential References Not Discussed section), albeit for other applications (robotics, etc.). These papers also focus on decomposing long-horizon planning into multiple modules (planning + acting w/ replanning). I would urge the authors to discuss the differences of these prior works when compared to Plan-and-Act and maybe show a comparison against these methods as well. It might be possible that the superior performance might be due to the long customised prompts given as input to the LLMs (as shown in the appendix).
Right now, in its current state, I am going to lean towards weak accept but am willing to change if the authors can address some of the concerns I have raised in my review (specifically, the comparison to other plan-act-based methods) The evaluation criteria seem to be from a standard benchmark and I have no qualms about it. The benchmark is apt for the application at hand and the proposed method performs better than previous SoTA. Theoretical Claims: No theoretical claims in the paper. Mostly experimental. Experimental Designs Or Analyses: The experimental design is valid as it is based on a benchmark dataset (WebArena-Lite). I would suggest the authors to include experimental details for better reproducibility of the experiments. Supplementary Material: Most of the supplementary material is about the prompts used and the outputs from the LLMs. It is quite detailed. Relation To Broader Scientific Literature: The results show improvement compared to previous SoTA. But the method used has been introduced previously for other applications. I am not sure if this would count as a novel contribution to the scientific literature. Essential References Not Discussed: The papers below operate with a similar plan-act (+extra modules) for embodied robotics tasks. My initial thought was that something like this could be extended to the WebArena environment as well. [1]: Chan Hee Song, Jiaman Wu, Clayton Washington, Brian M. Sadler, Wei-Lun Chao, and Yu Su. LLMPlanner: Few-Shot Grounded Planning for Embodied Agents with Large Language Models, 2023 [2]: Nayak, S., Morrison Orozco, A., Have, M., Zhang, J., Thirumalai, V., Chen, D., ... & Balakrishnan, H. (2024). Long-horizon planning for multi-agent robots in partially observable environments. Advances in Neural Information Processing Systems, 37, 67929-67967. [3]: Shyam Sundar Kannan, Vishnunandan LN Venkatesh, and Byung-Cheol Min. SMART-LLM: Smart Multi-Agent Robot Task Planning using Large Language Models. 
arXiv preprint arXiv:2309.10062, 2023 Other Strengths And Weaknesses: I appreciate the way in which the results are presented, where the increase in success rates is clearly visible with the addition of each module. Other Comments Or Suggestions: NA ### Update after rebuttal: I wasn't sure if the "official comments" were visible to the authors and hence am including them here: I appreciate the authors' rebuttal. I am assuming that the authors will include the hyperparameters, the discussion of additional experiments on WebVoyager, and the other missing references raised in the reviews in the camera-ready. I am increasing my score by 1 as the authors have clarified my questions. A suggestion: it would be nicer for the reviewers if you could copy the text of particular answers to questions raised in the rebuttal instead of redirecting to the rebuttals of other reviewers. It gets hard to follow in the current state, e.g., "please see R3-5 for more details". Questions For Authors: I would urge the authors to present more experimental details for better reproducibility and a thorough explanation on why Plan-and-Act is different than other plan-act methods used in the papers linked above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > R4-1: The experimental process for reproducing the tables is not provided in the paper (seeds, temperature, etc.). This makes it slightly harder to judge the effectiveness of the paper as I am not sure if the best results obtained with the LLMs were included in the paper. I would urge the authors to give more information on the reproducibility of the tables in the paper and any experimental parameters so that it is easier for the reader to reproduce. **Hyperparameters for SFT** We have used the following parameters for training both the Planner and the Executor models, both for the 70B and 8B models. As the 70B model, we have used Llama-3.3-70B-Instruct as the base model; and for the 8B model, we have used Llama-3.1-8B-Instruct as the base model. - Learning Rate: 2e-5 - Optimizer: AdamW - LR Scheduler: Cosine - Warmup Ratio: 0.1 - Batch Size: 32 - Epochs: 1 - FP16/BF16: Enabled - Machine: 8xA100 - Framework: torchtune **Hyperparameters for Inference** - Temperature: 0 - Framework: vLLM - Max tokens generated: 4196 - Maximum sequence length: 32000 **Hyperparameters for Data Generation** We used GPT-4o for all data generation stages. When generating synthetic data, for each generation, we have retrieved 5 in-context examples and generated 10 new synthetic user query-plan pairs. > R4-2: I feel that the method introduced in this work has been introduced in a few other research papers (refer essential references not discussed section) before as well, albeit for other applications (robotics, etc.) These papers also focus on decomposing the long-horizon planning into multiple modules (planning + acting w/ replanning). I would urge the authors to discuss the differences of these prior works when compared to Plan-and-Act and maybe show a comparison against these methods as well. It might be possible that the superior performance might be due to the long customised prompts given as input to the LLMs (as shown in the appendix).
We thank the reviewer for their feedback. For method distinctions, see R3-4 in our response to Reviewer 3. For comparisons on other datasets, please see R2-1 (WebVoyager)/R2-2 (WebArena) in our response to Reviewer 2, where we show that our model performs on-par with other prior work on WebArena and set a new SOTA for text-only models on WebVoyager. Regarding the prompts given as input to the LLMs, most of the customized prompts in the appendix are for the data generation pipeline. At inference time, the System prompts for the Planner, Executor, and for Replanning are only the prompts in A.3.1, A.4.1, and A.10.1, which are fairly high-level and generic and similar in length to other prior work (See Figure 21 in WebRL). > R4-3: The experimental design is valid as it is based on a benchmark dataset (WebArena-Lite). I would suggest the authors to include experimental details for better reproducibility of the experiments. Please see R4-1. > R4-4: The results show improvement compared to previous SoTA. But the method used has been introduced previously for other applications. I am not sure if this would count as a novel contribution to the scientific literature. Please see R3-4/R4-5. > R4-5: The papers below operate with a similar plan-act (+extra modules) for embodied robotics tasks. My initial thought was that something like this could be extended to the WebArena environment as well. [1]: Chan Hee Song, Jiaman Wu, Clayton Washington, Brian M. Sadler, Wei-Lun Chao, and Yu Su. LLMPlanner: Few-Shot Grounded Planning for Embodied Agents with Large Language Models, 2023 [2]: Nayak, S., Morrison Orozco, A., Have, M., Zhang, J., Thirumalai, V., Chen, D., ... & Balakrishnan, H. (2024). Long-horizon planning for multi-agent robots in partially observable environments. Advances in Neural Information Processing Systems, 37, 67929-67967. [3]: Shyam Sundar Kannan, Vishnunandan LN Venkatesh, and Byung-Cheol Min. 
SMART-LLM: Smart Multi-Agent Robot Task Planning using Large Language Models. arXiv preprint arXiv:2309.10062, 2023 We thank the reviewer for these additional references which we will add to our related work. Indeed, these works [1,2,3] use hierarchical LLM Agents to decompose tasks and plan for robotics/embodied agents which shares some similarity with Plan-and-Act. However, similar to the other planning based web agents mentioned in R3-4, none of these prior work contain a framework for collecting and generating synthetic data for training open source LLMs to get better at these tasks. The synthetic data generation pipeline of Plan-and-Act is what differentiates it from other prior work with planning and agents. > R4-6: I would urge the authors to present more experimental details for better reproducibility and a thorough explanation on why Plan-and-Act is different than other plan-act methods used in the papers linked above. Please see R4-1 for more experimental details and R3-4/R4-5 for a detailed explanation on how Plan-and-Act is different from other plan-act methods.
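As background for readers comparing these plan-act frameworks, the planner/executor decomposition being debated can be sketched as a minimal plan-then-execute loop with replanning. This is our illustrative schematic only; `call_planner`, `call_executor`, and `env_step` are hypothetical stubs standing in for the LLM calls and the web environment, not the paper's implementation.

```python
# Minimal sketch of a plan-then-execute agent loop with replanning.
# The stub functions below are hypothetical placeholders.

def call_planner(task, history):
    # Would prompt a planner LLM with the task and history so far;
    # here it returns one fixed two-step plan.
    return ["open search page", "submit query"]

def call_executor(step, observation):
    # Would prompt an executor LLM for a low-level grounded action.
    return f"click[{step}]"

def env_step(action):
    # Would apply the action in the web environment; returns (obs, done).
    return f"obs after {action}", action == "click[submit query]"

def run_agent(task, max_replans=2):
    history = []
    for _ in range(max_replans + 1):
        plan = call_planner(task, history)   # (re)plan from current history
        for step in plan:
            obs = history[-1][1] if history else "initial page"
            action = call_executor(step, obs)
            obs, done = env_step(action)
            history.append((action, obs))
            if done:
                return history
    return history

trajectory = run_agent("find the product page")
print(len(trajectory), trajectory[-1][0])
```

The point of the sketch is only the control flow: high-level plan steps are produced separately from the low-level actions that ground them, and the outer loop allows replanning when a plan runs out without success.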
Summary: This paper proposes Plan-and-Act, an agent for web environments which separates planning from execution. A planner generates the overall plan, and a separate executor carries out the plan by issuing low-level actions. In order to train the planner, a synthetic data generation method is introduced to annotate trajectories with feasible plans. The method achieves a new SOTA on WebArena-Lite. Claims And Evidence: The proposed method outperforms the previous SOTA, WebRL, and well-designed ablations show that the proposed components in the planner and executor design contribute to the performance. Methods And Evaluation Criteria: Methods are evaluated on WebArena-Lite success rate. Baselines include finetuning ReAct-style without a planner, and WebRL. Theoretical Claims: n/a Experimental Designs Or Analyses: see above Supplementary Material: n/a Relation To Broader Scientific Literature: There are several issues with the paper in terms of relation to the broader literature: - It is difficult to understand the contributions of the paper within the broader literature of planning in LLM agents, as discussion of related works in agents with planning is missing - The paper does not discuss the difference between WebArena and WebArena-Lite, and it is difficult to understand the distinction of the approach among the many approaches in WebArena Essential References Not Discussed: There are numerous prior works which propose a dynamic plan-and-execute architecture, which are not discussed in the paper: - AgentOccam: A Simple Yet Strong Baseline for LLM-Based Web Agents, ICLR 2025 - WebPilot: A Versatile and Autonomous Multi-Agent System for Web Task Execution with Strategic Exploration - Adaptive planning from feedback with language models, NeurIPS 2023 Other Strengths And Weaknesses: The strengths of the paper are training a strong plan-and-execute approach using synthetic data, while its weaknesses are the lack of discussion of related works and limited novelty, as
plan-and-execute is a common approach among related works. I am willing to increase my score if the authors address these points. Other Comments Or Suggestions: - It would be beneficial to report results using a smaller Llama model as well, to demonstrate the generality of the proposed approach. Questions For Authors: see above Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > R3-1: It is difficult to understand the contributions of the paper within the broader literature of planning in LLM agents, as discussion of related works in agents with planning is missing We thank the reviewer for their feedback. Please see response R3-4. > R3-2: The paper does not discuss the difference between Webarena and Webarena-Lite, and it is difficult to understand the distinction of the approach among the many approaches in Webarena We appreciate the reviewer’s feedback. WebArena-Lite was introduced in VisualAgentBench as a subset of the full WebArena, refined to remove unclear and impossible tasks; it also provides a training set, which the original WebArena benchmark does not. For a discussion on how our approach differs from other prior work in this area, please refer to R3-4. For a comparison of results, please refer to R2-2, where we show that Plan-and-Act performs on par with existing prior work on the full WebArena benchmark while being completely open source. > R3-3: There are numerous prior works which propose a dynamic plan-and-execute architecture, which are not discussed in the paper: > > AgentOccam: A Simple Yet Strong Baseline for LLM-Based Web Agents, ICLR 2025 > WebPilot: A Versatile and Autonomous Multi-Agent System for Web Task Execution with Strategic Exploration > Adaptive planning from feedback with language models, NeurIPS 2023 We appreciate the feedback from the reviewer. AgentOccam and WebPilot were referenced in the Introduction as well as Section 2.1 of the Related Work section, and we thank the reviewer for the pointer to AdaPlanner. We will expand and discuss these works in the final version of the paper. For a more in-depth discussion of how our work differs from the existing literature, please see R3-4. > R3-4: … its weaknesses are the lack of discussion of related works and limited novelty, as plan-and-execute is a common approach among related works. 
I am willing to increase my score if the authors address these points. The Plan-and-Act framework is not just a hierarchical planning framework, but also a data generation framework. Prior works that involve planning and hierarchies, such as AgentOccam, WebPilot, AdaPlanner, and ADaPT, are all prompting methods using closed-source models such as GPT-4o as their base model. Our method provides a simple, systematic way to generate high-quality training data to train LLMs on web tasks. In addition, our method uses a very simple 2-agent framework, which is significantly simpler compared to other prior work with planning. AgentOccam uses a “Planning via Generation” technique where the planning is incorporated into the action space and the model plans in a tree-like fashion. WebPilot has a significantly more complex infrastructure with 6 different agents in total. AdaPlanner has an In-Plan and Out-of-Plan Refiner to facilitate replanning when the plan is wrong, and a skill-discovery module that is orthogonal to our method and can be used in conjunction. ADaPT uses recursive decomposition to decompose tasks when the executor fails, whereas our dual-agent architecture simply replans at each step. All of these methods use more extensive prompting to improve performance, while our method has a simple Plan-and-Act structure at runtime. Other works that discuss generating training data for web agents, such as DigiRL, WebRL, AutoWebGLM, and NNetNav, provide more complex techniques for collecting diverse trajectories, which are complementary to Plan-and-Act, as our pipeline (Section 4.1) is simple and can be interchanged. Furthermore, they only produce trajectory data, but not planning data (Section 4.2). They also rely on external simulators to generate data, whereas our method can generate synthetic planning data without a simulator (Section 4.3). 
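For illustration only, a minimal, hypothetical sketch of the dual-agent loop described above, where the planner is re-invoked after every executed step (dynamic replanning); all function names here are placeholders, not the actual implementation:

```python
from typing import Callable, List

def plan_and_act(
    task: str,
    planner: Callable[[str, List[str]], List[str]],  # (task, history) -> remaining high-level plan
    executor: Callable[[str, str], str],             # (plan step, last observation) -> low-level action
    step_env: Callable[[str], str],                  # apply an action, return the new observation
    max_steps: int = 30,
) -> List[str]:
    """Hypothetical sketch of a dual-agent loop with dynamic replanning."""
    history: List[str] = []
    for _ in range(max_steps):
        plan = planner(task, history)  # replan from scratch at each step
        if not plan:                   # an empty plan signals task completion
            return history
        action = executor(plan[0], history[-1] if history else "")
        observation = step_env(action)
        history.append(f"{action} -> {observation}")
    return history
```

The point of the sketch is the control flow: a single planner call and a single executor call per environment step, with no tree search, recursion, or auxiliary refiner agents.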
> R3-5: It would be beneficial to report results using a smaller Llama model as well, to demonstrate the generality of the proposed approach. We thank the reviewer for this feedback. We have also trained a Llama-3.1-8b-instruct model using the Plan-and-Act framework. We also decided to add CoT-style reasoning to the Planner and Executor so that each generates some reasoning before it generates the plan/action. Furthermore, we finetuned another Llama-70B model on this data to see how the Llama-70B model performs with CoT: - Llama-70B with Dynamic Replanning: 53.94% (89/165) - Llama-8B with CoT with Dynamic Replanning: 53.33% (88/165) - Llama-70B with CoT with Dynamic Replanning: **57.58**% (95/165) We evaluated smaller Llama-8B models and found that our approach still outperforms existing methods (53.33%), demonstrating strong generality. With CoT reasoning, our Llama-70B achieves **57.58**%, setting a new SOTA on the WebArena-Lite dataset. We will include these results in the final version of our paper.
Summary: The authors propose Plan-and-Act, which consists of two separate modules for planning and acting (execution), with dynamic replanning for better adaptation to different situations. The Planner generates high-level plans, which are taken as input for the Executor to generate low-level actions. Importantly, for the training of the two modules with enough domain knowledge, the authors use synthetic data generation. While the low-level actions can be the training data for the Executor, the authors suggest two approaches for enhanced training of the Planner. The first approach is plan annotation, which annotates collected trajectories with plan "labels" by prompting LLMs. To scale up the training data for the Planner, a second approach synthetically generates plan data. On WebArena-Lite, they show that Plan-and-Act can outperform the baselines. ## update after rebuttal I appreciate the authors for providing the extended empirical results. While the added empirical results do add value to the work and address some of my concerns, I still believe neither WebArena nor WebVoyager justifies the claim of being "long-horizon." From my experience, neither WebVoyager nor WebArena requires long-horizon executions. On a relatively minor but related topic, the number of steps for the failure trajectories shouldn't be used for measuring the complexity of tasks (and even successful trajectories can contain non-optimal actions and steps). Claims And Evidence: - The claims that constitute the proposed approach are empirically supported by the results on WebArena-Lite. In particular, Table 1 provides a performance improvement breakdown across the components of the proposed method. Methods And Evaluation Criteria: - The proposed method for synthetic generation of planning data is sound. In particular, for environments or domains where the dynamics itself can change over time, such as web navigation, generating data that is grounded to the actual environment is important. 
- (Also as mentioned in this paper,) synthetic augmentation of planning training data can introduce noise to some extent. Theoretical Claims: There are not many theoretical claims in this submission. Experimental Designs Or Analyses: - One primary weakness of this work is its empirical evaluation. It only provides the evaluation on WebArena-Lite, which employs non-real-world websites as part of the environment. The experimental results may be strengthened by evaluating the proposed approach on more realistic benchmarks, such as WebVoyager. - SOTA claims on WebArena-Lite (not just from this submission) may need further investigation, as many papers still use the original WebArena for evaluation and the corresponding performance on the WebArena-Lite subset should be derivable. Supplementary Material: I checked out some prompt examples. Relation To Broader Scientific Literature: - While the targeted problem itself is relevant to LLM agents, the current empirical evaluation (only on WebArena-Lite) makes it hard to assess the proposed method's broader applicability. Essential References Not Discussed: - More papers can be cited in the context of using separate planner and executor, especially "ADaPT: As-Needed Decomposition and Planning with Language Models" Other Strengths And Weaknesses: - Given the task distribution of WebArena(-Lite) and example plans and trajectories, the use of the term "long-horizon" can be an overstatement or misleading. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > R2-1: One primary weakness of this work is its empirical evaluation. It only provides the evaluation on WebArena-Lite, which employs non-real-world websites as part of the environment. The experimental results may be strengthened by evaluating the proposed approach on more realistic benchmarks, such as WebVoyager. That is a fair point. We have evaluated our method on WebVoyager and report the results below, where we achieve **80.02**% accuracy, which is SOTA for text-only models (note that OpenAI Operator uses a multi-modal approach). To evaluate our approach on WebVoyager, we first collected training data, since WebVoyager does not have any trajectories. We used the text-only WebVoyager model and generated 1500 trajectories using the Action Trajectory Generation (Section 4.1). We then used QWQ-32B to annotate our trajectories (Section 4.2) and to generate 10k synthetic plans (Section 4.3). Our model uses both Dynamic Replanning and the CoT reasoning introduced in R3-5. We finetuned two Llama-3.1-8b-instruct models for the Planner and Executor. Furthermore, we tried using QWQ-32B as a zero-shot executor with our finetuned Llama-3.1-8b-instruct model as the planner. Our 8B planner and executor achieve an accuracy of 58.08%, and our 8B planner with 32B executor achieves an accuracy of **80.02**%, which sets a new SOTA among all open-source models, as well as among all text-only models, since OpenAI Operator uses vision. 
| Technique | Base Model | WebVoyager Accuracy (%) | | ---------------------- | ------------------------------------------------------- | ----------------------- | | WebVoyager (text-only) | gpt-4-turbo | 44.3 | | NNetNav | llama-8b-instruct | 34.2 | | OpenWebVoyager | Idefics2-8b-instruct | 27.4 | | Wilbur | gpt-4-turbo | 52.6 | | WebVoyager | gpt-4-turbo | 57.1 | | Plan-and-Act | llama-8b-instruct planner + llama-8b-instruct executor | 58.08 | | Agent-E | gpt-4-turbo | 73.1 | | Plan-and-Act | llama-8b-instruct planner + zero-shot QWQ-32B executor | **80.02** | | OpenAI Operator | OpenAI Operator | 87.0 | > R2-2: SOTA claims on WebArena-Lite (not just from this submission) may need further investigations, as many papers still use the original WebArena for evaluation and the corresponding performance on the WebArena-Lite subset should be derivable. We appreciate the feedback from the reviewer. We investigated your suggestion and found that while some papers do release traces that allow you to see the trajectories, the evaluation of these traces is impossible for some tasks without running the simulation itself. Thus, we evaluated Plan-and-Act on the full WebArena benchmark. We used the Llama 70B model with CoT that we introduced in R3-5. Below, you can see our performance compared to other work prior to the ICML deadline. Plan-and-Act performs better/on-par with all prior work, while being open-source. | Method | Base Model | WebArena Accuracy (%) | | ---------------- | --------------- | --------------------- | | NNetNav | Llama-3.1-8b | 16.3 | | AutoWebGLM | ChatGLM3-6B | 18.2 | | WebPilot | gpt-4o | 37.2 | | AgentOccam | GPT-4-Turbo | 43.1 | | AgentOccam-Judge | GPT-4-Turbo | 45.7 | | Plan-and-Act | Llama-70B | 45.7 | | Openai Operator | Openai Operator | 58.1 | > R2-3: While the targeted problem itself is relevant to LLM agents, the current empirical evaluation (only on WebArena-Lite) makes it hard to assess the proposed method's broader applicability. 
Please see R2-1 and R2-2. > R2-4: More papers can be cited in the context of using separate planner and executor, especially "ADaPT: As-Needed Decomposition and Planning with Language Models" We will add ADaPT to the related work. > R2-5: Given the task distribution of WebArena(-Lite) and example plans and trajectories, the use of the term "long-horizon" can be an overstatement or misleading. We kindly refer the reviewer to the table in R1-1, in the response to Reviewer 1. There, we have provided a breakdown of the number of steps per task in WebArena-Lite. Tasks take around 9-13 steps on average. --- Rebuttal Comment 1.1: Comment: I appreciate the authors for providing the extended empirical results. While the added empirical results do add value to the work and address some of my concerns, I still believe neither WebArena nor WebVoyager justifies the claim of being "long-horizon." From my experience, neither WebVoyager nor WebArena requires long-horizon executions. On a relatively minor but related topic, the number of steps for the failure trajectories shouldn't be used for measuring the complexity of tasks (and even successful trajectories can contain non-optimal actions and steps). --- Reply to Comment 1.1.1: Comment: We appreciate the reviewer's valuable feedback and agree that the term 'long-horizon' may lead to misunderstandings given the task distributions in WebArena and WebVoyager. We will clarify this in the revised manuscript by softening the terminology, emphasizing our approach's potential applicability toward longer-horizon tasks, and avoiding overstating current task complexity.
Summary: This paper introduces Plan-and-Act, a framework consisting of a planner that generates high-level task plans and an executor that translates these plans into specific actions. To deal with unexpected failures, the planner will be involved in updating the plan after each execution step. Besides, a synthetic data generation method is proposed to finetune the planner. Through experiments in the WebArena-Lite environment, Plan-and-Act achieves a state-of-the-art success rate. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes, the proposed methods make sense for the web task. However, the paper only reports the success rate of the methods on the WebArena-Lite benchmark. Additional metrics, such as the average number of steps required to complete a task, would provide a more comprehensive assessment. Theoretical Claims: No, there is no theoretical claim. Experimental Designs Or Analyses: Yes. Supplementary Material: No. Relation To Broader Scientific Literature: A new synthetic data generation strategy can improve LLM's performance for long-horizon web tasks. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: 1. The experimental results are positive, achieving new state-of-the-art performance on the WebArena-Lite benchmark. 2. The paper provides sufficient ablation studies to demonstrate the contribution of individual components. Weaknesses: 1. The proposed framework lacks novelty, as it essentially follows a hierarchical planning approach and utilizes environmental feedback for replanning—both of which are commonly used in planning applications. 2. The experiments are conducted solely on WebArena-Lite, a simulated environment. It would be more informative to evaluate the approach on WebVoyager, which better reflects real-world web behavior. 3. More prior methods [1, 2] should be included in the experimental comparison to better contextualize the improvements. 4. 
The writing could be improved in the following aspects: (1) The Related Work section should not merely summarize previous studies but should also explicitly discuss the similarities and differences between the proposed approach and existing methods. (2) The paper should include a thorough discussion of the limitations of the proposed method. References: [1] Zhang et al., WebPilot: A Versatile and Autonomous Multi-Agent System for Web Task Execution with Strategic Exploration. [2] Yang et al., AgentOccam: A Simple Yet Strong Baseline for LLM-Based Web Agents. Other Comments Or Suggestions: No. Questions For Authors: See Weaknesses and Evaluation Criteria. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > R1-1: The paper only reports the success rate of the methods on the WebArena-Lite benchmark. Additional metrics, such as the average number of steps required to complete a task, would provide a more comprehensive assessment. Below are additional metrics, including average steps and a success/failure breakdown, as suggested. To provide a more comprehensive assessment, we have also provided a breakdown comparing successful and unsuccessful tasks across the different websites/tasks. Furthermore, we have run experiments on other datasets including WebVoyager. Please see R2-1 response below. | Website | # Tasks | Avg. Steps (All) | Avg. Steps (Success) | Avg. Steps (Fail) | Success Rate (%) | | ----------------- | ------- | ---------------- | -------------------- | ----------------- | ---------------- | | Overall | 165 | 11.12 | 7.52 | 13.43 | 53.9% | | GitLab | 30 | 13.7 | 5.98 | 20.35 | 53.3% | | Reddit | 19 | 9.37 | 8.31 | 9.92 | 84.2% | | Shopping Admin | 35 | 12.4 | 8.65 | 14.41 | 48.6% | | Shopping | 45 | 9.87 | 7.11 | 10.66 | 55.6% | | Map | 26 | 10.00 | 10.37 | 9.10 | 46.2% | | Multiple Websites | 10 | 11.70 | 6.00 | 17.83 | 30.0% | > R1-2: The proposed framework lacks novelty, as it essentially follows a hierarchical planning approach and utilizes environmental feedback for replanning—both of which are commonly used in planning applications. We would like to direct the reviewer to our response R3-4 in our response to Reviewer 3. > R1-3: The experiments are conducted solely on WebArena-Lite, a simulated environment. It would be more informative to evaluate the approach on WebVoyager, which better reflects real-world web behavior. We conducted new experiments on WebVoyager, please see R2-1 in our response to Reviewer 2, where Plan-and-Act achieves SOTA results for text-only models on WebVoyager with an accuracy of **80.02%**. 
> R1-4: More prior methods [1, 2] should be included in the experimental comparison to better contextualize the improvements. AgentOccam [1] and WebPilot [2] do not report results on WebArena-Lite, so we evaluated our method on the full WebArena benchmark; please see R2-2 for a detailed comparison, where we find that Plan-and-Act achieves performance on par with or better than prior work on WebArena. > R1-5: The writing could be improved in the following aspects: (1) The Related Work section should not merely summarize previous studies but should also explicitly discuss the similarities and differences between the proposed approach and existing methods. (2) The paper should include a thorough discussion of the limitations of the proposed method. Regarding related work, we will expand it based on the discussion in R3-4. Regarding the limitations, one main drawback is that Action Trajectory Generation (Section 4.1) does depend on having a baseline model that can successfully complete the web tasks. The synthetic data generation pipeline introduced in Section 4.3 is able to mitigate some of these concerns given a sufficient amount of training data. However, for datasets that do not have any training data, such as WebVoyager, the pipeline will depend on having a base model to collect trajectories. We will include a more thorough discussion of limitations in the final version of the paper. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' comprehensive response. It addressed my concerns. So I increased my score.
Dynamic Range Reduction via Branch-and-Bound
Reject
Summary: This paper tackles the numerical precision challenges of solving NP-hard QUBO problems on low-precision hardware accelerators (e.g., quantum annealers, FPGAs) by introducing a dynamic range (DR)-aware optimization framework. The authors propose a hybrid Branch-and-Bound algorithm with policy rollout to iteratively compress the DR of QUBO matrices while preserving global optima, enabling compatibility with reduced-precision representations. Key innovations include formalizing DR reduction as a Markov decision process, designing efficient bounds for pruning suboptimal search paths, and validating the method on real hardware. Experiments demonstrate significant DR reduction across ML-related QUBO instances, outperforming greedy baselines and enhancing solvability on quantum/FPGA platforms. Claims And Evidence: Yes, the claims are supported by clear and convincing evidence. Methods And Evaluation Criteria: The methods and evaluation criteria are appropriate but require scalability validation. For example, experiments are limited to small-scale QUBO instances with n≤20 (i.e., 20 binary variables). Scalability for large-scale problems with n>100 remains unverified. Theoretical Claims: Yes, the proofs for the theoretical claims are correct. Experimental Designs Or Analyses: Experimental designs are sound but could be expanded. BinClus/SubSum datasets rely on synthetic outliers (Appendix D), raising concerns about real-world applicability. Supplementary Material: The supplementary material is comprehensive. Relation To Broader Scientific Literature: The work extends prior research in meaningful ways: 1. DR as a hardware-centric metric: Builds on Stollenwerk et al. (2019a,b) and Yachi et al. (2023) but generalizes beyond CR/BW. 2. Algorithmic innovation: Integrates policy rollout (Bertsekas et al., 1997) into B&B, addressing local optima in Mücke et al. (2025). Essential References Not Discussed: No critical references are omitted. 
Other Strengths And Weaknesses: Strengths: (1) Originality: First to formalize DR reduction as an MDP and integrate B&B with policy rollout for QUBO. (2) Practical impact: Validates DR compression on real hardware, enabling low-precision AI accelerators. Weaknesses: (1) Theoretical gaps: No convergence guarantees for policy rollout. (2) Limited scalability: Experiments focus on small-scale problems (n≤20). Other Comments Or Suggestions: N/A Questions For Authors: Q1: How does the method guarantee global optima preservation when the initial z∗ is unknown (e.g., for high-dimensional QUBO)?  Q2: How sensitive is the method to the choice of the rollout depth and the number of iterations? Q3: While the paper focuses on hardware solvers (QA and DA), classical solvers are often used for QUBO problems. How does the proposed DR reduction method compare to classical solvers in terms of solution quality and runtime for the same QUBO instances? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful and detailed feedback, as well as for highlighting the originality, theoretical soundness, and practical relevance of our contributions. We respond to the raised concerns below. ### Scalability and Small-Scale Evaluation ($n \le 20$) We acknowledge that the experiments focus on QUBO instances with up to 20 binary variables. This is primarily due to hardware limitations of current quantum solvers, which restrict instance size due to limited qubit count or precision granularity. Nonetheless, we emphasize that: - Our algorithm is not limited to small-scale instances in principle. - The computational complexity is governed by the most time-consuming step of computing bounds in the Branch-and-Bound (B&B) algorithm. The runtime is $O(Tn^2)$, where $T$ is the number of parameters we allow to change, and $n$ the problem dimensionality. In fact, as shown in the experiments, even modifying a few parameters ($T\ll n$) can result in substantial dynamic range (DR) reduction. - We are currently extending our evaluation with larger-scale synthetic and structured QUBO instances and will include this in the final version to validate scalability in software simulations. ### Synthetic Nature of Datasets and Real-World Applicability The BINCLUS and SUBSUM datasets indeed contain synthetic outliers to simulate worst-case DR conditions, which are commonly encountered in QUBO formulations derived from noisy or high-variance data (e.g., weighted constraints, learned potentials). We agree that real-world benchmarks (e.g., Max-Cut QUBOs) can further validate applicability. We are currently incorporating additional benchmarks and realistic QUBO instances into our extended evaluation. ### Convergence Guarantees of Policy Rollout We appreciate the reviewer’s observation. 
Policy rollout introduces a trade-off between solution quality and computational complexity: - Since rollout is performed for a fixed finite depth, the search is guaranteed to terminate. - Moreover, rollout guarantees an improvement over the base (greedy) policy in terms of DR. We will clarify this guarantee in the final version and discuss potential directions for formal convergence analysis. ### Q1: Global Optima Preservation without Knowing z*? We believe this might stem from a misinterpretation. Our algorithm does not assume prior knowledge of the optimal solution z*. Rather, we guarantee preservation of some global optimum by building on (Mücke et al., 2025), which defines safe parameter intervals—i.e., updates that provably preserve at least one global optimum of the original QUBO. These intervals are derived using efficiently computable bounds on the optimal QUBO value, as discussed in Appendix B: - Upper bounds are obtained via approximation algorithms (e.g., simulated annealing). - Lower bounds are computed efficiently via roof duality (Boros et al., 2008) or semidefinite relaxations (Alessandroni et al., 2023). This framework allows us to conservatively modify QUBO parameters without altering the problem's global optima. ### Q2: Sensitivity to Rollout Depth and Iteration Horizon Our experiments (Fig. 6) demonstrate that larger rollout depths and iteration counts yield stronger DR reduction. However, our algorithm remains efficient due to: - Early termination via policy rollout, limiting search depth. - Impact-based parameter selection (IMPACT), which restricts updates to DR-relevant matrix entries, reducing the branching factor. This enables us to scale to larger horizons T while maintaining computational tractability. We will add a sensitivity analysis to the final version to further illustrate this. 
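For concreteness, here is a minimal sketch of the dynamic range quantity being reduced, assuming one common definition from the DR-reduction literature: the log2 ratio between the largest and smallest gap among distinct QUBO parameter values (the paper's exact definition may differ):

```python
import math
from itertools import combinations

def dynamic_range(qubo: dict) -> float:
    """Dynamic range (in bits) of a QUBO given as {(i, j): weight}.

    Assumes DR = log2(max gap / min gap) over all pairs of distinct
    parameter values; a solver then needs roughly this many bits of
    precision to resolve every gap between parameters.
    """
    vals = sorted(set(qubo.values()))           # distinct parameter values
    gaps = [b - a for a, b in combinations(vals, 2)]
    if not gaps:
        return 0.0                              # all parameters identical
    return math.log2(max(gaps) / min(gaps))
```

Under this definition, globally rescaling all parameters leaves the ratio (and hence the DR) unchanged, which is why rescaling alone, as discussed in the response to Reviewer 2, does not help.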
### Q3: Comparison to Classical Solvers We analyzed simulated annealing (Kirkpatrick et al., 1983) and observed that lower DR does not necessarily correlate with improved classical solver performance; this is likely because classical methods, which use floating-point arithmetic, are less sensitive to DR than hardware-based solvers. We will incorporate a detailed comparison with simulated annealing and tabu search (Goldberg & Kuo, 1987) in the final version to provide a more complete picture of solver performance post-DR reduction. We thank the reviewer again for the insightful suggestions, which will directly inform the improvements in our final submission. --- Rebuttal Comment 1.1: Comment: Thank the authors for the response. Many of my concerns are addressed, so I would like to raise my score to 3.
Summary: For given QUBO instances, the presented approach produces new QUBO instances which feature the same solutions but whose parameters have a reduced dynamic range. This is achieved by formulating the problem as an MDP and running a branch-and-bound strategy. Results show an improved number of found global optima over several runs on three different types of problems. Claims And Evidence: The claims made are sound. The generality of the observed phenomenon is to be questioned and not further discussed, i.e., the title could use an addition like "in X cases". Methods And Evaluation Criteria: The chosen problem set is very narrow. QUBO problems with less sensitivity to dynamic range (e.g., QUBO problems with a fixed dynamic range) are not discussed. General (native) QUBO instances are not discussed. The approach is not intentionally built for these kinds of problems, but it would still be helpful to show the results for them. Theoretical Claims: The main theoretical claim is the construction of the upper and lower bounds. They appear fine at first glance. Experimental Designs Or Analyses: The experimental design lacks a broader evaluation for different kinds of problem instances. It is also lacking any comparison to non-MDP-based approaches to the same issue or even any comparison to simpler heuristics tackling the same problem. (D-Wave, for example, by default applies some manipulation of the weights to adjust dynamic range.) The setup of the quantum hardware was described insufficiently in this case. Supplementary Material: Not enough for review. Relation To Broader Scientific Literature: The paper is missing some discussion about other heuristics for the dynamic range problem. 
Essential References Not Discussed: see above Other Strengths And Weaknesses: see above Other Comments Or Suggestions: typos: - refer to equation with "Equation 1" not "(1)" - refer to literature via citet and citep depending on context - use of "NP-hard" and "complexity" is a bit imprecise Questions For Authors: none Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed and constructive review. Below, we address the key concerns and clarify aspects related to the scope, comparisons, and experimental setup. We will incorporate the suggested corrections (e.g., references to equations, citation style, and terminology) in the camera-ready version. ### Scope of QUBO Problem Classes Evaluated Our current focus is on QUBO problems with high dynamic range due to input data, which are particularly challenging for real-world hardware solvers. We agree that extending the evaluation to QUBO problems with lower sensitivity to DR and more general or randomly generated instances is important. We are actively working on expanding our experiments to cover a broader spectrum of QUBO classes and will integrate this extended evaluation into the final version. ### Comparison to Non-MDP Heuristics and Literature Coverage Our experimental section includes comparisons to several established approaches to dynamic range reduction that preserve global optima: - The greedy heuristic from (Mücke et al., 2025) - The AUX method using auxiliary variables (Oku et al., 2020) - The PEN method for tuning penalty parameters (Alessandroni et al., 2023) These baselines represent different families of strategies. If the reviewer has a specific alternative heuristic in mind, we would be happy to include a comparison in the final version and expand the discussion on related dynamic range mitigation strategies, particularly those used in practice by hardware vendors. ### Clarification on D-Wave’s Built-in Techniques We appreciate the note regarding D-Wave’s internal dynamic range adjustments. While D-Wave applies internal rescaling and chain strength tuning to mitigate embedding-related issues, we note: - Global parameter rescaling does not change the dynamic range. 
- Their techniques are hardware-specific (focused on embedding quality and chain robustness), whereas our method operates on the QUBO formulation itself and is hardware-agnostic. We will clarify this distinction in the final version and note that a deeper integration with D-Wave’s toolchain is a promising direction for future work. ### Quantum Hardware Setup Details In our quantum experiments, we used D-Wave’s Advantage 5.4 system with default solver parameters. We evaluated results over 1000 samples per QUBO instance and reported energy distributions. We agree that additional parameters (e.g., annealing time, number of reads) can improve transparency and will include these in the updated paper. Once again, we appreciate the reviewer’s insights. We will integrate the broader evaluations, more detailed hardware setup, and improved contextualization in the camera-ready version.
Summary: This paper presents a Branch-and-Bound algorithm designed to reduce the numerical precision requirements of NP-hard Quadratic Unconstrained Binary Optimization (QUBO) problems, which are critical in real-time AI applications. By utilizing dynamic range as a measure of complexity, the algorithm aims to enhance the solvability of QUBO problems on hardware accelerators like quantum and FPGA-based digital annealers. The experimental results demonstrate that the proposed method effectively reduces the dynamic range in problems such as subset sum, clustering, and vector quantization.

Claims And Evidence: Yes, the claims are supported by both theoretical analysis and empirical experiments.

Methods And Evaluation Criteria: Yes, this work brings a new perspective and method to Quadratic Unconstrained Binary Optimization.

Theoretical Claims: Yes, the theoretical proofs in the manuscript are solid.

Experimental Designs Or Analyses: Yes, the experiment section is well structured.

Supplementary Material: Yes, both the detailed proofs and the additional experimental results have been carefully reviewed.

Relation To Broader Scientific Literature: This paper aims to solve Quadratic Unconstrained Binary Optimization instances that cannot be solved by existing works, which is a further improvement on existing methods.

Essential References Not Discussed: The introduction to related work is relatively thorough.

Other Strengths And Weaknesses:
1. The author provides insufficient information regarding data input and does not clarify what QUBO embedding is. Additionally, the appendix does not include any statistical details about the dataset.
2. The experiment appears to be somewhat inadequate, as the information provided in Table 1 is limited. AUX and PEN do not work in most cases, and the author should conduct a more comprehensive evaluation.

Other Comments Or Suggestions: None

Questions For Authors:
1. The author provides insufficient information regarding data input and does not clarify what QUBO embedding is. Additionally, the appendix does not include any statistical details about the dataset.
2. The experiment appears to be somewhat inadequate, as the information provided in Table 1 is limited. AUX and PEN do not work in most cases, and the author should conduct a more comprehensive evaluation.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:
We thank the reviewer for the positive and encouraging evaluation of our paper, and for acknowledging the strength of our theoretical and empirical contributions. Below, we address the noted concerns regarding dataset details and experimental evaluation.

### Clarification on Data Input and QUBO Embedding
We appreciate the reviewer’s feedback on the clarity of the data input process. While the Appendix includes a description of the datasets, we agree that this could be made more explicit. In the camera-ready version, we will:
- Expand the descriptions of the BINCLUS, SUBSUM, and VECQUANT problems.
- Clearly define the QUBO embeddings used—i.e., how each problem is reformulated into a QUBO structure. For example, the 2-means clustering task is translated into a QUBO that optimizes a discrete assignment of points to clusters.

### Lack of Statistical Details in the Appendix
This is a valid and helpful point. While our current focus was on reduction of dynamic range and hardware performance, we agree that statistical properties of the input data can influence QUBO structure and dynamic range. We will include:
- Summary statistics (e.g., dimensionality, distributional properties, sparsity) for each dataset.
- A brief discussion of how these properties relate to dynamic range and solver performance.

### Concerns About Table 1 and Baseline Methods (AUX, PEN)
We acknowledge that AUX and PEN do not apply to all QUBO instances. This reflects their inherent limitations in generality:
- AUX is only suitable for integer-valued QUBOs, and is therefore restricted to problems like SUBSUM.
- PEN is applicable only when penalty parameters are used to enforce hard constraints (e.g., in VECQUANT).

In contrast, our method is universally applicable to any real-valued QUBO, making it suitable across problem domains without requiring specific problem structure. We will clarify this in the final version.

### Request for More Comprehensive Evaluation
We fully agree that a broader empirical evaluation is valuable. While we selected three representative problems (subset sum, clustering, vector quantization), we are already extending our evaluation to include additional QUBO formulations and larger problem sizes in preparation for the final version.

We sincerely thank the reviewer for the helpful suggestions. We will incorporate these improvements in the camera-ready version to ensure clarity and completeness.
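As an aside for readers wondering what a "QUBO embedding" of a problem looks like, here is the standard textbook reduction of subset sum to QUBO; the instance and function names are illustrative, not taken from the paper:

```python
import numpy as np
from itertools import product

def subset_sum_qubo(a, t):
    """Standard embedding of subset sum: minimizing x^T Q x over binary x
    equals minimizing (a . x - t)^2 up to the constant -t^2, using x_i^2 = x_i."""
    a = np.asarray(a, dtype=float)
    Q = 2.0 * np.triu(np.outer(a, a), k=1)  # cross terms 2 a_i a_j
    Q += np.diag(a * a - 2.0 * t * a)       # linear terms on the diagonal
    return Q

a, t = [3, 5, 8, 11], 16
Q = subset_sum_qubo(a, t)
x = min((np.array(b) for b in product([0, 1], repeat=len(a))),
        key=lambda b: b @ Q @ b)
# A minimizer picks a subset summing exactly to the target.
assert sum(ai for ai, xi in zip(a, x) if xi) == t
```

Note that large or widely spread input values `a_i` directly inflate the entries of `Q`, which is exactly how input data drives up the dynamic range the paper targets.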
Summary: The focus of this paper is the Quadratic Unconstrained Binary Optimization problem (QUBO), and in particular methods to reduce the precision of the input entries. This is motivated by applications in hardware acceleration, where small input (e.g. 8 bits) can result in better parallelization. QUBO is an NP-hard problem, and can model many combinatorial optimization problems. They proposed several principled methods of dealing with this issue, including a branch and bound algorithm. Finally, an experimental evaluation is included on quantum hardware and an FPGA-based digital annealer.

Claims And Evidence: I did not see any problematic claims.

Methods And Evaluation Criteria: They seemed to make sense to me, but I am not an expert on the standards for this particular area.

Theoretical Claims: The algorithms and analysis were described as principled, as opposed to ones with rigorous theoretical guarantees.

Experimental Designs Or Analyses: The experimental design and analyses seemed okay to me, but again I am not an expert on these applications.

Supplementary Material: I skimmed through it, but not thoroughly.

Relation To Broader Scientific Literature: This paper may be relevant to the hardware acceleration community. I don't think that the majority of researchers in combinatorial optimization would be interested in the results, but there may be some.

Essential References Not Discussed: I did not notice essential references not discussed, but am not familiar with related work on this topic.

Other Strengths And Weaknesses:
Strengths
- The paper was well-written and easy for me to read. It seemed polished.
- The QUBO problem seemed very general, and it is stated in the paper that it could model a large variety of problems in combinatorial optimization.
- I think it's interesting to consider the more low level runtime considerations in combinatorial optimization, like how the input is represented in bits.

Weaknesses
- The presented contributions were principled, but they were not backed up with theoretical proof. The paper describes a bad alternative to the issue of reducing precision, namely simply truncating the input entries, since that could completely change the CO problem; but since there doesn't seem to be a proof that their way will preserve the problem, I don't see how we can be confident that their way won't also completely change the CO problem.
- In addition to the fact that the algorithmic results are principled but not backed up by rigorous analysis, it didn't seem like there were many novel results compared to other comparable papers I've seen at ICML.
- It seems that the presented algorithm is exponential time, but if we are later going to run an approximation algorithm for the CO problem, wouldn't the runtime for the precision procedure be a prohibitive bottleneck?

Other Comments Or Suggestions: None

Questions For Authors:
- Could you clarify what can be said for sure about the performance of the algorithm, in terms of theoretical guarantees?

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1:
We thank the reviewer for the thoughtful and constructive feedback. Below, we respond to the raised concerns regarding theoretical guarantees, runtime feasibility, and novelty.

### Theoretical Guarantees and Rigor
While the overall optimization procedure is heuristic in nature, key components of our method are theoretically grounded:

#### Bounding Procedure
Our Branch-and-Bound (B&B) algorithm leverages rigorously derived lower and upper bounds on the dynamic range (DR) that can be achieved through permissible modifications to the QUBO matrix. These bounds are mathematically valid and discussed in detail in Section 5.3 and Appendix C.

#### Preservation of Optima
As noted, naive truncation of QUBO parameters can lead to incorrect solutions (e.g., spurious optima). In contrast, we build upon (Mücke et al., 2025), which defines intervals for safe parameter updates—i.e., updates that provably preserve at least one global optimum of the original QUBO. These intervals are derived using efficiently computable bounds on the optimal QUBO value, as discussed in Appendix B:
- Upper bounds via sub-optimal solutions (e.g., simulated annealing).
- Lower bounds via roof duality (Boros et al., 2008) or convex relaxations such as semidefinite programming (Alessandroni et al., 2023).

This ensures our procedure retains the nature of the original combinatorial problem.

#### Performance Guarantees
Our approach is guaranteed to match or outperform the heuristics in (Mücke et al., 2025), as our MDP-based method explores a strictly richer decision space. We agree that tighter performance bounds for the full method are an exciting direction for future work.

### Runtime Considerations and Practical Efficiency
While our method has exponential worst-case complexity in the number of matrix updates $T$, we mitigate this in practice:

#### Policy Rollout
We limit full tree expansion by switching to a base policy after a small rollout horizon, reducing computational overhead significantly.

#### Impact-Based Index Selection (IMPACT)
Instead of branching on all matrix entries, we restrict updates to the few entries that directly affect the DR. This reduces the branching factor without noticeable degradation in performance, as shown in our experiments.

#### Efficient Bound Computation
Our pruning bounds can be computed in $O(Tn^2)$ time (Appendix C), making the B&B framework practically efficient as a preprocessing step—even for moderately sized QUBO problems.

Thus, while the method remains heuristic, its computational profile is controllable, and it does not become a prohibitive bottleneck in practice.

### Novelty of Contributions
We respectfully highlight several novel aspects of our work:
- We are the first to formulate precision reduction for QUBO as a long-sighted Markov Decision Process, enabling more globally informed decisions than greedy heuristics.
- We introduce a principled Branch-and-Bound algorithm with provable bounds and policy rollout integration.
- Our approach is general-purpose, improving over (Mücke et al., 2025) while being applicable to arbitrary QUBO instances—unlike many prior methods which are problem-specific or depend on particular constraints.
- Finally, we demonstrate practical relevance by improving real hardware solver performance (e.g., QA and DA), including power and resource usage on FPGA designs.

We are grateful for the reviewer’s encouraging comments on the clarity of the paper and the relevance of our low-level optimization perspective. We will incorporate additional clarifications on theoretical guarantees in the final version.
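To ground the pruning discussion for readers unfamiliar with the technique, the following is a generic depth-first branch-and-bound skeleton of the kind the rebuttal describes. It is an illustrative sketch only, not the paper's DR-reduction algorithm; the toy objective and bound are our own assumptions:

```python
def branch_and_bound(root, children, lower_bound, score):
    """Generic B&B: track the best complete solution found so far and
    prune any subtree whose lower bound cannot beat the incumbent."""
    best, best_score = root, score(root)
    stack = [root]
    while stack:
        node = stack.pop()
        for child in children(node):
            if lower_bound(child) >= best_score:
                continue  # prune: no completion of child can improve
            s = score(child)
            if s < best_score:
                best, best_score = child, s
            stack.append(child)
    return best, best_score

# Toy instance: choose a subset of positive numbers summing to a target.
a, t = [3, 5, 8, 11], 16
n = len(a)
children = lambda x: [x + (0,), x + (1,)] if len(x) < n else []
partial = lambda x: sum(ai * xi for ai, xi in zip(a, x))
score = lambda x: (partial(x) - t) ** 2 if len(x) == n else float("inf")
# Valid bound: with positive items, overshooting the target is irreversible.
lower_bound = lambda x: (partial(x) - t) ** 2 if partial(x) > t else 0.0

best, val = branch_and_bound((), children, lower_bound, score)
assert val == 0
```

The pruning rule is sound because the bound never exceeds the true score of any completion of a node, so no optimum is cut off; the rebuttal's contribution lies in deriving such bounds for the dynamic range of sequences of matrix updates.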
EEG-Language Pretraining for Highly Label-Efficient Clinical Phenotyping
Accept (poster)
Summary: This paper introduces EEG-Language Models (ELMs), a multimodal framework that integrates EEG signals with clinical text reports for various downstream tasks, including retrieval, abnormality classification, and event classification, across multiple datasets. The method employs time-series cropping, text segmentation, and a multiple-instance learning (MIL) variant of contrastive learning to address the alignment challenges between EEG signals and textual descriptions. Experimental results show that ELMs outperform EEG-only pretraining methods. Notably, the model exhibits zero-shot capability, further highlighting its adaptability across diverse downstream tasks. These findings underscore the potential of multimodal pretraining in medical applications, enabling richer and more effective representations for classification and retrieval tasks.

Claims And Evidence: The experiments and analyses presented in the paper are sufficient to substantiate the authors' contributions.

Methods And Evaluation Criteria: The proposed methodology aligns well with the research problem and is well-motivated. The evaluation framework is comprehensive, with multiple benchmark datasets and appropriate performance metrics. The chosen baselines allow for a fair comparison, effectively demonstrating the advantages of the proposed approach.

Theoretical Claims: The paper is primarily empirical and does not involve rigorous theoretical proofs.

Experimental Designs Or Analyses: The experimental design is reasonable, with a diverse set of datasets and downstream tasks. The proposed method is compared against both supervised and self-supervised models, which strengthens the validity of the results. However, the paper only includes large-scale EEG models like LaBraM as a baseline in Table 4, while the SSL baselines for other tables are relatively outdated. It is recommended to include more recent models and large-scale EEG models to provide a more comprehensive comparison.

Supplementary Material: I have reviewed the appendix submitted by the author.

Relation To Broader Scientific Literature: The paper presents a novel approach by integrating EEG with language modeling and multiple-instance learning. It builds upon prior works such as CLIP and M-FLAG while incorporating multiple-instance learning to better handle EEG-text alignment. The experimental results convincingly demonstrate the effectiveness of this framework, highlighting its potential for advancing EEG applications in the medical domain. The study makes a meaningful contribution by extending multimodal learning techniques to EEG-based medical analysis.

Essential References Not Discussed: No

Other Strengths And Weaknesses:
Strengths:
• Novelty: The paper introduces an innovative approach by integrating EEG and language modeling with multiple-instance learning.
• Methodological soundness: The proposed methodology is well-grounded and effectively addresses EEG-text alignment challenges.
• Comprehensive experiments: The study evaluates the model across multiple tasks, demonstrating strong performance.

Weakness:
• Baseline comparison: The paper only includes large-scale EEG models like LaBraM as a baseline in Table 4, while the SSL baselines for other tables are relatively outdated. It is recommended to include more recent models and large-scale EEG models to provide a more comprehensive comparison.

Other Comments Or Suggestions: In Tables 2 and 6, the abbreviation "SV" is not explicitly defined, which may lead to ambiguity. It would be helpful to provide the full term for clarity.

Questions For Authors: No major questions. The paper is well-structured, and the methodology is clearly explained.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:
Dear Reviewer,

Thank you for your thorough and supportive review of our manuscript. We are grateful for your positive assessment of our work’s novelty, methodological soundness, and comprehensive evaluation, as well as your recommendation to accept. We have used your constructive suggestions to refine our paper.

**Baseline comparisons**
We appreciate your comment regarding the baseline comparisons, particularly the limited inclusion of large-scale EEG models like LaBraM beyond Table 4. While we initially hesitated due to challenges in isolating the effects of data, encoder, architecture, parameter counts, or pretraining strategy, we recognize the value of broader comparisons. Following your suggestion, we have extended our evaluation to include LaBraM across additional datasets (TUAB subject level, NMT, and TUEV, while omitting TUSZ as they pretrain on this dataset) using the same evaluation strategy we used for all other methods. We adopted LaBraM’s preprocessing recommendations (resampling, bandpass filtering, notch-filtering, avoiding bipolar montages, and crop lengths of 10s for TUAB/NMT, 5s for TUEV) and obtained EEG embeddings. The updated results, visible in Tables S1-3 (available at this [link](https://docs.google.com/document/d/e/2PACX-1vQygdcAED1qMhVgFv5jU9TsclAyRxp-XKFiGwxK2pkxLSdrKAgyGVuAEYBVPnmQZeJDfIBVMLTbzTwG/pub)), show that ELM-MIL outperforms LaBraM across clinical contexts, with accuracy gains of up to 5.7–13.4% depending on the dataset.

Recent literature such as LaBraM has predominantly focused on pretraining on many datasets (about 20 in their case) with large transformers to yield general representations. However, these representations are nonspecific and not ideal for downstream prediction tasks without fully finetuning the encoder, which is problematic when limited downstream data is available, as in clinical contexts. In contrast, using medical text during pretraining helps ELMs learn relevant representations, to which we ascribe their strong performance.

**Abbreviation**
Additionally, thank you for noting the undefined abbreviation “SV” in Tables 2 and 6. We apologize for the oversight—“SV” refers to “Supervised”—and now provide the full term to ensure clarity.

**Alignment visualisations**
Finally, we would like to kindly note the addition of alignment visualisations, which may be found in Figures S1-5 at the same [link](https://docs.google.com/document/d/e/2PACX-1vQygdcAED1qMhVgFv5jU9TsclAyRxp-XKFiGwxK2pkxLSdrKAgyGVuAEYBVPnmQZeJDfIBVMLTbzTwG/pub). These highlight the ability of our method to localize pathology, while also indicating shortcomings such as rarely mentioned features. We hope the reviewer finds these informative. We add Figure S1 to the main text along with a short paragraph noting the successes and shortcomings, while adding Figures S2 through S5 to the appendix.

We hope these revisions address your suggestions effectively. Thank you once again for your insightful feedback.
Summary: This paper introduces an approach for pretraining multimodal EEG-language models (ELMs) to improve pathology detection. The authors propose combining EEG data with clinical reports using a sub-unit alignment strategy, which involves cropping EEG time series and segmenting medical reports to create multiple non-overlapping samples. They further extend this approach with multiple instance learning (MIL) to address misalignment between EEG and text segments. The proposed model significantly improves pathology detection performance, especially in scenarios with limited labels. These results are particularly applicable to clinical settings, where datasets are typically much smaller than those in many common deep learning applications.

## update after rebuttal
Thanks for the authors' rebuttals. The comments have addressed most of my concerns. I would keep my score.

Claims And Evidence: The paper makes clear claims with convincing evidence.

Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense.

Theoretical Claims: The paper does not include a clearly defined theoretical proof section. However, in the methodology section, it provides a detailed description of how concepts such as multimodal alignment, time-series cropping, text segmentation, and multiple instance learning are applied to the pretraining of EEG and language models. The theoretical foundation of these methods primarily stems from existing research in contrastive learning and multimodal representation learning. For example, the paper mentions the InfoNCE loss function and the MIL-InfoNCE loss function, both of which are based on the theoretical framework of contrastive learning and are used to learn aligned representations between EEG and text. These theoretical frameworks have been extensively studied and validated in other domains, making the proposed methods theoretically sound. However, the paper could further discuss the specific application and adaptability of these theoretical frameworks in EEG and language pretraining. For instance, while the multimodal alignment strategy and MIL extension are theoretically designed to mitigate inconsistencies between EEG and text segments, such inconsistencies may still persist in practical applications, especially when clinical reports contain a large amount of information unrelated to downstream clinical tasks.

Experimental Designs Or Analyses: The experimental designs and analyses are sound and well-executed. The authors conduct extensive experiments to validate their approach, including:
- Retrieval Analysis: The authors evaluate the ability of ELMs to retrieve matching EEG recordings from clinical reports and vice versa, using top-K accuracy as the metric.
- Pathology Detection: The authors compare ELM-MIL to EEG-only models on the TUAB dataset for binary classification of normal vs. abnormal EEG recordings. They also evaluate performance on the NMT dataset to assess generalization.
- Zero-shot Classification: The authors demonstrate zero-shot classification performance using a prompt ensemble, showing that ELMs can leverage language embeddings for pathology detection without explicit downstream training.
- Ablation Studies: The authors conduct ablation studies to investigate the impact of different components, such as the aggregation method for positive samples and the number of positive EEG/text samples.

The experimental designs are comprehensive and address various aspects of the proposed approach. The results provide clear evidence of the effectiveness of ELM-MIL in improving pathology detection.

Supplementary Material: The appendix of the paper includes the following sections:
- Appendix A: Provides an analysis of each category in the TUEV dataset, demonstrating that ELM-MIL outperforms other methods in clinical event detection.
- Appendix B: Details the model training process, including optimizer settings, EEG encoder architecture, language encoder selection, and temperature parameter choices.
- Appendix C: Describes EEG data preprocessing and subsampling, covering data filtering, preprocessing steps, and class imbalance handling.
- Appendix D: Lists the prompt set used for zero-shot classification.
- Appendix E: Provides detailed information on report segmentation and content partitioning, including how paragraphs are extracted from clinical reports and how content clustering is performed.

These appendices offer valuable supplementary information for understanding the paper’s methodology and experiments.

Relation To Broader Scientific Literature: In the "Related Work" section, the authors thoroughly discuss the connections between their research and the broader scientific literature, highlighting key contributions in relation to previous studies. This section is divided into four subsections: self-supervised learning with EEG data, using EEG for pathology detection, medical multimodal language modeling, and multiple instance learning. The proposed EEG-Language Models (ELMs) introduce multimodal pretraining by aligning EEG with text, significantly improving pathology detection performance, particularly in scenarios with limited labeled data. This represents a major advancement compared to previous self-supervised learning (SSL) approaches that rely solely on EEG data.

Essential References Not Discussed: In the paper, the authors have thoroughly discussed various works related to EEG-Language Models (ELMs), including the latest advancements in self-supervised learning, multimodal modeling, and EEG-based pathology detection. However, the related work section could be further expanded by including research on alignment strategies, which are crucial in multimodal learning. While the paper mentions methods such as CLIP and M-FLAG, there have been recent advancements in multimodal alignment. These methods could provide new insights for improving EEG-text alignment.

Other Strengths And Weaknesses:
Strengths:
- Innovative approach: The paper presents a novel application of multimodal pretraining in the medical domain, combining EEG data with clinical reports in a meaningful way.
- Significant improvements: The proposed ELM-MIL model demonstrates substantial improvements in pathology detection, especially in scenarios with limited labeled data.
- Zero-shot classification: The ability to perform zero-shot classification using language embeddings is a unique and powerful feature of the proposed approach.

Weaknesses:
- Data limitations: The availability of paired EEG-report datasets is limited, which may restrict the scalability of pretraining data. Future work could explore synthetic data generation techniques.
- The related literature is insufficient. The related work section could be expanded by including research on alignment strategies in multimodal learning. While the paper mentions methods such as CLIP and M-FLAG, there have been recent advancements in multimodal alignment that could provide new insights for improving EEG-text alignment.
- The paper lacks logical structure. The methods section should focus solely on describing the methodology. For instance, the statement “We set N=32 and M=8 as this covers all samples for a majority of subjects.” refers to specific experimental parameters and should be moved to the experiments section.

Other Comments Or Suggestions: N/A

Questions For Authors:
- In practical applications, the dataset size may be much larger. In this case, would the model's training time and memory requirements increase significantly?
- The multimodal model proposed in the paper performs excellently in pathology detection, but model interpretability is crucial for clinical applications. Has the author considered improving the model's interpretability through specific techniques, such as feature importance analysis or attention mechanism visualization?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
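For readers unfamiliar with the losses this review refers to, the following is a minimal sketch of the MIL-style InfoNCE idea: several EEG crops from the same recording are treated as positives for a text segment and aggregated inside the log. This is an illustrative reconstruction with a "sum" aggregation, assumed rather than copied from the paper:

```python
import numpy as np

def mil_infonce(sim, pos_mask, tau=0.1):
    """sim: [n_text, n_eeg] similarity matrix; pos_mask: boolean matrix
    marking, per text segment (row), which EEG crops count as positives.
    With exactly one positive per row this reduces to standard InfoNCE."""
    logits = np.exp(sim / tau)
    log_num = np.log((logits * pos_mask).sum(axis=1))  # aggregated positives
    log_denom = np.log(logits.sum(axis=1))             # all candidate crops
    return float(-(log_num - log_denom).mean())

sim = np.array([[0.9, 0.1, 0.8],
                [0.2, 0.7, 0.1]])
pos_mask = np.array([[True, False, True],   # two crops match text segment 0
                     [False, True, False]])
loss = mil_infonce(sim, pos_mask)
# Sanity check: if every crop were a positive, the loss would be exactly 0.
assert np.isclose(mil_infonce(sim, np.ones_like(pos_mask)), 0.0)
```

Minimizing this loss pushes the positive crops' share of each row's softmax mass toward 1, which is how contrastive pretraining induces EEG-text alignment without per-crop labels.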
Rebuttal 1:
Dear Reviewer,

Thank you for your detailed and constructive review of our manuscript. We greatly appreciate your recognition of our approach’s innovation, performance improvements, and clinical relevance, as well as your thoughtful suggestions and questions, which have helped us strengthen our work.

**Alignment and challenges**
We appreciate your suggestion to further discuss our EEG-language pretraining, including its potential challenges with alignment. While our MIL extension aims to mitigate inconsistencies, challenges may persist when reports contain information unrelated to clinical tasks. However, our results (e.g., robustness to additional text sections in Figure 2) suggest this is less impactful for common clinical concepts. We are pleased to provide new model interpretability figures which visualize alignment (Figures S1-5, available at this [link](https://docs.google.com/document/d/e/2PACX-1vQygdcAED1qMhVgFv5jU9TsclAyRxp-XKFiGwxK2pkxLSdrKAgyGVuAEYBVPnmQZeJDfIBVMLTbzTwG/pub)). While we show that our methodology is able to localize pathology despite a lack of explicit temporal information (Figures S1-S3), alignment for rare or subtle features remains challenging (Figure S4). These examples highlight both successes and shortcomings, setting the stage for future refinements. We add Figure S1 to the main text along with a short paragraph noting the successes and shortcomings, while adding Figures S2 through S5 to the appendix.

**Multimodal literature**
Regarding literature on multimodal alignment, we are expanding the "Related Work" section to include further literature. In case the reviewer believes we are omitting important relevant work, we would be sincerely grateful for further suggestions.

Related Work - Medical multimodal language modeling. [L100] (...) *Recent advances outside the medical domain include multi-task strategies, both during pretraining by integrating contrastive learning and self-supervised losses [1,2], as well as finetuning on multiple downstream tasks [3,4]. Further exploration involves moving compute from unimodal encoding to multimodal fusion [5].*

[1] Tschannen, Michael, et al. "SigLIP 2: Multilingual vision-language encoders with improved semantic understanding, localization, and dense features." arXiv preprint arXiv:2502.14786 (2025).
[2] Tang, Zineng, et al. "TULIP: Towards Unified Language-Image Pretraining." arXiv preprint arXiv:2503.15485 (2025).
[3] Liu, Haotian, et al. "Improved baselines with visual instruction tuning." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.
[4] Dai, Wenliang, et al. "InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning." arXiv preprint arXiv:2305.06500 (2023).
[5] Kim, Wonjae, Bokyung Son, and Ildoo Kim. "ViLT: Vision-and-language transformer without convolution or region supervision." International Conference on Machine Learning. PMLR, 2021.

**Logical structure**
We appreciate the note about the logical structure and the placement of phrases such as “We set N=32 and M=8”. To improve the structure of the manuscript, we propose the following sections:
3. Methods (unchanged)
3.1 Pretraining (was: Experimental Setup)
3.1.1 EEG-language pretraining (unchanged)
3.1.2 EEG-only self-supervised learning (unchanged)
4. Experimental Setup (new)
4.1 Pretraining setup (new)
4.2 Datasets and evaluation tasks (was 3.2)
4.3 Preprocessing (was 3.3)
and we move the mentions of hyperparameter settings (such as N, M, temperature, model dimensionality) to '4.1 Pretraining setup'. This separates the description of the methodology and experimental details into distinct sections. We thank the reviewer for their suggestion.

**Training scalability**
Regarding training scalability, our approach benefits from a small EEG encoder (0.9M parameters) compared to large-scale models like LaBraM (up to 369M), a frozen text model, and no finetuning requirement. This efficiency enables training on large batches with a single GPU (e.g., 9 hours on our dataset using a two-generation-old GPU), and thus scalability should remain very manageable.

**Interpretability**
For interpretability, we agree on its clinical importance and add the aforementioned alignment visualizations. This provides interpretability in the temporal domain, and we hope the reviewer finds them valuable. While for the current manuscript we used an efficient CNN encoder to enable a clear focus on comparisons between pretraining strategies per se, we recognize that scaling to transformer architectures with attention visualizations can further enhance interpretability across the spatial domain. We leave these important encoder architecture explorations to follow-up work.

We hope these comments address your feedback effectively. Thank you again for your insightful comments, which have significantly improved our manuscript.
Summary: This paper presents a multi-modality model that integrates EEG recordings and clinical reports for neural event detection. The proposed method segments an EEG recording and its corresponding report into sequences of epochs and words, then constructs epoch-word pairs and an alignment matrix for representation learning. The model employs both pairwise contrastive learning and multi-instance contrastive learning to enhance feature representation. Finally, it is fine-tuned for various downstream tasks. Experiments on multiple datasets demonstrate that the proposed approach outperforms several contrastive learning and EEG-based baselines. Claims And Evidence: My main concern is the motivation behind aligning EEG recordings with clinical reports. EEG is a time-series signal that records a patient real-time physiological state, such as sleep stages, seizure events, and other neurological conditions. However, clinical reports typically consist of structured sections, such as an abstract and findings/details, which provide a summarized interpretation of the EEG recording. These reports often include patient information, the state during recording (e.g., awake, asleep, or under stimulation), and observations of seizure activity or specific waveforms, but they lack precise time indices, e.g., onset/offset of neural events. Authors also point out this lack issue of time information of clinical reports but attempt to address it by using a neural network to learn representations of report segments and force-align them with temporal EEG epochs. However, this approach lacks clinical feasibility, as there is no inherent one-to-one correspondence between EEG epochs and report segments. Simply learning latent representations for alignment without considering the structured, non-temporal nature of clinical reports does not meet real-world clinical workflows. 
Methods And Evaluation Criteria: Partially. The proposed method focuses on detection/classification tasks, but incorporating additional detailed clinical reports, which require manual writing by doctors, significantly increases data costs. Moreover, such neural detection tasks can naturally be performed using EEG data alone, without the necessity of aligning with textual reports. The added complexity of learning from reports does not provide clear motivation or advantages for classification tasks. Theoretical Claims: NA Experimental Designs Or Analyses: Please kindly refer to Methods And Evaluation Criteria. A meaningful task involving clinical reports would be report generation, as the authors also highlight in the Discussion and Impact Statement section. However, the current focus on detection and classification contradicts the intended use of clinical reports. Supplementary Material: Yes Relation To Broader Scientific Literature: EEG-Text alignment is a promising research direction. However, constructing a benchmark dataset, designing an effective alignment method, and establishing a meaningful evaluation framework remain open challenges. Addressing these aspects would be highly beneficial for advancing clinical tasks and BCI applications. Essential References Not Discussed: CLARA: Clinical Report Auto-completion, WWW20. Other Strengths And Weaknesses: Please kindly refer to the above comments Other Comments Or Suggestions: Please kindly refer to the above comments Questions For Authors: Please kindly refer to the above comments Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: Dear Reviewer, Thank you for your detailed and insightful review of our manuscript. We appreciate your feedback as it has helped us refine our presentation and clarify the motivations behind our work. We are encouraged by your recognition of EEG-text alignment as a promising research direction and would like to address your concerns. **EEG-Text Alignment** We acknowledge the point about the lack of explicit temporal correspondence between EEG epochs and clinical report segments, which indeed poses a challenge. However, our approach leverages the fact that concepts like “abnormal EEG” in reports are guaranteed to relate to multiple EEG crops from the same recording and, in expectation, more so than to unrelated (negative) crops. This principle mirrors successful contrastive learning in computer vision (e.g., noisy image-caption pairs) and video-text alignment (e.g., subtitles not always matching visuals), where pretraining remains effective despite imperfect correspondence. To illustrate this, we have added qualitative examples of our ELMs at this [[link]](https://docs.google.com/document/d/e/2PACX-1vQygdcAED1qMhVgFv5jU9TsclAyRxp-XKFiGwxK2pkxLSdrKAgyGVuAEYBVPnmQZeJDfIBVMLTbzTwG/pub) (Figures S1-5), showing similarity scores between text embeddings (e.g., “seizures arising from the right hemisphere”) and 5-second EEG crops for hold-out subjects. Visualizations of the highest- and lowest-similarity crops demonstrate that our method captures pathology-relevant alignment in the absence of explicit temporal information in the reports, reinforcing its practical utility. We add Figure S1 to the main text and Figure S2 through S5 to the appendix. **Downstream tasks and data costs** Regarding the motivation for downstream tasks, we agree that EEG-only methods can address detection/classification. However, our multimodal approach enhances unimodal EEG encoder initialization, yielding better representations for these tasks. 
As shown in our manuscript, our method outperforms EEG-only baselines (e.g., +8.7% balanced accuracy at 1% labels on TUAB) and even large-scale models like LaBraM (Table 4, as well as Tables S1-3 with new additional linear probing results at the same [link](https://docs.google.com/document/d/e/2PACX-1vQygdcAED1qMhVgFv5jU9TsclAyRxp-XKFiGwxK2pkxLSdrKAgyGVuAEYBVPnmQZeJDfIBVMLTbzTwG/pub)), trained on many more EEG datasets. The pretrained EEG encoders we are releasing can be used for downstream EEG-only clinical tasks without additional complexity and at reduced cost due to the low parameter count. This suggests that leveraging existing clinical reports—already available in large quantities in hospitals and with negligible costs to store compared to EEG data and especially compared to acquiring additional EEG data—offers a cost-effective way to improve performance without increasing data collection costs. Moreover, we would like to clarify that our method repurposes existing clinical reports and does not require new manual writing. These reports, a byproduct of standard clinical workflows, enhance model performance without additional expense. **Report generation** Finally, we appreciate your suggestion of report generation as a valuable task, and we fully intend our work to pave the way for such applications, as noted in the manuscript. We believe that learning and evaluating pathology-sensitive representations is a critical first step; if alignment fails to capture clinical relevance in latent representations, subsequent text generation would lack grounding. Our focus on detection/classification validates this foundation, aligning with the broader utility of clinical reports. To emphasize how our results establish a stepping stone for future tasks like report generation, we now note this in the discussion and expand the impact statement to contextualize this trajectory. 
**Discussion addition [L422]:** Our pathology-sensitive multimodal alignment is a critical step toward automated report generation (e.g. Biswal et al. 2020), ensuring EEG-text representations capture clinical information for future documentation tasks. **Impact statement extension [L457]:** The multimodal nature of our approach, by aligning EEG with clinical reports in a pathology-sensitive manner, not only enhances detection but also lays an important foundation for automated report generation. Specifically, such generation may greatly benefit from an aligned latent space which contains clinical information. This could facilitate clinical documentation by translating EEG signals into structured summaries. These can constitute highly valuable future efforts given the time-intensive nature of manual reporting. We hope these clarifications address your concerns and demonstrate the clinical and scientific value of our approach. Thank you again for your constructive feedback, which we believe has significantly strengthened our manuscript. --- Rebuttal Comment 1.1: Comment: Thank you for your response. ``` EEG-Text Alignment ``` While the inclusion of multimodal information appears to contribute to performance, this may be due to the additional, distinguishable information introduced by the word representations. Also, I am curious about the alignment between simulated and real clinical reports. How do you evaluate whether the simulated text generation aligns with real-world clinical EEG reports? Also, in many clinical settings, reports often include only brief notes, such as onset/offset times or summary-level observations, rather than rich detailed descriptions. Without evaluation, how do we assess the validity and fairness of the proposed method? ``` Regarding the motivation for downstream tasks, we agree that EEG-only methods can address detection/classification. 
However, our multimodal approach enhances unimodal EEG encoder initialization, yielding better representations for these tasks. ``` ``` The pretrained EEG encoders we are releasing can be used for downstream EEG-only clinical tasks without additional complexity and at reduced cost due to the low parameter count. ``` I think my main concern remains: why must we involve text data to improve EEG classification tasks, which are traditionally EEG-only? In clinical practice, doctors do not rely on text notes to perform detection or classification. For seizure diagnosis, standard detection is performed on EEGs. Some cases may require video monitoring, which is costly and only available in tertiary hospitals. While you mentioned that "These reports, a byproduct of standard clinical workflows, ..., clinical reports already available in large quantities in hospitals and with negligible costs to store compared to EEG data and especially compared to acquiring additional EEG data", my personal opinion is that this is not true. Generating suitable EEG clinical reports for model training is resource-intensive, and such data is generally limited to a small number of tertiary care centers. In fact, there are no large collections of EEG reports available in general cases. Many clinical EEG reports consist only of brief notes, lacking detailed descriptions. Some initiatives, such as TUH DB, stopped providing clinical notes and instead encourage development focused on EEG detection methods. In general, there is no clear clinical motivation to incorporate text reports into EEG classification tasks. ``` We believe that learning and evaluating pathology-sensitive representations is a critical first step ``` I think EEG classification is often used for phenotyping, including identifying stages, events, or abnormal activity, rather than explicitly detecting neuropathological patterns. 
Pathology tasks focus more on understanding the underlying mechanisms of diseases, such as the epileptogenic zone (EZ) or molecular-level information. It would be better to properly define the intended focus of the paper, and clarify what specific clinical or biological insights the text-based component contributes beyond performance improvements in classification accuracy. While the authors have clarified the potential of the proposal, my concerns remain. I am temporarily lowering my rating. --- Reply to Comment 1.1.1: Comment: Thank you for your continued feedback, which has been useful in refining our manuscript. We would like to address the misunderstandings that led to your lowered score. Below, we provide detailed clarifications. > I am curious about the alignment between simulated and real clinical reports. We apologize for any confusion: our method **does not generate or simulate clinical reports**. Instead, we use existing clinical reports from the TUEG dataset (a subset totalling 11.8K reports), naturally produced during standard practice. We expand on this below. > in many clinical settings, reports often include only brief notes, such as onset/offset times or summary-level observations, rather than rich detailed descriptions. We regret if “reports” suggested extensive documentation and apologize for the confusion. In reality: - **Brief Notes Are Common**: A portion of TUEG reports are short, yet effective. Reports do not need to provide highly detailed descriptions. Compared to binary labels, a brief note with observed EEG events and the clinical correlation already provides a much richer signal. In addition, this enables considerably more data, given our subset of 11.8K reports compared to the largest abnormal corpus (TUAB) with 2.7K labels. 
A new figure shows heterogeneous report lengths, reflecting real-world diversity [Figure S6; [link](https://docs.google.com/document/d/e/2PACX-1vQygdcAED1qMhVgFv5jU9TsclAyRxp-XKFiGwxK2pkxLSdrKAgyGVuAEYBVPnmQZeJDfIBVMLTbzTwG/pub)]. - **Scalability**: Given this clarification on report length, we hope the reviewer agrees that similar notes commonly exist across settings, rather than just tertiary centers, making our approach more broadly applicable. > why must we involve text data to improve EEG classification tasks, which are traditionally EEG-only? In clinical practice, doctors do not rely on text notes to perform detection or classification. We agree and would like to stress that **no text data is involved during downstream tasks**. To clarify: - **"EEG-Only methods"**: EEG representations are learned by pretraining on EEG only. These representations are subsequently evaluated on downstream tasks. - **Our EEG-Language methods**: EEG representations are pretrained by aligning EEG signals with text embeddings from clinical reports, enriching the learned features with contextual guidance. For classification, the text encoder is discarded, and we evaluate the EEG representations alone. For zero-shot classification, a simple prompt (e.g., ‘EEG is abnormal/normal’) is embedded using the text encoder, but no patient-specific report/text is used or generated. Our results show this offers two key benefits: (1) improved EEG representations for classification tasks, especially in low-data scenarios, and (2) zero-shot capabilities - all without altering clinical workflows that rely on EEG alone at test time. We apologize for any confusion in our original presentation. To address this, we propose the following addition to Section 4.1: - *We emphasize that “EEG-only” refers to pretraining without text, while ELMs use text solely during pretraining to guide EEG representation learning. 
At test time, neither method uses clinical reports, ensuring alignment with standard EEG-based clinical practice.* > I think EEG classification is often used for phenotyping, including identifying stages, events, or abnormal activity, rather than explicitly detecting neuropathological patterns. We agree with the reviewer's characterisation of the primary use of EEG. We apologize if our use of 'pathology detection' has caused confusion with respect to the intent of the paper. We aimed to ameliorate this by always using 'detection'. While we would like to kindly note that it is commonly used in the literature for our scope of clinical phenotyping (e.g. [1-3]) we understand the reviewer's concern and propose to rephrase the abstract: - L14: Multimodal language modeling has enabled breakthroughs for representation learning, yet remains unexplored in the realm of functional brain data for **clinical phenotyping**. - L22: Compared to EEG-only models, our multimodal models perform significantly better **across four clinical evaluations** and (...) And clarify at the start of the introduction: - L35: While EEG sees widespread clinical use **for the detection of pathology, by which we refer to broad clinical phenotyping such as disease classification and event detection**, (...) Alternatively, we are open to adjusting the title to use 'clinical phenotyping' instead of 'pathology detection', although we might need ICML chair input on whether this is permitted and might risk misaligning the paper with the relevant literature. We hope this addition at the start of the introduction sufficiently clarifies our scope. (We provide DOI due to character restrictions) [1] https://doi.org/10.3390/math11071619 [2] https://doi.org/10.1016/j.compbiomed.2021.104434 [3] 10.1109/JSAC.2020.3020654
Summary: The manuscript describes EEG-Language (CLIP-like) pretraining on medical EEG recordings and the accompanying textual medical reports. They used a pretrained medical language model and a from-scratch-trained EEG encoder to map temporal crops of EEG and subsections of medical reports to the same latent space, with some methodological adaptations for the fact that multiple EEG crops belong to the same recording and multiple subsections belong to the same medical report. Experiments are performed across multiple medical EEG datasets. Results show that this yields good pathology detection accuracy in zero-shot settings, which can be further improved via linear probing on subsets of the training data; linear probing also works for seizure detection and event classification. Claims And Evidence: The claims of an EEG-language-model pretraining that yields good pathology detection from no or few labels seem well-supported to me. Methods And Evaluation Criteria: The authors provide a sensible setup for the evaluation, multiple types of evaluation (retrieval, different types of downstream performance) on multiple datasets. Theoretical Claims: None Experimental Designs Or Analyses: See Methods and Evaluation Criteria Supplementary Material: No Relation To Broader Scientific Literature: EEG-language pretraining with medical reports is an open area that seems not to have been tackled yet, and for which any results are very valuable for the research community Essential References Not Discussed: None that I am aware of Other Strengths And Weaknesses: Figures look clean, writing seems easy to read Other Comments Or Suggestions: - Questions For Authors: In the $L_\textrm{orth}$ on p. 3, is $h_\textrm{e}$ normalized? I assume so, because why otherwise is this correlation and not covariance? Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: Dear Reviewer, Thank you sincerely for your positive feedback on our manuscript. We are grateful to hear about the value of our contribution to the research community, as well as the clarity of our manuscript. Regarding your question about $L_{orth}$, $h_e$ is indeed L2-normalized. We apologize for this omission in the paper. We have corrected the description and notation in Section 3.1.1 accordingly. Thank you for helping us improve our manuscript. --- Rebuttal Comment 1.1: Comment: Thank you for addressing the confusion regarding the L2-normalization. I have another question, the medical reports, are they still publicly available for TUH, can anyone obtain them? --- Reply to Comment 1.1.1: Comment: Thank you for your question. Whereas all EEG datasets used in our manuscript are publicly available, the reports were provided by the Neural Engineering Data Consortium at Temple University following a data sharing agreement. While they offer search for keywords or items of interest (https://isip.piconepress.com/projects/nedc/html/tuh_eeg/), the full reports are currently not publicly available due to privacy regulations. Our hope is that our work encourages the clinical and research communities to recognize the potential of these reports when combined with advancements in modern machine learning, paving the way for broader access in the near future, much like what we've seen with radiology reports. We look forward to releasing our pretrained models (alongside our code), which can be readily used for both finetuning and inference without access to reports. We believe this will be a valuable resource for researchers and practitioners alike.
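The exchange above settles that $h_e$ is L2-normalized. A minimal NumPy sketch (the function name and squared-off-diagonal penalty form are our own, not necessarily the paper's exact $L_{orth}$) of why normalization makes the Gram entries correlations rather than covariances:

```python
import numpy as np

def orthogonality_penalty(h_e: np.ndarray) -> float:
    """Sketch of an orthogonality-style regularizer on embeddings.

    With rows L2-normalized, the Gram matrix entries are cosine
    similarities (correlations); without normalization they would be
    unnormalized inner products (covariance-like terms).
    """
    h = h_e / np.linalg.norm(h_e, axis=1, keepdims=True)  # L2-normalize each embedding
    gram = h @ h.T                                        # pairwise correlations in [-1, 1]
    off_diag = gram - np.eye(len(h))                      # diagonal is exactly 1 after normalization
    return float(np.mean(off_diag ** 2))                  # penalize non-orthogonal pairs
```

Mutually orthogonal embeddings incur zero penalty regardless of their magnitudes, which is exactly the scale-invariance the normalization buys.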
NEAR: Neural Electromagnetic Array Response
Accept (poster)
Summary: Multi-antenna radar systems face challenges in achieving high angular resolution due to hardware constraints, noise, and limited physical antennas. Traditional supervised learning methods for super-resolution struggle with generalization in unseen environments and require extensive training data. The authors propose Neural Electromagnetic Array Response (NEAR), an untrained implicit neural representation (INR) framework that predicts complex radar responses at arbitrary 2D spatial coordinates using sparse antenna measurements. NEAR predicts radar responses at unobserved locations by exploiting latent harmonic structures in radar wave propagation. It integrates physics-based signal processing with neural networks, avoiding reliance on large datasets. They claim that this is the first work to establish a link between antenna array response and the expressive power of an INR architecture. Additionally, they introduce a novel regularizer that incorporates radar physics and latent geometry to bridge the gap between traditional INR and NEAR. They include a 20 × 20 full virtual array response (noisy) as a benchmark reference. Evaluated through simulations and real-world (a commercial MIMO radar platform, IMAGEVK-74) radar experiments, NEAR demonstrates superior performance in unseen environments compared to conventional methods. Overall, this research bridges signal processing and INR, offering a data-efficient, physics-aware solution for radar super-resolution without compromising interpretability. Claims And Evidence: The authors prove that predicting complex-valued responses at any arbitrary location within the 2D virtual antenna array domain is indeed a problem that falls within the class of functions representable by INR. Furthermore, they show that this mapping can be effectively approximated using multiple layers of Multi-Layer Perceptrons. 
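The summary above states that NEAR maps 2D antenna coordinates to complex responses shaped by planar wave propagation. A minimal sketch of that far-field harmonic model (the direction-cosine convention, wavelength value, and function name are our own illustrative choices, not the paper's code):

```python
import numpy as np

def array_response(xy, targets, wavelength=0.004):
    """Far-field planar-wave response at 2D antenna coordinates xy.

    Each far-field target at azimuth/elevation (az, el) with complex gain s
    contributes a complex exponential whose spatial frequency depends only
    on the direction cosines -- the harmonic structure an INR can exploit.
    """
    x, y = xy[..., 0], xy[..., 1]
    resp = np.zeros(x.shape, dtype=complex)
    for az, el, s in targets:
        u = np.cos(el) * np.sin(az)   # direction cosine along x
        v = np.sin(el)                # direction cosine along y
        resp += s * np.exp(2j * np.pi * (u * x + v * y) / wavelength)
    return resp
```

Because each target contributes a pure complex exponential in the antenna coordinates, the response at any point in the 2D aperture is a finite sum of harmonics, which is why interpolating it from sparse samples is plausible at all.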
To further justify the effectiveness of the physics-informed regularizer, the authors establish the algebraic properties of the block Hankel matrix corresponding to the ground truth response, leveraging its harmonic structure. The experiments demonstrate that the proposed method outperforms the baseline in terms of generalization, exhibits robustness to noise, achieves superior performance in multi-objective Direction-of-Arrival (DOA) estimation, and significantly reduces hardware requirements. Methods And Evaluation Criteria: The proposed method does not rely on large-scale data, exhibits low computational consumption, and demonstrates greater generalizability. They include a 20 $\times$ 20 full virtual array response (noisy) as a benchmark reference. A series of validations are conducted in both simulated and real-world environments, encompassing scenarios with varying noise intensities, multiple targets, and different sampling densities. In conclusion, the authors demonstrate the advantages of the proposed method in addressing the radar response super-resolution problem through a series of well-conducted experiments. Theoretical Claims: Theorem 4.1 gives an exact characterization of the set $S_T$ of all possible integer harmonics of the feature mapping $γ(r)$. The mapping from 2D coordinates to the complex values of the radar response is shown to belong to the class of INRs, as supported by Remark 4.2 and Theorem 4.1. Experimental Designs Or Analyses: The validity of the experimental designs can be referenced in the section "Methods and Evaluation Criteria." It is suggested to include a comparison with a NeRF-like method, where the rendering operation of NeRF can be modified to model radar waves. By adapting the NeRF approach for radar response, it could serve as a valuable baseline for evaluating the proposed method. 
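The regularizer discussed above rests on the block Hankel (enhanced) matrix of a harmonic response being low-rank. A minimal numerical sketch of that property (the array size, source frequencies, and pencil parameters below are our own illustrative choices): the two-level Hankel lifting of a noiseless two-source response has rank equal to the number of sources.

```python
import numpy as np

def block_hankel(X: np.ndarray, p1: int, p2: int) -> np.ndarray:
    """Two-level (block) Hankel lifting of a 2D array response X (n1 x n2).

    Outer level is Hankel in the first axis with pencil parameter p1;
    each block is a Hankel matrix of a row of X with pencil parameter p2.
    """
    n1, n2 = X.shape

    def hankel(v, p):
        n = len(v)
        return np.array([[v[i + j] for j in range(n - p + 1)] for i in range(p)])

    blocks = [[hankel(X[i + j], p2) for j in range(n1 - p1 + 1)] for i in range(p1)]
    return np.block(blocks)

# Noiseless 8x8 response from 2 far-field sources: a sum of 2D complex exponentials.
n = 8
freqs = [(0.11, 0.32), (0.27, 0.08)]  # illustrative 2D spatial frequencies per source
i_idx, j_idx = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
X = sum(np.exp(2j * np.pi * (f1 * i_idx + f2 * j_idx)) for f1, f2 in freqs)

H = block_hankel(X, p1=4, p2=4)          # 16 x 25 enhanced matrix
rank = np.linalg.matrix_rank(H, tol=1e-6)  # equals the number of sources
```

Each source contributes a rank-one term to the lifted matrix, so penalizing deviation from low rank (or from the Hankel structure itself) encodes the harmonic physics without any training data.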
Supplementary Material: The authors provide detailed proofs of theorems presented in the manuscript within the Supplementary Material, along with the parameters of the experimental setup, the data generation methodology, evaluation metrics, and additional experimental visualizations. The supplementary material also includes the source code. These mathematical proofs demonstrate the feasibility of applying INR to radar response super-resolution, establishing a foundation upon which physics-informed regularizers can facilitate accurate INR learning. The experimental supplements ensure the reproducibility of the results. Relation To Broader Scientific Literature: The approach of combining implicit fields with physical constraints is a key feature in many recent works on 3D reconstruction. For example, the incorporation of continuous medium dynamics to describe the evolution of the Gaussian distribution illustrates how physical principles can be leveraged to enhance the accuracy and realism of reconstructed models\[1\]. \[1\] Tianyi Xie, Zeshun Zong, Yuxin Qiu, Xuan Li, Yutao Feng, Yin Yang, and Chenfanfu Jiang. Physgaussian: Physics-integrated 3d gaussians for generative dynamics. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024. Essential References Not Discussed: N/A Other Strengths And Weaknesses: *Strengths*: Their work refines the INR superset analysis \[1\] by providing the exact set, rather than a superset, of integer harmonic frequencies that characterize INR functions. This derivation delivers a precise and tight characterization of the expressive power of INRs. They assert that their method employs a straightforward yet effective regularization strategy, in contrast to the more complex ray tracing approach\[2\]. \[1\] Roddenberry, T. M., Saragadam, V., de Hoop, M. V., and Baraniuk, R. G. Implicit neural representations and the algebra of complex wavelets. arXiv preprint arXiv:2310.00545, 2023. 
\[2\] Chen, X., Feng, Z., Sun, K., Qian, K., and Zhang, X. RF-Canvas: Modeling RF channel by fusing visual priors and few-shot RF measurements. In Proceedings of the 22nd ACM Conference on Embedded Networked Sensor Systems, 2024. *Weaknesses*: In this task, both the radar and environmental settings (similar to light and camera parameters in NeRF) are kept constant, with the goal of predicting higher-resolution radar response maps. However, the authors do not provide a rationale for why they choose to directly predict the value from the coordinates, rather than modeling it as a radiance field-like function. Experiments comparing the two are also lacking. Other Comments Or Suggestions: N/A Questions For Authors: Why opt to directly predict the value from the coordinates, instead of modeling it as a radiance field-like function? Please refer to "Experimental Designs Or Analyses". The superiority of the proposed method would be more effectively highlighted through an experimental comparison with a radiance field-like method. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We are grateful for your recognition of our work’s strengths, notably our theoretical analysis that precisely characterizes the expressive power of INR, as well as our development of an efficient and effective regularization strategy. In response to your concern regarding the rationale behind directly predicting the response value from the coordinates—as opposed to modeling it as a radiance field-like function—we have conducted a comprehensive comparison between our approach (NEAR) and NeRF$^2$[1], an innovative extension of NeRF[2] into the electromagnetism domain using ray tracing. NeRF$^2$ constructs a continuous volumetric scene function that interprets the propagation of RF signals and can tell what signal is received at any position after training with a set of input signal measurements. We compare NeRF$^2$ and our method (NEAR) in terms of angular resolution, target localization accuracy (same as we did in Section 5.2) and average running time using the real-world collected data. We adopt hyperparameters recommended in [1], and run all experiments on a laptop with CPU AMD Ryzen 9 5900 HS with Radeon Graphics and GPU NVIDIA GeForce RTX 3050 Ti Laptop. | Method | Angular resolution for 2m/3m/4.5m | Localization error for 1/2/3/4 target(s) | Average running time | |:-------:|:--------------------------------:|:--------------------------------------:|:--------------------:| | NEAR | 5.7248$^\circ$/6.6769$^\circ$/6.9941$^\circ$ | 0.0744m/0.0770m/0.0762m/0.0718m | 550.83s | | NeRF$^2$ | 8.5783$^\circ$/8.5783$^\circ$/8.8948$^\circ$ | 0.4902m/0.5096m/0.4346m/0.3898m | 1278.31s | From the table, we can see that NEAR is able to resolve smaller angle separation at different distances (and signal to noise ratio, SNR) and achieve a much smaller target localization error, compared to NeRF$^2$. 
This is attributed to some important distinctions between radiance-field reconstruction and our method, which are listed below: - Our setting uses far fewer measurements (see below) in the form of an antenna array response, compared to NeRF$^2$. This renders measurement-heavy methods like NeRF$^2$ somewhat inferior in our settings. Hence we need to heavily utilize the underlying wave propagation model and the harmonic structure of measurements received at antenna arrays, in order to successfully regularize the problem with so few measurements. This is a major contribution of our work which sets us apart from direct use of NeRF$^2$. - In fact, NEAR targets a different objective than NeRF$^2$. Our approach emphasizes the (super-resolution) localization of the targets, while NeRF$^2$ cares more about the physical properties of all objects in a 3D scene in order to model signal propagation. This is also a crucial reason why we opt to directly predict the response from the antenna coordinates rather than modeling all the voxels' properties as a continuous volumetric function. - As explained earlier, NeRF$^2$ requires a large set of measurements for training. According to [1], it uses around $6000 \times 21$ measurements with an 80%/20% training/testing split, while we only use a sparse set of $8\times8$ measurements for training. Under the same training setting, NEAR uses less than half of the training time of NeRF$^2$ due to our proposed regularization rather than the ray tracing strategy, which is well known for its heavy computational cost. In summary, NEAR outperforms NeRF$^2$ by directly predicting antenna responses while leveraging the harmonic signal structure. Our approach significantly reduces the required training measurements, leading to improved angular resolution, enhanced localization accuracy, and reduced runtime. **Reference:** [1] Zhao, X., An, Z., Pan, Q., & Yang, L. (2023, October). Nerf2: Neural radio-frequency radiance fields. 
In Proceedings of the 29th Annual International Conference on Mobile Computing and Networking (pp. 1-15). [2] Mildenhall, B., Srinivasan, P. P., Tancik, M., Barron, J. T., Ramamoorthi, R., & Ng, R. (2021). Nerf: Representing scenes as neural radiance fields for view synthesis. Communications of the ACM, 65(1), 99-106. --- Rebuttal Comment 1.1: Comment: The authors' response has addressed the concerns I raised. However, as I am not an expert in this field, I tend to keep my original rating (Borderline).
Summary: This paper addresses the challenge of achieving high-resolution angular estimation in multi-antenna radar systems using sparse measurements. The authors propose NEAR (Neural Electromagnetic Array Response), an innovative framework that leverages implicit neural representations (INRs) to predict complete antenna array responses from limited physical antenna data, effectively creating a large virtual sensing system with few physical antennas. The core technical contribution lies in seamlessly integrating INRs with a physics-informed regularization strategy, specifically designing a novel Block Hankel matrix-based constraint that captures the inherent harmonic structures of radar wave propagation. By developing a theoretically grounded approach that maps spatial coordinates to complex-valued antenna responses, the authors enable a continuous representation of the antenna array response field, which allows for super-resolution angular estimation while maintaining computational efficiency and generalizability. Experimental validation across both simulated and real-world scenarios demonstrates NEAR's superior performance, consistently outperforming baseline methods in response recovery, angular resolution, and direction-of-arrival estimation. The results showcase the method's robustness across different sampling configurations, noise levels, and number of targets, with significant improvements observed particularly in complex multi-target environments. Comprehensive evaluations using commercial MIMO radar platforms further validate the framework's practical applicability and potential for enhancing radar sensing technologies. Claims And Evidence: **Claim: Extensive simulations and real-world experiments using radar platforms demonstrate NEAR’s effectiveness** Assessment: Lack of comparison with data-driven algorithms and lack of computational efficiency comparison experiments make it impossible to determine the method's effectiveness. 
Methods And Evaluation Criteria: Yes, using simulations and real-world experiments to evaluate the method's performance is reasonable. Theoretical Claims: The constraints used in the paper are common constraints in its application domain, and can effectively constrain the INR. Experimental Designs Or Analyses: **Issue 1**: Lack of comparison with data-driven algorithms. **Issue 2**: Lack of computational efficiency comparison experiments, making it impossible to determine the method's usability. Supplementary Material: Codes Relation To Broader Scientific Literature: The paper applies the cutting-edge AI technique of INRs to traditional problems, and combines domain-specific prior knowledge as constraints, successfully solving domain problems. This problem-solving approach may potentially be extended to many other fields. Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: 1. Please provide detailed experimental comparisons or a comprehensive analysis of your method's advantages over data-driven algorithms in terms of accuracy and computational efficiency. 2. Please provide experimental comparisons or a detailed analysis of the computational efficiency differences between your method and EMaC. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer xAKz for the constructive comments and suggestions. We provide additional experimental comparisons below to address your concerns: **1. Experimental comparison between our approach and data-driven method (NeRF$^2$).** We add a state-of-the-art data-driven baseline called NeRF$^2$[1], which represents scenes as neural radiance fields by optimizing an underlying continuous volumetric scene function using a set of input electromagnetic signal measurements. We compare NeRF$^2$ and our method (NEAR) in terms of angular resolution, target localization accuracy (same as we did in Section 5.2) and average running time using the real-world collected data. We adopt hyperparameters recommended in [1], and run all experiments on a laptop with CPU AMD Ryzen 9 5900 HS with Radeon Graphics and GPU NVIDIA GeForce RTX 3050 Ti Laptop. | Method | Angular resolution for 2m/3m/4.5m | Localization error for 1/2/3/4 target(s) | Average running time | |:-------:|:--------------------------------:|:--------------------------------------:|:--------------------:| | NEAR | 5.7248$^\circ$/6.6769$^\circ$/6.9941$^\circ$ | 0.0744m/0.0770m/0.0762m/0.0718m | 550.83s | | NeRF$^2$ | 8.5783$^\circ$/8.5783$^\circ$/8.8948$^\circ$ | 0.4902m/0.5096m/0.4346m/0.3898m | 1278.31s | From the table, we can see that NEAR is able to resolve smaller angle separation at different distances (SNR) and achieve a much smaller target localization error, compared to NeRF$^2$, highlighting the effectiveness and efficiency of our method. This improvement is primarily attributed to judicious use of signal processing ideas in designing the regularizer, which fully exploits the underlying harmonic structure in planar wave propagation from far-field targets. **2. 
Experimental comparison of computational efficiency between our approach and EMaC.** For the average running time, EMaC takes **1226.15s** while our approach only takes **550.83s**, indicating the potential of real-time implementation with future algorithmic and computing hardware improvements. **References:** [1] Zhao, X., An, Z., Pan, Q., \& Yang, L. (2023, October). Nerf2: Neural radio-frequency radiance fields. In Proceedings of the 29th Annual International Conference on Mobile Computing and Networking (pp. 1-15).
Summary: The authors utilize a new INR-based framework to achieve angular super-resolution in multi-antenna radar systems. The authors further propose a physics-informed regularizer and provide theoretical insights into what functions can be represented by INRs under certain constraints established in previous literature. The authors provide extensive synthetic and real-world results as well as an ablation study, showing the superiority of the proposed approach over other solutions as well as the positive performance impact of individual components. Claims And Evidence: The performance claims made in this work are sufficiently supported by the experiments conducted. I have not extensively validated the proofs for claims about INRs' representational power. Methods And Evaluation Criteria: I am not familiar with the application domain, as such I cannot speak on whether the evaluation criteria are appropriate. The ablation study is sufficient evidence to at least support baseline claims about the framework's efficacy and necessity of individual components (i.e. the additional regularizer). Theoretical Claims: I did not check the correctness of proofs provided in the paper. Experimental Designs Or Analyses: The ablation study is appropriately designed. Supplementary Material: I did not review the proofs in the supplement but did review the additional experimental results. Relation To Broader Scientific Literature: This work extends the literature in two directions, to the best of my knowledge. It introduces two new methods, NEAR broadly and the physics-informed regularizer specifically. It also extends the literature on INRs' representational capabilities. Essential References Not Discussed: I am not familiar enough with the literature in this domain. Other Strengths And Weaknesses: The experiments in this paper seem well-selected and demonstrate the strong performance of the proposed method.
On first glance the proofs also seem detailed and, given their veracity, make interesting contributions to the literature that elevates the overall significance of the paper substantially. Other Comments Or Suggestions: page 3, line 111, right: Typo in "reconstruc" - missing "t" Questions For Authors: No questions. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer MVwc for the time and effort in reviewing our paper. We appreciate your positive comments on our work and have fixed the typo you pointed out. If there are any additional areas where you believe we could further improve our manuscript, we would greatly appreciate your insights. Please let us know if we can address any specific concern that could help justify a higher score. Thank you again for your consideration and support.
Summary: Problem Statement: The paper tackles the challenge of achieving angular super-resolution in multi-antenna radar systems using only sparse measurements. In radar systems, hardware constraints (i.e. having only a few physical antennas) and noise limit the achievable angular resolution. Traditional supervised methods often require large, high-quality training datasets and may not generalize well to new environments. Proposed Solution (NEAR): The authors propose NEAR—a framework based on untrained implicit neural representations (INRs) that predicts complex-valued antenna responses at unseen locations from sparse measurements. The method is physics-informed, as it leverages the latent harmonic structure inherent in electromagnetic wave propagation, and it incorporates a novel latent geometry–aware regularizer. Main Claims: - High-Resolution Prediction: NEAR can predict full virtual array responses (both amplitude and phase) with high accuracy despite limited measurements. - Generalization & Interpretability: By integrating physical laws (such as planar wave propagation and harmonic structure), the model generalizes well to unseen environments and remains physically interpretable. - Superior Angular Super-Resolution: NEAR improves upon conventional methods in resolving closely spaced targets (i.e., achieving super-resolution) while keeping hardware costs low. - Theoretical Foundations: The paper provides new theoretical insights into the expressive power of INR architectures when equipped with appropriate positional encodings and shows how these relate to the Fourier harmonic representation of array responses. Claims And Evidence: I found the claims to be well supported. Methods And Evaluation Criteria: The methods are described well and the authors do experimental studies on simulated as well as real world datasets with impressive results. 
- Implicit Neural Representation (INR): NEAR employs an untrained INR that maps 2D spatial coordinates to complex-valued radar responses. The architecture follows a typical INR design: a multilayer perceptron (MLP) with positional encoding (similar to NeRF) is used to represent the continuous response field over a virtual antenna array. - Physics-Informed Regularization: A key innovation is the introduction of an implicit regularizer that exploits the harmonic and low-rank structure of the radar array response. The authors show that when the response field is expressed in a domain where the underlying physics (planar wave propagation) holds, its corresponding block Hankel matrix exhibits low rank. The regularizer is integrated into the loss function to enforce consistency with these known physical properties. - Loss Function and Training: The overall loss is composed of a data fitting term (quantifying the difference between the predicted response and the available sparse, noisy measurements) and a regularization term (enforcing the harmonic/low-rank structure). Importantly, NEAR is trained without extensive offline training data—it uses only the sparse measurements obtained during normal operation. - Theoretical Analysis: The paper provides rigorous theoretical results that characterize the class of functions representable by the INR architecture, linking it to Fourier series. This analysis not only justifies the choice of positional encoding and network architecture but also explains how the harmonic structure of radar signals can be effectively captured. Theoretical Claims: I did not check the correctness of the proof. The theorem statements are sensible, although I'm not sure why Theorem 4.5 is needed -- if the Hankel matrices are low rank and the first K columns are independent, then doesn't the statement of the theorem automatically follow from the definition of rank? 
(this is a minor concern, I don't have major issues with it being included) Experimental Designs Or Analyses: Simulation Studies: - Setup: The simulations use virtual antenna array responses under various SNR conditions and different sparse sampling patterns (e.g., 6×6, 8×8, 10×10 grids). - Baselines: NEAR is compared against Enhanced Matrix Completion (EMaC), a SIREN-based approach, and a variant of NEAR without the physics-informed regularizer. - Results: NEAR consistently achieves lower NRMSE across various SNR levels and sampling patterns. In angular resolution tests, NEAR maintains high resolution probability even at small angle separations. For multi-target DOA estimation, NEAR shows significantly lower estimation errors than baselines, demonstrating its robustness in complex scenarios. Real-World Experiments: - Setup: A commercial MIMO radar platform (IMAGEVK-74) with a 20×20 virtual array is used. Sparse subsets of the full array response are treated as input. - Evaluation: - Angular Resolution: Experiments with two corner reflectors measure the minimum resolvable angular separation at various distances. NEAR achieves performance close to the full array benchmark. - Target Localization: The system is tested with multiple reflectors. NEAR outperforms both the full array baseline and EMaC in terms of localization error, attributed to its denoising capability. - Ablation Study: Removing the physics-informed regularizer leads to significantly worse performance, underscoring the importance of incorporating physical constraints into the model. Supplementary Material: I didn't read the supplementary. Relation To Broader Scientific Literature: The paper seems like a good contribution. The proposed method highlights the challenges associated with employing INR models to RADAR, and designs regularizers that respect the harmonics present in RADAR. 
The loss function designed is straightforward, but I think the authors do a good job evaluating their method and comparing it to baselines on simulated and real-world data. The significance of the Theorems is doubtful -- they seem unsurprising. Essential References Not Discussed: I think the related work section is detailed and comprehensive. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: I don't have major concerns. One question is: In the regularized loss function, I don't see the point of having $m_1, m_2$ as variables. For each $\theta$, you can compute the prediction $\widehat{Y}$ and use the pseudoinverse projection to compute $m_1, m_2$ in closed form. Why would you run an optimizer on $m_1, m_2$ as well? Code Of Conduct: Affirmed. Overall Recommendation: 4
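The INR design discussed in this review, an MLP with NeRF-style Fourier positional encoding mapping 2D spatial coordinates to complex-valued responses, rests on the encoding step. A minimal sketch of that encoding follows; the number of frequencies and the exact scaling are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def positional_encoding(x, n_freqs=6):
    """NeRF-style encoding: [sin(2^k * pi * x), cos(2^k * pi * x)] for k < n_freqs.
    x: (..., d) coordinates -> (..., 2 * n_freqs * d) features."""
    freqs = 2.0 ** np.arange(n_freqs) * np.pi   # (n_freqs,)
    angles = x[..., None, :] * freqs[:, None]   # broadcast to (..., n_freqs, d)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(*x.shape[:-1], -1)

xy = np.array([[0.25, -0.5]])                  # one 2D antenna coordinate
print(positional_encoding(xy).shape)           # (1, 24)
```

The encoded features would then be fed to an MLP; the integer-harmonic structure of such encodings is exactly what the paper's expressivity analysis characterizes.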
Rebuttal 1: Rebuttal: We sincerely thank the reviewer 5fdJ for the time and effort in reviewing our paper. We greatly appreciate the positive feedback. We hope the following responses can resolve your questions and concerns. **1. I'm not sure why Theorem 4.5 is needed --- if the Hankel matrices are low rank and the first $K$ columns are independent, then doesn't the statement of the theorem automatically follow from the definition of rank? (this is a minor concern, I don't have major issues with it being included)** Theorem 4.5 adds value in two ways. Firstly, the low-rank property of $\mathcal{H}\_{N\_1,N\_2}(\boldsymbol{Y})$ depends on the choice of $N\_1$ and $N\_2$ according to the theoretical results from [1]. However, we cannot directly use the sufficient condition from [1] to claim that $\mathcal{H}\_{M\_1,M\_2-K}(\boldsymbol{Y})$ is rank-$K$ since in this case $N\_1=M\_1\notin[K,M\_1-K+1]$. Secondly, Theorem 4.5 reveals an important relation between submatrices of this low-rank matrix. It shows that all small Hankel matrices $\mathcal{H}\_{M\_2-K}(\mathbf{y}\_{m})\text{ for }1\leq m\leq M\_1$ are not only rank-$K$ with their first $K$ columns serving as a basis, but also share the *same linear coefficients $\boldsymbol{m}\_1$*. More details can be found in Appendix B. This is a significant result, since it helps us to re-use the parameters $\boldsymbol{m}\_1$ for representing subsequent new entries at unseen locations. Such theoretical results are non-trivial and, to the best of our knowledge, no comparable results can be found in the literature. **2. The significance of the Theorems is doubtful --- they seem unsurprising.** We mainly have two theoretical results: one concerns the expressive power of INRs, and the other concerns our physics-informed regularizer. We obtain a tighter characterization of the expressive power of INRs by refining existing analyses and derive the exact set of integer harmonics that describes what INRs can represent.
On the other hand, our regularizer utilizes the physical model (planar wave propagation) and mathematical properties of harmonically structured matrices, which establish a connection between low rank and linear predictability. This results in a regularizer that is computationally efficient and significantly improves overall performance, as shown by our experimental studies. **3. One question is: In the regularized loss function, I don't see the point of having $\boldsymbol{m}\_1,\boldsymbol{m}\_2$ as variables. For each $\boldsymbol{\theta}$, you can compute the prediction $\hat{\boldsymbol{Y}}$ and use the pseudoinverse projection to compute $\boldsymbol{m}\_1,\boldsymbol{m}\_2$ in closed form. Why would you run an optimizer on $\boldsymbol{m}\_1,\boldsymbol{m}\_2$ as well?** Thank you for the good question. Please note that $\boldsymbol{\theta}$ and $\boldsymbol{\phi}$ are angles of arrival from multiple point scatterers, and do not represent incident angles. The unique global optimal $\boldsymbol{m}\_1^o$ and $\boldsymbol{m}\_2^o$ depend on target angles $\boldsymbol{\theta}$ and $\boldsymbol{\phi}$, which are part of the sensing task and not known beforehand. In fact, the goal is to first complete a virtual array (by predicting array response at unseen locations) and then use the physical and predicted measurements to obtain a more accurate estimate of the angles $\boldsymbol{\theta}$ and $\boldsymbol{\phi}$ (and not the other way). If we were to first estimate the angles $\boldsymbol{\theta}$ and $\boldsymbol{\phi}$ from a *limited number of sparse* physical antennas (without completing the virtual array), the estimation error would be much larger. Therefore, in order to predict the virtual array response, we use the combined power of the INR and the latent variables $\boldsymbol{m}\_1$ and $\boldsymbol{m}\_2$, which exploits the low-rank relationship between entries of the Block Hankel matrix and guides the INR to approach the ground truth array response.
However, if $\boldsymbol{m}\_1$ and $\boldsymbol{m}\_2$ are computed using the pseudoinverse projection after predicting $\hat{\boldsymbol{Y}}$ purely using INR, then the whole algorithm runs the risk of overfitting the observed data, especially given the scarce number of spatial measurements. **References:** [1] Hua, Y. (1992). Estimating two-dimensional frequencies by matrix enhancement and matrix pencil. IEEE Transactions on Signal Processing, 40(9), 2267-2280.
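The low-rank structure this rebuttal relies on, the MEMP-style block Hankel enhancement of Hua [1], can be illustrated numerically: for a noiseless array response that is a sum of $K$ 2D complex exponentials, the block Hankel matrix has rank exactly $K$. The sketch below is illustrative only; the pencil sizes and frequencies are arbitrary choices, not the paper's settings:

```python
import numpy as np

def block_hankel(Y, n1, n2):
    """MEMP-style block Hankel enhancement of an M1 x M2 array Y:
    an outer Hankel over rows, whose blocks are inner Hankels of each row."""
    M1, M2 = Y.shape
    def hankel_row(y):  # n2 x (M2 - n2 + 1) Hankel matrix of one row
        return np.array([[y[i + j] for j in range(M2 - n2 + 1)] for i in range(n2)])
    blocks = [hankel_row(Y[m]) for m in range(M1)]
    return np.block([[blocks[i + j] for j in range(M1 - n1 + 1)] for i in range(n1)])

# K = 2 plane-wave components sampled on an 8 x 8 virtual array
M1 = M2 = 8
freqs = [(0.11, 0.27), (0.31, 0.05)]
m, n = np.meshgrid(np.arange(M1), np.arange(M2), indexing="ij")
Y = sum(np.exp(2j * np.pi * (f1 * m + f2 * n)) for f1, f2 in freqs)

H = block_hankel(Y, 4, 4)
print(H.shape, np.linalg.matrix_rank(H, tol=1e-6))  # (16, 25), rank K = 2
```

Each plane-wave component contributes a rank-one term to the enhanced matrix, so the numerical rank reveals the number of scatterers; the regularizer exploits exactly this low-rank / linear-predictability structure.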
P(all-atom) Is Unlocking New Path For Protein Design
Accept (spotlight poster)
Summary: The paper introduces Pallatom, a protein generation model that generates protein structures with all-atom coordinates. The model uses a dual-track framework with residue- and atomic-level representations and introduces the atom14 representation for modelling variable side-chain coordinates. Pallatom learns a diffusion model to denoise this representation, reusing components from AlphaFold3. Claims And Evidence: yes Methods And Evaluation Criteria: The claims in this paper are supported by comparisons against other representative protein generation models, including all-atom models. Experimental results show superior performance on metrics like designability, diversity, and novelty. The ablation studies demonstrate the importance of components like the recycling mechanism and the atom14 representation. Theoretical Claims: no theoretical claims Experimental Designs Or Analyses: yes Supplementary Material: quick pass on the supplementary Relation To Broader Scientific Literature: This paper builds on top of previous work in protein structure prediction (AlphaFold) and generation. It advances the field by providing a new all-atom protein generation method without a separate sequence design step. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: * Novel atom14 representation that effectively handles variable side-chain atoms * performs competitively with baselines across multiple metrics * efficient sampling time compared to other methods that performed similarly Overall, by using components from modern structure-prediction architectures (AlphaFold3) in the generative setting, the proposed model is interesting from an empirical point of view for its performance; moreover, the generation of all-atom molecules is a challenging problem. Weaknesses: * the practical novelty stems from the combination of various ideas from Protpardelle and AlphaFold3; as such, these two works have introduced most of the ideas in this paper.
* see my question below; it feels strange to me that the model is called an all-atom model if the amino acid cannot be properly inferred from the generated atom14 representation without resorting to a classifier. Other Comments Or Suggestions: EDM is not defined anywhere; I assume it refers to Karras et al. ? Questions For Authors: Paragraph 3.3; why can't the amino acid be extracted from the atomic coordinates without using an AA classifier ? a proper all atom representation would yield the required information to extract directly the sequence based on which atom types are present. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you to the reviewer for your affirmation of our work. We have replied to your questions as follows. **Q1:** Paragraph 3.3; why can't the amino acid be extracted from the atomic coordinates without using an AA classifier ? a proper all atom representation would yield the required information to extract directly the sequence based on which atom types are present. The observed confusion likely arises from insufficient elaboration of our atom14 framework's design rationale. In this side-chain coordinate-based representation (`atom14`) that intentionally excludes elemental typing information, chemically distinct residues with similar conformations (e.g., GLU vs GLN differing in OE2/NE2 atom types) become geometrically indistinguishable – their coordinate differences are confined to minimal bond length variations. This necessitates two interdependent capabilities: (1) sequence decoding from atom point clouds, and (2) disambiguation of geometrically degenerate residues. Pallatom addresses this through a hierarchical attention mechanism: global attention captures holistic structural patterns while local attention learns fine-grained atomic interactions. This enables the model to develop rich atomic semantics that jointly encode spatial and sequential information. Crucially, our experiments reveal that a simple linear **SeqHead** layer suffices to accurately predict sequences from aggregated atomic representations, demonstrating that the learned high-dimensional embeddings inherently resolve the coordinate-to-sequence mapping without requiring complex auxiliary networks. This design philosophy ensures simultaneous optimization of structural plausibility and sequence-structure consistency during generation. We also have presented a comparison figure and detailed description showing the use of atom14 to represent 20 standard amino acids in the [ATOM14](https://anonymous.4open.science/r/Pallatom-rebuttal-114C/README.md). 
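To make the fixed-size residue encoding concrete, here is a hypothetical sketch of an atom14-style padding: each residue is stored as a 14 x 3 coordinate array, and unused slots default to the C-alpha coordinate (as described elsewhere in this thread for ground-truth data). The atom ordering and the `to_atom14` helper are illustrative assumptions, not Pallatom's exact implementation:

```python
import numpy as np

def to_atom14(heavy_atoms, ca_index=1):
    """Pad a residue's heavy atoms to a fixed (14, 3) array.
    heavy_atoms: (n, 3) coordinates with C-alpha at row ca_index.
    Virtual (unused) slots are placed at the C-alpha position."""
    out = np.tile(heavy_atoms[ca_index], (14, 1))  # fill everything with CA
    out[: len(heavy_atoms)] = heavy_atoms          # overwrite real atoms
    return out

# Glycine has only backbone heavy atoms: N, CA, C, O (illustrative coordinates)
gly = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [2.2, 1.2, 0.0], [3.4, 1.3, 0.0]])
a14 = to_atom14(gly)
print(a14.shape)  # (14, 3); rows 4..13 all sit on the CA coordinate
```

Because no element types are stored, geometrically similar residues (e.g. GLU vs GLN) are near-degenerate in this representation, which is exactly why a learned sequence head is needed to decode amino acid identities.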
**Q2:** The practical novelty stems from the combination of various ideas from Protpardelle and AlphaFold3; as such these 2 work have introduced most of the ideas in this paper. The most relevant work to ours in all-atom protein design is Protpardelle, but Pallatom exhibits fundamental differences in comparison. While Protpardelle's atom73 represents all-atom protein design by tracking 20 side-chain states per residue, its discontinuous gradient updates rely on intermediate sequence predictions. In contrast, Pallatom's atom14 achieves superior efficiency (14 vs. 73 atoms) and sampling stability through continuous coordinates gradient updates, eliminating sequence dependency. Experiments (Table 1, Figure 3) confirm Pallatom's enhanced designability, and ablation studies show that hybrid sequence-guided approaches degrade performance. Regarding differences with AlphaFold3 (AF3), these are fundamentally distinct tasks. AF3 performs conditional diffusion given sequences, utilizing a diffusion module to decode structures. Pallatom aims to design all-atom proteins with unknown sequences and introduces the atom14 representation for this purpose. While we do employ some basic components from AF3 in our model, the innovative integration of these components—particularly the fusion and update mechanisms between different features—required complete redesign. To our knowledge, Pallatom is the first protein design method in the field to utilize AF3's fundamental components, whereas existing approaches like FrameDiff primarily build upon AF2's Invariant Point Attention with minor modifications. The core of Pallatom lies in its AtomDecoder unit (Figure 1B), which integrates and updates residue- and atom-track single/pair features, featuring critical distinctions from AF3's diffusion module: 1. **Dynamic pair representation**: Pallatom's pair features derive from *predicted intermediate structures* (with triangle updates), while AF3 uses nearly static Pairformer-encoded features. 
2. **Traversing embedding**: A novel mechanism to avoid accumulated broadcasting residue/atom features across layers via skip connections – absent in AF3. **Q3:** EDM is not defined anywhere; I assume it refers to Karras et al. ? Thank you for identifying this oversight. We indeed employ the EDM framework [1] and will ensure proper citation of the reference in our work. [1] Karras, Tero, et al. "Elucidating the design space of diffusion-based generative models." *Advances in Neural Information Processing Systems* 35 (2022): 26565-26577.
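Since the EDM framework of Karras et al. [1] comes up here, a minimal sketch of its noise-level schedule may help readers unfamiliar with it. The sigma range below uses that paper's published image-generation defaults, not necessarily Pallatom's values:

```python
import numpy as np

def edm_sigmas(n, sigma_min=0.002, sigma_max=80.0, rho=7.0):
    """Karras et al. (2022) sampling noise levels: a rho-warped
    interpolation from sigma_max down to sigma_min over n steps."""
    i = np.arange(n)
    return (sigma_max ** (1 / rho)
            + i / (n - 1) * (sigma_min ** (1 / rho) - sigma_max ** (1 / rho))) ** rho

sigmas = edm_sigmas(18)
print(sigmas[0], sigmas[-1])  # approx. 80.0 down to 0.002, monotonically decreasing
```

The denoiser is then applied at each successive noise level, with rho controlling how densely steps cluster near low noise.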
Summary: The paper presents a novel diffusion-based approach for all-atom protein design. A key contribution is the atom14 representation, which unifies amino acid positions by padding them with virtual atoms. Another innovation is predicting amino acid types based on the atom14 representation instead of parallel generation. The proposed model is evaluated against state-of-the-art backbone-only and all-atom protein structure generators, demonstrating strong performance on the proposed metrics. The study includes an ablation analysis and discusses the advantages of all atom generation paradigm. Claims And Evidence: The paper supports its claim of simultaneous all-atom protein design with empirical results. However, it lacks crucial comparisons to a two-step strategy: backbone generation by FrameDiff[a], FrameFlow[b] followed by side-chain completion using DiffPack[c] or H-Packer[d]. Such comparisons would strengthen the claims. a. SE(3) diffusion model with application to protein backbone generation, Yim et al., 2023 b. Fast protein backbone generation with SE(3) flow matching, Yim et al., 2023 c. DiffPack: A Torsional Diffusion Model for Autoregressive Protein Side-Chain Packing, Zhang et al., 2023 d. H-Packer: Holographic Rotationally Equivariant Convolutional Neural Network for Protein Side-Chain Packing, Visani et al., 2023 Methods And Evaluation Criteria: The methods align with the problem statement, but the evaluation is limited to only proposed metrics. The study does not separately analyze backbone and side-chain generation performance, which would provide more insights. Theoretical Claims: The mathematical description of the approach appears correct, with no major theoretical issues identified. Experimental Designs Or Analyses: The experimental design adequately demonstrates the model’s advantages, but the evaluation could be improved by comparing the model’s backbone and side-chain design capabilities separately. 
Based on the method description, it allows masking atoms, so it can be trained for backbone atom generation by ignoring side-chain atoms, and it can be trained for side-chain atom generation by setting amino acid types and backbone atoms while updating only side-chain atom positions. This would allow comparison against a wider range of baselines on additional tasks (backbone generation and side-chain packing) to evaluate model performance in more detail on task-dependent metrics. The metrics proposed in the paper do not allow decomposition of model performance for backbone and side-chain atoms. Supplementary Material: The Appendix includes useful details on dataset construction, protein samples, and reasoning behind architecture choices and hyperparameters. Relation To Broader Scientific Literature: The paper contributes to protein design by introducing a model that simultaneously designs all atoms rather than focusing solely on backbone generation. While this direction is not entirely new, innovations such as the atom14 representation and amino acid prediction from atomic coordinates are novel. Essential References Not Discussed: All essential references are discussed. Other Strengths And Weaknesses: The paper is well-written but lacks justification for some architectural choices. For instance, it is unclear why $L_{atom}$ is insufficient, necessitating LDDT-based loss components. Further explanation would improve clarity. Other Comments Or Suggestions: 1. Figures 2A-C should have larger fonts and clearer explanations. 2. The process of setting virtual atom positions in the atom14 representation should be clarified in the main text. Questions For Authors: 1. What is the purpose of computing $f^{templatedistogram}$ during inference in Algorithm 1? While needed for training (LDDT subcomponent), its necessity during inference is unclear. 2. Have you considered evaluating backbone and side-chain generation separately? 
Given the model’s atom masking capability, could it be trained for these tasks independently and compared with corresponding baselines? 3. Have you considered two-step generation (backbone generation followed by side-chain packing) as a baseline in the experiments? If not, why? 4. Why was the LDDT-based loss component included instead of relying solely on $L_{atom}$? A detailed explanation would be helpful. 5. How are virtual atom positions assigned in the atom14 representation? Clarification would aid understanding. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you to the reviewer for your affirmation of our work. We have summarized and replied to your questions as follows. **Q1:** The purpose of computing $f^{template-distogram}$ during inference in Algorithm 1. This describes our self-conditioning mechanism: during sampling, the predicted structure from the previous timestep is converted into **2D distance bin encodings**. These **SE(3)-invariant features** are then initialized as residue-level pair representations via **Algorithm 3 (Template Embedder)**, which guides the diffusion process toward structurally plausible conformations. This approach – transforming sampled structures into template-like pair representations – is similarly employed in RFDiffusion and Proteus to enhance sampling quality. **Q2:** Evaluating backbone and side-chain generation separately. (Question 2 & Experimental Designs Or Analyses) Your suggestion to adapt the Pallatom framework for backbone-only design is insightful. To implement this, we simplified the `atom14` representation to `atom5` – a point cloud representation using four backbone atoms (N, Cα, C, O) plus Cβ. We named this method **Pallatom-bb**. Here are implementation details: - Retrained the model (Pallatom-bb) using identical training data. - Evaluation: PMPNN 1 mode By analogy to the right side of Table 1 and Figure 3, the experimental results for L=60 to 120 and L=150 to 400 are in [Pallatom-bb results](https://anonymous.4open.science/r/Pallatom-rebuttal-114C/README.md). Pallatom-bb achieves high designability within training sequence lengths but exhibits reduced structural diversity. While showing limited extrapolation capability for longer sequences, it marginally outperforms RFDiffusion in overall metrics. 
Crucially, backbone-only point cloud approaches neglect critical side-chain/sequence interactions, whereas atom14's all-atom framework explicitly models structural-sequence constraints, demonstrating superior holistic performance across design benchmarks. **Side-chain generation** We believe there may be a conceptual distinction between *side-chain generation* and *all-atom design* that requires clarification. In canonical definitions: - **Side-chain packing** assumes a *known sequence* and fixed backbone to optimize rotamer configurations - **All-atom design** jointly designs *both sequence and structure* from noise Pallatom's atom14 representation specifically addresses the latter scenario – modeling all-atom coordinates *without prior sequence knowledge*. Our framework achieves high-quality de novo protein design by explicitly resolving the ambiguity between amino acid types and their atomic geometries. While side-chain packing represents an important complementary task, it falls outside the scope of this work's objectives. **Q3:** Have you considered two-step generation (backbone generation followed by side-chain packing) as a baseline in the experiments? If not, why? Our understanding of this methodological distinction is as follows: side-chain packing inherently requires *prior sequence knowledge* (though sequences may originate from various sources). The field-standard approach employs inverse folding models like ProteinMPNN to derive sequences from designed backbones. This baseline methodology effectively decouples the process into two stages: 1. Backbone design (e.g., RFDiffusion) 2. Sequence design (e.g., ProteinMPNN) Under the PMPNN 1 protocol, Pallatom ranks second in designability (slightly below ProteinGenerator) while achieving superior diversity/novelty. For L=150-250, it delivers state-of-the-art performance, and maintains second-ranked performance for L=300-400, demonstrating robust extrapolation beyond training lengths. 
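The self-conditioning step described in the Q1 answer above, converting a predicted structure into SE(3)-invariant distance-bin pair features, can be sketched as follows. The bin edges are illustrative AF2-style values, assumed for this example rather than taken from the paper's Template Embedder:

```python
import numpy as np

def distogram(coords, n_bins=39, d_min=3.25, d_max=50.75):
    """One-hot encoding of pairwise distances into n_bins distance bins.
    coords: (L, 3) predicted C-alpha positions -> (L, L, n_bins) pair features.
    Pairwise distances are invariant to global rotation/translation (SE(3))."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    edges = np.linspace(d_min, d_max, n_bins - 1)  # 38 edges -> 39 bins
    return np.eye(n_bins)[np.digitize(d, edges)]

coords = np.random.default_rng(0).normal(size=(8, 3)) * 10.0
f = distogram(coords)
print(f.shape)  # (8, 8, 39)
```

Features of this form can be injected as initial residue-level pair representations at the next denoising step, which is the template-like self-conditioning strategy described in the rebuttal.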
**Q4:** Ablation study for LDDT loss. Below are the comparative results from sampling 100 L=100 proteins after removing the smooth LDDT loss: | Method | DES-aa | DES-bb(w) | DES-bb(wo) | |-----------|--------|-----------|------------| | Pallatom | 87% | 95% | 95% | | wo-LDDT | 84% | 95% | 95% | The training observations revealed that the LDDT loss maintained consistently low values throughout training, suggesting it may act as an implicit violation loss to mitigate side-chain atomic clashes. Ablation experiments confirm the LDDT loss's role in enhancing the quality of Pallatom-generated full-atom proteins, particularly by preserving critical sequence-structure self-consistency (DES-aa reduction). **Q5:** Clarification for atom14 representation. We have presented a comparison figure and detailed description showing the use of atom14 to represent 20 standard amino acids in the [ATOM14](https://anonymous.4open.science/r/Pallatom-rebuttal-114C/README.md). --- Rebuttal Comment 1.1: Comment: I thank the authors for their responses and for preparing the metrics to address my questions. I am satisfied with their answers and have decided to raise my score. --- Reply to Comment 1.1.1: Comment: Dear Reviewer jKdH, We sincerely appreciate your time in reviewing our response and for your encouraging feedback! We're pleased to confirm we've resolved the issues you raised. Your constructive observations provided invaluable insights that significantly strengthened the quality of our research. Please know your thoughtful engagement with our work has been immensely valuable to us. Best, Authors
Summary: This paper introduces a novel diffusion model for generating all-atom protein backbones and side-chains, enabling simultaneous sampling of protein structures and their corresponding amino acid sequences. The approach for all-atom protein generation relies on two key elements. First, an atom14 representation, which encodes each residue as a 14×3 vector specifying the coordinates of 14 heavy atoms. In ground truth data, missing atoms default to the Cα coordinates. Second, a state-of-the-art denoiser architecture inspired by AlphaFold 3. This paper demonstrates that a single-layer neural network operating on atomic embeddings, trained jointly with the denoiser by cross-entropy, produces sequences that are highly consistent with the generated structures. Empirical results show significant improvements in protein structure–sequence co-generation, underscoring the effectiveness of this approach. Claims And Evidence: * The main claim of the paper is that "full-atom coordinates encodes essential protein information". The comparison with protein backbone generation techniques as well as two-step structure-sequence prediction strategies using a tool like ProteinMPNN convincingly demonstrates that modelling all-atom 3D coordinates is an efficient strategy to generate protein structures alongside their amino acid sequences. * The comparison with Protpardelle supports the effectiveness of the proposed denoiser architecture. Although there are potential confounding factors (since Protpardelle differs in several other aspects), the observed performance improvements strongly suggest that the novel denoiser design is a key contributor. * The ablation study on the atom14 representation provides compelling evidence of its effectiveness. Methods And Evaluation Criteria: **Evaluation Criteria** * The paper employs standard protein design metrics to assess designability and diversity, but it also adapts them to assess both all-atom structure and sequence generation. 
* Using two sets of metrics depending on whether ProteinMPNN is used for sequence design is a good idea. * This is complemented by an extensive benchmark against state-of-the-art methods and ablation studies (e.g., on the atom14 representation and recycling process). Theoretical Claims: N/A Experimental Designs Or Analyses: * Given the sensitivity of diffusion models to hyperparameter choices, it is great that the authors reported how different sampler settings affect protein design metrics, as shown in Table 2. * While the model is trained on lengths up to 128 residues, an evaluation of the model’s generalisation capabilities on longer sequences is also included. * Appendices (B, E, F, G) include detailed experimental settings and further analyses that facilitate reproducibility. Supplementary Material: Yes, I reviewed appendices A, B, C and D. Relation To Broader Scientific Literature: **Structure-sequence co-generation** The paper is well contextualised within the existing literature on protein structure–sequence co-generation, referencing methods such as Protpardelle, Multiflow, and ProteinGenerator, which are also included in the benchmark. Each of these methods differs from the proposed approach in some respect. E.g. Multiflow does not operate at the level of all-atom resolution. The closest work is Protpardelle, which introduced an all-atom diffusion model for the purpose of simultaneously designing protein structures and sequences. However, this paper employs a substantially different denoiser parameterisation, an alternative way of representing protein atoms and a distinct training strategy for sequence prediction. This combination of ideas leads to significant performance improvements on standard protein design metrics. **Inspiration from AlphaFold 2 and 3** While AF2 and AF3 predict backbone and sidechain coordinates, and may employ a similar atom14 representation, they operate with a fixed amino acid sequence as input. 
In contrast, Pallatom must generate both the sequence and the corresponding 3D structure starting from an a priori undefined number of heavy atoms, which introduces additional challenges in how the atom14 representation is leveraged. Essential References Not Discussed: The key contribution is a model for all-atom protein generation. It does not cite [1] and [2], respectively published at NeurIPS 2023 and NeurIPS 2024. Additionally, in the context of protein pocket design or peptide design, there are other approaches such as FAIR [3], PocketFlow [4], and PepFlow [5] for all-atom sequence-structure co-generation. These are not cited. [1] Martinkus, Karolis, et al. "AbDiffuser: full-atom generation of in-vitro functioning antibodies." Advances in Neural Information Processing Systems 36 (2023): 40729-40759. [2] Lu, Amy X., et al. "Controllable All-Atom Generation of Protein Sequence and Structure from Sequence-Only Inputs." [3] Zhang, Zaixi, et al. "Full-atom protein pocket design via iterative refinement." Advances in Neural Information Processing Systems 36 (2023): 16816-16836. [4] Zhang, Zaixi, Marinka Zitnik, and Qi Liu. "Generalized Protein Pocket Generation with Prior-Informed Flow Matching." The Thirty-eighth Annual Conference on Neural Information Processing Systems. [5] Li, Jiahan, et al. "Full-Atom Peptide Design based on Multi-modal Flow Matching." Forty-first International Conference on Machine Learning. Other Strengths And Weaknesses: **Strengths** * As the paper's title suggests, this work indeed represents a significant advance in protein design. It addresses the inherently complex challenge of joint structure–sequence generation with a conceptually simple and principled approach, while confining the intricate and ingenious parameterisation to the denoiser component of the diffusion model. This balance between overall simplicity and engineering sophistication is truly impressive. 
**Weaknesses** While the paper’s contributions appear significant, the presentation could benefit from clearer exposition: * A figure to illustrate an amino acid structure, with backbone and side-chain atoms, alongside its atom14 representation would improve clarity. * The parameterisation of the denoiser contains specialised jargon (e.g. "traversing embeddings" or "feature broadcasting process") that might be more accessible if explained in simpler terms, with an emphasis on the difference with AF3. * Some methodological details appear to be missing or insufficiently referenced. For instance, what's the algorithm to obtain $r^{\text{aligned}}$ for the aligned MSE loss at line 257? Could this be "Algorithm 28: Weighted Rigid Align" from AF3? * Additionally, in some sections, the wording may give the impression that certain methods are new, even though they appear to have been documented previously. For instance, the mention of "we introduce the smooth lddt loss" at line 269 may actually refer to "Algorithm 27: Smooth LDDT Loss" from AF3. Other Comments Or Suggestions: * Is it possible to fix the indentation in Algorithm 1? Questions For Authors: * Is it possible to use guidance with sequence-based information? * Could you include a discussion on potential limitations of the approach? * Can you contrast your all-atom sequence-structure co-generation approach to that of [4] and [5]? Does the atom14 representation provide an advantage over the residue representation used in these papers? [4] Zhang, Zaixi, Marinka Zitnik, and Qi Liu. "Generalized Protein Pocket Generation with Prior-Informed Flow Matching." The Thirty-eighth Annual Conference on Neural Information Processing Systems. [5] Li, Jiahan, et al. "Full-Atom Peptide Design based on Multi-modal Flow Matching." Forty-first International Conference on Machine Learning. Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: We sincerely thank the reviewers for their positive evaluation of our work. Regarding the suggestions and questions raised, we provide point-by-point responses below. **Q1:** Is it possible to use guidance with sequence-based information? Based on our ablation experiments, `hybrid14` exhibits extremely low designability. Therefore, we do not recommend introducing sequence guidance information when constructing side-chain atomic coordinates, particularly sequences estimated from the disordered structure at intermediate diffusion timesteps. According to the experimental results from Pallatom, we believe that diffusion on `atom14` is sufficient to learn effective sequence information. **Q2:** Could you include a discussion on potential limitations of the approach? We acknowledge the following potential limitations in the current study: 1. While we adopt AlphaFold3's broadcast and aggregation operations to integrate atomic-level and residue-level features, the details of atomic information may be reduced during global attention due to averaging effects from feature aggregation. 2. Our local attention mechanism follows AF3's framework and focuses solely on sequence-level neighboring atoms. Incorporating spatially adjacent atomic information may further enhance the model. 3. The atom14 representation might inadequately capture polar side chains and complex interaction patterns in certain scenarios. **Q3:** Can you contrast your all-atom sequence-structure cogeneration approach to that of PocketFlow [4] and PepFlow [5]? Does the atom14 representation provide an advantage over the residue representation used in these papers? We recognize the practical significance of works like [4] and [5], which focus on full-atom design for specific applications such as pocket-ligand or protein-peptide interactions, where side-chain details are critical. 
However, our current work exclusively addresses *unconditional all-atom protein design* and does not yet support condition-guided pocket design or peptide-binding tasks involving small molecules. We plan to explore these application-specific extensions in future studies. **Q4:** A figure to illustrate an amino acid structure, with backbone and side-chain atoms, alongside its atom14 representation, would improve clarity. To enhance clarity, we added a comparative figure and supplementary descriptions detailing ATOM14's encoding of the 20 canonical amino acids in [ATOM14](https://anonymous.4open.science/r/Pallatom-rebuttal-114C/README.md). **Q5:** Some methodological details appear to be missing or insufficiently referenced. For instance, what's the algorithm to obtain $r^{\text{aligned}}$ for the aligned MSE loss at line 257. Could this be "Algorithm 28: Weighted Rigid Align" from AF3? We employ the standard **Kabsch** algorithm for rigid alignment, which minimizes the RMSD between structures through singular value decomposition (SVD). Unlike AlphaFold3, we do not implement weighting schemes during the alignment process. **Q6:** Additionally, in some sections, the wording may give the impression that certain methods are new, even though they appear to have been documented previously. For instance, the mention of "we introduce the smooth lddt loss" at line 269 may actually refer to "Algorithm 27: Smooth LDDT Loss" from AF3. Yes, we adopted a simplified version of the smooth LDDT loss from AlphaFold3, adapted exclusively for protein structures (excluding other biomolecular interactions). **Q7:** Essential References Not Discussed. We will make the following amendments to the related work. > ...ProteinGenerator employs Euclidean diffusion on one-hot encoded sequences combined with a structure prediction module to generate all-atom structures. Similarly, PLAID [2] achieves sequence design through latent space diffusion within ESM2 while decoding full-atom protein configurations. 
... > Beyond protein modeling, recent works leverage atomic-level representations for localized design tasks. For instance, Abdiffuser [1] introduces a universal four-atom side-chain template for antibody CDR redesign, preserving dihedral freedom via pseudo-carbon atoms while integrating ideal amino acid templates for rotamer construction. FAIR [3] adopts a two-stage approach for protein pocket design: initial backbone/sequence generation followed by iterative refinement to ensure sequence-side-chain consistency. PocketFlow [4] extends this by simultaneously designing pocket sequences and all-atom structures through flow matching across backbone, side-chain torsion angles, and sequences. PepFlow [5] similarly utilizes multi-modal flow matching (torsion angles + sequences) for peptide design. However, such decoupled representations risk sequence-structure conflicts and steric clashes. Pallatom addresses these limitations through its atom14 representation, which intrinsically fuses structural and sequential modalities to minimize explicit conflicts.
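As context for the rigid alignment described in the Q5 answer above (standard Kabsch, no weighting, RMSD minimized via SVD), here is a minimal sketch. The function name and interface are illustrative, not the authors' implementation.

```python
import numpy as np

def kabsch_align(P, Q):
    """Align point set P onto Q (both N x 3) with the Kabsch algorithm.

    Returns the rotated and translated copy of P that minimizes the
    RMSD to Q, using an SVD of the cross-covariance matrix.
    """
    # Center both point clouds at the origin.
    Pc = P - P.mean(axis=0)
    Qc = Q - Q.mean(axis=0)
    # Cross-covariance matrix and its SVD.
    H = Pc.T @ Qc
    U, S, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so R is a proper rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    # Rotate the centered points and translate onto Q's centroid.
    return Pc @ R.T + Q.mean(axis=0)
```

Aligning a prediction onto the ground truth this way before taking the MSE makes the loss invariant to global rotation and translation, which is the role the aligned MSE loss plays here.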
Summary: The paper presents Pallatom, an end-to-end all-atom generative model that jointly learns protein sequences and their 3D coordinates. It uses an “atom14” representation to standardize side-chain atoms and employs a diffusion-based approach on Cartesian coordinates. A dual-track architecture updates residue-level and atomic-level embeddings through iterative decoding, incorporating a differentiable recycling mechanism that refines pairwise features as the structure is denoised. The model directly predicts side-chain positions and amino acid types during generation, showing improvements in designability, diversity, and novelty compared to existing methods like Protpardelle and Multiflow. Ablation studies confirm that the atom14 representation and recycling process substantially improve co-design quality. Claims And Evidence: 1. The authors’ main claim in the abstract—that Pallatom “excels in key metrics of protein design, showing significant improvements across the board”—is not fully supported. Figure 3 indicates that Pallatom underperforms Multiflow for sequences exceeding 300 residues, so it does not consistently achieve better results in designability, diversity, and novelty. The authors suggest that the model’s training crop size of 128 residues may explain the poor performance on longer proteins, but this constraint was self-imposed. Without additional experiments or evidence showing improved outcomes when using larger crop sizes, the claim that Pallatom would rival or exceed Multiflow in long-sequence regimes remains unsubstantiated. 2. The authors’ claim to have introduced a novel model architecture is equally problematic. The core framework seems largely adapted from AlphaFold3, modified mainly to handle settings without input sequences or MSAs. Several portions of the text may overstate these adaptations, and many of the algorithms included in the appendix appear to be taken verbatim from the AlphaFold3 supplementary. 
The manuscript should either reduce the duplicated content or provide a compelling justification for why extensive copying is required. 3. While the authors claim that the proposed model exhibits excellent training efficiency in the Abstract, the only empirical metric provided is that training takes approximately ten days (see Table 5). There is no comparative analysis against other methods in terms of training duration or computational resources, making it difficult to evaluate the claimed efficiency. Methods And Evaluation Criteria: More thorough assessments of sidechain geometry and validity are needed, especially since PallAtom diverges from other sequence–structure co-design methods (e.g., MultiFlow, CarbonNovo) in its treatment of sidechains. Specifically: 1. Sidechain steric clashes: The authors should measure potential clashes among sidechains at different positions, as these can significantly impact a protein’s stability and folding kinetics. 2. Amino-acid validity: The authors should confirm that the designed sidechains adhere to valid bond patterns matching one of the 20 canonical amino acids. Ensuring chemically feasible bond arrangements is crucial for real-world applicability. 3. Consistency between sidechain design and predicted residue type: The authors should verify that the specified amino-acid identity aligns correctly with the geometry of the modeled sidechain, thereby avoiding mismatches (e.g., a backbone labeled “Arg” with sidechain atoms arranged as “Phe”). Theoretical Claims: I did not find any extensive proofs or purely theoretical derivations in the submission that would require rigorous verification of correctness. Experimental Designs Or Analyses: I evaluated the experimental design and noted some important gaps in the authors’ comparative analyses. Specifically, the authors should provide comparisons to widely used structure-focused generative models like FoldFlow2, Proteus, CarbonNovo, and Genie2. 
Supplementary Material: I have thoroughly read the supplementary material. My primary concern is that substantial portions of the algorithmic descriptions appear to be taken directly from AlphaFold3’s supplements. Relation To Broader Scientific Literature: The protein design method will benefit drug discovery by enabling the development of novel therapeutic proteins with tailored functions and improved properties. Essential References Not Discussed: When introducing the “atom14” representation in the Introduction, the authors should cite the AlphaFold2 paper, as it originally introduced both the “atom14” concept and corresponding terminology. Providing this reference at the first mention of the “atom14” representation would offer appropriate historical context and credit. Other Strengths And Weaknesses: Weaknesses: 1. The authors have not clearly described the motivation for all-atom design in the Introduction. Are there any specific applications where all-atom design has clear benefits over the step-wise design of sequence, structure, and sidechain packing? 2. The technical novelty is very limited. The core model architecture seems largely adapted from AlphaFold3, modified mainly to handle settings without input sequences or MSAs. Substantial portions of the algorithmic descriptions appear to be taken directly from AlphaFold3’s supplements. 3. The paper does not compare its approach with several widely used protein-design methods, such as FoldFlow2, Genie2, CarbonNovo, and Proteus. Other Comments Or Suggestions: I have no other comments. Questions For Authors: 1. Why did the authors set such a small crop size of 128? What motivated this design choice? There are numerous applications that require proteins longer than 128 amino acids—for example, beta-barrel or helical-bundle designs intended for ligand binding. 
Simply stating that the work focuses on monomer proteins that can be easily synthesized using oligo-pool methods overlooks a significant range of applications where larger or more complex proteins are necessary. 2. As the authors claim training efficiency in the Abstract, did they employ any speed-up techniques to handle the local attention for the atom-level encoder/decoder? Directly applying a masking strategy to all flattened atoms would likely consume substantial memory and degrade computational efficiency. If you did implement optimizations, please consider sharing an anonymous link to your code so reviewers can examine these strategies. Otherwise, it might be that the crop size of 128 was chosen primarily to mitigate the memory and computational overhead inherent in an atom-level encoder/decoder—especially for longer proteins. Providing further details on how this trade-off is managed would strengthen the paper’s claims regarding training efficiency. Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: Supp. figures/tables: [LINK](https://anonymous.4open.science/r/Pallatom-rebuttal-114C/README.md). Not repeated hereafter. **Q1:** The authors' self-defined 128-residue training crop size leads to inferior performance, and no validation evidence with larger crop sizes is provided. We are sincerely sorry that our current GPU resources and funding budget cannot support large-scale training. However, as Reviewer U5EF acknowledged, we have conducted evaluations at 3× the training sequence length while still achieving comprehensive superiority at 2× the training length, demonstrating Pallatom's robust sampling capability. More critically, Multiflow's training set includes proteins of up to 384 residues. It should be noted that Multiflow's performance at its own training length distribution (L=200, 250) remains inferior to Pallatom's, even within its optimized range. **Q2:** The model architecture lacks innovation, primarily following AlphaFold3's core framework (with some algorithmic descriptions directly copied from AlphaFold3's appendix). We emphasize that **Pallatom is a structure generation model**, fundamentally distinct from AlphaFold3's structure prediction paradigm. While reviewers noted high similarities in network frameworks (e.g., shared operators), they overlooked critical differences in computational workflows and objectives: - **Divergence**: Pallatom achieves *de novo* all-atom protein design with *unknown sequences*, whereas AF3 decodes pre-encoded sequence/MSA features into coordinates (e.g., replacing IPA with diffusion modules). To address all-atom representation for unknown sequences, we innovatively designed the `atom14` representation strategy – a foundational contribution that should not be overlooked. For architectural innovations, the core module of Pallatom lies in the **AtomDecoder** unit (Figure 1B), which jointly updates residue- and atom-track single/pair representations. 
This differs fundamentally from AF3's diffusion module through: 1. **Dynamic pair representation**: Pallatom's pair features derive from *predicted intermediate structures* (with triangle updates), while AF3 uses static Pairformer-encoded pair features. 2. **Traversing embedding**: A novel mechanism that avoids accumulating broadcast residue/atom features across layers via skip connections – absent in AF3. In Appendix C.Algorithms, we explicitly highlight AF3-derived base components in blue. All other architectures and workflows are original to Pallatom. **Q3:** Comparisons to widely used structure-focused generative models like FoldFlow2, Proteus, CarbonNovo, and Genie2 are insufficient. Genie2 (unpublished work) was excluded from comparison, while the other methods were benchmarked. Pallatom performs best in the vast majority of cases. Tables are at **New comparison results** in LINK. **Q4:** Validation of designed protein structural plausibility (side-chain steric clash detection) and chemical bond validity. Through comprehensive statistical analyses of bond length distributions, bond angle distributions, chi angle distributions, and conformational clashes, we demonstrate that the side-chain structures generated by Pallatom rigorously adhere to protein physicochemical constraints. Detailed visualizations and analyses are in **Sidechain analysis** in LINK. **Q5:** Consistency between sidechain design and predicted residue type. The DES-aa metric we employ intrinsically reflects the consistency between designed side chains and predicted sequences. To compute DES-aa, we compare Pallatom-generated structures (converted from atom14 to real atomic types) with their ESMFold-folded counterparts using all-heavy-atom RMSD. Consequently, mismatches (e.g., designed Arg vs. predicted Phe) induce aaRMSD deviations and reduce DES-aa values. 
To verify consistency between sidechain design and predicted residue type, we sampled 1,000 non-training proteins (<256 residues) from AFDB and conducted the following additional experiments: 1. Introduced minimal noise to native structures 2. Fed atom14 point clouds to Pallatom for single-step inference for sequence prediction 3. Calculated the overall average sequence recovery rate (AAR) between predictions and ground truth, and the accuracy (ACC) of amino acid recognition for four pairs of geometrically similar conformations (EQ, DN, TV, SC). Result: AAR = 93.3%, ACC = 97.4%. This shows Pallatom achieves sequence-sidechain consistency. **Q6:** Concerns regarding insufficient empirical validation of training efficiency and provision of optimized code. We will revise the description to emphasize **sampling efficiency**. We have provided sampling efficiency comparisons showing superiority over most baseline methods (Table 7). We have released optimized code for local attention at **GPU Memory Optimization** in LINK, offering a crop-based optimization strategy (vs. basic masking) that achieves a **73% VRAM reduction**. --- Rebuttal Comment 1.1: Comment: I appreciate the authors’ efforts in their rebuttal and have closely examined each of their responses. Unfortunately, my main concerns remain: 1. The manuscript claims that PallAtom outperforms other methods in the Abstract and the Results sections. However, Figure 3 indicates that PallAtom underperforms Multiflow for sequences exceeding 300 residues, and this performance gap widens for sequences longer than 350 residues. Although the authors repeatedly attribute this shortfall to PallAtom’s smaller crop size during training, they provide no experimental evidence to support this explanation. Other factors—such as model architecture, training strategies, or hyperparameters—could also be responsible. Notably, other methods that use larger crop sizes also differ in performance, indicating that additional variables must be considered. 
Additionally, the newly presented experiments in the rebuttal show that PallAtom yields worse results than Proteus and FoldFlow2 on long proteins when evaluated using the Designability metric. This finding further reinforces my concern. Moreover, as mentioned in my previous comments, many applications require designing long proteins, such as beta-barrel or helical-bundle structures for ligand binding. Improving long-protein design capabilities is one of the trends in the literature. Because PallAtom performs poorly on long proteins compared to previously published methods, I am not convinced that the author’s main claim holds. 2. While the Abstract initially highlights the training efficiency of PallAtom, there is no data supporting this claim. In the rebuttal, the authors shift focus to inference efficiency, yet they do not compare their approach with other established baseline methods (e.g., Genie1/2, FoldFlow2, Proteus). Although they benchmark the Designability metric against these methods, they do not benchmark inference efficiency. Without such comparisons, the community cannot fully assess whether PallAtom truly provides significant efficiency gains. 3. The authors have overlooked my recommendation to cite AlphaFold2 where “atom14” is first mentioned. Although AlphaFold2 deals with structure prediction and PallAtom addresses protein design, the concept of “atom14” originates in AlphaFold2 and warrants proper acknowledgment. Similarly, the smooth LDDT loss appears to be adapted from AlphaFold3, yet neither the main text nor the supplemental material offers an appropriate citation. These omissions disregard the historical context and original development of the methods being repurposed. 4. The authors have also overlooked my question on the motivation behind full-atom design. 
At the very least, the authors should describe specific applications where a full-atom approach offers clear advantages over a stepwise design process (i.e., separately designing sequence, structure, and side-chain packing). --- Reply to Comment 1.1.1: Comment: **Response to Q1** In the two rounds of rebuttals, we observed that reviewers consider the unconditional designability when **L>350** to be critically important. This motivation likely stems from reviewers' belief that many applications require the design of long proteins, which also aligns with one of the current trends in the literature. We acknowledge this prospect. However, it is crucial to recognize that in **current practical applications** of de novo protein design methods, there is **no substantial evidence** supporting the claimed necessity for long protein design. The reviewers' motivation and rationale in this regard might **exceed the actual requirements** of existing applications. Taking RFdiffusion—a widely adopted and experimentally validated method—as an example, we briefly summarize some recent applications: 1. In RF-AA's de novo binder design targeting small molecules, the generated proteins typically range in length from **150-210** residues. [1] 2. For β-barrel protein design, including soluble β-barrels with 4 to 8 strands, total lengths range from **40-90** residues, while transmembrane nanopores are designed with **<300** residues. [2] 3. Miniprotein binders (55–65 residues) have been engineered to target lethal toxins like TcsL. [3] 4. A pentapeptide motif was first designed and subsequently extended to generate GPCR antagonists with lengths of **65–75** residues. [4] 5. Pseudocyclic proteins (**120-260** amino acids) were developed to address challenges in the binding and sensing of molecules. [5] [1] Rohith, et al. Generalized biomolecular modeling and design with RoseTTAFold All-Atom. 
Science 2024. [2] David E., et al. Parametrically guided design of beta barrels and transmembrane nanopores using deep learning. bioRxiv 2025. [3] Robert J., et al. De novo designed inhibitor confers protection against lethal toxic shock. bioRxiv 2024. [4] Edin, et al. De novo design of miniprotein agonists and antagonists targeting G protein-coupled receptors. bioRxiv 2025. [5] Linna, et al. Binding and sensing diverse small molecules using shape-complementary pseudocycles. Science 2024. Interestingly, RFdiffusion does not excel in the designability of long proteins—a key focus emphasized by reviewers—and only marginally outperforms Pallatom when L=400. **Response to Q2** Additional comparisons of sampling time have been provided in the [New Sampling Time](https://anonymous.4open.science/r/Pallatom-rebuttal-114C/README.md), showing Pallatom’s clear superiority over the comparable CO-DESIGN model, CarbonNovo. **Response to Q3** Due to character limits, we could only respond to reviewer concerns in a single round of rebuttal, which led to the omission of less critical issues in the initial response. The reviewer suggested that our proposed `atom14` concept originates from AlphaFold 2 (AF2). However, the term `atom14` does not appear in the AF2 paper; it is instead a data format used in the code to record amino acid coordinates, not an encoding method input to the network. In reality, AF2 employs backbone frames and side-chain χ-angles to encode all-atom representations. We are confident the reviewers can discern the distinction between AF2 and Pallatom: a data storage format versus a new all-atom representation method for unknown amino acids. To avoid conflating Pallatom’s `atom14` with AF2’s framework in readers’ minds, we deliberately omitted this detail in the **Preliminaries** section. As emphasized, the advancement of `atom14` lies in its approach to modeling all atoms and how this representation is encoded and learned within the network. 
Our goal is to direct attention to the methodology of all-atom modeling in protein design. Regarding the "smooth LDDT" implementation, we adopted a simplified version from AF3, tailored exclusively for protein contexts. We will supplement the relevant citation. **Response to Q4** Responses were omitted in the initial round due to length constraints. The adoption of all-atom representations for biomolecules is an emerging trend in the field. In structure prediction, methods like AF3 and RoseTTAFold All-Atom have expanded atomic frameworks to accommodate non-protein systems. In design, recent efforts—such as pocket design [1] and short peptide engineering [2]—have begun incorporating full-atom protein modeling. These advances stem from the recognition that side chains play a critical role in mediating interactions between proteins and other biomolecules. Consequently, extending de novo design methodologies to fully account for all-atom protein architectures holds significant implications for functional biomolecule engineering. [1] Zhang, Zaixi, et al. Full-atom protein pocket design via iterative refinement. NeurIPS 2023. [2] Li, Jiahan, et al. Full-Atom Peptide Design based on Multi-modal Flow Matching. ICML 2024.
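The atom14 convention discussed throughout this exchange (a fixed 14×3 coordinate slot per residue, with missing heavy atoms defaulting to the Cα coordinate) can be sketched minimally as follows. The atom-name table here is an assumption for illustration and covers only glycine and alanine; the full tables span all 20 canonical amino acids, and the function name is hypothetical.

```python
import numpy as np

# Illustrative heavy-atom name lists for two residue types only;
# real atom14 tables define an ordering for all 20 amino acids.
ATOM14_NAMES = {
    "GLY": ["N", "CA", "C", "O"],
    "ALA": ["N", "CA", "C", "O", "CB"],
}

def to_atom14(res_type, coords_by_name):
    """Pack a residue's heavy atoms into a fixed 14x3 array.

    Slots beyond the residue's real atoms (and any atoms missing
    from the input) default to the CA coordinate, mirroring the
    padding convention described for the atom14 representation.
    """
    ca = np.asarray(coords_by_name["CA"], dtype=float)
    out = np.tile(ca, (14, 1))  # start with every slot at CA
    for i, name in enumerate(ATOM14_NAMES[res_type]):
        if name in coords_by_name:
            out[i] = coords_by_name[name]
    return out
```

Because every residue type occupies the same fixed-size array, a model can denoise atom coordinates before the amino acid identity is known, which is the property the rebuttal emphasizes as distinguishing this representation from AF2's frame-plus-χ-angle encoding.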
MTSTRec: Multimodal Time-Aligned Shared Token Recommender
Accept (poster)
Summary: The paper introduces MTSTRec, a transformer-based multimodal recommendation framework that temporally aligns different modalities to improve sequential recommendations. Unlike existing methods that perform either early or late fusion, MTSTRec employs a Time-aligned Shared Token (TST) module for intermediate fusion, ensuring better cross-modal alignment while preserving the unique contributions of each modality. Extensive experiments on multiple datasets demonstrate its superiority over baseline models, with ablation studies highlighting the importance of different modalities and the effectiveness of the TST module. ## update after rebuttal Since the authors did not add the new results I mentioned in my comments, I will keep the score unchanged. Claims And Evidence: The authors provide evidence for the effectiveness of the proposed time-aligned fusion module through ablation studies on multiple datasets. However, two key issues remain: 1. The discussion on multimodal fusion strategies is limited, particularly regarding mid-fusion, which is central to this work. A more thorough comparison with existing approaches would strengthen the contribution. 2. The experiments lack baselines from multimodal sequential recommendation models, making it difficult to fully assess the model’s novelty and effectiveness. Including such baselines would provide a clearer evaluation of its improvements. Methods And Evaluation Criteria: Methods: The TST module effectively integrates multimodal features while maintaining temporal consistency, and extensive ablation studies validate the impact of individual components. Evaluation Criteria: Multiple datasets are used for evaluation, including one public dataset and two new datasets. However, the three types of multimodal fusion mentioned by the authors should be represented by corresponding baselines.
Theoretical Claims: The paper provides detailed formulations of the model and the calculation methods for evaluation metrics in both the main text and supplementary materials. However, the theoretical analysis and proof are somewhat limited, with a primary focus on experimental validation. Experimental Designs Or Analyses: 1. Multiple datasets are used for evaluation, including one public dataset and two new datasets. 2. The detailed ablation studies effectively demonstrate the impact of individual features and fusion mechanisms. 3. The paper lacks a thorough comparison with more state-of-the-art multimodal recommendation methods, limiting the context for evaluating the model's novelty. Supplementary Material: 1. The supplementary materials provide valuable methods for extracting multiple modalities, particularly the large model prompt templates, which are of reference value. 2. The detailed description of Benchmark Models and implementation details improves the reproducibility of the study, ensuring easier validation of the results. 3. The experimental evaluation of Shared Tokens at various ratios provides useful insights into their effectiveness. Relation To Broader Scientific Literature: 1. Multimodal Fusion: It proposes a time-aligned fusion module, improving the integration of various modalities in recommendation systems, which advances multimodal learning research. 2. LLM for Feature Extraction and Processing: The work demonstrates the use of large models for extracting text information and handling different feature types, contributing to more effective feature processing in recommendation systems. Essential References Not Discussed: The paper lacks a sufficiently comprehensive discussion of relevant literature on multimodal fusion strategies, including but not limited to the following references, which were either not discussed or mentioned without being compared as baselines: [1] Zhou, Xin, and Zhiqi Shen.
"A tale of two graphs: Freezing and denoising graph structures for multimodal recommendation." Proceedings of the 31st ACM International Conference on Multimedia. 2023. [2] Zhong, Shanshan, et al. "Mirror Gradient: Towards Robust Multimodal Recommender Systems via Exploring Flat Local Minima." Proceedings of the ACM on Web Conference 2024. 2024. [3] Jiang, Hao, et al. "What aspect do you like: Multi-scale time-aware user interest modeling for micro-video recommendation." Proceedings of the 28th ACM International Conference on Multimedia. 2020. Other Strengths And Weaknesses: Strengths 1. The time-aligned multimodal fusion module proposed in this paper is highly innovative, and experimental results demonstrate the effectiveness of the token. 2. The paper provides a valuable analysis of multimodal sequential recommendation, offering insights that inspire further research into the role of multimodal fusion at different stages. Weaknesses 1. The discussion of related work is insufficient, with a lack of thorough literature review. 2. Additionally, the experimental comparisons with similar models are inadequate, leaving the innovation and effectiveness of the approach open to further validation. Other Comments Or Suggestions: 1. The writing is clear and flows smoothly, making the paper easy to follow for readers. 2. The discussion of related work is insufficient, and the experimental comparisons are lacking. More attention should be given to the discussion of multimodal fusion techniques. Questions For Authors: 1. How significant is the contribution of large models in extracting text information to the overall performance of the proposed model? 2. How does the performance of other LLM-based multimodal sequential recommendation models compare to your model? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the thoughtful and constructive feedback. We are especially grateful for the recognition of the strengths of our work, particularly the innovation of the time-aligned multimodal fusion module and its effectiveness demonstrated through our experiments. We also appreciate your kind comments on the clarity and structure of the writing, as well as the usefulness of the supplementary materials. Below, we address each of your comments and suggestions in detail. **[W1]** Insufficient discussion of related work [Weaknesses] [Experimental Designs Or Analyses 3] **[R1]** Thank you for your helpful suggestion. The mentioned works ([1], [2], [3]) were included in our original related work section, but due to space limitations, we were unable to discuss them in depth. In the revised version, we will enhance this section by more clearly comparing these methods with ours and highlighting the unique aspects of MTSTRec, particularly its time-aligned mid-fusion design. We appreciate your feedback and will further elaborate on these references if space allows in future revisions. **[W2]** lack of multimodal sequential baselines [Weaknesses] [Claims And Evidence 2] [Essential References Not Discussed] **[R2]** We respectfully contend that the current experimental design adequately supports the evaluation of MTSTRec’s novelty and effectiveness within the defined scope. In Section 4.1 (Experimental Settings), we included baselines such as SASRec and BERT4Rec (single-modal sequential models), enhanced versions SASRec+ and BERT4Rec+ (early fusion with multimodal features), and MMMLP (a state-of-the-art late-fusion multimodal model). Additionally, MMMLP+ incorporates the same multimodal features as MTSTRec, ensuring a fair comparison across fusion strategies. 
These models, tested on three diverse datasets (Section 4.2), provide a solid benchmark for assessing MTSTRec’s improvements, as evidenced by its superior performance (e.g., NDCG@5 gains of 3.4%-43.7% over baselines). **[W3]** Insufficient discussion and comparison of multimodal fusion strategies, especially mid-fusion [Claims And Evidence 1] [Methods And Evaluation Criteria] [Other Comments Or Suggestions 2] **[R3]** Thank you for your feedback. We believe the current content adequately supports MTSTRec’s contributions. Section 2.2 outlines early (VBPR), mid (MM-Rec), and late (MMMLP) fusion approaches, aligning with the three fusion types. Our focus is on the novel Time-aligned Shared Token (TST) module, which distinguishes our mid-fusion by temporally aligning modalities—unlike prior mid-fusion works like MM-Rec. Additionally, Table 4 compares various mid-fusion methods (e.g., TST (1:1), TST (1:2), TST (1:4), and bottlenecks), demonstrating TST’s superiority. **[Q1]** Impact of large language models on textual feature extraction and overall performance [Questions] **[R4]** Referring to Table 5, larger language models (LLMs) consistently outperform smaller text encoders by capturing more nuanced context, boosting scores on HR@5 and NDCG@5. Despite higher computational overhead, their richer embeddings better align user preferences with item attributes, closing a notable performance gap. (Llama 3.1 excelling in MTSTRec’s text extraction, with NDCG@5: 0.8754, HR@5: 0.8340, outperforming BERT (NDCG@5: 0.8585, HR@5: 0.108) and others. Its richer embeddings improve preference alignment by ~1.97% in NDCG@5) **[Q2]** Comparison with other LLM-based multimodal sequential recommendation models [Questions] **[R5]** We acknowledge recent progress in LLM-based recommenders such as LLM-Rec. 
While these approaches inspire our work, MTSTRec is designed for multimodal sequential recommendation, integrating not only text but also images, prices, and item IDs via our proposed TST module. In contrast, LLM-Rec focuses on text-only scenarios with different modeling objectives. It is worth noting that LLM-Rec can be seen as analogous to our prompt encoder. Therefore, our ablation studies—comparing with early fusion, late fusion, and using only the prompt encoder—can be viewed as an indirect comparison with LLM-Rec.
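As context for the HR@k and NDCG@k figures quoted throughout these threads, here is a minimal sketch of both metrics for the single-ground-truth next-item setting common in sequential recommendation (function names and data are illustrative, not the authors' code):

```python
import math

def hit_rate_at_k(ranked_items, target, k):
    """HR@k: 1 if the ground-truth next item appears in the top-k list."""
    return 1.0 if target in ranked_items[:k] else 0.0

def ndcg_at_k(ranked_items, target, k):
    """NDCG@k with a single relevant item: IDCG = 1, so the score
    reduces to 1 / log2(rank + 1) at the hit position, or 0 on a miss."""
    for rank, item in enumerate(ranked_items[:k], start=1):
        if item == target:
            return 1.0 / math.log2(rank + 1)
    return 0.0

# Target ranked 2nd: HR@5 = 1, NDCG@5 = 1 / log2(3) ≈ 0.631.
ranking = ["item_a", "item_b", "item_c"]
assert hit_rate_at_k(ranking, "item_b", 5) == 1.0
assert abs(ndcg_at_k(ranking, "item_b", 5) - 1 / math.log2(3)) < 1e-9
```

Dataset-level numbers such as NDCG@5 = 0.8942 are then averages of these per-user values over the test set.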
Summary: This paper proposes a unified multimodal recommendation framework with a Temporally-aligned Shared Token (TST) fusion module to learn cross-modal interactions, ensuring time-consistent alignment and modality fusion. Comprehensive experiments are conducted to compare the framework with existing works and to validate the effectiveness of different modules. Claims And Evidence: The authors claim that this work is a unified solution for multi-modal recommendation. However, the evidence presented in Table 2 appears to partially undermine this claim. It can be observed that removing the ID embedding leads to the most significant performance degradation, while removing other modalities (e.g., style) results in nearly identical performance. For instance, on the Fresh-Food E-commerce dataset, the NDCG@5 for MTSTRec (0.8800±0.0023) and for MTSTRec_W/O_style (0.8784±0.0020) show a negligible difference, suggesting that the multi-modal features may not contribute substantially to the model’s overall performance. The above observations raise significant doubts regarding the model’s effectiveness in multi-modal recommendation scenarios. The minimal performance impact observed when removing specific modalities (e.g., style) suggests that the model may not be fully leveraging the potential of multi-modal features. Instead, the heavy reliance on ID embeddings indicates that the model’s success is predominantly driven by single-modal (ID-based) information. I expect the authors to further clarify the main contributions. Methods And Evaluation Criteria: The proposed method is built upon Transformer-like architectures with a TST fusion module. Since the Transformer has been applied in both academic research and industrial applications within the domain of Recommender Systems (RS), the soundness of the approach is well-established.
For criteria, the authors use the commonly adopted metrics (HitRate, NDCG, and MRR) for evaluation, which are consistent with research works from the literature. Theoretical Claims: N/A. This is an Application-driven ML paper, and no theoretical claim is presented. Experimental Designs Or Analyses: I have carefully checked the experimental designs, results, and analyses, and have two major concerns: 1) The datasets (Fresh-Food E-Commerce, House-Hold E-commerce, and H&M) are all E-Commerce datasets, which is limited in evaluating the model’s generalizability. Datasets in other domains (e.g., MovieLens, Last-FM, Yelp, etc.) should be considered for evaluation. It would greatly strengthen the paper if the above diverse datasets were used for evaluation. 2) The ablation studies are all conducted on the Fresh-Food E-commerce dataset, which further limits the method’s generalizability. It remains unclear whether the observed performance improvements and the relative importance of each module hold true for the other two datasets. Supplementary Material: I have reviewed the supplementary materials. The authors release the codes and data in the reviewing process, which ensures reproducibility of the method. Relation To Broader Scientific Literature: As discussed in the Methods And Evaluation Criteria section, the proposed method is built upon Transformer-like architectures with a TST fusion module. The Transformer was first proposed in [1]. Since then, many works ([2], [3], [4]) in RS adopted such an architecture. Time-aware recommendations are also well-explored [5], [6]. [1] Vaswani A, Shazeer N, Parmar N, et al. Attention is all you need. NeurIPS 2017. [2] de Souza Pereira Moreira G, Rabhi S, Lee J M, et al. Transformers4rec: Bridging the gap between nlp and sequential/session-based recommendation. RecSys 2021. [3] Sun F, Liu J, Wu J, et al. BERT4Rec: Sequential recommendation with bidirectional encoder representations from transformer. CIKM 2019. [4] Li C, Xia L, Ren X, et al.
Graph transformer for recommendation. SIGIR 2023. [5] Lei Wang, Chen Ma, Xian Wu, et al. Causally Debiased Time-aware Recommendation. WWW 2024. [6] Qi Zhang, Longbing Cao, et al. Neural time-aware sequential recommendation by jointly modeling preference dynamics and explicit feature couplings. IEEE TNNLS. 2021. Essential References Not Discussed: N/A. The authors have provided a comprehensive literature review in the related work section. Other Strengths And Weaknesses: Strengths: S1. The paper is overall well-written with interesting ideas. S2. Extensive experimental results are presented. S3. The presentation is clear and precise. Weaknesses: W1. The TST module, from my point of view, serves as a temporal global feature switch, facilitating interaction between features of different modalities by acting as an intermediate channel. However, the time alignment function of the TST module seems to be duplicated with the positional encoding. In Table 4, the experimental results of different shared token configurations inadvertently undermine the claimed significance of the time-aligned mechanism. W2. No visualized results or case studies to intuitively demonstrate the model’s effectiveness. W3. Neither a time-complexity analysis nor the computational overhead is presented, raising questions about real-world applicability. W4. SASRec, BERT4Rec, and MMMLP are all published before 2024. I would like to see comparisons with the latest SOTA methods like DiffMM (https://dl.acm.org/doi/pdf/10.1145/3664647.3681498), PromptMM (https://dl.acm.org/doi/pdf/10.1145/3589334.3645359), and FETTLE (https://dl.acm.org/doi/pdf/10.1145/3626772.3657701). W5. On page 15, the authors claim that the enhanced versions of SASRec and BERT4Rec integrate item ID, text, and image features. However, for MTSTRec, two additional sources are incorporated: prompt-text and price. The unfairness in input sources raises questions about experimental settings.
Other Comments Or Suggestions: One major suggestion is that the main body of your submission should be self-contained. However, the authors heavily rely on Appendices for details, which makes it inconvenient for readers. Questions For Authors: I would like to hear from authors how the TST module functions when aligning time-aware features, as in my opinion, it serves more like an intermediate channel. Will the MTSTRec experience a significant performance degradation when removing the positional encoding in Equation (4)? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful feedback. We appreciate your recognition of our work’s clarity, the thoroughness of the experimental evaluation, and a well-written presentation with interesting ideas. Below, we address each comment and concern in detail: [W1] Redundancy between TST and positional encoding; unclear role in time alignment [Weakness] [Questions] [R1] TST aligns features across modalities at each time step, complementing positional encoding’s intra-modality ordering. Together, they ensure robust temporal alignment and fusion. Omitting positional encoding sometimes yields decent results (e.g., NDCG@5: 0.8946 vs. 0.8942, NDCG@10: 0.9105 vs. 0.9086), due to TST’s implicit time handling. However, retaining it ensures fair Transformer-based comparisons and model stability. [W2] No visualized results or case studies to demonstrate the model’s effectiveness. [Weakness] [R2] We provided additional visualized results. See our response to Reviewer RAjD’s comment [R3]. [W3] Time-complexity analysis [Weakness] [R3] MTSTRec’s complexity primarily arises from Transformer layers at $O(n^2 \cdot d)$ per modality, yielding a naive upper bound of $O(m \cdot n^2 \cdot d)$. TST adds just one extra token per time step (plus a prediction token), so the overhead is small. With optimized implementations, MTSTRec remains only a minor constant overhead above standard Transformer-based sequential models. See [R2] in Reviewer RAjD. [W4] Comparisons with the latest SOTA methods like DiffMM, etc. [Weakness] [R4] Our method emphasizes temporal ordering, whereas DiffMM and PromptMM lack timestamps, complicating comparisons. FETTLE operates more as a plug-in than a standalone framework, though we may use its ideas later. Truly multimodal sequential recommenders remain rare; MMMLP (2023) is a notable example. For fairness, we also adapted SASRec and BERT4Rec to handle multimodal inputs. 
[W5] Unfair comparison due to different input sources across models [Weakness] [R5] We used prompt-text and price primarily to showcase TST’s mid-fusion scalability rather than any specialized engineering. For fairness, SASRec+ and BERT4Rec+ use matching text and image inputs. Ablations show MTSTRec (without prompt-text) still surpasses SASRec (e.g., 0.8574 vs. 0.8015 NDCG@5), proving TST’s advantage extends beyond additional modalities. Our core contribution is TST’s effective integration of diverse features, further boosted by extra signals. [W6] Clarification on the contribution of multimodal features vs. ID embeddings [Claims And Evidence] [R6] We acknowledge that ID embeddings are powerful predictors, yet our ablation studies show that removing other modalities—especially Text and Prompt Text—causes noticeable performance drops, as shown in Table 2 (leading to a significant drop in NDCG@5 from 0.8800 to 0.8574). This highlights the collective benefit of multimodal features. [W7] Lack of non-e-commerce datasets for generalizability [Experimental] [R7] Suitable non-e-commerce datasets for session-based, multimodal recommendations are hard to find, and time constraints limit broader tests. We are releasing our two private e-commerce datasets and plan to explore additional domains in future work. [W8] Ablation only on one dataset [Experimental] [R8] To address this concern, we conducted additional ablation studies on the House-Hold E-commerce dataset. The results show consistent performance drops when removing the TST module and fusion encoder, confirming the effectiveness and generalizability of our design across datasets. 
| Fusion Method | NDCG@5 | NDCG@10 | HR@5 | HR@10 | MRR@5 | MRR@10 | |------------------------------------------------|--------|---------|-------|--------|--------|---------| | **MTSTRec** | 0.8942 | 0.9086 | 0.9067 | 0.9358 | 0.8568 | 0.8607 | | w/o TST & Fusion Encoder (Late Fusion) | 0.8773 | 0.8896 | 0.8839 | 0.9116 | 0.8392 | 0.8429 | | w/o Multimodal Encoder (Early Fusion) | 0.8366 | 0.8516 | 0.8404 | 0.8738 | 0.7929 | 0.7974 | [W9] The method extends established Transformer and time-aware models, and additional related works could be considered for a broader context. [Relation To Literature] [R9] We have cited [1] and [3], and will add [2], [4], [5], and [6] to better position our method. While based on Transformer, our contribution lies in the TST module, which enables time-aligned multimodal fusion—a key difference from prior works [2–4] that lack such fine-grained alignment. Unlike time-aware methods [5,6] that rely on timestamps, we maintain temporal consistency via position encoding and TST alignment. Our use of prompt-enhanced text and thorough ablations further support the novelty. [W10] Over-reliance on appendix [Comments] [R10] We agree on the importance of self-containment. However, due to the 8-page limit, we had to move several implementation and analysis details to the Appendix. --- Rebuttal Comment 1.1: Comment: Thanks author for the detailed response, I am inclined to raise my score after carefully reading the rebuttals and comments of other reviewers. --- Reply to Comment 1.1.1: Comment: Thank you so much! We appreciate your positive feedback. All reviewers’ comments helped us improve our paper significantly.
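The $O(m \cdot n^2 \cdot d)$ bound quoted in [R3] above can be sanity-checked with a quick arithmetic sketch; all sizes here are hypothetical, chosen only to mirror the setting of 5 modalities and length-20 sequences, and the single extra prediction token is ignored:

```python
# Hypothetical sizes: m modalities, sequence length n, hidden dimension d.
m, n, d = 5, 20, 64

# Self-attention cost per modality scales as n^2 * d; the naive upper
# bound over all modalities is therefore m * n^2 * d.
per_modality = n ** 2 * d
naive_bound = m * per_modality           # 5 * 400 * 64 = 128000

# TST adds one shared token per time step, so each modality attends over
# n + 1 positions instead of n.
with_tst = m * (n + 1) ** 2 * d          # 5 * 441 * 64 = 141120

overhead = with_tst / naive_bound        # ~1.10 here; shrinks as n grows
assert abs(overhead - 1.1025) < 1e-6
```

This matches the rebuttal's claim that TST is only a minor constant overhead above standard Transformer-based sequential models.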
Summary: This paper introduces MTSTRec, a multimodal sequential recommendation model that integrates textual, visual, and price information into a unified, time-aligned shared token representation. Claims And Evidence: The claims made in the paper are generally supported by the evidence. Methods And Evaluation Criteria: The proposed method and evaluation criteria make sense for the problem at hand. Theoretical Claims: The paper does not introduce novel theoretical results but motivates the TST fusion conceptually. Experimental Designs Or Analyses: The experiments are well-structured, with ablation studies confirming the importance of each modality. However, robustness and efficiency evaluations are missing: - What is the model’s inference speed compared to SASRec? - How does the model handle noisy or missing modalities? The comparison is limited. - LLM-powered recommenders (e.g., LLM-Rec, ACL 2024) should be considered as baselines. Supplementary Material: The appendix provides additional details on feature extraction, implementation details and results. Relation To Broader Scientific Literature: - The paper builds upon work in multimodal recommendation (e.g., MMMLP, VBPR) and Transformer-based sequential models (e.g., SASRec, BERT4Rec). - The idea of mid-fusion via shared tokens is related to bottleneck attention mechanisms in multimodal learning (e.g., [1]), and these relevant works should be cited. - The idea of mid-fusion also relates to Variational Information Bottleneck (e.g., [2, 3]), and these relevant works should be cited. [1] Nagrani, Arsha, et al. "Attention bottlenecks for multimodal fusion." [2] Wei, Chunyu, et al. "Contrastive graph structure learning via information bottleneck for recommendation." [3] Zhao, Wenkuan, et al. "DVIB: Towards Robust Multimodal Recommender Systems via Variational Information Bottleneck Distillation." Essential References Not Discussed: See `Relation To Broader Scientific Literature`. 
Other Strengths And Weaknesses: See `Questions For Authors`. Other Comments Or Suggestions: See `Questions For Authors`. Questions For Authors: 1. Introduce random perturbations in images/text to see how robust MTSTRec is to input noise. 2. Report the comparison of training/inference time. 3. Can TST be visualized? Showing attention heatmaps of how tokens interact across modalities could strengthen the argument for TST effectiveness. 4. Would contrastive learning improve TST fusion? Have you considered using multimodal contrastive loss (e.g., CLIP-style loss) to further improve alignment? 5. Missing LLM-based multimodal recommendation baselines. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful feedback. We greatly appreciate your recognition of the strengths of our work, particularly your acknowledgment of our conceptual motivation for time-aligned shared token (TST) fusion, the well-structured experiments and ablation studies, and the comprehensive supplementary material. Below, we address each of your questions in detail. **[Q1]** Robustness and efficiency evaluations [Questions] [Experimental Designs Or Analyses 2] **[R1]** Thank you for the suggestion. Robustness to noisy inputs is indeed an important aspect of model reliability. While MTSTRec leverages time-aligned shared tokens to fuse modality-specific features—which may inherently help mitigate localized noise—we did not conduct perturbation experiments in this version due to time constraints. **[Q2]** Comparison of training/inference time [Questions] [Experimental Designs Or Analyses 1] **[R2]** The following table shows that MTSTRec requires more training time (83 minutes) due to its multimodal inputs (ID, text, image, prompt-text, price) and the TST module (Sec. 3.3.2), but its inference time (12.43 seconds) remains competitive—only slightly higher than SASRec+ (7.48 seconds). In return, MTSTRec achieves significantly better accuracy (NDCG@5 = 0.8942 vs. 0.8150 for SASRec+, Sec. 4.2), offering a strong performance–efficiency trade-off suitable for real-world use. We provided the time complexity analysis. See [R3] for Reviewer 2WPs. 
| Model | Training Time (minutes) | Inference Time (seconds) | |----------------|-----------------------------|------------------------------| | MTSTRec | 83 | 12.43 | | SASRec | 0.083 | 1.860 | | BERT4Rec | 4.1 | 1.88 | | SASRec+ | 66 | 7.48 | | BERT4Rec+ | 59 | 7.89 | **[Q3]** Visualize TST [Questions] **[R3]** The attention heatmaps from the MTSTRec model at layer 0 visualize the self-attention weights for a sequence of 21 tokens (18 product IDs padded with 2 zeros to reach the maximum length of 20) across five modalities: token (product ID), style (image), text, prompt-text, and sale price. Each 21x21 heatmap shows how much each token attends to others, with the x-axis and y-axis representing sequence positions (0 to 20). Yellow indicates high attention, and dark blue/purple indicates low attention. The heatmaps reveal modality-specific patterns: token focuses on item identity with sparse attention, style captures visual similarities with broader attention, text and prompt emphasize textual features with vertical stripes, and sale price shows scattered attention, leading to less importance in recommendation. https://anonymous.4open.science/r/MTST_ICML_rebuttal-0E5D/MTST_attention_heatmap.png **[Q4]** Potential of contrastive learning (e.g., CLIP-style loss) to improve TST fusion [Questions] **[R4]** Integrating a multimodal contrastive loss such as CLIP-style alignment is an intriguing direction, particularly for bridging text–image or text–style embeddings. In principle, a contrastive term might further align the modalities during TST fusion, potentially sharpening the shared token’s cross-modal representation. However, we have not explicitly explored a full-blown contrastive learning design so far, primarily because the supervised next-item prediction objective already forces alignment across modalities that correlate to the same product and also because contrastive training can introduce substantial additional computational overhead. 
That said, exploring a dual-objective setup—where TST’s latent space is refined by a CLIP-like objective—could be a compelling avenue for future research. **[Q5]** Missing LLM-based multimodal recommendation baselines [Questions] [Experimental Designs Or Analyses 3] **[R5]** We acknowledge recent progress in LLM-based recommenders such as LLM-Rec. While these approaches inspire our work, MTSTRec is designed for multimodal sequential recommendation, integrating not only text but also images, prices, and item IDs via our proposed TST module. In contrast, LLM-Rec focuses on text-only scenarios with different modeling objectives. It is worth noting that LLM-Rec can be seen as analogous to our prompt encoder. Therefore, our ablation studies—comparing with early fusion, late fusion, and using only the prompt encoder—can be viewed as an indirect comparison with LLM-Rec. **[W1]** Missing related work on mid-fusion strategies [Relation To Broader Scientific Literature] **[R6]** Thank you for the helpful comments. We would like to clarify that [1] has already been cited in our paper. We agree that [2, 3] are relevant to our mid-fusion design and will include proper citations and discussion in the final version.
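As a toy companion to the heatmaps linked in [R3] above: each per-modality attention map is just a row-wise softmax over a matrix of query–key scores. A self-contained sketch with random scores standing in for the model's weights (purely illustrative):

```python
import math
import random

random.seed(0)
L = 21  # 21 tokens as in [R3]: 18 product IDs, padding to 20, plus one extra token

def softmax(row):
    # Numerically stable softmax over one row of scores.
    m = max(row)
    exps = [math.exp(v - m) for v in row]
    s = sum(exps)
    return [e / s for e in exps]

# Random scores stand in for one modality's query-key dot products.
scores = [[random.gauss(0.0, 1.0) for _ in range(L)] for _ in range(L)]

# Each row of the heatmap shows how strongly one position attends to all others.
attn = [softmax(row) for row in scores]

assert all(abs(sum(row) - 1.0) < 1e-9 for row in attn)
```

Plotting `attn` as an L-by-L image (yellow for high, dark for low) yields exactly the kind of per-modality heatmap described in [R3].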
Summary: The authors propose a sequential recommendation framework focusing on multimodal feature fusion. In the proposed model, the authors include feature sets like product IDs, images, text, and prices. The main contribution comes from the authors proposing a new block named the Time-aligned Shared Token (TST) fusion module. Each modality has its own self-attention block, and the TST module learns an element-wise average pooling token (z^sh) to be concatenated with the modality-specific token z^mod and fed into the next layer's input to learn the modality-specific patterns for the next layer. This basically forces the model to add a fused cross-modal feature input into each layer. The authors conducted offline analysis on 3 datasets, including performance comparison and ablation evaluations on the importance of multimodal features and the impact of different ways of multimodal fusion. Claims And Evidence: I found the claims of the authors convincing. However, I would be more convinced of the results if there were online performance evaluations of the proposed algorithm (instead of just offline evaluation). I totally understand that not all researchers (especially in the recommendation domain) will have access to online experiments, but since the authors are using private data from AviviD Innovative Multimedia for experiment evaluation, it would be great if any online results can be shared. Methods And Evaluation Criteria: The method makes sense and is easy to follow. Theoretical Claims: I checked the correctness of the proposed method and it looked correct to the best of my knowledge. Experimental Designs Or Analyses: Comprehensive experiments are conducted on 3 offline datasets (1 is public right now and the authors promise to make the other 2 public once the paper is published). The experiment design makes sense.
Supplementary Material: No Relation To Broader Scientific Literature: N/A Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths 1. Sequential recommendation is a very well-established domain, and there are many real-world applications that make the work relevant to a broad audience. 2. The proposed method makes sense and is actually straightforward and easy to follow. The innovation is not ground-breaking but more incremental on top of existing/well-established techniques. But for recommendation systems, sometimes a not-complicated but working solution is way more important than an over-complicated algorithm design. 3. The experiments are well designed, comprehensive, and include all the analysis and ablation I would like to know as a reader. The offline analysis shows significant gains of the proposed algorithm on top of the existing baselines. 4. The paper is well written and easy to follow. 5. The authors promised to release 2 of the offline datasets to the public after the paper is published, which can benefit follow-up research. Weaknesses or Questions 1. My biggest concern is the lack of online test results. This is definitely not a reason to reject this paper, but it does impact the confidence of whether this work will really work in real-life settings. In sequential recommendation, offline evaluation results often do not match online performance, and an online LE can make the conclusion more convincing. If the authors have access to online evaluation, it would be great if they can share some of the learnings. 2. My second concern is that the proposed model will be much larger than many of the baselines. I.e., the proposed model uses an independent transformer block for each modality and has multiple layers on top of it vs. baselines like BERT4Rec which come with a much smaller model structure. How much of the gain is coming from the model size (e.g.
the effective size of the model) and how much is coming from the proposed methods? Can authors provide the parameter size of different models used in the experiment sections? Some nit questions(not concerns): 3. Why do the authors add a close embedding token (z_cz) for each modality at the end of each sequence instead of directly leveraging the output of the last activity of the sequence? Does this design choice come with gain observed offline? 4. Why the final output only concatenate all the modal specific tokens (z_mod) btu did not concatenate the shared token (z_sh)? Other Comments Or Suggestions: Please refer to Other Strengths And Weaknesses section. Thanks! Questions For Authors: Please refer to Other Strengths And Weaknesses section. Thanks! Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the thoughtful and constructive feedback. We appreciate the recognition of the practicality and clarity of our proposed method, the comprehensiveness of our experiments, and the potential impact of releasing two of our datasets to the public. We are especially grateful for your comment that “a not-complicated but working solution is way more important than an over-complicated algorithm design,” which aligns with our goal of proposing practical and effective methods for real-world recommendation scenarios. We have carefully addressed the reviewer’s comments and questions as follows. **[W1]** Lack of online testing [Weaknesses] **[R1]** We completely agree with your comment on online testing. This work is part of an industry-academia collaboration, and the proposed MTSTRec model has so far been developed and evaluated using historical data. As the next step, an online A/B test is planned to assess the model’s performance in a live setting. This paper, if accepted, will serve as strong evidence to convince the top management of the company to go ahead with the online testing. We hope to share what we will learn with the research community soon. **[W2]** Model size vs. performance gain/parameter comparison [Weaknesses] **[R2]** We acknowledge that MTSTRec has more parameters than the baseline models that are based solely on historical interaction data, such as SASRec and BERT4Rec, due to its modality-specific encoders and fusion layers for handling multimodal inputs. The parameter sizes of the models used in our experiments are: * MTSTRec: 59.24M * SASRec: 4.69M * BERT4Rec: 7.84M * SASRec+: 203.67M * BERT4Rec+: 400.33M Although we were unable to obtain the exact parameter sizes for MMMLP and MMMLP+, MTSTRec is significantly smaller than SASRec+ and BERT4Rec+, which serve as enhanced multimodal baselines. 
As shown in our ablation studies, the performance improvements of MTSTRec are not merely due to model size but largely result from the proposed TST fusion module and time-aligned multimodal design. **[Q3]** Why add a $z_{cz}$ token instead of using the last token? [Weaknesses] **[R3]** We explored both approaches during development. Our experiments showed that relying solely on the last item token often obscured modality-specific contributions, particularly in short sequences or sessions with abrupt shifts, as it conflated signals across modalities. In contrast, adding a $z_{cz}$ token per modality acts as a dedicated placeholder for “the next item,” aggregating relevant signals from each modality’s perspective and yielding a clearer multimodal representation for prediction. Our testing confirmed that this approach consistently outperformed the last-token method across metrics like NDCG@20, enhancing accuracy without significant computational overhead. Due to space constraints, we did not elaborate on this comparison in the original paper. However, we recognize its importance and will include a detailed discussion, along with supporting experimental results, in Section 4 or an appendix of our revised paper to further substantiate this design choice. Thank you for highlighting this point. **[Q4]** Why not concatenate the shared token ($z_{sh}$) at the final output? [Weaknesses] **[R4]** We evaluated both approaches—concatenating the shared tokens in the final output and excluding them—and found that preserving only the modality-specific CLOZE tokens yields better performance. The shared tokens effectively align cross-modal features during fusion, but adding them to the final output diluted modality-specific details essential for accurate predictions. We will provide more details on these findings in our revised paper. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for the detailed response. 
The authors have answered most of my questions. After careful review of the rebuttal and comments from other reviewers, my recommendation score stays the same. --- Reply to Comment 1.1.1: Comment: Thank you for the positive feedback! We would really appreciate it if you could kindly let us know which questions remain unanswered. (2025/04/07 update): Thank you for your previous review and feedback. Recent updates to our rebuttal have received positive feedback and led to increased scores from other reviewers. We would be truly grateful if you could kindly revisit our responses. Your further consideration could make a meaningful difference, and we deeply appreciate the time and effort you’ve invested in reviewing our work.
BiMaCoSR: Binary One-Step Diffusion Model Leveraging Flexible Matrix Compression for Real Super-Resolution
Accept (poster)
Summary: BiMaCoSR is a method that combines binarization and one-step distillation to significantly compress and accelerate super-resolution (SR) diffusion models. It prevents model collapse from binarization using two auxiliary branches: a Sparse Matrix Branch (SMB) and a Low-Rank Matrix Branch (LRMB). SMB captures high-rank information, while LRMB outputs low-rank representations inspired by LoRA. BiMaCoSR achieves a 23.8x compression and a 27.4x speedup over full-precision models without compromising performance. Comprehensive experiments show its superiority over existing methods. ## Update after rebuttal I have adjusted the score for the hardware acceleration. However, the justification for the novelty concern is unconvincing. Claims And Evidence: The authors claim that BMB is responsible for most of the high-frequency information. However, as demonstrated in Fig. 2 of the supplementary material, LRMB appears to play a more significant role in contributing to the high-frequency information in the MLP. Methods And Evaluation Criteria: 1. The authors apply their method solely to the SinSR model. It would be beneficial to conduct experiments with additional diffusion models to evaluate the method's generalizability. 2. The authors do not provide real-time speedup results, which are crucial, particularly for ultra-low-bit quantization. I am curious to know whether the proposed method leads to actual computational reductions. Theoretical Claims: No theoretical claims and proofs. Experimental Designs Or Analyses: 1. It is quite surprising that in Table 2, the proposed method requires fewer FLOPs than ReSTE and XNOR, which do not include any additional computational branches. 2. Why do the authors focus exclusively on one-step diffusion models? I haven't found any specific design or rationale for choosing this type of model. 3. 
It is essential to compare the proposed method with the following works [1, 2, 3] to validate its contribution, as all of them also employ additional computational branches (e.g., LoRA or a sparse matrix), similar to the approach proposed here. [1] Huang W, Liu Y, Qin H, et al. BiLLM: Pushing the limit of post-training quantization for LLMs[J]. arXiv preprint arXiv:2402.04291, 2024. [2] Zhang Y, Qin H, Zhao Z, et al. Flexible residual binarization for image super-resolution[C]//Forty-first International Conference on Machine Learning. 2024. [3] Li Z, Ni B, Zhang W, et al. Performance guaranteed network acceleration via high-order residual quantization[C]//Proceedings of the IEEE International Conference on Computer Vision. 2017: 2584-2592. Supplementary Material: I have reviewed all the supplementary material. Relation To Broader Scientific Literature: I believe the proposed methods can be applied to any model architecture. Essential References Not Discussed: Adding new computation branches is very common in the binarization area. All of the following related studies are missing: 1. The authors propose SMB, but these techniques are widely used in low-bit quantization [5, 6] and binarization [1, 4], and should have been discussed in more detail. 2. Quantization-aware LoRA fine-tuning methods [7, 8] appear to be similar to LRMB, assuming they do not merge LoRA during inference. 3. SVD initialization [7] and magnitude-based selection [5] are also common practices, but related works in this area have not been cited. [4] Li Z, Yan X, Zhang T, et al. ARB-LLM: Alternating refined binarizations for large language models[J]. arXiv preprint arXiv:2410.03129, 2024. [5] Kim S, Hooper C, Gholami A, et al. SqueezeLLM: Dense-and-sparse quantization[J]. arXiv preprint arXiv:2306.07629, 2023. [6] Dettmers T, Svirschevski R, Egiazarian V, et al. SpQR: A sparse-quantized representation for near-lossless LLM weight compression[J]. arXiv preprint arXiv:2306.03078, 2023. 
[7] Guo H, Greengard P, Xing E P, et al. LQ-LoRA: Low-rank plus quantized matrix decomposition for efficient language model finetuning[J]. arXiv preprint arXiv:2311.12023, 2023. [8] Dettmers T, Pagnoni A, Holtzman A, et al. QLoRA: Efficient finetuning of quantized LLMs[J]. Advances in Neural Information Processing Systems, 2023, 36: 10088-10115. Other Strengths And Weaknesses: Strengths: 1. The paper is well written and clearly presented. 2. The proposed method achieves state-of-the-art (SOTA) performance. Weaknesses: 1. It is unconvincing that the method can achieve real speedup on hardware. 2. Additionally, the proposed techniques—such as LRMB, SMB, and the initialization strategy—lack novelty (see Essential References Not Discussed). Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: > Q3-1: The authors claim that BMB is responsible for most of the high-frequency information. However, as demonstrated in Fig. 2 of the supplementary material, LRMB appears to play a more significant role in contributing to the high-frequency information in the MLP. A3-1: This is because the MLP generates far less high-frequency information than the attention mechanism, and the function of the MLP is not to generate high-frequency information. Therefore, this phenomenon does not conflict with the claim. > Q3-2: ...conduct experiments with additional diffusion models to evaluate the method's generalizability. A3-2: Please refer to Q1-4 and A1-4. > Q3-3: ...provide real-time speedup results. A3-3: Please refer to Q1-5 and A1-5. > Q3-4: It is quite surprising that in Table 2, the proposed method requires fewer FLOPs than ReSTE and XNOR. A3-4: We explained this in lines 308-311. To guarantee a fair comparison, we leave the **first and last** conv layers in full precision in BiMaCoSR and the **first two and last two** conv layers in full precision in the other binarized methods. This keeps the total number of parameters of BiMaCoSR and the other binarized methods approximately the same. > Q3-5: Why one-step diffusion models? A3-5: Please refer to Q1-4 and A1-4. > Q3-6: Compare the proposed method with the following works [1, 2, 3]. A3-6: We provide the experimental results in the following table. [2] is not open-source yet, so we cannot compare against it in the limited time for rebuttal. [1] is a PTQ method; if we train [1] with QAT, it becomes the same as [3]. BiMaCoSR consistently performs better than the other methods. ||LPIPS↓|DISTS↓|CLIP-IQA+↑|FID↓| |-|-|-|-|-| |[1,3]|0.3908|0.2572|0.4347|110.36| |BiMaCoSR|0.3375|0.2183|0.4800|86.09| > Q3-7: The authors propose SMB, but these techniques are widely used in low-bit quantization [5, 6] and binarization [1, 4], and should have been discussed in more detail. 
A3-7: We clarify that the motivation of SMB is different from that of [1,4,5,6]. In those papers, the authors leverage a sparse matrix to keep the outliers of the weight matrix in their PTQ tasks. In BiMaCoSR, the purpose of SMB is to pass information without loss in a QAT scenario. To validate this, we also quantized the SMB branch to 1 bit; the result is provided in the table below. The result shows that the function of SMB is not to maintain the outliers of the weight matrix. Therefore, our SMB branch is different from [1,4,5,6]. We will add these differences in the revised version. ||LPIPS↓|DISTS↓|CLIP-IQA+↑|FID↓|Params| |-|-|-|-|-|-| |BMB+1-bit SMB|0.3935|0.2562|0.4541|110.30|4.98M| |BMB+SMB|0.3901|0.2558|0.4565|108.95|4.98M| > Q3-8: Quantization-aware LoRA fine-tuning methods [7, 8] appear to be similar to LRMB. A3-8: The first difference is that we keep the LRMB branch during inference as an auxiliary branch to the binarized branch, which significantly improves performance. The second difference is that the motivation of LRMB is to pass the low-frequency information in the image SR task, whereas the motivation of [7,8] is to minimize the precision loss of the weights. Therefore, we are quite different from these two works. > Q3-9: SVD initialization [7] and magnitude-based selection [5] are also common practices, but related works in this area have not been cited. A3-9: The difference between SMB and [5] is discussed in A3-7. [7] is not published yet, so we do not need to compare our method with it. We will cite [5, 7] in the revised version. > Q3-10: It is unconvincing that the method can achieve real speedup on hardware. A3-10: Please refer to Q1-5 and A1-5. > Q3-11: ...the proposed techniques ... lack novelty. A3-11: (1) For LRMB, we clarify that our motivation differs from the current mainstream; the detailed difference is shown in A3-8. (2) For SMB, the motivation of our method is to pass information without loss. 
As for [5], the motivation of their method is to reduce the loss of the weights; we explained this in detail in A3-7. (3) The main contribution of our method lies in exploring the combination of a one-step diffusion model and binarization in the SR task. We provide a successful solution that maintains performance while accelerating inference. (4) Reviewer 9pC4 recognizes our novelty in the combination of the one-step diffusion model and binarization. [1] BiLLM: Pushing the limit of post-training quantization for LLMs. [2] Flexible residual binarization for image super-resolution. [3] Performance guaranteed network acceleration via high-order residual quantization. [4] ARB-LLM: Alternating refined binarizations for large language models. [5] SqueezeLLM: Dense-and-sparse quantization. [6] SpQR: A sparse-quantized representation for near-lossless LLM weight compression. [7] LQ-LoRA: Low-rank plus quantized matrix decomposition for efficient language model finetuning. [8] QLoRA: Efficient finetuning of quantized LLMs. --- Rebuttal Comment 1.1: Comment: 1. As this work is for real applications, I think real-time speedup results can help validate its practicality. It is well known that FLOPs reduction does not mean real-time speedup. Moreover, in binarization, a lot of work is not hardware-friendly and only provides theoretical speedup. Thus, I further suggest that the authors include real-time speedup ([dabnn](https://github.com/JDAI-CV/dabnn) is an open-sourced framework to help apply the methods). 2. "First and last conv layers in full precision in BiMaCoSR and the first two and last two conv layers in full precision in other binarized methods" is also quite weird. I think all the methods should be under the same settings, and this justification is very confusing. What's the motivation for keeping 2 layers in full precision in the proposed method but 4 in other methods? 3. If the authors quantize SMB, I think it is the same as [3]. 
All the methods in [1, 4, 5, 6, 3] and this paper, as mentioned in lines 195-200, can be seen as ways to compensate for information. Thus, I think the root motivation and approaches are very similar, which raises my novelty concern. 4. Quantization-aware LoRA can also be kept alongside the quantized branch during inference for improvement (merging them and then re-quantizing the model, in fact, brings some accuracy drops). As for the root motivation, I think it is the same as SMB, which is to compensate for information. This also raises my novelty concern. 5. I still have novelty concerns as mentioned in 3 and 4. I think the combination of a one-step diffusion model and binarization may not be sufficient research novelty, since both of these things already exist, and the idea of combining them is a little bit trivial. Overall, I decided to keep the score. [3] Li Z, Ni B, Zhang W, et al. Performance guaranteed network acceleration via high-order residual quantization[C]//Proceedings of the IEEE International Conference on Computer Vision. 2017: 2584-2592. --- Reply to Comment 1.1.1: Comment: > Q3-12: Real-time speedup We tested the real-time speedup with dabnn but failed with numerous errors. **Even so, we deployed BiMaCoSR with larq on a mobile phone, and the real-time speedup is 8.27x.** This result shows that our method achieves real-time speedup and is effective on real edge devices. Therefore, we believe that the real-time speedup and the corresponding designs are significant advantages of our method. > Q3-13: What's the motivation for keeping 2 layers in full precision in the proposed method but 4 in other methods? We do this to guarantee a fair comparison. With more layers left unquantized, the total numbers of parameters of the different methods are the same. Otherwise, the performance of the other methods would drop further, and their visual quality would be unacceptable on edge devices. 
> Q3-14: Novelty of SMB and LRMB We clarify that SMB and LRMB are novel, and their motivation is not the same as in the cited papers. If SMB and LRMB are considered not novel because they **can be seen as ways to compensate for information**, then all quantization methods would lack novelty. We reiterate that the proposed BiMaCoSR is a binarized (W1A1) one-step diffusion model optimized with QAT; none of the cited papers share this setting. Moreover, the BiMaCoSR model proposed in this paper is a comprehensive solution that combines SMB with LRMB and BMB, forming a brand-new binary one-step diffusion model architecture. This architecture not only performs excellently in terms of compression and acceleration but also achieves outstanding performance on the image super-resolution task. The synergy between the overall architecture and its branches is one of the core innovations of this paper, and it cannot simply be regarded as the same as other information-compensation methods. > Q3-15: Novelty of the combination of one-step diffusion model and binarization. Although one-step diffusion models and binarization techniques already exist individually, effectively combining them is no easy task. One-step diffusion models perform remarkably well in image generation tasks, yet they incur high computational and storage costs, making them difficult to deploy on devices with limited resources. On the other hand, while binarization can significantly reduce the storage and computational requirements of a model, it leads to a substantial decline in model performance. In this paper, by proposing auxiliary branches such as LRMB and SMB, along with the corresponding initialization methods, the issue of information loss caused by binarization has been successfully addressed. 
Moreover, on top of the one-step diffusion model, extremely high compression and acceleration ratios are achieved while maintaining excellent performance. Therefore, this integration of techniques is far from a simple combination. Besides, BiMaCoSR holds great practical application value: it enables high-precision image super-resolution models to run in real time on devices with limited resources, providing efficient solutions for scenarios such as mobile devices and edge computing. The innovativeness and practicality of this technique are among the important contributions of this paper, and its innovativeness should not be negated merely because one-step diffusion models and binarization techniques already exist.
Summary: This work, BiMaCoSR, introduces the first binarized one-step diffusion model for real-world single image super-resolution (Real-SR). The paper addresses the heavy memory and computation demands of diffusion-based SR by combining 1-bit model binarization with one-step diffusion distillation. The core idea is to achieve extreme model compression and acceleration without sacrificing SR performance. To counteract the severe degradation (“catastrophic collapse”) that naive weight binarization would cause, the authors propose two lightweight auxiliary branches that preserve critical full-precision information: a Low-Rank Matrix Branch (LRMB) and a Sparse Matrix Branch (SMB). LRMB (inspired by LoRA) captures low-frequency/large-magnitude components via a low-rank decomposition of each weight matrix, while SMB captures a few extreme outlier weight values in a sparse form. These branches, added to the main binary weight branch (BMB), allow the network to retain important information that 1-bit weights alone would lose, with negligible overhead. The overall architecture thus has three parallel components (BMB + LRMB + SMB) whose outputs sum to produce the layer’s result. Using a distilled one-step diffusion model (SinSR) as a full-precision baseline, BiMaCoSR demonstrates state-of-the-art SR performance among heavily compressed models. It outperforms other binarization methods on standard real-world SR benchmarks (e.g. RealSR, DRealSR) across a comprehensive suite of 9 image quality metrics. Notably, BiMaCoSR’s results are competitive with (and sometimes even better than) its full-precision one-step counterpart on certain fidelity and perceptual measures. In terms of efficiency, the model achieves an impressive ~23.8× reduction in model size and ~27.4× faster inference compared to the full-precision one-step model. 
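The three-parallel-branch layer summarized above can be sketched numerically. This is a minimal illustrative reconstruction under stated assumptions (plain matrix multiplies instead of the paper's convolutions, made-up shapes, random LRMB factors), not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 64, 48, 8          # output dim, input dim, LRMB rank (illustrative)
W = rng.normal(size=(m, n))  # stand-in full-precision weight of one layer

# BMB: 1-bit weights sign(W) with a scalar scaling factor (XNOR-Net style)
alpha = np.abs(W).mean()
W_bin = alpha * np.sign(W)

# LRMB: rank-r factors; randomly initialized here purely for illustration
B, A = rng.normal(size=(m, r)), rng.normal(size=(r, n))

# SMB: keep only the top-0.1%-magnitude entries of W in a sparse matrix
k = max(1, int(0.001 * W.size))
thresh = np.sort(np.abs(W), axis=None)[-k]  # k-th largest |weight|
W_sparse = np.where(np.abs(W) >= thresh, W, 0.0)

def forward(x: np.ndarray) -> np.ndarray:
    # The outputs of the three parallel branches are summed
    return W_bin @ x + B @ (A @ x) + W_sparse @ x

x = rng.normal(size=n)
print(forward(x).shape)  # (64,)
```

The low-rank and sparse branches add only O(r(m+n)) and O(k) extra parameters on top of the m×n binary weights, which is the source of the "negligible overhead" claim discussed below.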
In summary, the paper’s key contributions are: (1) the novel integration of binarization and one-step diffusion for SR, (2) the LRMB and SMB modules (with specialized SVD and sparse initialization schemes) to preserve information, and (3) extensive experiments showing dramatic memory/computation savings (23.8× smaller, 27.4× faster) while maintaining high restoration quality. Claims And Evidence: The paper’s main claims are generally well supported by empirical evidence. First, the authors claim that combining binarization with one-step distillation yields extreme compression and speedup with minimal loss in SR quality. This is convincingly demonstrated: BiMaCoSR’s model size and FLOPs are indeed drastically lower than baselines’ (e.g. ~24× smaller than the 32-bit model), yet it achieves consistently higher or on-par performance on multiple benchmarks. Table 1 shows BiMaCoSR outperforming competing binarized models on all evaluated metrics; for example, on the RealSR dataset it leads in PSNR, SSIM, and perceptual scores like LPIPS. It even surpasses the full-precision SinSR and ResShift in some cases (e.g. LPIPS on RealSR). These results strongly back the claim of state-of-the-art performance at unprecedented compression levels. The authors also claim that their auxiliary branches (LRMB and SMB) effectively prevent the performance collapse normally seen with 1-bit networks. This is supported by a breakdown ablation study: with only the binarized branch (BMB), the PSNR and SSIM are much lower, but adding LRMB and SMB progressively improves all metrics. For instance, PSNR on RealSR jumps from 26.41 dB with BMB-only to 26.95 dB after adding LRMB, and perceptual scores improve significantly as well. This demonstrates that the branches successfully recover information lost due to binarization. The claim that the branches incur negligible overhead is also justified. 
A theoretical calculation shows the extra storage/computation for LRMB (with rank r = 8) is tiny compared to the binary weights, and the sparse branch uses only 0.1% of weight elements. In practice, adding both branches only increases the model’s parameter count by a very small fraction (from ~3.7M to ~5.0M, which is still over 20× smaller than the full model). This evidence supports the claim that the overhead is practically negligible. Most claims are thus well substantiated, and I did not find instances of over-claiming. One minor claim that could use more direct evidence is the suggestion that BiMaCoSR enables diffusion SR on resource-limited edge devices. The paper makes a strong case via compression and FLOPs reduction, but it does not report actual on-device inference times or memory usage. While a 27× speedup in FLOPs is promising, real hardware speedups might be lower without specialized binary execution libraries. Thus, the deployability claim is plausible but not explicitly validated with a deployment experiment. Aside from this, all key claims (first binarized one-step diffusion SR, SOTA performance, effective information retention via LRMB/SMB) are convincingly supported by quantitative results and ablation studies. Methods And Evaluation Criteria: The methodology is well aligned with the paper’s objectives. The goal was to drastically compress a diffusion-based SR model while preserving its high-fidelity output; to that end, the authors combined two complementary strategies: model binarization for compression and one-step distillation for fast inference. This joint approach directly targets both memory and speed objectives, and the method is executed thoughtfully. In particular, the introduction of LRMB and SMB is a clever design choice to meet the quality goal – these branches explicitly mitigate the known weaknesses of binarization (loss of information from small weights and rare large weights). 
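The overhead argument can be checked with quick arithmetic; the layer dimensions below are illustrative assumptions, not figures from the paper, and sparse index storage is ignored for simplicity:

```python
# Storage cost of the auxiliary branches vs. the weights they accompany.
# m, n: weight matrix shape; r: LRMB rank (illustrative assumptions).
m = n = 2048
r = 8

fp_bits = m * n * 32                # original full-precision matrix (32-bit)
bin_bits = m * n * 1                # 1-bit binarized weights (BMB)
lrmb_bits = (m * r + r * n) * 32    # rank-r factors stored in full precision
smb_bits = int(0.001 * m * n) * 32  # top-0.1% outlier values kept in SMB

print(f"LRMB vs full-precision matrix: {lrmb_bits / fp_bits:.2%}")  # 0.78%
print(f"(binary + LRMB + SMB) vs FP:   {(bin_bits + lrmb_bits + smb_bits) / fp_bits:.2%}")
```

Since the LRMB fraction is r(m+n)/(mn), the condition r ≪ m, n is what keeps the full-precision branches a small fraction of the original storage.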
The method leverages known techniques (low-rank approximation, sparse outlier capture, and XNOR-Net binarization) in a novel combination tailored for diffusion SR. The evaluation setup also matches the objectives: since the task is real-world SR, the authors test on multiple Real-SR benchmarks (RealSR, DRealSR, and a third dataset, likely a synthetic or DIV2K-based set) covering a variety of real degradations. They report a comprehensive set of 9 evaluation metrics – including standard fidelity metrics (PSNR, SSIM), perceptual distances (LPIPS, DISTS), and no-reference image quality scores (e.g. CLIP-IQA, MANIQA, NIQE/FID). This broad evaluation criterion is appropriate, as it captures both the fidelity and perceptual quality aspects of SR, aligning with the objective of producing realistic high-quality images. Theoretical Claims: The paper’s theoretical claims and derivations are mostly straightforward and correct. Rather than introducing new fundamental theory, the authors apply existing theoretical constructs (low-rank matrix factorization, binary convolution via XNOR) to their problem and provide derivations to justify design choices. For example, they express each full-precision weight matrix W as a sum of a low-rank component (matrices B and A from SVD) and a residual to be binarized. This decomposition is mathematically sound and allows them to claim a separation of “low-frequency” vs. “high-frequency” information between LRMB and the binarized branch. While the terms “low-frequency” and “high-frequency” are used somewhat intuitively (referring to the magnitude/content of singular values rather than literal spatial frequency), the logic is reasonable: large singular values capture dominant image structures, and retaining them in FP (LRMB) should help reconstruct smooth components, whereas the binary residual can focus on fine textures. 
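The SVD-based split of W into a low-rank part and a residual can be sketched as follows. This is an illustrative toy example (random matrix, assumed rank), not the paper's code; it only shows that subtracting the top-r components shrinks the energy the binary branch must absorb:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 48))  # stand-in full-precision weight (illustrative)
r = 8                          # assumed LRMB rank

# SVD initialization: keep the top-r singular components in LRMB (FP factors)
U, s, Vt = np.linalg.svd(W, full_matrices=False)
B = U[:, :r] * s[:r]  # (64, r) factor, columns scaled by singular values
A = Vt[:r, :]         # (r, 48) factor

# Residual left for the binary branch; its Frobenius norm is strictly
# smaller than ||W||_F because the dominant components were removed.
W_res = W - B @ A
print(np.linalg.norm(W, "fro"), np.linalg.norm(W_res, "fro"))
```

The norm reduction here mirrors (but does not reproduce) the paper's reported drop of ∥W_res∥²_F to 0.1855 from 1.1275 discussed below.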
This claim is supported by the observed reduction in initial quantization error: after subtracting the SVD-based low-rank part, the norm of the remaining weight error ∥W_res∥²_F drops to 0.1855 from 1.1275 (a substantial reduction). This quantitative evidence backs the theoretical argument that their initialization decouples the weight information effectively. They also provide complexity analyses to support the “negligible overhead” claim. The paper derives formulas for the storage cost of LRMB: O_s = (m·r + r·n)·B (with B = 32 bits for FP) versus the binarized weight cost m·n·B′ (B′ = 1 bit). Given r ≪ m, n, they show that even if stored in 32-bit, the LRMB adds only a tiny fraction of what the full matrix would, confirming the overhead is minimal. Similar reasoning is applied to the sparse branch: only k (top 0.1%) entries of each weight matrix are kept, so the extra cost is trivial. All these derivations are mathematically correct. There are no complex new proofs in the paper; rather, the authors ensure that each design element is backed by a clear explanation or formula. I did not find any algebraic errors or logical gaps in these derivations. The use of the straight-through estimator (STE) for binarization is referenced (e.g., ReSTE [1]), and the binary convolution operation is formulated via XNOR and bit-count equations – these are standard in binary neural networks and are presented correctly. One minor observation is that the paper doesn’t formally prove why the combination of branches yields optimal retention of information (that would be a very difficult theoretical guarantee). Instead, it relies on intuitive reasoning (e.g., outlier weights are rare but crucial, so capturing them in SMB is beneficial), which is then validated experimentally. This approach is acceptable for an applied paper. In summary, the theoretical basis of the method is solid and internally consistent. [1] Wu, Xiao-Ming, et al. 
"Estimator meets equilibrium perspective: A rectified straight through estimator for binary neural networks training." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023. Experimental Designs Or Analyses: The experimental design is comprehensive and sound, giving credibility to the results. The authors conduct evaluations on three different SR datasets, covering both real-world degradations and (likely) a standard benchmark, which strengthens the generality of their claims. For each dataset, they report a wide range of metrics (nine in total), ensuring that no single metric bias (e.g., PSNR vs. perceptual quality) dominates the assessment. The comparison includes multiple baselines: (a) the full-precision diffusion models (ResShift and its one-step distilled version SinSR), and (b) several state-of-the-art binarization approaches adapted to the same one-step model (ReActNet, BBCU, ReSTE, BiDM). The ablation studies further bolster the experimental rigor. The paper includes a breakdown ablation (BMB only vs. +LRMB vs. +LRMB+SMB), loss function ablations, different ranks for LRMB, and different initialization strategies for the branches. These experiments are well designed to answer key questions about why the method works. For example, the breakdown ablation clearly shows each component’s effect on performance and verifies that the final design (with both branches) is needed for the best balance of quality and efficiency. Supplementary Material: No separate supplementary document was provided for this review, so my assessment is based solely on the main paper. Relation To Broader Scientific Literature: The paper does an excellent job positioning itself in the context of prior work. In the introduction and related work, the authors survey two key areas: diffusion-based super-resolution and network binarization/acceleration. 
They cite the foundational and latest works in diffusion models for SR, such as SR3 (first iterative SR diffusion), DiffBIR, SinSR (one-step diffusion). On the binarization side, they reference seminal works like XNOR-Net for classification, as well as more recent binarization techniques and benchmarks (ReActNet, Qin et al. 2020/2022 for improved accuracy, etc.). Crucially, they cite very recent papers that apply quantization to diffusion models: Binary Latent Diffusion (Wang et al. 2023), BiDM (Zheng et al. 2024), and a NeurIPS 2024 work “BI-DiffSR” (Chen et al. 2024). Essential References Not Discussed: The paper cites most of the crucial prior work, but there are a couple of references that should have been mentioned explicitly: 1. LoRA (Low-Rank Adaptation) – Hu et al., 2021. The idea of using a low-rank decomposition to inject or preserve information is directly inspired by LoRA (as acknowledged), but the original LoRA paper does not appear in the reference list. Citing it would credit the source of the low-rank approach and contextualize LRMB within the broader use of low-rank matrices in neural network compression. 2. Knowledge Distillation for Diffusion Models – While the authors cite SinSR and ResShift for one-step distillation, they might have also referenced the general concept of knowledge distillation (Hinton et al., 2015) or earlier works on distilling iterative generative models. However, this is a minor point since they did cite the specific SR diffusion distillation methods they used. Other Strengths And Weaknesses: Strengths: Beyond the points already discussed, the paper’s notable strengths include its originality and practical significance. Combining one-step diffusion with binary networks is a non-trivial and original idea – to my knowledge, this is indeed the first attempt at this combination, addressing a clear gap for deploying diffusion models.
The resulting compression (≈24×) and speedup are very impressive, pushing the boundary of what’s possible for resource-constrained SR. Another strength is the thoroughness of validation: using nine different metrics and multiple datasets demonstrates robustness. The qualitative results (described in the text and shown in figures) also strengthen the paper – e.g., the authors describe how BiMaCoSR recovers fine details like hairs, facial features, and textures better than other compressed models, emphasizing that it’s not just about numeric scores but also visible quality gains. Weaknesses: One weakness is that the method’s complexity might make reproduction challenging. The model introduces additional branches and special initialization routines (SVD for LRMB, “sparse skip” for SMB), which require careful implementation. However, the authors mitigated this by describing them in detail and promising to release code. Another minor weakness is that some improvements come at the cost of a slight drop in certain metrics. For instance, adding the sparse branch (SMB) improved perceptual scores but caused a small drop in PSNR/SSIM. This trade-off is actually expected (perception-distortion trade-off), and the authors do report it honestly. It’s not a serious issue, but readers focused purely on distortion metrics might note that the binarized model’s PSNR is a bit lower than a full-precision model in some cases. Additionally, as mentioned earlier, the reliance on a few full-precision layers means the model isn’t completely binary; however, the impact on compression is minor since those layers are a tiny fraction of parameters. A potential weakness in significance could be argued: the paper largely engineers known techniques (binarization, LoRA, distillation) together rather than introducing fundamentally new theory.
I personally find the engineering contribution significant given the difficulty of making diffusion models this compact, but some might view it as an incremental combination (albeit a well-executed one). Other Comments Or Suggestions: 1. Clarify reported speedup vs compression ratios. 2. Cite LoRA for completeness. 3. Include details on k selection for SMB. Questions For Authors: 1. Actual Inference Speed: The paper reports a ~23× reduction in FLOPs, but have you measured wall-clock inference time or FPS on any hardware? For example, on a CPU or GPU, how much faster is BiMaCoSR compared to SinSR in practice? This would clarify if the theoretical speedup carries over given that current libraries might not be fully optimized for binary operations. 2. Full-Precision Layers: You chose to keep the first and last conv layers in full precision. Did you attempt to binarize those as well, and if so, how badly did it hurt performance? In other words, is this partial precision absolutely required for acceptable results? Understanding this can help gauge if future improvements (or better training techniques) might remove the need for any full-precision layers. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > Q2-1: Cite LoRA and KD A2-1: Thank you for your advice. We will cite LoRA and KD in the revised version. > Q2-2: One minor claim that could use more direct evidence is the suggestion that BiMaCoSR enables diffusion SR on resource-limited edge devices. A2-2: In Table 2 in the main paper, our BiMaCoSR takes 1.83 GFlops and 4.98 M parameters, which can be safely deployed and run efficiently on mobile devices, e.g., Snapdragon 8 Gen3 based devices. We will add this example in the revised version. > Q2-3: Clarify reported speedup vs compression ratios. A2-3: Currently, hardware support for binarized models is not well developed. Therefore, we are not able to provide actual-device running times. Nevertheless, we report the speedup ratio following previous research [1,3,4] and calculate the FLOPs needed for the inference process in the same way. Reaching the calculated speedup ratio is therefore an engineering task; as research, we focus more on the balance between performance and speedup ratio. > Q2-4: Include details on k selection for SMB. A2-4: We describe the k selection for SMB in Sec. 3.4. Specifically, we select the top k values with the highest absolute values of $\mathbf{W}_{\text{BMB}}^{\prime}$ to form the SMB branch. Each element is saved as a triple, i.e., (row index, column index, value). During inference, there are efficient algorithms to compute sparse matrix multiplication. > Q2-5: Actual Inference Speed A2-5: Please refer to A2-3. > Q2-6: Full-Precision Layers: You chose to keep the first and last conv layers in full precision. Did you attempt to binarize those as well, and if so, how badly did it hurt performance? In other words, is this partial precision absolutely required for acceptable results? A2-6: Experimentally, quantizing the first and last conv layers hurts performance significantly while yielding a negligible compression benefit, as shown in the table below.
This quantization scheme is also leveraged in many previous works [1,2,3,4].

| RealSR | LPIPS↓ | DISTS↓ | FID↓ | CLIP-IQA+↑ | Params |
|-----------------|--------|--------|-------|------------|---------|
| SinSR (FP) | 0.3635 | 0.2193 | 56.36 | 0.5736 | 118.59M |
| BiMaCoSR | 0.3375 | 0.2183 | 86.09 | 0.4800 | 4.98M |
| Fully Quantized | 0.3524 | 0.2423 | 91.17 | 0.4617 | 4.92M |

[1] Bin Xia, Yulun Zhang, Yitong Wang, Yapeng Tian, Wenming Yang, Radu Timofte, and Luc Van Gool. Basic binary convolution unit for binarized image restoration network. In ICLR, 2022. [2] Haotong Qin, Mingyuan Zhang, Yifu Ding, Aoyu Li, Zhongang Cai, Ziwei Liu, Fisher Yu, and Xianglong Liu. Bibench: Benchmarking and analyzing network binarization. In ICML, 2023. [3] Zheng Chen, Haotong Qin, Yong Guo, Xiongfei Su, Xin Yuan, Linghe Kong, and Yulun Zhang. Binarized Diffusion Model for Image Super-Resolution. In NeurIPS, 2024. [4] BiDM: Pushing the Limit of Quantization for Diffusion Models. In NeurIPS, 2024.
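A2-4's top-k selection and the review's storage-cost reasoning can be sketched in a few lines. This is a hypothetical NumPy illustration only; the matrix sizes, rank r, the 0.1% sparsity ratio, and the 32-bit index assumption are ours, not the paper's exact configuration:

```python
import numpy as np

def smb_topk(W, ratio=0.001):
    """Keep the top-k entries of W by absolute value as (row, col, value) triples."""
    k = max(1, int(W.size * ratio))
    flat = np.argsort(np.abs(W), axis=None)[-k:]       # flat indices of largest |W|
    rows, cols = np.unravel_index(flat, W.shape)
    return list(zip(rows.tolist(), cols.tolist(), W[rows, cols].tolist()))

def storage_bits(m, n, r, k, B=32, Bp=1):
    """Storage (in bits) of the three branches for an m x n weight matrix:
    binarized main branch (Bp=1 bit/weight), low-rank branch (two FP factors),
    sparse branch (k triples, assuming 32-bit row/col indices)."""
    bmb = m * n * Bp
    lrmb = (m * r + r * n) * B
    smb = k * (2 * 32 + B)
    return bmb, lrmb, smb

m, n, r = 512, 512, 8
k = int(m * n * 0.001)                                  # top 0.1% of entries
bmb, lrmb, smb = storage_bits(m, n, r, k)
full_fp = m * n * 32
print(f"FP: {full_fp} bits; BMB+LRMB+SMB: {bmb + lrmb + smb} bits "
      f"({(bmb + lrmb + smb) / full_fp:.1%} of FP)")
```

With these assumed sizes the three branches together stay well under 10% of the full-precision cost, consistent with the review's "negligible overhead" reading.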
Summary: This paper presents BiMaCoSR, a binary one-step diffusion model for efficient real-world image super-resolution (SR), which integrates 1-bit quantization and one-step distillation to address the high computational and memory costs of conventional diffusion models. To mitigate performance degradation caused by extreme compression, a dual-branch compensation mechanism is introduced: the Low-Rank Matrix Branch (LRMB), initialized via top-r singular value decomposition (SVD) to preserve low-frequency information, and the Sparse Matrix Branch (SMB), which maintains high-rank feature representations by absorbing extreme values in feature maps. These branches synergize with the binarized main branch (BMB) to enable decoupled learning of high- and low-frequency features. The model employs pretrained weights for initialization, with LRMB initialized through SVD decomposition and SMB via sparse value selection. Evaluations on RealSR and other benchmarks demonstrate superior performance over existing binarization methods in PSNR, SSIM, and LPIPS metrics. Ablation studies confirm the effectiveness of the dual-branch architecture and initialization strategies. This work proves that combining matrix compression techniques with one-step distillation enables efficient deployment of diffusion models on resource-constrained devices while preserving visual quality. Claims And Evidence: - The claim that "However, applying naive binarization will lead to catastrophic model collapse" is made without citing relevant sources. Moreover, the claim that adding skip connections can resolve this issue also lacks citation. This absence of supporting references undermines the solidity of the motivation presented. - The low-rank and high-rank of LRMB and SMB don't equate to low- and high-frequency information. 
For instance, the paper mentions using skip connections (like identity matrices) to access high-frequency information, yet edge detection operators can also convey such information. - The article claims to be the first one-step binarized diffusion model, but this claim is questionable. Many binarized diffusion works exist in related research, and one-step implementation can be achieved through various sampling methods. Methods And Evaluation Criteria: The method in the paper seems to be a general binarization approach for diffusion models, yet the rationale for validating it on the SR task is unclear. The evaluation protocol for the SR task itself is reasonable. Theoretical Claims: There are no relevant theoretical proofs in the main text, but rank-related parts are in the supplementary materials. I checked the logic there and found no errors. Experimental Designs Or Analyses: The experiment includes effect comparisons, key speed tests, and ablation studies on each module. However, it has flaws: no actual-device running time was reported for speed-related aspects; Table 3 of the ablation study lacks BMB + SMB results, even though SMB's sparse matrix might be low-rank. Supplementary Material: I checked the supplementary materials and have no questions. Relation To Broader Scientific Literature: No relevant papers were found. Essential References Not Discussed: No relevant papers were found. Other Strengths And Weaknesses: Strengths: + The paper's SR performance is excellent, surpassing that of the compared methods in many dimensions. Weaknesses: - The paper's performance is good, but the biggest question is why binarization is tested on SR tasks with an unclear starting point. Testing it on pure generative tasks might yield more insights. Other Comments Or Suggestions: No Questions For Authors: For binarized SR tasks, its performance is excellent, but I'm puzzled by some unmentioned methods. Inconsistent sampling steps aren't a valid reason for not comparing.
Could this binarization-designed structure be applied to pure generative tasks? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > Q1-1: The claims about naive binarization and skip connection lack citations. A1-1: It is well established that naive binarization leads to model collapse, and we provide supporting experiments in the table below. Moreover, we will add citations [1], [3], and [4] for naive binarization and [3] and [4] for skip connections in the revised version.

| | LPIPS↓ | FID↓ | CLIP-IQA+↑ | Param |
|--------------------------|--------|--------|------------|-------|
| BMB (Naïve Binarization) | 0.4141 | 110.15 | 0.4325 | 3.69M |
| BiMaCoSR | 0.3375 | 86.09 | 0.4800 | 4.98M |

> Q1-2: The low-rank and high-rank of LRMB and SMB don't equate to low- and high-frequency information. A1-2: Yes. We clarify that the low rank and high rank of LRMB and SMB are not equivalent to low- and high-frequency information. Could you please specify the corresponding line number that you are referring to? > Q1-3: The article claims to be the first one-step binarized diffusion model, but this claim is questionable. A1-3: In current research, one-step distillation is the only successful way to obtain a one-step diffusion model; simply changing the sampling method often leads to model collapse [5,6]. Both one-step distillation and binarization are model compression techniques, and their combination does not exist in current research. Therefore, to the best of our knowledge, BiMaCoSR is the first one-step binarized diffusion model. > Q1-4: The method in the paper seems to be a general binarization approach for diffusion models, yet the rationale for validating it on the SR task is unclear. A1-4: We are working on the binarization of one-step diffusion models for the SR task to address the needs of industry. Industrial companies urgently need to compress SR diffusion models so that these excellent models can be deployed on mobile devices to improve the imaging process, whereas general diffusion models have no such application scenario and are usually deployed on the cloud.
In order to solve this practical problem in industry, we focus on the SR task, i.e., a binarized one-step diffusion model for SR. Nevertheless, we provide results of BiMaCoSR compared with BiDM and ReActNet on general diffusion models (DDPM) in the following table. This result also supports our superior performance.

| Methods | FID | param |
|----------|---------|--------|
| FP | 16.8274 | 35.72M |
| BiDM | 38.5275 | 4.73M |
| ReActNet | 76.8448 | 1.12M |
| BiMaCoSR | 37.1792 | 2.02M |

> Q1-5: No actual-device running time was designed for speed-related aspects. A1-5: Currently, hardware support for binarized models is not well developed. Therefore, we are not able to provide actual-device running times. Nevertheless, we report the speedup ratio following previous research [1,4,6] and calculate the FLOPs needed for the inference process in the same way. Reaching the calculated speedup ratio is therefore an engineering task; as research, we focus more on the balance between performance and speedup ratio. > Q1-6: Table 3 of the ablation study lacks BMB + SMB results, even though SMB's sparse matrix might be low-rank. A1-6: Thank you for your advice. We provide the result of BMB+SMB below and we will add it in the revised version.

| RealSR | PSNR | SSIM | LPIPS | CLIP-IQA+ |
|-----------|---------|--------|--------|-----------|
| BMB + SMB | 26.3037 | 0.7466 | 0.3901 | 0.4565 |

> Q1-7: ... the biggest question is why binarization is tested on SR tasks with an unclear starting point. Testing it on pure generative tasks might yield more insights. A1-7: We explain the reason and result in A1-4. Please refer to A1-4 for a detailed explanation. > Q1-8: Could this binarization-designed structure be applied to pure generative tasks? A1-8: Yes, we applied BiDM and our BiMaCoSR to DDPM on CIFAR-10, and the result is provided in A1-4. [1] Basic binary convolution unit for binarized image restoration network. In ICLR, 2022.
[2] Bibench: Benchmarking and analyzing network binarization. In ICML, 2023. [3] OneBit: Towards Extremely Low-bit Large Language Models. In NeurIPS, 2024. [4] Binarized Diffusion Model for Image Super-Resolution. In NeurIPS, 2024. [5] SinSR: Diffusion-Based Image Super-Resolution in a Single Step. In CVPR, 2024. [6] BiDM: Pushing the Limit of Quantization for Diffusion Models. In NeurIPS, 2024.
Inverse Reinforcement Learning with Switching Rewards and History Dependency for Characterizing Animal Behaviors
Accept (poster)
Summary: The paper introduces Switching Inverse RL (SWIRL), an inverse reinforcement learning framework for characterizing animal behaviors. In this problem setting, the goal is to infer reward functions and policies from animal behavior trajectories. To achieve this, SWIRL introduces two main design choices: time-varying rewards controlled by latent modes and biologically plausible history dependency. The time-varying rewards allow the model to capture switching behaviors. The history dependency occurs at two levels: the decision level (transitioning from one hidden mode to another conditioned on the previous state) and the action level (history-conditioned policies and rewards). The framework is optimized with an expectation-maximization procedure to recover the parameters for the hidden transition kernel, the policy, and the reward function. The paper compares SWIRL to prior IRL baselines and ablations of SWIRL on three sets of experiments: a 5x5 gridworld environment, a labyrinth dataset where water-deprived mice move freely based on varying internal goals, and a dataset where mice wandered an empty arena without explicit rewards. Across these experiments, they find SWIRL to outperform baselines in reward correlation, log-likelihood, and segment accuracy. Claims And Evidence: The central claim of this paper is that incorporating switching rewards and history dependency results in better inverse RL methods for characterizing animal behaviors. This claim is supported by three sets of experiments with tabular state and action spaces: - The first is a 5x5 gridworld environment where the underlying reward is accessible. Compared to baselines with constant rewards and less history dependence, SWIRL scores the highest correlation between predicted and ground truth rewards, log-likelihood of the held-out test data under the policy, and accuracy of mode prediction. - The second set of experiments is on a water-restricted labyrinth dataset. 
Since the ground truth rewards and behavior modes are not accessible, the paper only provides a quantitative comparison of the data log-likelihood under the policy, where SWIRL outperforms the baselines. They qualitatively visualize the mode switching in trajectories as well as the reward maps. Both indicate that the method learns a reasonable switching reward. - The third set of experiments is on a free-space wandering dataset. The held-out log-likelihoods and reward visualizations support their claim. They further plot the log-likelihood as a function of the number of hidden modes and show that SWIRL performance improves with more hidden modes, whereas removing the state dependence leads to degrading performance as the number of hidden modes increases. Methods And Evaluation Criteria: The paper proposes an inverse RL framework for characterizing animal behaviors. Since animal behaviors are likely controlled by time-varying objectives (e.g., find water when thirsty, otherwise go home), the paper proposes to infer a switching reward conditioned on latent modes. Moreover, animal behaviors are influenced by the history of observations, and thus the paper introduces history dependence to the hidden transition kernel, the reward, and the policy. This framework is optimized by an expectation-maximization procedure, which iteratively performs RL on the current reward estimate, uses the policy to estimate an evidence lower bound on the data likelihood, and updates the parameters via gradient descent. The proposed method is biologically plausible and mathematically sound. The evaluation criteria vary based on the problem setting. When the animal behavior data is synthetically generated by following a time-varying reward, then it is feasible to compare the correlation between predicted and ground truth rewards and the accuracy of mode switching. Otherwise, if one only has access to the dataset, then data likelihood under the policy is the only feasible evaluation criterion.
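For concreteness, the held-out data log-likelihood for a latent-mode model of this kind is typically obtained with an HMM-style forward recursion. Below is a minimal log-space sketch; the code and variable shapes are generic assumptions, not the authors' implementation, and the per-step state-action log-probabilities under each mode are summarized in `log_emit`:

```python
import numpy as np

def _lse(x):
    """Numerically stable log-sum-exp over the last axis."""
    m = np.max(x, axis=-1, keepdims=True)
    return np.squeeze(m + np.log(np.sum(np.exp(x - m), axis=-1, keepdims=True)), axis=-1)

def forward_loglik(log_pi0, log_A, log_emit):
    """HMM-style forward recursion in log space.
    log_pi0:  (Z,)   initial mode log-probabilities
    log_A:    (Z, Z) transitions, log_A[i, j] = log P(z_t = j | z_{t-1} = i)
    log_emit: (T, Z) log-prob of the observed (state, action) at step t under mode z
    Returns log p(data) = logsumexp_z alpha_{T, z}."""
    alpha = log_pi0 + log_emit[0]
    for t in range(1, log_emit.shape[0]):
        # alpha_new[j] = logsumexp_i(alpha[i] + log_A[i, j]) + log_emit[t, j]
        alpha = _lse((alpha[:, None] + log_A).T) + log_emit[t]
    return _lse(alpha)
```

Summing per-trajectory values of `forward_loglik` over a held-out set gives the test log-likelihood metric used for model comparison.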
Theoretical Claims: The paper did not make theoretical claims. Experimental Designs Or Analyses: As discussed in the "Claims And Evidence" section, the experimental design is sound and supports the claims made in this paper. Supplementary Material: I reviewed all parts of the supplemental material. Relation To Broader Scientific Literature: This work has implications for both the neuroscience community and the machine learning community, offering a useful tool for understanding animal behaviors and a flexible framework for general inverse RL problems. Essential References Not Discussed: The paper provides sufficient context for the reader to understand its contributions. Other Strengths And Weaknesses: **Strengths** 1. The paper introduces a novel inverse RL framework for characterizing animal behaviors. The proposed method models switching rewards using a hidden mode variable and incorporates history context into the hidden transition kernel, the policy, and the reward in a biologically plausible manner. 2. The proposed method shows strong empirical results across three sets of experiments. 3. The visualizations of learned rewards and mode transitions are particularly elucidating. **Weaknesses** 1. The experiments in this paper feature tabular state and action spaces. The implications for continuous domains remain unclear. 2. The proposed method is computationally heavy and requires manual tuning of the number of hidden modes. 3. The method description is not quite clear. See questions below. Other Comments Or Suggestions: 1. I suggest adding a more detailed description of evaluation metrics such as log-likelihood and segmentation accuracy. Currently, they are briefly introduced on lines 250-256 and lack clarity. 2. I suggest adding a more detailed description of the evaluation domains. For example, describe the state and action spaces for each environment used for evaluation. Questions For Authors: - How is the reward function optimized?
Looking at Algorithm 1 in the appendix and the auxiliary function (Eqn 5, 6, 7), it seems the gradient of the reward function is not directly passed through (because of the soft Q iteration). Do you backpropagate the gradient for the reward function through the Q iteration? If so, how many Q iterations do you use? - Why do you apply different optimizers to the transition kernel (L-BFGS) and the reward function (Adam)? Is it an empirical design choice? - Is there a reason why ARHMM and rARHMM are not included in experiments 1 and 2? ## Updates After Rebuttal The authors have adequately addressed my questions. I will maintain my score. Code Of Conduct: Affirmed. Overall Recommendation: 3
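The mode-segmentation accuracy discussed in this review is conventionally computed by decoding the most likely hidden-mode sequence with the Viterbi algorithm and comparing it to the ground-truth modes per timepoint. A generic sketch, with assumed variable shapes rather than the authors' code:

```python
import numpy as np

def viterbi(log_pi0, log_A, log_emit):
    """Most likely hidden-mode sequence argmax_{z_{1:T}} p(z_{1:T}, data).
    log_pi0:  (Z,)   initial mode log-probabilities
    log_A:    (Z, Z) transition log-probabilities
    log_emit: (T, Z) per-step emission log-probabilities"""
    T, Z = log_emit.shape
    delta = log_pi0 + log_emit[0]          # best log-score ending in each mode
    back = np.zeros((T, Z), dtype=int)     # backpointers
    for t in range(1, T):
        scores = delta[:, None] + log_A    # scores[i, j]: come from i, land in j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_emit[t]
    z = np.zeros(T, dtype=int)
    z[-1] = delta.argmax()
    for t in range(T - 1, 0, -1):          # trace the best path backwards
        z[t - 1] = back[t, z[t]]
    return z
```

Segmentation accuracy is then simply the fraction of timepoints where the decoded mode matches the ground-truth mode.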
Rebuttal 1: Rebuttal: We greatly appreciate the time and effort reviewer 5en4 dedicated to analyzing our work and providing such constructive feedback. We are pleased that the reviewer recognized the novelty of our work, the technical soundness, its implications for both neuroscience and ML community and the experiment design & presentation. Below, we address each of the questions and concerns: **1. Scalability concern:** Please refer to section 1 of our reply to Reviewer 9ThW. **2. Evaluation metrics:** We will add the detailed description of evaluation metrics in our next revision. Test log-likelihood is computed from the forward variable, where $\log p(s_{1:T}, a_{1:T}) = \log \sum_z \alpha_{T, z}$, and the computation of forward variable $\alpha$ is described in Appendix A.2. For test hidden mode segmentation accuracy, we use the Viterbi algorithm to predict the hidden mode of each timepoint and compute the percentage of correct hidden mode prediction compared to ground truth data. For reward correlation, we compute the Pearson correlation between learnt reward and ground truth reward. **3. Experiment setup:** We will add details of each environment in our next revision. For the state and action space: 5x5 Gridworld: State space: Discrete(25); Action space: Discrete(5). The 5 actions are ‘up’, ‘left’, ‘down’, ‘right’, ‘stay’. Labyrinth: State space: Discrete(127); Action space: Discrete(4). The labyrinth has a binary-tree structure and the 4 actions are ‘move in left’, ‘move in right’, ‘move out from leaf node’, ‘move out from non-leaf node’. Spontaneous Behavior: State space: Discrete(9); Action space: Discrete(9). Each state is a MoSeq syllable that defines a behavior motif. The 9 actions are just the next state the agent transits into. **4. How is the reward function optimized?** The soft Q iteration is differentiable and we backpropagate the gradient for the reward function through the Q iteration. 
We utilized 100 iterations and found that the soft Q iteration typically converged well before 100 iterations in our experiments. We will add this discussion in our next revision. **5. Why do you apply different optimizers to the transition kernel (L-BFGS) and the reward function (Adam)?** This is an empirical choice. A rationale could be that, in the M-step, the transition kernel typically yields a smoother loss surface than the reward function. With approximate second-derivative information, quasi-Newton optimizers like L-BFGS can take advantage of the smooth curvature, leading to faster and more stable convergence. In contrast, the reward function often has a more complex loss landscape, where Adam tends to perform better due to its robustness and adaptive step sizes. We will add this discussion in our next revision. **6. Is there a reason why ARHMM and rARHMM are not included in experiments 1 and 2?** ARHMM/rARHMM models the emission probability as $P(s'|s, z)=\sum_a P(s'|s, a)P(a|s, z)$, where the marginalization over the action space makes the log-likelihood incomparable to models like SWIRL in the general case. Experiment 3 (spontaneous behavior) is the only experiment where the action is effectively the same as the next state, making it possible to compare ARHMM/rARHMM with SWIRL by test LL. We again thank the reviewer for their effort and humbly request that the reviewer consider raising their score if the above reply adequately addresses their concerns.
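The differentiable soft Q iteration described in the answer above can be sketched for the tabular case. This is a generic MaxEnt-IRL-style sketch under assumed shapes, not the authors' implementation; in practice the loop would be unrolled under an autograd framework so gradients flow back to the reward parameters:

```python
import numpy as np

def soft_q_iteration(R, P, gamma=0.95, n_iters=100):
    """Tabular soft (MaxEnt) Q iteration.
    R: (S, A) reward table; P: (S, A, S) transition probabilities.
    Every operation is smooth in R, so an autograd framework can
    backpropagate through the unrolled loop to the reward parameters.
    Returns Q (S, A) and the soft-optimal policy pi(a|s) = exp(Q - V)."""
    Q = np.zeros_like(R)
    for _ in range(n_iters):
        m = Q.max(axis=1, keepdims=True)
        # V(s) = logsumexp_a Q(s, a), computed stably
        V = (m + np.log(np.exp(Q - m).sum(axis=1, keepdims=True)))[:, 0]
        Q = R + gamma * (P @ V)            # soft Bellman backup
    m = Q.max(axis=1, keepdims=True)
    V = m + np.log(np.exp(Q - m).sum(axis=1, keepdims=True))
    pi = np.exp(Q - V)                     # rows sum to 1 by construction
    return Q, pi
```

With `gamma < 1` the backup is a contraction, which matches the observation that the iteration converges well before 100 steps in small tabular problems.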
Summary: This paper addresses the limitation of traditional IRL, which assumes rewards depend only on the current state, making it insufficient for modeling long-term, history-dependent decision-making in animals. To capture this dependency, the paper introduces SWIRL, an IRL framework that models behavior as transitions between short-term decision-making processes, each governed by a distinct reward function. SWIRL infers these processes and their transitions using history-dependent reward modeling. The authors conducted experiments on simulated and real-world animal behavior datasets. Claims And Evidence: The paper makes well-motivated claims about the necessity of incorporating history-dependent rewards in IRL to model long-term animal decision-making. However, the paper does not provide a detailed analysis of how SWIRL scales to larger environments or higher-dimensional state spaces, which is crucial given its reliance on history-dependent modeling. Empirical results on computational efficiency would strengthen this claim. Overall, the paper provides substantial evidence to support its core claims but could improve in these areas to further solidify its contributions. Methods And Evaluation Criteria: 1. The proposed method increases the input state size by incorporating multiple past states, resulting in larger network inputs, higher computational complexity, and slower training and inference. 2. Hierarchical RL decomposes complex decision-making into multiple levels of abstraction. A high-level policy selects sub-goals or temporally extended actions, while a low-level policy executes fine-grained actions to achieve them. In SWIRL, the hidden modes resemble the sub-task structure in HRL, as they dictate different behavioral phases. However, unlike HRL, where the hierarchy is explicitly defined, SWIRL infers latent behavioral modes from data. 
A direct comparison with HRL-based methods—particularly goal-conditioned RL or options-based methods—could clarify whether SWIRL provides advantages in learning structured behaviors without predefined hierarchical policies. 3. POMDPs explicitly model uncertainty by maintaining a belief state over hidden variables. POMDP-based approaches have been widely used to model decision-making under partial observability, where the agent must infer unobserved environmental factors. SWIRL introduces hidden modes, which serve as an internal latent state governing behavior. Given this similarity, a comparison with POMDP-based RL methods could provide insights into whether belief-state tracking could enhance SWIRL’s performance. 4. The experiments only evaluate small history lengths, which may not be sufficient for capturing long-term dependencies in sequential decision-making. Given that the paper aims to model history-dependent decision-making, it could include a discussion on the feasibility of using alternative approaches, such as RNNs or Transformers, to learn sequential dependencies more effectively. 5. Since the reward network must learn multiple distinct reward functions corresponding to different hidden modes, there is a potential issue when some hidden modes appear infrequently. This could lead to underfitting in the reward networks for rarely occurring modes. Is there any discussion on how to mitigate this issue, such as through data augmentation, regularization, or constraints on reward function learning? Theoretical Claims: N/A Experimental Designs Or Analyses: 1. Since incorporating past states increases input size and computational complexity, it would be valuable to assess the efficiency of SWIRL in larger environments. A computational runtime analysis or an empirical study on training efficiency would provide useful insights. 2. The paper primarily compares SWIRL against history-agnostic IRL models. 
While this highlights the benefit of incorporating history dependency, additional baselines, such as RNN-based RL methods [1, 2] and Transformer-based RL models [3, 4], could provide a stronger contextual comparison in handling long-term dependencies and sequential decision-making. 3. The reward network must learn multiple distinct reward functions for different hidden modes. However, if certain modes appear infrequently, they may not receive sufficient training, leading to potential underfitting. A sensitivity analysis on the frequency of hidden modes and its impact on reward learning would help assess robustness. [1] Memory-based Control with Recurrent Neural Network. [2] Deep Recurrent Q-Learning for Partially Observable MDPs. [3] Decision Transformer: Reinforcement Learning via Sequence Modeling. [4] Offline Reinforcement Learning as One Big Sequence Modeling Problem. Supplementary Material: Yes. I reviewed all parts of the supplementary material. Relation To Broader Scientific Literature: 1. The relationship between SWIRL and broader scientific literature, particularly Hierarchical RL and POMDPs, could be further clarified, as SWIRL’s hidden modes resemble the sub-task structures in HRL and the latent states in POMDPs, making a direct comparison with these methods valuable for distinguishing its contributions. 2. This paper models history dependency by explicitly incorporating past states into the input, increasing the input size and computational cost. However, alternative sequence modeling methods, such as RNNs and Transformers, have been extensively studied for capturing long-term dependencies in reinforcement learning. Discussing whether such architectures could complement or improve SWIRL’s performance would provide valuable context and better position the work within the broader literature on history-aware decision-making. Essential References Not Discussed: POMDPS: [1] Learning Predictive State Representations. 
[2] Inverse Reinforcement Learning in Partially Observable Environments. Hierarchical RL: [3] Data-Efficient Hierarchical Reinforcement Learning. RL + RNN: [4] Memory-based Control with Recurrent Neural Network. [5] Deep Recurrent Q-Learning for Partially Observable MDPs. RL + Transformers: [6] Decision Transformer: Reinforcement Learning via Sequence Modeling. [7] Offline Reinforcement Learning as One Big Sequence Modeling Problem. Other Strengths And Weaknesses: See comments. Other Comments Or Suggestions: 1. According to the submission guidelines: **Section headings should be numbered, flush left, and set in 11 pt bold type with the content words capitalized. Leave 0.25 inches of space before the heading and 0.15 inches after the heading. Similarly, subsection headings should be numbered, flush left, and set in 10 pt bold type with the content words capitalized. Leave 0.2 inches of space before the heading and 0.13 inches afterward.** However, it seems that the author has modified these formatting rules. 2. The full name of IRL appears multiple times throughout the paper, including in Section 1 (Introduction), Section 3.2, and Discussion. It is recommended to define it only once at the first occurrence and use the abbreviation consistently thereafter. 3. In line 122, the term "autoregressive process (ARHMM)" appears, while in line 189, it is written as "autoregressive hidden Markov model (ARHMM)." Please clarify which is the correct full name for ARHMM to maintain consistency. Questions For Authors: Although the discussion section mentions POMDPs, I believe the statement that "SWIRL can be extended to POMDPs" is not entirely accurate. POMDPs represent a parallel approach rather than a direct extension of SWIRL. It would be more appropriate to compare SWIRL directly with existing POMDP-based methods rather than framing it as a future extension. Could the authors clarify their perspective on this distinction? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate the time and effort reviewer CRRd dedicated to analyzing our work. Below, we address each of the questions and concerns: **1. Hierarchical RL, RL+RNN and POMDPs:** We will refer to the works listed in the “Essential References Not Discussed” section using bracketed citations in this response. &nbsp;&nbsp;**1.1** [1][3][4][5][6][7]: We thank the reviewer for pointing out the RL methods with history dependency or multiple reward functions. However, it is worth noting that our method is solving the Inverse RL problem (given expert demonstrations, trying to recover the expert’s policy and the reward function) instead of the RL problem (given a reward function, trying to find the optimal policy). IRL is a much harder problem, and its goal is very different from that of RL. As a result, it is not feasible to apply those RL methods as baselines for our method. &nbsp;&nbsp;**1.2** [2]: It is worth noting that [2] infers reward functions from expert demonstrations while assuming the Observation function P(z|s, a) is already known—a condition that does not hold in our case. To the best of our knowledge, there is no existing Inverse POMDP method that can effectively address our problem. We recognize the need to discuss related POMDP literature and will add such a discussion in the next revision. **2. Scalability and longer history:** Please refer to section 1 and section 4 of our reply to Reviewer 9ThW. We appreciate the idea of using RNNs or Transformers to learn sequential dependencies more effectively and will add it in the next revision as a future direction. **3. Hidden mode with infrequent occurrence:** In cases where a hidden mode occurs infrequently, regularization or constraints can be incorporated to improve reward recovery. For instance, if the reward is expected to be sparse, L1 regularization can be used; if large reward coefficients are undesirable, L2 regularization can help.
When the occurrence of a mode is too low and prior knowledge is too limited, a common practice is to reduce the number of hidden modes, effectively merging the rare mode with a more frequent mode with a similar reward function. **4. Section heading issue:** We apologize for the heading space issue and will correct it in our next revision. **5. Full name of IRL appears multiple times:** We will use the abbreviation consistently in our next revision. **6. ARHMM abbreviation issue:** ARHMM should be the abbreviation of the autoregressive hidden Markov model. We will address this issue in our next revision. **7. Response to Questions For Authors:** Thank you for the thoughtful comment. Upon reflection, we agree that the original statement may have been imprecise. While SWIRL is not a POMDP method in the formal sense (e.g., modeling belief states or latent dynamics explicitly), it does address non-Markovian behavior by augmenting the state space with recent history (e.g., using {s_t, s_{t-1}, s_{t-2}, …}). This is a commonly used strategy to handle partial observability or non-Markovianity—similar in spirit to how LSTMs or memory-based policies are used in POMDP settings. So while SWIRL is not derived from a POMDP framework per se, it shares key characteristics with POMDP-appropriate methods in its use of temporal context. When we wrote that it “can be extended to POMDPs,” we were referring to the potential for a more principled approach—e.g., explicitly modeling latent states or belief updates—to further generalize the method. We will revise the discussion to clarify this distinction and avoid the implication that our current method is directly extendable in a formal POMDP sense. We again thank the reviewer for their effort and humbly request that the reviewer consider raising their score if the above reply adequately addresses their concerns. --- Rebuttal Comment 1.1: Comment: Thank you for the clarification.
While I understand that your method focuses on IRL rather than RL, my suggestion regarding RNN or Transformer-based models was motivated by your claim of modeling history dependency, not by a desire for RL baselines. Currently, history is handled via fixed-length state augmentation (up to L=4), where policy and reward depend on a short sequence of past states. While this may capture some short-term context, it lacks the expressive power of true sequence models like RNNs or Transformers, which better model long-term, variable-length dependencies. Since history modeling is a key contribution of the paper, the current implementation feels overly simplistic. I would be interested to hear the authors’ perspective on this point, particularly whether they see potential for incorporating more expressive sequence models into the SWIRL framework, or have specific reasons for preferring the current design. Regarding hidden modes with infrequent occurrence, I appreciate the suggestions regarding regularization and mode merging; my concern was primarily about the robustness of reward learning under mode imbalance. A sensitivity analysis on mode frequency and its effect on reward learning would provide more concrete evidence of the method’s reliability under imbalanced settings. --- Reply to Comment 1.1.1: Comment: We sincerely thank reviewer CRRd for their acknowledgement of the distinct difference between RL and IRL. Following the rebuttal comment, we conducted additional experiments and address their two concerns below: **1. Use an RNN/transformer-based policy model** &emsp;&emsp;1.1 We acknowledge the reviewer’s concern. Although previous usage of RNN/transformer-based policy models is limited to the RL literature, we are aware that some recent IRL work concerning the fine-tuning of LLMs has essentially used transformer-based policy models [1][2]. &emsp;&emsp;1.2 It is possible to use an RNN/transformer-based policy model in SWIRL.
We conducted an additional model-free SWIRL experiment following the same setup as Appendix D.1 but with a **transformer-based policy model**. To facilitate interpretation and visualization of the learned reward function, we constrained the transformer’s input sequence length to be 2. Our results show that the transformer-based model-free SWIRL achieves reasonable performance. Although the recovered reward maps appear noisier compared to previous results (Fig. 2A and Fig. 8 in the paper), they still successfully identify the key high-reward regions. We visualize the transformer-based SWIRL results in https://anonymous.4open.science/r/SWIRL_rebuttal-C46B/sim_result_iql_transformer.pdf. &emsp;&emsp;1.3 The choice between using the model-based SWIRL presented in the main body of our paper and the model-free variant depends on the specific use case. As discussed in Section 1.1 of our response to Reviewer 9ThW, IRL is inherently a challenging problem. Performing IRL in a scalable, model-free manner often sacrifices reward recovery accuracy, even though the recovered policy could perform well. Moreover, employing RNN/transformer-based policy models, while allowing compatibility with variable-length histories, also makes the interpretation of the inferred reward function hard. Therefore, for neuroscience applications that involve moderate-sized state-action spaces and prioritize reward recovery and interpretability, we recommend using the model-based SWIRL implementation. That said, we also present the above additional experiment, providing concrete evidence that the model-free SWIRL described in Appendix D.1 can be implemented with a transformer-based policy and achieve reasonable performance. For applications that prioritize scalability and focus primarily on policy recovery instead of reward recovery, the transformer-based model-free SWIRL can be an ideal choice. **2.
Sensitivity analysis on mode frequency and its effect on reward learning** We conducted an additional experiment with SWIRL to examine how low mode frequency affects reward learning. In this experiment, we simulated a 5x5 gridworld environment featuring three hidden modes: water, home, and explore. Both water and home were associated with sparse reward maps, each with different high-reward states, while explore had a dense, uniform reward map. We generated three datasets with those switching reward maps, each consisting of 100 trajectories with a trajectory length of 500 steps (50000 timepoints in total). In the first dataset, the home mode occurred with low but still normal frequency (5,999 timepoints; 12% of the data). In the second dataset, home mode frequency was further reduced (1,202 timepoints; 2.4%), and in the third dataset, home mode occurred extremely infrequently (332 timepoints; 0.66%). We applied SWIRL to each dataset, both with and without regularization (reg). Without reg, SWIRL performed well when the home mode occurred at 12%, but its performance degraded significantly at lower frequencies (2.4% and 0.66%). With reg (applying L1 reg to two of the reward maps and L2 reg to the third), SWIRL was able to recover reasonable reward maps even at 2.4% occurrence. However, when the home mode appeared in only 0.66% of the data, even reg could not achieve reasonable reward recovery. These results demonstrate that SWIRL is capable of handling reasonably low mode occurrences when appropriate reg is applied. We visualize the results in https://anonymous.4open.science/r/SWIRL_rebuttal-C46B/sim_sensitivity_analysis.pdf. (A) True reward maps. (B) Plot illustrating the Pearson correlation between the true and discovered reward maps over 10 runs. The x-axis represents different datasets (12%/2.4%/0.66% home modes) and regularizations (without/with reg). ‘Overall’ is the correlation across all three hidden modes. ‘Home mode’ is the correlation for just the home mode.
(C) SWIRL-discovered reward maps. We again thank the reviewer for their efforts and humbly request that the reviewer consider raising the score, as we have clarified and addressed all the concerns they raised. **Reference:** [1] Wulfmeier et al. "Imitating language via Scalable Inverse Reinforcement Learning." NeurIPS 2024 (2024). [2] Li et al. “Joint Reward and Policy Learning with Demonstrations and Human Feedback Improves Alignment.” ICLR 2025 (2025).
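The regularization strategy used in the sensitivity analysis above (L1 penalties on the sparse 'water' and 'home' maps, an L2 penalty on the dense 'explore' map) can be sketched as follows. This is a minimal toy illustration with made-up reward shapes, penalty weights, and a placeholder data loss, not the actual SWIRL objective or code:

```python
import numpy as np

def regularized_loss(data_loss, reward_maps, l1_modes, l2_modes,
                     lam_l1=0.01, lam_l2=0.01):
    """Add L1 penalties to sparse reward maps (e.g. 'water', 'home')
    and an L2 penalty to the dense map (e.g. 'explore')."""
    penalty = 0.0
    for m in l1_modes:                      # encourage sparsity
        penalty += lam_l1 * np.abs(reward_maps[m]).sum()
    for m in l2_modes:                      # discourage large coefficients
        penalty += lam_l2 * np.square(reward_maps[m]).sum()
    return data_loss + penalty

# Toy 5x5 grid reward maps for three hidden modes.
maps = {
    "water":   np.zeros((5, 5)),
    "home":    np.zeros((5, 5)),
    "explore": np.full((5, 5), 0.1),
}
maps["water"][0, 4] = 1.0                   # single sparse high-reward state
maps["home"][4, 0] = 1.0

loss = regularized_loss(2.5, maps, l1_modes=["water", "home"],
                        l2_modes=["explore"])
```

In an actual fitting loop, this penalized objective would replace the plain data loss inside each M-step update of the reward parameters.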
Summary: In this work, the Authors extend the IRL framework, designed previously to consider multiple goal maps in real-world agents concurrently (Ashwood et al & Zhu et al), by explicitly incorporating history-dependent policies and rewards into the model. Using this new framework, the Authors model several standard datasets in the field (Rosenberg et al & Markowitz et al) where they show overall improvement in performance. The relation of the proposed method to other models, viewed as its edge cases, is discussed. Claims And Evidence: The main claims that (1) the consideration of history-dependent policies and rewards in this specific IRL framework is novel and (2) that the consideration of the history-dependency improves the model’s predictive power on the (Rosenberg et al) dataset are correct (I elaborate in the sections below). While history-dependency has indeed been considered in other prior work, including the models of neuroscience-relevant data (e.g. by “hacking” the definition of the Markovian state through defining it as a sequence of the past states), the addition of such historic dependency to the IRL framework is novel. Methods And Evaluation Criteria: The methods and evaluation criteria here follow the well-established pattern used previously in prior works by Ashwood et al & Zhu et al. Specifically, the work here uses the datasets by Rosenberg et al & Markowitz et al, which have been shown to be adequate for this task and have been overall widely used in the field. As such, the methods and criteria here do make sense for the task at hand. Theoretical Claims: The methods used here make perfect sense for the task. The use of the E-M (Forward-Backward) algorithm has been previously argued to be the go-to tool for the POMDP-like problems, such as the problem here. I’ve scanned through the derivations in the Appendix; they looked overall correct. Experimental Designs Or Analyses: The experimental design here follows the standard practice in the field. 
Apart from the use of standard datasets, the work provides a reasonable set of baseline models and ablations. Supplementary Material: I’ve scanned through the entire supplementary material. Relation To Broader Scientific Literature: While the Authors correctly cite prior work (including works by Ashwood et al and Zhu et al) and correctly discuss the novelty of this new work in relation to prior work (the consideration of the history-dependency), it is specifically this part that is the most concerning for me. The addition of the history dependence is, arguably, a minor tweak upon Zhu et al’s work. Zhu et al’s work, in turn, was a minor (though a much bigger) tweak upon Ashwood et al’s work and, as such, to the best of my knowledge, hasn’t made it to one of these conferences. While adding the historical dependency is an important step in the right direction, this paper would’ve been much stronger if it had provided novel insights into the existing data, however, the qualitative conclusions for the Rosenberg et al & Markowitz et al data match those of the previous work. Thus, while the work appears completely correct to me, the scope of the novelty in the current format sadly precludes me from recommending the acceptance of this paper at this point. Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: What are the novel, unique insights that can be obtained from the considered and/or other data using the proposed algorithm but not the previous algorithms? ______________________ Post-rebuttal: Upon the discussion with the Authors, I agree that 1) considering the history dependency is important and 2) that not having pre-clustered data has the potential to make a difference. Thus, I agree that the score can be raised and the work qualifies for a weak accept. Code Of Conduct: Affirmed. Overall Recommendation: 3
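The E-M (forward-backward) machinery mentioned in this review can be illustrated for a generic discrete hidden Markov model. The sketch below shows only the scaled forward pass that computes the data log-likelihood (the quantity monitored in EM); the transition and emission matrices are toy values, not anything from the paper:

```python
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """Scaled forward pass for a discrete HMM.
    obs: observation indices; pi: initial mode distribution;
    A[i, j]: P(mode j | mode i); B[i, o]: P(obs o | mode i)."""
    alpha = pi * B[:, obs[0]]
    c = alpha.sum()
    log_lik, alpha = np.log(c), alpha / c
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # propagate, then weight by emission
        c = alpha.sum()                 # rescaling avoids numerical underflow
        log_lik, alpha = log_lik + np.log(c), alpha / c
    return log_lik

# Toy two-mode example: with uniform initial and transition probabilities
# the observations are independent, so P(obs) = 0.5 * 0.5 = 0.25.
pi = np.array([0.5, 0.5])
A = np.array([[0.5, 0.5], [0.5, 0.5]])
B = np.array([[0.9, 0.1], [0.1, 0.9]])
ll = forward_log_likelihood([0, 1], pi, A, B)
```

A matching backward pass would yield the posterior mode probabilities used in the E-step.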
Rebuttal 1: Rebuttal: We greatly appreciate the time and effort reviewer FWjW dedicated to analyzing our work. We are sorry that the key contributions of SWIRL may have been misunderstood by the reviewer and apologize for not presenting them more clearly in the paper. Below we will provide a thorough discussion on the relationship between Ashwood et al., Zhu et al. and SWIRL and clarify the novel contribution of SWIRL: **1. Comparison among Ashwood et al., Zhu et al. and SWIRL** &nbsp;&nbsp;**1.1** **Ashwood et al.** proposed the reward function as a dynamical combination of a number of reward maps. The inference method proposed in Ashwood et al. assumes the reward function changing dynamics is the **same for every trajectory**, which greatly limits its applicability. As discussed in Sec. 4.2 of our paper, Ashwood et al. used the **short, pre-clustered** and **stereotyped** labyrinth trajectories **trialized** and **sampled** from the original long trajectories. In their dataset, for water-restricted mice (with a water reward in the labyrinth), each trajectory is **very short (20 steps)** and has **very similar reward changing dynamics and state visitation sequence** (always starting from ‘home’ states, e.g. 1,0, going to the water port, then returning ‘home’). They then conduct a **separate** experiment on another dataset of water-unrestricted mice (with no water reward in the labyrinth) to find the ‘explore’ map, where the trajectories are also **very short, pre-clustered** and **stereotyped**. &nbsp;&nbsp;**1.2** In **Zhu et al.**, the agent is switching between a number of reward functions, and this model can handle trials with different reward switching dynamics. However, Zhu et al. only tested it on **the same labyrinth dataset as Ashwood et al.** and did not really show this ability. Furthermore, in Fig. 3e of our paper, I-1 (Zhu et al.) has a lot of unreasonable fast switchings between hidden modes, which shows that the method of Zhu et al.
**cannot** accurately recover the reward switching dynamics in the **long, non-stereotyped raw** behavior data. Also, Zhu et al. **cannot** recover the reward with **action-level history dependency** (e.g. Fig. 3c). &nbsp;&nbsp;**1.3** The introduction of **state-dependent decision mode transition** and **action-level history dependency** in **SWIRL** is critical and enables effective reward learning on the **long, non-stereotyped naturalistic trajectories** of the labyrinth dataset. Compared to the datasets used by Ashwood et al. and Zhu et al., in our experiment we simply segment the original data of water-restricted mice into trajectories of **500 time steps** each, **without any clustering**. Our trajectories are **much longer** and each trajectory maintains the original **different reward switching dynamics** from the raw data. Our result shows that only models with both decision-level and action-level history dependency can recover the switching rewards accurately. **2.** **Response to Questions For Authors:** SWIRL provides novel insights into real animal datasets. &nbsp;&nbsp;**2.1** For the labyrinth, as described above, SWIRL shows the switching between ‘home’, ‘water’ and ‘explore’ reward functions in the **long, naturalistic** water-restricted mice trajectories, which is a **new** result that has not been reported before. Rosenberg et al. does not have RL/IRL analysis, and Zhu et al. & Ashwood et al. only show the switching between ‘home’ and ‘water’ in the **short, pre-clustered, handcrafted** version of the water-restricted mice trajectories. Our result is clearly **much more naturalistic and reliable** than previous literature and supports the following important claim: mice make decisions based on both the current state and their history. &nbsp;&nbsp;**2.2** For spontaneous behavior, Markowitz et al. shows that DLS dopamine activity correlates with reward through an RL experiment.
We verify the claim with an IRL experiment and provide novel insights that there exist switching hidden decision modes with varying dopamine correlations. To the best of our knowledge, we are **the first** to perform switching reward analysis on this type of dataset, and our finding of switching hidden decision modes in those spontaneous behavior syllables is **new**. **3.** In addition, we are also the first to incorporate this switching reward idea with a model-free inverse-Q-learning-based IRL method, enabling scalable application, which is an important contribution to the ML community. We discuss the model-free SWIRL implementation in detail and exhibit its reasonable performance on the gridworld experiment (same env as Sec. 4.1) in Appendix D.1. We again thank the reviewer for their effort and humbly request that the reviewer consider raising their score if the above reply adequately addresses their concerns regarding the scope of the novelty. --- Rebuttal Comment 1.1: Comment: Thank you for your response. **I would like to start a conversation with you here.** As of now, I don't think I've misunderstood the key contributions of this work, as you imply. Feel free to argue / convince me otherwise. Details below: - The three works in question (Ashwood et al, Zhu et al, and yours) consider the same phenomenon at different timescales: Ashwood et al look at ~20-step segments; Zhu et al look at super short segments; you look at ~500-step segments. While each of the three works makes a claim that the timescale they consider is the only "correct" one (and I have my preferences here as well), the truth is that each of them corresponds to a model whose usefulness should be ultimately measured in terms of the conclusions derived from it and written in plain words. This way, Ashwood et al uncover several goal maps, which they then interpret and explore --- that is, without a doubt, useful.
Zhu et al.'s work caters to the whole IBL / Pillow Lab idea that behavioral sequences should be modeled as short-term HMMs. While I haven't seen any novel quantitative results emerging from it, this at least conforms to a larger research agenda. **What are your qualitative results?** I would argue that the length of the trajectories is a modeling choice rather than a result. I would also argue that long trajectories have been considered in prior work, although in different scenarios. **What does including the history-dependency teach us about the brain, compared to prior models?** Please kindly list your contributions, in plain qualitative words, and we will take it from there! Thanks --- Reply to Comment 1.1.1: Comment: We greatly appreciate reviewer FWjW’s willingness to start a conversation with us. Unfortunately, in ICML 2025, authors can only provide one final response after the reviewer's comment. We will try our best to clarify our contributions here. --- We present the first generative IRL model able to capture naturalistic mouse behavior over hours of maze exploration with switching decision modes. **SWIRL incorporates history-dependency, enabling data-driven testing of this hypothesis in naturalistic animal behavior, as history-dependency itself is a key concern in behavioral neuroscience.** [1] A.K. 'The what, how, and why of naturalistic behavior.' Current Opinion in Neurobiology (2022) **Our main contribution is a model that can capture complex naturalistic behavior that prior models lacking history-dependency could not.** As labs begin recording neural data during hours-long free exploration, our framework offers a powerful tool to link brain activity to naturalistic behavior: identifying when an animal seeks water, explores, or rests is essential to dissect the underlying neural circuits—each could be driven by distinct processes. This segmentation is not possible with pre-clustered, trialized data.
From this perspective, the goal of prior work (Ashwood et al. (DIRL), Zhu et al.) fundamentally differs from ours. We emphasize that the key difference between SWIRL and prior work is not the length of behavioral trajectories, but whether the model can handle raw naturalistic behavior trajs. Below, **Details 2 and the linked figure** highlight the differences between our data and those used in previous work. --- **Details** 1. Animal behavior experiments in neuroscience generally fall into two types. The first involves simple, short, trialized tasks—such as 2AFC—where animals perform structured actions within brief time windows. The second involves complex, long, and non-stereotyped behaviors that are difficult to trialize, such as animals navigating a large maze. Neuroscientists are increasingly interested in the second type, as it better reflects naturalistic behavior. Advances in recording techniques have made such experiments more feasible and common. 2. The Rosenberg labyrinth with water port dataset was clearly designed as an experiment of the second type, capturing hours of naturalistic mouse behavior in a large maze. However, by pre-clustering and downsampling the raw data, DIRL effectively constrains it to a first-type experimental setting, where behavior follows a specific path: home → water → home. All trajs only visit the same 5–7 states, rendering most of the 127-state maze irrelevant. As a result, their ‘water’ and ‘home’ reward maps reflect a narrow, stereotyped and trialized task rather than the original freely exploring behavior. Zhu et al. use the same processed data from DIRL and reach the same constrained conclusion. Although theoretically the model of Zhu et al. is not limited to trialized behavior like DIRL, we did test Zhu et al. (I-1) in our paper Fig. 3E and found that, lacking history-dependency, their model cannot find meaningful hidden mode segments from the raw naturalistic behavior data.
We visualize the difference between the original labyrinth trajs used by SWIRL and the constrained, stereotyped trajs used by DIRL: https://anonymous.4open.science/r/SWIRL_rebuttal-C46B/swirl_dirl_trajs.pdf. *Note: DIRL trajs in the above figure have unrealistic paths through walls, indicating a possible mismatch between their reported states and actual maze locations. Since their data & code release did not include the clustering and sampling pipeline (they just provided the preprocessed trajs) and they never visualized the trajs in their paper (nor did Zhu et al.), it's unclear what exactly went wrong. However, it's obvious that their trajs are short, highly stereotyped, and trialized, a stark contrast to our long, naturalistic behavior data.* 3. In contrast, SWIRL directly uses the raw labyrinth dataset, capturing truly naturalistic, freely moving behavior over hours. This allows us to model unconstrained mouse labyrinth decision-making with a generative model and IRL for the first time. With our learned model parameters, one can even simulate realistic mouse behavior with switching rewards—embodying the principle: “What I cannot build, I do not understand.” Our model is a key step toward systematically understanding naturalistic behavior. 4. In conclusion, our modeling approach represents a clear advancement beyond prior work. Our application of IRL to the original Rosenberg labyrinth dataset is entirely novel. While the reviewer noted that Zhu et al. did not publish at a major conference, we argue that our contribution is more substantial than that of Ashwood et al., which was accepted at NeurIPS. --- We again thank the reviewer for their efforts and humbly request that the reviewer consider raising the score, as we have clarified our novel contribution to the study of the brain during naturalistic behavior.
Summary: This paper presents an EM-based IRL algorithm, SWIRL (SWItching IRL), for learning time-varying reward functions to model animal behavior. The paper extends IRL by incorporating time-varying, history-dependent reward functions. A key contribution of this work is that it captures the shifting motivations and history-dependent decision-making observed in animals by modelling long behavioural sequences as transitions between short-term decision-making processes, each governed by a unique reward function. They do this by considering history dependency at both the decision level (transitions between decision-making processes depend on previous decisions and environmental context) and the action level (actions depend on the history of states within a decision-making process). The paper is well written, coherent, and technically sound. Claims And Evidence: The claims are well supported through multiple experiments and various baselines. The claims on originality and novelty are well supported, i.e. the first IRL model to integrate both decision-level and action-level history dependency. The claim of a principled method with empirical validation is also very well supported, i.e. a mathematical formulation of SWIRL, including detailed explanations of how history dependency is incorporated at different levels, and a clear demonstration of improvements over baseline methods. In addition, the paper is well written and presented. The relevance and importance of the work lie in the modelling of animal decision-making, which has the potential to further our understanding of intelligent behavior. Methods And Evaluation Criteria: To test the efficacy of the proposed method, SWIRL is empirically tested on simulated data and real-world animal behaviour datasets. Multiple baselines have been compared, such as the latest concurrent work on Locally Consistent IRL (Nguyen et al., 2015), ARHMM (Wiltschko et al., 2015), rARHMM (Linderman et al., 2016), I-1, I-2, S-1, and S-2.
The evaluation is competitive, and even though the approach compares similarly, it does consistently well and even favorably when compared to Nguyen et al., 2015, which is very positive. The paper further provides connections between SWIRL and autoregressive dynamics models, arguing that SWIRL offers a more generalized and principled approach to characterizing animal behaviour. The results presented demonstrate that it outperforms existing models lacking history dependency, both quantitatively and qualitatively. Theoretical Claims: One of the major concerns with the proposed method is the difficulty of scaling to larger state spaces. Would these results hold there? How might the authors propose to tackle this? Would it be possible to outline how the method generalizes to larger state spaces? It seems that longer L in SWIRL requires a considerable amount of computing resources; how does the method work in this case? Have you considered environments where longer L was required? Was the method still competent compared to baselines? It would be useful to add discussion around this. Additionally, it would be useful if you could point me to experiments that consider longer L. Experimental Designs Or Analyses: Experiments are clear and results are reported with rigor. The complexity and scalability of the method are unclear, given that the grid world experiments are not at the scale of the dimensions considered. It is not clear, when using this method, how one might go about choosing the number of hidden modes. Is this a hyperparameter? Is there a way to meta-learn this variable? Supplementary Material: Yes, I skimmed the complexity analysis. Relation To Broader Scientific Literature: The work is well positioned in the literature. Multiple baselines have been compared, such as the latest concurrent work on Locally Consistent IRL (Nguyen et al., 2015), ARHMM (Wiltschko et al., 2015), rARHMM (Linderman et al., 2016), I-1, I-2, S-1, and S-2.
The evaluation is competitive, and even though the approach compares similarly, it does consistently well and even favorably when compared to Nguyen et al., 2015, which is very positive. The paper further provides connections between SWIRL and autoregressive dynamics models, arguing that SWIRL offers a more generalized and principled approach to characterizing animal behaviour. The results presented demonstrate that it outperforms existing models lacking history dependency, both quantitatively and qualitatively. Essential References Not Discussed: No Other Strengths And Weaknesses: Since a major contribution here is considering action-level history, the experiments only consider S-2; even though this is better than using only the current state, it seems too short a history length to showcase the utility of history. Is this a result of the environments? How is this length determined, or how does the method recommend choosing it? Other Comments Or Suggestions: See above. Questions For Authors: See above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank reviewer 9ThW for their detailed and thoughtful comments on our paper. We are pleased that the reviewer recognized the novelty of our work, its technical soundness, its potential impact on intelligent behavior research, and the overall paper presentation. Below, we address each of the questions and concerns: **1. Scalability and Generalization to Larger State Spaces** &nbsp;&nbsp;**1.1** Our SWIRL framework is compatible with larger state-action spaces. As discussed in Sec. 3.4, the SWIRL inference procedure can be carried out in a scalable way (model-free IRL) instead of the model-based approach shown in the main body of the paper. It is worth noting that IRL in large and/or continuous state-action spaces is a very challenging problem. Even state-of-the-art standard IRL methods, which assume a single static reward function, still cannot accurately recover the ground-truth reward. Despite limitations in recovering the true reward function, IRL methods can still produce policies that approximate the expert’s behavior. As a result, recent IRL literature [1][2] in such settings usually focuses on policy recovery performance rather than the reward function. &nbsp;&nbsp;**1.2** In Appendix D.1, we present a detailed discussion of a scalable model-free variant of SWIRL and evaluate its performance on the same gridworld experiment as described in Section 4.1 of the main text. Despite being model-free, this scalable variant of SWIRL is still capable of achieving reasonable reward recovery. We expect that in environments with a large state-action space, SWIRL will perform worse in reward recovery but still maintain reasonable policy recovery. &nbsp;&nbsp;**1.3** SWIRL has the potential to be further extended to continuous state-action spaces.
We can change the DQN-style inverse-Q-learning approach in Appendix D.1 to a SAC-style one ([1] already shows SAC is compatible with inverse-Q-learning), building a SWIRL variant compatible with continuous state-action spaces. A further extension to RNN/transformer structures could improve SWIRL’s performance on data with longer history dependency. **2. Selection of the number of hidden modes** Yes, it is a hyperparameter. We discuss the selection of the number of hidden modes in Appendix B.4.1 and Appendix B.4.3. It can be meta-learned based on the trend of the test LL. In general, we recommend selecting the number of hidden modes at the point where the test log-likelihood curve plateaus. **3. Selection of L for the labyrinth experiment** We discuss longer-L experiment results (S-3 & S-4) in Appendix B.4.2. Generally, L should be selected based on the trend of the test LL curve as well as the recovered hidden mode segments and reward maps. In this case, since the hidden mode segments and reward maps remain similar for L >= 2, we choose to show S-2 as the main experiment. **4. Consideration of longer L** &nbsp;&nbsp;**4.1** For model-based SWIRL, the feasibility of longer L depends on the environment size. In the labyrinth environment (127 states, 4 actions), model-based SWIRL can run reasonably fast (within 3 hours) on an L40S GPU with L up to 5. For a smaller environment such as the one in the spontaneous behavior experiment (9 states, 9 actions), it is easy to test long L (e.g. L=10) with our model-based SWIRL. &nbsp;&nbsp;**4.2** For environments requiring longer L, we can use the scalable model-free SWIRL proposed in Appendix D.1, but at the cost of inaccurate reward recovery. On the labyrinth dataset, we find it is easy to test longer L (e.g. L=20) with the model-free SWIRL. However, mice behavior in this dataset does not show such long action-level history dependency, and such long L actually leads to lower test LL.
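The "select the number of hidden modes where the test log-likelihood plateaus" heuristic can be sketched in a few lines. This is our own illustration of that rule, not the authors' code; `select_num_modes`, the tolerance `tol`, and the example log-likelihood values are all hypothetical:

```python
# Plateau heuristic sketch (our illustration, not the authors' code):
# pick the smallest number of hidden modes whose test log-likelihood
# gain over the previous setting falls below a tolerance.
def select_num_modes(test_ll_by_k: dict, tol: float = 0.01) -> int:
    ks = sorted(test_ll_by_k)
    for prev, curr in zip(ks, ks[1:]):
        if test_ll_by_k[curr] - test_ll_by_k[prev] < tol:
            return prev  # the curve has plateaued at `prev` modes
    return ks[-1]

# Example: the test LL stops improving meaningfully after 3 modes.
print(select_num_modes({1: -5.0, 2: -4.2, 3: -4.0, 4: -3.995}))
```

In practice one would also inspect the recovered hidden mode segments and reward maps, as the rebuttal notes, rather than relying on the curve alone.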
We thank the reviewer again for their effort and humbly request that the reviewer consider raising their score if the above reply adequately addresses their concerns. **References:** [1] Garg et al. "IQ-Learn: Inverse soft-Q learning for imitation." Advances in Neural Information Processing Systems, 34 (2021). [2] Zeng et al. "Maximum-Likelihood Inverse Reinforcement Learning with Finite-Time Guarantees." Advances in Neural Information Processing Systems, 35 (2022).
Adversarial Inception Backdoor Attacks against Reinforcement Learning
Accept (poster)
Summary: This paper introduces a new backdoor attack against deep reinforcement learning agents, specifically addressing the constraint that an attacker cannot arbitrarily modify the reward function to some extremely large value. The key insight is to selectively poison high-return time steps in the agent’s training data, manipulating actions to induce adversary-desired behaviors. The authors formalize the attack within a theoretical framework, providing guarantees on both the attack's success and stealthiness. They further propose Q-Incept, a training algorithm designed to poison DRL agents effectively. Experimental results across various Gym simulation environments demonstrate the attack’s efficacy. Claims And Evidence: - **Motivation and Justification:** The motivation of the paper is not well supported. While I understand that reward clipping is common in widely used simulators, existing works like TrojDRL do not introduce arbitrarily large reward values either; they simply modify the reward to the maximum allowable value within the clipping range (i.e., +1). I suggest that the authors clarify their contribution by emphasizing that their primary focus is on the poisoning strategy, rather than on the reward modification constraint. Unlike prior works that apply poisoning randomly, their approach leverages an additional Q-value network to guide the poisoning process, which is a more structured and strategic method. Furthermore, if the training trajectories are perturbed offline, meaning they are modified after interaction with the environment, then the altered reward is never subjected to the environment’s clipping constraints. In that case, why does the attacker need to adhere to the reward bound at all? Since the attacker has full access to the training process, any reward clipping applied during training can simply be ignored. Given this flexibility, what is the fundamental limitation of existing arbitrary reward poisoning methods?
- **Clarification on Action Manipulation:** Starting from Line 155, the authors state that “our adversary changes actions after the episode has finished, meaning these perturbed actions will never actually occur in the environment.” However, in TrojDRL, while actions are manipulated during the agent’s training process, the true state transition is still determined by the original (unmodified) action. As a result, neither the environment nor the agent perceives the modified action during training. This modified action only affects the updates of the policy network. Given this similarity, could the authors clarify whether their approach fundamentally differs from TrojDRL in this regard? Methods And Evaluation Criteria: The selected simulator and datasets make sense for the problem that the paper studied, although more extensive experiments are needed to further support its attack effectiveness, please see the "Experiment Designs" for more details. Theoretical Claims: The theoretical claims look fine to me. Experimental Designs Or Analyses: - Missing evaluations against defenses: The experiments do not assess the attack’s effectiveness against relevant defenses such as provable defenses [1], BIRD [2], or simply fine-tuning, although there are brief discussions in the paper. Including some preliminary results would provide a more comprehensive understanding of the attack’s robustness and stealthiness. - Limited RL algorithm evaluation: Only PPO-trained agents are evaluated, while results on other widely-used RL algorithms like DQN and A2C are missing. - Poisoned return is not reported: The paper does not report the poisoned return, which is an important metric for understanding how effectively the attack degrades the agent’s performance. Including this would provide a clearer assessment of the attack’s impact. 
- Applicability to offline RL: The inception attack modifies stored trajectories offline, which suggests that it could also be applied to offline RL-trained agents. It would be beneficial to discuss whether it would be possible to apply the attack to an offline RL setup. [1] Bharti, et al., Provable defense against backdoor policies in reinforcement learning, NeurIPS 2022. [2] Chen et al., BIRD: Generalizable Backdoor Detection and Removal for Deep Reinforcement Learning, NeurIPS 2023. Supplementary Material: Yes, I checked the sensitivity test of the poisoning rate and the training curves of different attacks. Relation To Broader Scientific Literature: The paper contributes to the broader literature on adversarial attacks in RL by introducing an inception backdoor attack that manipulates stored trajectories offline rather than injecting poisoned samples during online interactions. This approach builds upon prior works such as TrojDRL, which introduces backdoors in RL through reward manipulation, but differs by focusing on action perturbation at high-return time steps rather than arbitrary reward poisoning. Essential References Not Discussed: No. Other Strengths And Weaknesses: **Strengths:** - The attack is formally defined within a theoretical framework, providing guarantees on both attack success and stealthiness, which strengthens its conceptual foundation. - Unlike prior works that randomly poison training data, the proposed attack leverages an extra Q-value network to strategically select high-return time steps for poisoning, making it more targeted and effective. **Weaknesses:** - More clarification on the paper's motivation is needed. - The paper does not evaluate the attack against existing defenses or compare its effectiveness across multiple RL training algorithms (e.g., DQN, A2C). Other Comments Or Suggestions: No. Questions For Authors: Please see the "Claims" and "Experiment designs" parts for questions. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and questions! **“Existing works like TrojDRL do not introduce arbitrarily large reward values either”** It is true that TrojDRL perturbs the agent’s reward by a fixed value $\pm c$, but this $c$ may need to be arbitrarily large for attack success. Let’s return to our example in Figure 2. As $\gamma$ approaches $1$, $Q(\text{start}, a)$ approaches infinity. Therefore, in order for the attack to be successful, i.e. $Q(\delta(\text{start}), a^+) > Q(\delta(\text{start}), a)$, the attacker’s reward poisoning constant $c$ must also approach infinity. **“I suggest that the authors clarify their contribution… why does the attacker need to adhere to the reward bound at all?“** The adversary should adhere to our proposed reward bounds because they otherwise become trivially detectable, irrespective of an offline or online attack. As an example, imagine a simple, rule based defender $D$ that takes in a reward value $r$ and verifies $\inf(R) \leq r \leq \sup(R)$, for benign reward function $R$, otherwise labeling the reward as adversarial. This defense detects both unbounded SleeperNets and TrojDRL (with a sufficiently large $c$ hyper parameter) while having a 0% false positive rate. Furthermore, simply clipping rewards in data collected offline breaks SleeperNets and TrojDRL, but not Q-Incept. Therefore there are many realistic scenarios in which our reward bounds will be enforced. This is what led us to explore constrained reward attacks and subsequently design Q-Incept. We are happy to clarify further if you have any questions. We will also be sure to include this in an extended motivation in our updated manuscript. **“Could the authors clarify whether their approach fundamentally differs from TrojDRL in this regard?”** Of course! TrojDRL and Q-Incept’s action poisoning techniques are fundamentally different, as we aimed to capture in Section 4.1.
In short, TrojDRL’s approach alters the agent’s *policy* at training time (see Equation 7), meaning the agent *chooses* **and** *transitions* with respect to $a^+$. In contrast, Q-Incept poisons the *perceived transition function* of the MDP, meaning they *choose action* $a^+$ but *transition with respect to an optimal action*. In both theory and practice, the action manipulation of TrojDRL does not improve attack performance (see Table 1), while Q-Incept has theoretical guarantees of attack success (see Table 2 and Section 4.3). The core insight of Q-Incept is that manipulating how the agent perceives the transition function is sufficient for achieving both attack success and stealth. Under constrained rewards, simply forcing exploration of the target action, as TrojDRL does, is not enough. Understanding this distinction is very important to understanding our contributions, so please feel free to ask more questions. **“Missing evaluations against defenses… such as provable defenses [1]”** Defenses like [1] achieve “universal” results by targeting the attack’s trigger directly, aiming to “sanitize” the state and remove the trigger. This comes at a cost, however, as they are subsequently *trigger dependent*. Furthermore, most backdoor attacks, including Q-Incept, are trigger agnostic, meaning they can use any trigger pattern to achieve attack success. Therefore, evading a defense like [1] simply requires devising an evasive trigger. To prove this, we perform additional evaluations against [1] and are able to successfully break the defense, resulting in **100% ASR after state sanitization**. Captioned figures detailing our method for breaking [1] are provided at the following anonymous github (https://anonymous.4open.science/r/Q-Incept-ICML-2387/cqPg.md). We will be sure to include these results in a revised version of the paper.
**“Limited RL algorithm evaluation…”** This is a fair criticism, therefore we have performed additional evaluations of Q-Incept against DQN, proving the attack is successful against both on and off-policy DRL algorithms. Captioned results can be found at our anonymous github. **“Poisoned return is not reported…”** We do not report poisoned return as it is not the metric we are aiming to optimize. At test time the adversary can have a multitude of objectives they aim to solve by exploiting the backdoor, many of which will not be in direct opposition to the agent’s return (e.g. biasing a warehouse bot to handle some products more often than others). Therefore we report ASR alone as it is a more atomic metric that captures the level of control afforded to the adversary by the backdoor. That being said, we can include poisoned returns in the appendix. For instance, on Q*bert we attain a poisoned return of 0 with Q-Incept - the minimum possible score. **“Applicability to offline RL…”** Q-Incept is certainly and directly applicable to offline RL, as you point out. Due to limited time we can’t perform these experiments right now, but we look forward to future works extending Q-Incept to offline RL.
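The $\gamma$-scaling point at the start of this rebuttal can be made concrete with a minimal numeric sketch (our own illustration, not the paper's code): if the benign action earns reward 1 at every step, its value is the geometric series $1/(1-\gamma)$, which any fixed TrojDRL-style bonus $c$ must exceed, so the required $c$ diverges as $\gamma \to 1$. The function `required_bonus` is hypothetical.

```python
# Toy illustration (ours, not from the paper): if the benign action
# yields reward 1 forever, Q(start, a) = sum_t gamma^t = 1/(1 - gamma).
# A fixed reward bonus c on the poisoned action must exceed this value
# to flip the argmax, so the required c diverges as gamma -> 1.
def required_bonus(gamma: float, benign_reward: float = 1.0) -> float:
    return benign_reward / (1.0 - gamma)

for gamma in (0.9, 0.99, 0.999):
    print(f"gamma={gamma}: c must exceed {required_bonus(gamma):.0f}")
```

This is exactly the failure mode the constrained-reward setting rules out, since such a $c$ eventually violates $\inf(R) \leq r \leq \sup(R)$.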
Summary: This paper proposes a novel backdoor attack framework called Q-Incept to attack the deep reinforcement learning training process by changing the state, reward, and action stored in the replay buffer. The proposed method designs new transition and reward functions for the MDP under the backdoor attack. The experiments show strong attack performance compared with other backdoor RL methods. ## Update after rebuttal The rebuttal has adequately addressed most of my concerns, and I am still leaning toward accepting this paper. Claims And Evidence: 1. The proposed algorithm is well motivated and supported by the proofs. The authors also provide theoretical guarantees for Q-Incept, which strengthens their contributions. Methods And Evaluation Criteria: 1. The authors conduct experiments on various environments, including Atari games, CAGE, Highway Merge, and Safety Car. They report two key metrics, ASR and BR, to capture attack performance and stealthiness. The results show that Q-Incept achieves the best ASR across all scenarios. Theoretical Claims: 1. I skimmed the proofs in the Appendix and they make sense to me. Experimental Designs Or Analyses: 1. The experimental designs are reasonable. However, I do have some questions about the BR results. Why can the backdoored BR performance outperform the No Poisoning BR scores? Also, why does a higher $\beta$ poisoning rate yield a higher BR score than a lower $\beta$ rate in certain environments? Could the authors provide some insights on this? Supplementary Material: I skimmed the proofs and checked the ablation studies. Relation To Broader Scientific Literature: This work is related to robust reinforcement learning, trustworthy AI, and AI safety in general. Essential References Not Discussed: N/A Other Strengths And Weaknesses: 1. I do have some concerns about the triggers. In the image setting, the trigger is set as a $6\times6$ checkerboard, but it is not mentioned where and how the trigger is injected.
Could the authors provide some poisoned images? Also, I guess the $6\times6$ checkerboard trigger might be obvious for human eyes to detect. Could the authors justify the reason for choosing this trigger? Could a more visibly stealthy trigger be used in your approach? Other Comments Or Suggestions: 1. Missing left parentheses in the table below equation 9 2. I think the third row and fourth row of the table below equation 9 should be exchanged according to the transition function the authors provide. Questions For Authors: 1. Please refer to the Experimental Designs section. 2. Please refer to the Other Strengths And Weaknesses section. 3. Could the author compare training overhead between your approach and baseline approaches? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your review and questions, we look forward to further discussion. **“Why the backdoored BR performance can outperform No Poisoning BR scores?”** For Q-Incept, our theoretical results show that the optimal policy for benign states in $M’$ (the poisoned MDP) is the same as in $M$ (the benign MDP). Therefore we should expect the agent to learn a strong policy even under Q-Incept poisoning. This is supported by our empirical results, as you have pointed out. We believe any increase in BR score after poisoning in some environments is merely due to the variance of PPO. We would expect these two scores to get closer as we average over more runs. You can also see that the BR scores of Q-Incept and of No Poisoning are within a standard deviation of each other in these environments. **“Also, why a higher poisoning rate has a higher BR score than a lower rate in certain environments?”** Similar to the last question, we believe this discrepancy is due to the general variance of PPO. Our other leading theory is that it has something to do with the generalization capabilities of the agent’s network. At lower poisoning rates the agent does not see the trigger as often, yet when they do, they get a (relatively) large signal to take action $a^+$. This may lead the agent to explore the action $a^+$ more often in benign states as it has not seen the trigger often enough to be certain that no other states require $a^+$ to be taken. This is just a theory, however, so we leave deeper explorations to future work. These results do lend extra evidence towards the stability and stealthiness of Q-Incept, however, as BR scores are not damaged as $\beta$ increases. **“In the image setting, the trigger is set as 6 x 6 checkerboard, but it is not mentioned where and how the trigger is injected. Could the authors provide some poisoned images?”** Certainly! 
Please see our anonymous github (https://anonymous.4open.science/r/Q-Incept-ICML-2387/mydv.md) which contains images of the 6x6 checkerboard trigger on the Q*bert environment. The other reviewers also asked us for additional figures, so feel free to look through those as well. The trigger we show for Q*bert is the same for all other image domains. In short, the checkerboard is inserted at the top left corner of the image by setting every other pixel to a value of 0 or 255, respectively. In the case of models that use framestacks as input, we insert the trigger into every frame (treating each framestack as a distinct state). We will be sure to clarify this in our amended appendix. **“Also, I guess the checkerboard trigger might be obvious for human eyes to detect. Could the authors justify the reason for choosing this trigger? Could a more visibly stealthy trigger be used in your approach?”** More stealthy triggers can absolutely be used with Q-Incept. Our method is completely agnostic to the trigger function $\delta$, only requiring that the trigger does not naturally occur in the environment during training. In real world environments, you can imagine the trigger is a special sticker or object the adversary can place in the environment - which is much stealthier. $\delta$ also does not need to be a deterministic function, allowing the adversary to implement other novel trigger techniques. Our motivation for using the checkerboard trigger in our experiments is that it is visually distinct from normal states - meaning the CNN based agent should be able to easily distinguish poisoned and benign states. This means the success or failure of each method is dependent on their reward and action poisoning strategies alone. To give further proof to our claim of trigger agnosticism, we have performed additional experiments for reviewer cgPq, using a stealthier trigger. 
You can see the trigger in Figure 2 (right) at the following anonymous github (https://anonymous.4open.science/r/Q-Incept-ICML-2387/cqPg.md). This trigger looks like a graphical glitch, which wasn’t uncommon with old Atari systems and TVs, so having it appear for single frames at a time would appear normal. Despite this, Q-Incept is still equally effective, resulting in a 100% ASR and a BR score of 17618. **“Missing left parentheses in the table below equation 9 … I think the third row and fourth row of the table below equation 9 …”** Thanks for pointing this out; we’ve fixed these errors in our Overleaf. **“Could the author compare training overhead between your approach and baseline approaches?“** Sure thing. Training the Q-network used in Q-Incept causes some computational overhead, but fortunately it isn’t too extreme. We ran tests on a desktop machine (2x RTX 4090, Threadripper 7980x) and found that SleeperNets, TrojDRL, and Q-Incept run at 1038, 987, and 730 simulation steps per second respectively against Atari Q*bert. Note that our Q-network training runs in series with our PPO training, so it’s likely that significant performance increases can be found for Q-Incept by training in parallel.
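The checkerboard-trigger insertion described earlier in this rebuttal (alternating 0/255 pixels written into the top-left corner of every frame) can be sketched in a few lines. This is our reconstruction for illustration only, not the authors' code; `add_checkerboard_trigger` is a hypothetical name, and a `(..., H, W)` uint8 frame layout is assumed:

```python
import numpy as np

# Illustrative sketch (our reconstruction, not the authors' code):
# write a 6x6 checkerboard of alternating 255/0 pixels into the
# top-left corner of every frame in a framestack.
def add_checkerboard_trigger(frames: np.ndarray, size: int = 6) -> np.ndarray:
    poisoned = frames.copy()
    rows, cols = np.indices((size, size))
    checker = np.where((rows + cols) % 2 == 0, 255, 0).astype(poisoned.dtype)
    poisoned[..., :size, :size] = checker  # assumes (..., H, W) layout
    return poisoned

stack = np.zeros((4, 84, 84), dtype=np.uint8)  # typical Atari framestack
triggered = add_checkerboard_trigger(stack)
```

Broadcasting applies the same 6x6 pattern to every frame in the stack, matching the description of treating each framestack as a single poisoned state.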
Summary: The paper proposes a new method, Q-Incept, for backdoor poisoning attacks. Previous work assumes the ability to arbitrarily change the reward within some “poisoned” states in the dataset. The authors rightly point out this is not necessarily realistic, as they arbitrarily manipulate the magnitude of the reward, and this might rarely be possible in practice. They add an additional constraint to solve this, so that the adversarial attacker cannot induce rewards that are larger or smaller than those given by the original MDP. They additionally demonstrate that previous work causes rewards to grow arbitrarily large, and when the additional constraint is added, these previous methods fail to consistently induce the desired “poisoned” behaviour. The authors provide a theoretical justification for why previous methods fail under constrained rewards and prove that Q-Incept achieves high attack success rate across multiple environments while maintaining the agent’s performance in benign tasks. Claims And Evidence: The evaluation spans a diverse set of RL environments, including Atari games, cyber network defense, and autonomous driving simulations. The results convincingly demonstrate that Q-Incept maintains high attack success rates (100% in multiple environments) while ensuring the agent still performs well on the underlying benign tasks. The authors also present ablation studies confirming the necessity of their inception-based action manipulation technique. They also have experiments showing the high magnitude of rewards induced by previous methods, and show that previous methods fail when the reward magnitude is constrained. Methods And Evaluation Criteria: Yes, they use two main metrics: Attack Success Rate (ASR) and Benign Return (BR). ASR measures the extent to which the adversary can induce the targeted behavior, while BR ensures that the poisoned agent still performs well on its intended task, making it less likely to be detected. 
This makes sense given the context. Theoretical Claims: The theoretical analysis of why previous methods fail when rewards are constrained is clear and intuitive. Experimental Designs Or Analyses: I thought the experiments were straightforward and made sense given the analysed setting. I did not check the code or setups in detail (hyperparameters etc.) Supplementary Material: No Relation To Broader Scientific Literature: Q-Incept builds upon prior poisoning methods like TrojDRL and SleeperNets, addressing their fundamental reliance on arbitrarily large reward perturbations. Instead of reward-based manipulation, Q-Incept carefully modifies high-return states. This aligns with concerns in adversarial RL on detectability and stealth. Essential References Not Discussed: In related work, some alternative poisoning methods are mentioned, I would also include Lu, C., Willi, T., Letcher, A., Foerster, J.N.. (2023). Adversarial Cheap Talk. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:22917-22941 Available from https://proceedings.mlr.press/v202/lu23h.html. Which discusses poisoning by appending extra information to agent observations. Other Strengths And Weaknesses: Strengths: + The proposed method is based on the insight of manipulating high reward actions, rather than inducing arbitrarily high magnitude rewards. + Extensive evaluation Weaknesses: - I think the title is too generic - The plots should not be Weights and Biases screenshots Other Comments Or Suggestions: NA Questions For Authors: 1) How does Q-Incept compare to SleeperNets and TrojDRL in beta values? (% of poisoned states needed to achieve success) 2) How does Q-Incept performance drop with lower betas? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your helpful feedback and questions, we look forward to further discussion with you. Based upon our response we kindly ask you to consider increasing your assessment of our paper. **"In related work, some alternative poisoning methods are mentioned, I would also include [Lu et. al 2023]"** We agree that this is an interesting and relevant paper so we will include it in our related work section. **"The title is too generic"** This is a fair critique. We have further thoughts about a different title "Reward Poisoning is Not Enough: Adversarial Inception for Constrained Universal Backdoor Attacks against Reinforcement Learning", but we are not certain that OpenReview allows us to change it. If you have other ideas we are very open to your suggestions. **"The plots should not be weights and biases screenshots"** We understand your concern. For the camera ready version of the paper we will download the raw data from weights and biases and plot everything using a graphics library (e.g., seaborn) instead. **"How does Q-Incept compare to SleeperNets and TrojDRL in beta values? (% of poisoned states needed to achieve success) … and how does Q-Incept performance drop with lower betas?"** In the TrojDRL paper they evaluate on Atari environments with $\beta = 0.025\%$, while in the SleeperNets paper they evaluate on a wider range of environments using poisoning rates from $\beta=0.005\%$ to $0.5\%$. Both attacks are evaluated with unbounded rewards. Our poisoning rates are comparable, being in the range $\beta = 0.05\%$ to $\beta = 1.0\%$, with the only outlier being Highway Merge which seems particularly resilient to backdoor attacks - likely due to its short episodes (15 time steps) and training time (100,000 time steps). We have performed some additional experiments for the rebuttal on Q*bert where we evaluate Q-Incept at smaller poisoning rates from $0.05\%$ to $0.01\%$. 
We can see that even at a far lower $\beta=0.03\%$, Q-Incept is still able to achieve an ASR of 98.3%. This means we are able to replicate the results of SleeperNets with the exact same $\beta$ they used, despite operating under constrained rewards. Once we go lower to $\beta=0.01\%$ the attack starts to fail, however, which is to be expected as the agent very rarely sees the trigger. | Beta | ASR | StDev(ASR) | BR | StDev(BR) | |:-----:|:-----:|:----------:|:------:|:---------:| | 0.3% | 100% | 0% | 18,381 | 882 | | 0.1% | 100% | 0% | 17,749 | 1,380 | | 0.05% | 100% | 0% | 17,937 | 1,304 | | 0.03% | 98.3% | 2.9% | 16,573 | 873 | | 0.01% | 21.1% | 6.2% | 16,374 | 2,088 | We expect these findings to be replicable across all the Atari environments we evaluated, as they seem to yield similar attack performance. It is similarly likely that lower poisoning rates for Q-Incept can be used on the Safety Car environment - though attack performance will drop eventually, of course.
Divide and Conquer: Exploring Language-centric Tree Reasoning for Video Question-Answering
Accept (poster)
Summary: The paper introduces Language-centric Tree Reasoning, a framework for VideoQA that hierarchically decomposes complex questions into a logical tree. It first recursively splits questions into perceptual sub-questions using linguistic cues and retrieval-augmented generation (RAG). Then, answers are aggregated bottom-up, guided by video content for verification. Experiments across multiple benchmarks show improved accuracy and interpretability over existing MLLMs. ## Update after rebuttal I thank the authors for the rebuttal. I will keep my original rating, which was already positive. Claims And Evidence: LTR enhances reasoning accuracy and transparency in VideoQA by leveraging a hierarchical, language-driven decomposition. This claim is supported by: - Quantitative gains of 1%–2% on open-ended tasks and 2%–4% on multiple-choice tasks. - Improved compositional consistency metrics. - Qualitative examples showing traceable reasoning steps that diagnose errors. Methods And Evaluation Criteria: Methods: - Divide Stage: Recursively decompose a complex question into a language-centric logical tree using RAG to ensure semantic coherence. - Conquer Stage: Perform video-aided bottom-up tree reasoning to aggregate sub-question answers, verify intermediate results, and derive the final answer. Evaluation Criteria: - Open-ended benchmarks assessed via GPT-3.5 (accuracy and scoring). - Multiple-choice tasks evaluated by having MLLMs select from provided options. - Detailed ablation studies and qualitative error analyses further validate each component. Theoretical Claims: - Hierarchical Reasoning: Mimicking human System-2 reasoning by breaking down complex questions improves logical consistency. - Language-Centric Approach: Anchoring the reasoning process in linguistic logic prior to engaging visual evidence reduces bias from overly salient visual cues.
- Training-Free Adaptability: Integrating pre-trained MLLMs with RAG allows for task-specific reasoning without additional fine-tuning, preserving generalization. Experimental Designs Or Analyses: - Dataset Coverage: Experiments conducted on a wide range of VideoQA datasets, including MSVD-QA, MSRVTT-QA, TGIF-QA, ActivityNet-QA, STAR, Ego-Schema, and Video-MME. - Ablation Studies: Analysis of each component (e.g., RAG integration, video guidance, answer verification) to show their contributions. - Qualitative Examples: Detailed reasoning paths for both successful cases and failure scenarios (highlighting where perceptual errors occur). Supplementary Material: - Appendix A: Additional experiments on both open-ended and multiple-choice benchmarks. - Appendix B: Extended qualitative results with visual examples that illustrate the logical tree reasoning process. - Appendix C: Discussions on complexity, the language-centric reasoning paradigm, and training-free generalization. - Appendix D: Limitations and future work, including discussions on long-video handling and potential reinforcement learning integration. Relation To Broader Scientific Literature: - The work builds on previous VideoQA methods that utilize visual feature extraction and neural modular networks but distinguishes itself by focusing on interpretability. - The authors compare against recent state-of-the-art models (e.g., VideoLLaMA, VideoChat2, Qwen2-VL) and position LTR as an approach that provides a clear reasoning trace. - Aligns with literature on hierarchical reasoning and the cognitive basis for multi-hop question answering. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Weaknesses: Heavy reliance on MLLMs’ zero-shot capabilities with no finetuning, which may limit adaptability in domain-specific tasks. 
Other Comments Or Suggestions: N/A Questions For Authors: Would incorporating fine-tuning for domain-specific scenarios help overcome the limitations of zero-shot reasoning in complex, long-video contexts? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for carefully reviewing our paper, acknowledging its strengths, and providing valuable suggestions for improvement. If you have any further concerns, please feel free to raise them during the second-round rebuttal phase. As recommended by the official FAQ, we provide all figures and tables via the [anonymous link](https://anonymous.4open.science/r/ICML25-3189-7D31/README.md). ### Domain Adaptability and Finetuning on Domain Specific Tasks In general, finetuning MLLMs in a specific domain significantly enhances performance by providing a more solid basis for reasoning. We further detail finetuning for different modules as follows: * Divide with Top-down Recursive Checking: Finetuning is generally unnecessary for this stage since question decomposition primarily relies on language understanding, with visual content serving as a supplementary element where coarse understanding is sufficient for most scenarios. However, when encountering different question distributions or videos with distinct characteristics (e.g., 360-degree videos or extremely long videos), finetuning may offer significant benefits. * Perceptual Leaf Question Answering: This step can be readily finetuned using any VideoQA dataset, which would improve the accuracy of leaf question answers and, consequently, enhance subsequent reasoning stages. * Video-Aided Logical Reasoning and In-Process Answer Verification: Although finetuning these steps can benefit the framework, it requires additional effort to construct corresponding reasoning data for the specific domain. To further validate the effectiveness of finetuning at different stages, we conducted experiments on AGQA-Decomp by finetuning our entire framework, leveraging the compositional graph and detailed answers provided in AGQA-Decomp. In [Table1](http://anonymous.4open.science/r/ICML25-3189-7D31/R-AFv7/Table1.png), we compare the performance of Qwen2-VL and VideoChat2 on AGQA-Decomp with and without finetuning.
For the main question, our LTR framework improves the baseline MLLMs by 3%–4%, and the finetuning strategy further boosts LTR by an additional 3%–4%, leading to a total improvement of 7%–8% in accuracy. For sub-questions, the improvement is even larger, 12%–13%, because the Perceptual Leaf Question step is more amenable to enhancement in a specific setting. Moreover, the improvement in terms of $c$-$F_1$ is approximately 15%–16%, largely attributable to the increased sub-question accuracy and the hierarchical information aggregation and logical inference in Video-Aided Logical Reasoning. We thank the reviewer for this suggestion and will incorporate these experimental results and discussions in the following version of the paper. We thank the reviewer again for the detailed review, and hope our rebuttal has addressed the concerns raised and strengthens the reviewer's confidence in judging this paper positively.
Summary: This paper proposes Language-centric Tree Reasoning (LTR), a training-free, model-agnostic framework that enhances reasoning capabilities and interpretability in Video Question Answering (VideoQA) by using MLLMs. LTR addresses the limitations of existing MLLMs, such as opacity and lack of controllability in their reasoning processes. The framework operates by recursively generating a language-centric logical tree based on the input question and incorporating video content to create leaf nodes. These leaf nodes represent simple perceptual questions that can be answered by MLLMs. LTR then performs bottom-up reasoning through the tree, leveraging MLLM responses to the leaf node questions and verifying consistency with visual evidence. This process culminates in an answer to the original question and a traceable reasoning path. Experiments on 11 VideoQA benchmarks using four different MLLMs demonstrate that LTR improves reasoning accuracy and provides a more transparent and verifiable VideoQA system. Ablation studies analyze the effectiveness of individual components within the framework, and case studies showcase its enhanced error tolerance and explainability. Claims And Evidence: The evidence presented largely supports the claims made in the submission, offering a convincing case for the effectiveness of the LTR framework. Methods And Evaluation Criteria: The proposed methods for enhancing VideoQA through language-centric tree reasoning are reasonable to the problem. Theoretical Claims: This submission does not present any theoretical claims requiring formal proofs. The focus is on the empirical evaluation of the proposed framework. Experimental Designs Or Analyses: The experimental design appears generally sound, encompassing a suitable range of evaluations. 
Testing the LTR framework with four different open-source MLLMs across 11 benchmarks, covering both open-ended and multiple-choice question types, provides a comprehensive assessment of its effectiveness. The chosen evaluation metrics are appropriate for VideoQA tasks. Furthermore, including a thorough ablation study allows for a detailed analysis of the contributions of individual components within the LTR framework. However, certain specific questions regarding the experimental setup and results are raised in the "Other Strengths and Weaknesses" section. Supplementary Material: Yes, I reviewed the appendix. Relation To Broader Scientific Literature: This work addresses the limitations of current MLLMs in VideoQA, particularly their lack of transparent System-2 reasoning. It proposes a novel language-centric reasoning framework, offering a potential solution to the interpretability challenges faced by existing approaches like VoT and DSTN. Essential References Not Discussed: The paper provides a comprehensive overview of related work. Other Strengths And Weaknesses: Strengths: - The language-centric tree reasoning framework appears novel and offers a promising approach to improving interpretability and controllability in VideoQA. - The proposed framework's training-free and model-agnostic nature enhances its practicality and broad applicability. Weaknesses: - The LTR framework, as described, involves multiple steps and inferences to process a given question and generate the reasoning tree. While this is acceptable for research purposes, practical applications require a fully automated pipeline integrated with the model, eliminating manual intervention. The paper needs to address the feasibility of such an automated pipeline and discuss how the process of generating the tree can be seamlessly integrated into a real-world VideoQA system. The lack of a clear automation strategy limits the scalability and generalizability of the proposed approach. 
- The current description of the LTR framework suggests its applicability might be limited to specific question types. It remains unclear how the framework would handle more complex or nuanced questions, such as those requiring summarization (e.g., "Summarize what happened in this video") or those involving negation in multiple-choice scenarios (e.g., "Which of the following statements is false?"). The authors should clearly define the types and scope of questions that the LTR framework can handle and explicitly address these limitations. A discussion of the limitations regarding question types should also be included in the limitations section. - While the reported experimental results show improvements across various metrics, the paper would benefit significantly from richer qualitative analysis. Including more detailed case studies would provide valuable insights into the LTR framework's practical impact. Specifically, the authors should provide examples of: - Questions correctly answered with LTR that were previously answered incorrectly by the baseline MLLMs. - Questions that remain unanswered even with LTR. - Questions answered correctly by the baseline MLLMs but incorrectly answered after applying LTR. These case studies would provide the community with a deeper understanding of the LTR framework's strengths and weaknesses, facilitating further research and development in this important area. Other Comments Or Suggestions: - On page 2, line 83, left column, "oof" appears to be a typographical error and should be corrected to "of." - In section 3.2.2, "Video-Aided Logical Reasoning," the provided example refers to "Figure 2 (red box)." However, the explanation seems to correspond to Figure 1, not Figure 2. The authors should verify the figure reference and correct it accordingly for clarity and accuracy. Questions For Authors: - The proposed LTR framework presents a compelling approach to incorporating reasoning into VideoQA. 
However, the authors should elaborate on how the process, particularly the In-Process Answer Verification stage, ensures the correctness of intermediate responses within the reasoning tree. What mechanisms are in place to prevent and handle potential errors during this stage, such as incorrect verification by the model itself? How would the framework perform if an incorrect answer is generated at an intermediate node? Could this lead to a cascading effect, ultimately resulting in an incorrect final answer? - The experiments utilize open-source models. Have the authors explored applying the LTR framework to any closed-source models? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for carefully reviewing our paper, acknowledging its strengths, and providing valuable suggestions for improvement. If you have any further concerns, please feel free to raise them in the rebuttal comment. As recommended by the official FAQ, we provide all figures via the [anonymous link](https://anonymous.4open.science/r/ICML25-3189-7D31/README.md). ### Automated Pipeline We agree that automation is critical for practical applications. Our two-stage approach, first generating the language-centric logical tree, then performing bottom-up reasoning through tree validation, handles VideoQA automatically. For each question, the system recursively decomposes it into simpler sub-questions until all leaf nodes are perceptual, using retrieved few-shot examples as guidance. The resulting tree is processed bottom-up to analyze both the video and question, yielding interpretable answers. Furthermore, while LTR is designed for full automation, it also allows for human intervention in sensitive scenarios, enabling users to adjust intermediate responses for better accuracy. ### Applicable Question Types We provide two figures to illustrate how the LTR framework handles summarization and negation in multiple-choice scenarios. For the summarization question ([Figure1](https://anonymous.4open.science/r/ICML25-3189-7D31/R-DfhN/Figure1.png)), LTR naturally decomposes the video into parts and generates a Language-centric Logical Tree that prompts a summary for each segment, leading to a comprehensive summary of the main question. In the case of negation in multiple-choice questions ([Figure2](https://anonymous.4open.science/r/ICML25-3189-7D31/R-DfhN/Figure2.png)), the framework breaks down the question by checking each statement individually and then integrates the results to determine which statement is false, closely mirroring human reasoning.
However, for questions that are linguistically simple yet require complex cognitive visual reasoning, such as “Is there a thief?”, LTR may struggle to generate a reasonable Language-centric Logical Tree. In these cases, the question is often misclassified as perceptual due to its simplicity, even though answering it requires detailed motion analysis and a comprehensive understanding of the video content. We will include these discussions in the following version. ### More Qualitative Results We further provide three cases to illustrate LTR’s performance in different settings. In [Figure3](https://anonymous.4open.science/r/ICML25-3189-7D31/R-DfhN/Figure3.png), for “Questions unanswered with LTR,” the counting sub-question misidentifies background objects due to perceptual miscounting and a logic trap, resulting in an incorrect final answer despite a transparent reasoning process. In [Figure4](https://anonymous.4open.science/r/ICML25-3189-7D31/R-DfhN/Figure4.png), for “Questions correctly answered with LTR and incorrectly answered by baseline,” LTR accurately deduced the stationary object’s properties by intersecting responses from multiple sub-questions and leveraging visual-aided reasoning, whereas the baseline relied solely on perceptual cues. In [Figure5](https://anonymous.4open.science/r/ICML25-3189-7D31/R-DfhN/Figure5.png), for “Questions correctly answered by baseline and incorrectly answered with LTR,” the baseline produced a correct answer without any explanation. However, an erroneous leaf response in LTR led to an incorrect final answer, even though the intermediate video content analysis remained robust. ### Errors in Intermediate Nodes Errors or hallucinations in reasoning are inevitable, but a prior study [1] shows that providing more context or sampling multiple answers can help suppress them.
In LTR, detailed sub-question decomposition offers comprehensive context that reduces hallucinations and minimizes cascading errors, as evidenced by improved sub-question accuracy. Thus, an error at an intermediate node is less likely to cascade and adversely affect the final answer. The In-Process Answer Verification stage further mitigates errors by prompting the model to check both the reasoning logic and the consistency between answers and visual content. Although no system can fully eliminate errors, this verification step increases the reliability of intermediate responses, as confirmed by the experiments in Table 5 of the submission. We will include these discussions in the following version. [1] Fei, H. et al. Video-of-thought: Step-by-step video reasoning from perception to cognition. In ICML, 2024 ### Performance on Closed-Source Models To validate the effectiveness of LTR on closed-source models, we conducted experiments using GPT-4o on EgoSchema and MVBench, with 16 uniformly sampled frames per video. As summarized in [Table1](https://anonymous.4open.science/r/ICML25-3189-7D31/R-DfhN/Table1.png), LTR yields significant accuracy improvements on these benchmarks, demonstrating its generalizability across various models. ### Minor Issue All typos will be revised in the following version.
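To make the two-stage flow discussed under "Automated Pipeline" in this rebuttal concrete, here is a minimal toy sketch of a divide-then-conquer tree over a question. This is not the paper's implementation: `StubMLLM` and all of its methods are illustrative placeholders standing in for real multimodal-LLM calls (decomposition, leaf answering, reasoning, and verification).

```python
# Toy sketch of an LTR-style divide/conquer pipeline.
# StubMLLM is a hypothetical stand-in; a real system would call an MLLM here.

class StubMLLM:
    """Illustrative placeholder for a multimodal LLM."""
    def is_perceptual(self, q, video):
        # Toy rule: treat compound "... and ..." questions as non-perceptual.
        return " and " not in q
    def decompose(self, q, video):
        # Toy decomposition: split a compound question into sub-questions.
        return [p.strip() + "?" for p in q.rstrip("?").split(" and ")]
    def answer(self, q, video):
        # Leaf answering: look the question up against "video evidence".
        return video.get(q, "unknown")
    def reason(self, q, child_answers, video):
        # Aggregate child answers into a draft answer for a non-leaf node.
        return " / ".join(child_answers)
    def verify(self, q, draft, video):
        # In-process verification would cross-check the draft against the video.
        return draft

def build_tree(q, video, mllm, depth=3):
    """Divide: recursively split until every leaf is a perceptual question."""
    if depth == 0 or mllm.is_perceptual(q, video):
        return {"q": q, "children": []}
    return {"q": q, "children": [build_tree(s, video, mllm, depth - 1)
                                 for s in mllm.decompose(q, video)]}

def answer_tree(node, video, mllm):
    """Conquer: bottom-up aggregation with verification at non-leaf nodes."""
    if not node["children"]:
        return mllm.answer(node["q"], video)
    answers = [answer_tree(c, video, mllm) for c in node["children"]]
    return mllm.verify(node["q"], mllm.reason(node["q"], answers, video), video)
```

With the toy stub, a compound question is split into two perceptual leaves and the leaf answers are aggregated bottom-up; swapping `StubMLLM` for real model calls gives the fully automated pipeline described above.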
Summary: This paper introduces Language-centric Tree Reasoning (LTR), a framework to enhance the reasoning of MLLMs. It uses MLLMs to first hierarchically break down a question into sub-questions, then conquer the question by answering the sub-questions in a bottom-up way. In the experiments, LTR is applied to four state-of-the-art MLLMs and it can consistently improve their performance on 11 video question answering benchmarks. Claims And Evidence: 1. The paper claims that the proposed LTR can improve the reasoning capability of MLLMs. The consistent improvements on four MLLMs across 11 benchmarks support this claim. 2. The paper claims that the design choices in the LTR are essential to the overall performance. The ablation studies in Section 4.5 support this claim. Methods And Evaluation Criteria: The proposed method is well-motivated. Theoretical Claims: This paper does not involve theoretical claims. Experimental Designs Or Analyses: To demonstrate the effectiveness of the LTR framework, it is applied on a few pre-trained MLLMs and it is shown that the performance can be consistently improved. However, the experiments do not compare LTR with other baseline methods that can improve the reasoning capability of MLLMs. The proposed LTR can be proved effective only when it can outperform the baselines. A few simple baselines include: 1. Chain-of-Thought prompting [1], which prompts the models to "think step-by-step". 2. LTR needs to execute an MLLM for multiple times to answer one question. Therefore, it is a test-time scaling approach. The following test-time scaling baselines should be considered: (a) Majority Voting [2], which uses the MLLM to sample N answers and selects the most frequent one as the response. (b) Best-of-N [2], which samples N answers and uses a reward model to score the answers. The one with the highest score is selected as the response. The reward model can be the evaluated MLLM itself. 
Especially, the N used in the baselines above should be comparable to the average number of MLLM executions in LTR. [1] Chain-of-Thought Prompting Elicits Reasoning in Large Language Models. Jason Wei et al. NeurIPS 2022. [2] Let's Verify Step by Step. Hunter Lightman et al. ICLR 2024. Supplementary Material: I reviewed the supplementary material and highly appreciate the experiments on additional benchmarks, qualitative results, and the discussion about limitations. Relation To Broader Scientific Literature: The previous MLLM reasoning works do not capture the logical structure of questions. The LTR framework proposed in this work leverages the logical structure and improves the MLLM performance on video question answering benchmarks. Essential References Not Discussed: N/A Other Strengths And Weaknesses: **Other Strengths:** 1. The proposed LTR framework is training-free and can be directly employed on pre-trained MLLMs. 2. Multiple up-to-date MLLMs are evaluated in the experiments. 3. The paper writing is clear and easy to follow. **Other Weaknesses:** 1. The question of the qualitative example in Figure 3 is too easy. The answer can be easily guessed even without the video provided. To demonstrate the effectiveness of the proposed method, a more challenging question should be used as a qualitative example. Other Comments Or Suggestions: **Typos:** 1. L82-L83: off -> of 2. L380-L381: existace -> existence **Writing:** 1. In L46-L47, the concept of "System-2 reasoning" should be explained. 2. In Section 4.5, qualitative examples are highly appreciated to demonstrate how each module affects the reasoning of the model. 3. In Section 4.6, the base model used in the qualitative example should be mentioned. **Rating justification:** The proposed method is well-motivated and can effectively improve the performance of MLLMs. However, due to the lack of comparison with any existing baseline that can improve MLLM reasoning, I cannot vote for the acceptance of this paper. 
I am open to adjust my rating after rebuttal. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for carefully reviewing our paper, acknowledging its strengths, and providing valuable suggestions for improvement. If you have any further concerns, please feel free to raise them during the second-round rebuttal phase. As recommended by the official FAQ, we provide all figures and tables via the [anonymous link](https://anonymous.4open.science/r/ICML25-3189-7D31/README.md). ### Comparison with More Baseline Methods. To further validate the effectiveness of our LTR, we compare it with three baselines (CoT, Majority Voting, and Best of N) across AGQA-Decomp, MVBench, and CausalVidQA, using VideoChat2 as the MLLM, as shown in [Table1](https://anonymous.4open.science/r/ICML25-3189-7D31/R-8P4e/Table1.png). For a fair comparison, the repetition count (N) for all baselines is set to match the number of MLLM executions in LTR for various questions. For CoT, we carefully constructed [five text-based examples](https://anonymous.4open.science/r/ICML25-3189-7D31/R-8P4e/Figure3.png) as prompts generated by GPT-4o. For Best of N, VideoChat2 also serves as the reward model. These comparisons will be included in the following version. In [Table1](#table1), although CoT, Majority Voting, and Best of N improve model accuracy on the three datasets, their gains remain relatively moderate compared with our LTR, largely due to the lack of detailed information in these methods. Specifically, Majority Voting and Best of N do not provide detailed sub-question analysis, and CoT offers only simple reasoning examples that constrain the reasoning ability of MLLMs; in contrast, our LTR delivers a structured, comprehensive context and a detailed reasoning process that clearly demonstrates how the answer is deduced, thereby enhancing both accuracy and interpretability. Furthermore, the compared methods contribute little to the compositional consistency of models, as they lack intra-question information exchange.
In contrast, our LTR employs a hierarchical information aggregation and logical inference procedure that yields a consistent and accurate reasoning process, ultimately achieving superior performance in accuracy and compositional consistency on several VideoQA benchmarks (***i.e.***, AGQA-Decomp, MVBench, and CausalVidQA) while also offering enhanced interpretability and controllability. ### More Complex Qualitative Results and Module Analysis To validate the effectiveness of our LTR framework, we provide two complex qualitative examples in [Figure 1](https://anonymous.4open.science/r/ICML25-3189-7D31/R-8P4e/Figure1.png) and [Figure 2](https://anonymous.4open.science/r/ICML25-3189-7D31/R-8P4e/Figure2.png) in our response, along with a detailed analysis for the first example. In [Figure1](https://anonymous.4open.science/r/ICML25-3189-7D31/R-8P4e/Figure1.png), the video depicts a scenario where a man is struck in the crotch by a dog, causing him to bend over and hold his crotch. In Stage 1, the Divide with Top-down Recursive Checking successfully decomposes the main question into a language-centric logic tree with simpler sub-questions that precisely target perceptual aspects (e.g., detecting the man, the dog, and the actions performed). In Stage 2, the Video-Aided Logical Reasoning module integrates both logical cues and visual evidence to infer answers for each non-leaf node. Crucially, the In-Process Answer Verification stage is applied to ensure consistency across intermediate responses; for instance, it corrects the answer for non-leaf question [inter_2] by cross-validating the reasoning with the visual content. Furthermore, we also provide more qualitative results along with a comprehensive discussion, covering both successful and failure cases, to better illustrate the impact of each module within our LTR framework. The revised figures and detailed descriptions are available in our response to Reviewer DfhN.
We will include these examples and discussions in the following version. ### Explanation of System-2 Reasoning The term system-2 originates from psychology and cognitive science, where the dual process theory delineates human reasoning into two distinct processes: system-1, which is fast, intuitive, situational, and perceptual, and system-2, which is slow, logical, abstract, and cognitive [1]. Notably, CoT, Majority Voting, and Best of N are also typical examples of system-2 reasoning models. We will elaborate further on system-2 reasoning and its relevance to computer science in the following version. ### Minor Issue The MLLM employed in the qualitative analysis is VideoChat, and all the typos will be corrected in the following version. [1] Evans, J. S. In two minds: dual-process accounts of reasoning. Trends in Cognitive Sciences, 7(10):454–459, 2003. ISSN 1364-6613. --- Rebuttal Comment 1.1: Comment: The authors' responses addressed my concerns. I highly appreciate the comparison with a few baseline methods. I have raised my rating to weak accept.
Representation Preserving Multiclass Agnostic to Realizable Reduction
Accept (poster)
Summary: In the PAC learning model, one is trying to learn a function $f$ over a distribution $D$. One is given samples and attempts to return a hypothesis $h$ such that $\Pr_{x \sim D}(f(x) \neq h(x))$ is small. In the realizable setting, one obtains samples of the form $(x, f(x))$, where $f$ is guaranteed to be in some function class $C$ and $x \sim D$. In the agnostic model, one obtains samples of the form $(x, y)$ from some arbitrary distribution (with marginal $x \sim D$) and tries to output a hypothesis $h$ where $\Pr(h(x) \neq y)$ is competitive with $\min_{f \in C} \Pr(f(x) \neq y)$, the best error achievable by any $f$ in the class $C$. There are many variations of this setting where the existence of a learner in the realizable setting implies the existence of an agnostic learner. [Hopkins et al '22] found a relatively simple reduction that can be used to show, in a fairly wide variety of settings, that a learning algorithm in the realizable setting can be used to construct an agnostic learner. Their primary focus was on the distribution-family model, where the $x$-marginal $D$ is taken from a known but arbitrary family of distributions. In this context, their construction provided the first known realizable-to-agnostic reduction, but only when the space of values of $y$ was finite (as otherwise the reduction is false without extra assumptions). This paper shows that a simple modification of the reduction of [Hopkins et al '22] can be used when the space of $y$'s is infinite in the distribution-free model (i.e., where $D$ can be taken to be *any* distribution over the domain $X$). In fact, they prove this in a slightly more general context that includes list-decodable and multi-valued learning problems.
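For reference, the two learning guarantees described above can be written out explicitly (these are the standard PAC definitions, restated from the summary, with the confidence parameter suppressed):

```latex
% Realizable: the learner receives samples (x_i, f(x_i)) with f \in C, x_i \sim D,
% and must output h satisfying
\Pr_{x \sim D}\bigl[h(x) \neq f(x)\bigr] \le \epsilon.

% Agnostic: the learner receives samples (x_i, y_i) \sim D for an arbitrary D,
% and must output h satisfying
\Pr_{(x,y) \sim D}\bigl[h(x) \neq y\bigr]
  \le \min_{f \in C} \Pr_{(x,y) \sim D}\bigl[f(x) \neq y\bigr] + \epsilon.
```

The reductions under review take an algorithm achieving the first guarantee and build one achieving the second.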
While many of these reductions were already known in [David et al., 2016], the resulting reduction in this paper is substantially simpler and guarantees that the returned hypothesis of the agnostic learner comes from the same class of functions returned by the realizable learner (albeit this is all at the cost of somewhat worse sample complexity than [David et al., 2016]). Claims And Evidence: Yes. They are supported by mathematical proof. Methods And Evaluation Criteria: N/A Theoretical Claims: Although I didn't read proofs in detail, I was able to convince myself that the results in the main body could all be proved using techniques similar to those mentioned. Although the full proofs may contain small errors, they should be easily fixable. On the other hand, I believe that Theorem A.3 is wrong. Experimental Designs Or Analyses: N/A Supplementary Material: I skimmed the appendices. Relation To Broader Scientific Literature: This work expands on the ideas of [Hopkins et al] and produces results similar to those of [David et al] but with some advantages and disadvantages over that work. Essential References Not Discussed: Not that I know of. Other Strengths And Weaknesses: While the main result of this paper is nice, the technique is a relatively simple extension of [Hopkins et al] and the headline result was basically already known by [David et al], and I do not think that this result is central enough for this simplified proof to be publishable on its own. The discussion of the reduced sample complexity in the case of Massart noise was cute, but not enough to save the paper. Other Comments Or Suggestions: Unless I am misunderstanding the definitions (your definition of partial learners is hard to follow), I believe that Theorem A.3 is wrong. In particular, let X={a,b}, Y=A={0,1}, L(y,z) = 0 if y=* or y=z and 1 otherwise. 
C=H consists of two functions: f with f(a) = 0, f(b) = *; and g with g(a) = g(b) = 1. Then a 1-sample realizable learning algorithm takes (x,y) and returns g if x = b or y = 1, and f otherwise. This has error 0, since if the true function is f, then D must be supported on (a,0). However, consider the agnostic learning problem where D is supported on (b,0). Then Algorithm 1 running on this produces only the hypothesis g, which has error 1. Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for dedicating their time to assess our work. Below, we address the comments provided by the reviewer. **1. The significance of our work:** We establish the first representation-preserving reduction from agnostic to realizable learning for multiclass classification with an infinite label space, in addition to various other learning settings. Notably, this type of reduction was not presented in the work of David, Moran, and Yehudayoff (2016). This is because their approach relied on a compression-based reduction using boosting, which increases the complexity of the agnostic learner’s hypothesis class compared to that of the realizable learner. In contrast, our algorithm is representation-preserving, meaning the agnostic learner’s hypothesis class remains the same as that of the realizable learner. Thus, our algorithm and results are fundamentally different from theirs. In particular, ours is the first proof that a class $\mathcal{C}$ is agnostically learnable with hypothesis class $\mathcal{H}$ if and only if $\mathcal{C}$ is realizably learnable with hypothesis class $\mathcal{H}$. **2. The novelty of our work:** Of course we take inspiration from the work of Hopkins et al., but the key ingredient in our algorithm is novel. In particular, the algorithm of Hopkins et al. runs a realizable learner on an unlabeled sample with all possible labelings, which can only apply to learning settings with a finite effective label space. However, our algorithm runs a realizable learner on all the subsets of a labeled sample, which can handle learning problems with an infinite label space, i.e. multiclass PAC learning with an infinite label space. Moreover, the analysis of the algorithm is also different, as we must account for the variable size of the optimal subset present in this initial labeled data set. Therefore, we designed a novel algorithm with crucial technical differences compared to previous works. 
This answers an open problem mentioned in the work of Hopkins et al. **3. Correctness of Theorem A.3:** Thank you for your skepticism, but as we will explain, the result in Theorem A.3 is correct (though we should note there is a typo on page 11 line 558, where it should say $L(*,\cdot)=1$). In the example you constructed, we suspect there may be a typo in your definitions, since if the distribution $\mathcal{D}$ is supported on $(b,0)$, then according to your loss definition this distribution would be realizable by hypothesis $g$, since $L(g(b),0) = L(1,0) = 0$ (your loss has $L(1,0)=0$). We suspect your intention in the construction might have been to construct a scenario where the realizable learner always returns a function with loss $1$. Such cases can indeed occur, and are compatible with our result, since then the best-in-class loss $L_{\mathcal{D}}(\mathcal{C}) = 1$, and we trivially satisfy the agnostic learning guarantee $L_{\mathcal{D}}(A(S)) \leq 1 \leq L_{\mathcal{D}}(\mathcal{C}) + \epsilon$. We hope this rebuttal convinces you of the significance and novelty of our contributions, and addresses your concern about the correctness of Theorem A.3. --- Rebuttal Comment 1.1: Comment: 1. If a big part of your novelty is that your reduction is representation preserving, you should at least define what you mean by representation preserving in your paper. As far as I can tell, you do not, and it was not at all obvious to me what you meant by it. 2. I'm not sure what version of Hopkins et al you are looking at, but I was unable to find the version of the open question you quoted or anything equivalent in it. But again, Hopkins et al is largely concerned with the distribution-family setting for which your results definitely do not resolve anything. 3. I believe my counter-example had a typo (now fixed) where I swapped the 0 and 1 values of the loss function. But it is made irrelevant by your typo on line 558.
I am now just left to wonder what the point of the result is in the first place. Is learning with partial classifiers just equivalent to learning where there happens to be an extra output called * which is never right? If so, what is the point of the model? Wouldn't *any* reduction from realizable to agnostic learning automatically also hold for partial classifiers? --- Reply to Comment 1.1.1: Comment: We thank the reviewer for dedicating their time to reassess our work. Below, we address the new comments provided by the reviewer. - The definition of the representation-preserving property is implicit in all of our formal definitions. For instance, see Definitions 3.1 and 3.2. In fact, the reason for introducing a hypothesis class $H$ in addition to the concept class $C$ is to capture this property. However, we acknowledge the reviewer’s comment and will ensure that this is discussed more clearly in the camera-ready version. In short, if the output of a realizable PAC learning algorithm lies in a hypothesis class $H$, our reduction guarantees that the output of the agnostic PAC learning algorithm also lies in $H$. - In the final version of the work by Hopkins et al., published in TheoretiCS 2024, they mentioned this open problem. This version is available on arXiv: https://arxiv.org/pdf/2111.04746. Here is the relevant paragraph: "Finally, we note there are a few settings where Algorithm 1 runs into issues, especially discrete infinite settings such as infinite multi-class classification and properties such as privacy that require more careful data handling. We leave the extension of our method to these settings as an intriguing open problem." We note that, prior to our work, there was no agnostic-to-realizable reduction for the multiclass setting with infinite label spaces that also preserves representation.
Thus, our work is the first to provide a proof that, in the multiclass setting with an infinite label space, a concept class $C$ is agnostically PAC learnable by learners within a hypothesis class $H$ if and only if $C$ is realizably PAC learnable by learners within the hypothesis class $H$. - Your understanding of the partial concepts setting is correct. In addition to $\star$ never being a correct prediction, the other difference is that the symbol $\star$ will never appear as a label in the data. This subject was developed in the work of Alon, Hanneke, Holzman, and Moran (FOCS 2021, https://arxiv.org/abs/2107.08444), with the purpose of unifying a variety of learning scenarios whose learnability is not captured by traditional PAC learning theory that previously required separate problem-specific analyses. One classic example is the problem of learning a linear classifier with a margin. For instance, both the Perceptron and SVM operate over the concept class of all linear classifiers, whose VC dimension grows linearly with the Euclidean dimension. But even in infinite-dimensional spaces, if the data are separable with a margin $\gamma > 0$, the sample complexity remains finite and scales linearly in $1 / \gamma^2$ (supposing the data are normalized, for simplicity). This fact is perfectly captured by noting the VC dimension of the partial concept class of $\gamma$-margin linear classifiers is $1 / \gamma^2$ (the partial concepts in this class are specified by linear classifiers, and output $\star$ for points $\gamma$-close to the separator). They show that, for {0,1,$\star$}-valued partial concept classes, the VC dimension still characterizes PAC learnability. Moreover, there exist partial concept classes that are PAC learnable, yet any disambiguation of them—intuitively, filling in the $\star$ symbols with labels to form a total concept class—is not PAC learnable with the traditional PAC learning framework. 
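To make the $\gamma$-margin partial concept described above concrete, here is a minimal sketch (the function names and the 2-D setup are illustrative, not taken from the papers under discussion): the partial concept predicts the sign of $\langle w, x\rangle$, and outputs the undefined symbol "*" on points within margin $\gamma$ of the separator.

```python
import math

def margin_partial_concept(w, gamma):
    """Partial concept induced by a linear separator w: predicts the sign of
    <w, x>, but outputs '*' (undefined) on points within margin gamma of the
    separator -- such points never appear as labeled data in the realizable
    setting."""
    norm = math.sqrt(sum(wi * wi for wi in w))
    def f(x):
        m = sum(wi * xi for wi, xi in zip(w, x)) / norm
        if abs(m) < gamma:
            return "*"
        return 1 if m > 0 else 0
    return f

f = margin_partial_concept(w=(1.0, 0.0), gamma=0.5)
print(f((2.0, 1.0)), f((0.1, 3.0)), f((-2.0, 1.0)))  # 1 * 0
```

The class of all such partial concepts over an infinite-dimensional space has VC dimension on the order of $1/\gamma^2$, as discussed above, even though the underlying total class of linear classifiers does not.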
We refer the reviewer to the aforementioned paper for a more detailed discussion. Regarding the final question, consider the task of agnostically learning the partial concept class of linear classifiers with margin, under a distribution that does not have a margin (which is admitted in the agnostic setting). If we try to apply the basic reduction from Hopkins et al., there is no realizable labeling of the unlabeled data set, so the realizable learner will not be well-behaved if we just run it on all labelings of the unlabeled data. Thus, it becomes necessary to consider sub-samples of the data, as in our work (Hopkins et al. also consider a variant for partial concepts, which considers all realizable labelings of all subsets of the data, but as we previously mentioned, it still runs into problems in the multiclass partial concepts setting). The main contribution of the current manuscript in this context is proving a representation-preserving reduction that applies even in the general case of multiclass partial concepts with an infinite label space, as well as other settings. We hope this rebuttal convinces you further of the significance and novelty of our contributions. Once again, thank you for dedicating your time to reassess our work.
Summary: The paper studies a representation-preserving agnostic-to-realizable reduction. The reduction can nicely be described as splitting the training data into two parts, $V$ and $T$. On the first part of the training data, $V$, the learner runs the realizable learning algorithm on all subsets, obtaining $ 2^{|V|} $ outputs of the realizable learning algorithm. Then, using $ T $ as a validation set, the learner picks the best of the $ 2^{|V|} $ hypotheses created in the first step of the process. At a high level, the proof follows from first considering $V$ as samples from a mixture of a distribution realizable/labelled by the best reference hypothesis $ c^* $ and a part not realizable by $c^*$. Since the first step runs the realizable algorithm on all subsets, it in particular runs on the largest subset $ S' $ of $ S $ that is labelled by $ c^*$. Thus on this subset the realizable learner gets good performance on the part of the distribution labelled by $ c^* $, and since the agnostic algorithm only has to compete with $ c^* $, which also fails on the other part of the distribution, the output of the realizable learner is close to the performance of $ c^* $ on the true distribution. To ensure that the error of the realizable learner on the distribution realizable by $ c^* $ is small, the paper makes sure that $ |V| $ is large enough such that the largest subset $ S' $ of $ S $ labelled by $ c^* $ contains sufficiently many examples to get small error under the part of the true distribution realizable by $c^*$. The next step is to extract this good hypothesis from the $ 2^{|V|} $ hypotheses created in the first step of the algorithm. By Chernoff and a union bound over the $ 2^{|V|} $ hypotheses, the true risk is close to the empirical risk for all of them, and the hypothesis with the smallest empirical risk can be chosen as the final classifier. The above algorithm is related to that presented in Hopkins et al.
2022, but can handle infinite label spaces, as with, for instance, multiclass classification, which was left as an open problem in Hopkins et al. 2022. Furthermore, the framework also works for list learning and multilabel PAC learning (again with infinite labels). The paper also shows that the sample complexity of the framework is better when considering the multiclass classification problem (with infinite labels) under Tsybakov and Massart noise. Claims And Evidence: Most of the proofs of the claims are in the main text, and to the best of my knowledge, these are sound. I did not check the proofs in the appendix. Methods And Evaluation Criteria: No experiments are included in the paper, so I did not know how to evaluate this question. Theoretical Claims: I checked the proofs in the main text and to the best of my knowledge they are sound. I did not check the proofs in the appendix. Experimental Designs Or Analyses: No experiments are included in the paper, so I did not know how to evaluate this question. Supplementary Material: No. Relation To Broader Scientific Literature: The paper presents related work, where the most related is the work by David et al. 2016 and Hopkins et al. 2022. They solve an open problem stated in Hopkins et al. 2022 about giving an agnostic-to-realizable reduction for the infinite-label multiclass classification setting. Essential References Not Discussed: I think it would be good to add a remark about the sample complexity obtained in Brukhim et al. 2022 when presenting Corollary 4.3, and the sample complexity obtained in Charikar and Pabbaraju, 2023 when presenting Corollary 4.4, since they are a $ 1/\varepsilon $ polynomial factor better. Also, if there is any work on multilabel classification, that would also be nice to add. Other Strengths And Weaknesses: The paper is well written. The paper solves an interesting open problem from Hopkins et al.
2022, extending the agnostic-to-realizable framework to the challenging infinite label space setting, in an elegant way. Other Comments Or Suggestions: Congratulations on your nice paper; here are some notes that I took while reading. - page 3 second column line 131-133 Vapnik 2006, the date might be off? - page 3 second column line 160 and proof of lemma 4.2, the notation $ (\mathcal{X} \times \mathcal{Y})^* $ is used differently, as respectively all possible sequences and examples where $ c^{*} $ is realizable. - page 5 first column line 254-255 is it $ \varepsilon/(2\mu^*) $ or is it $ \varepsilon \mu^*/2 $? That was a general question I had for this proof. - page 7 first column, from equation 8 to equation 9, where did the $ \epsilon/2 $ from the left side of equation 8 go? Questions For Authors: 1. I was a bit confused about some of the motivation but was not able to make it concise; here were my thoughts: Page 2 first column line 055-073: Looking at David 2016 and the proof of their theorem 3.3, my understanding of the agnostic sample compression scheme they use is that it is the realizable sample compression run on the largest subsample $ S' $ of $S$ which is realizable by a hypothesis in $C$. Thus, the output space of their agnostic sample compression scheme is the same as the output space of the realizable sample compression scheme, so representation-preserving in terms of the output of the realizable sample compression scheme. Jumping to page 5 second column line 254-268, the framework is applied with algorithm 1 in Brukhim et al. 2022, which from my understanding is an algorithm/realizable sample compression scheme (please correct me if I am wrong); thus the output of the algorithm 1 presented in this paper is the output of algorithm 1 in Brukhim et al. 2022, so a realizable sample compression scheme run on some subset of the data set $ S $. Now again going to Brukhim et al. 2022 and considering their proof of the agnostic case, it uses theorem 3.3
in David 2016; thus it runs algorithm 1 in Brukhim et al. 2022/the sample compression scheme on the largest subsample $ S' $ realizable by a hypothesis in $C$, thus essentially producing the same output hypothesis? Thus, going back to page 2, first column line 055-073, I don't quite get the motivation for outputting simple learners. It is also known, from for instance "Optimal Learners for Multiclass Problems" by Amit Daniely and Shai Shalev-Shwartz, theorem 1, that in, for instance, the multiclass learning setting, simple/proper learners cannot be guaranteed to learn. My understanding is that the picture is the same for list learning, and that the proof of Charikar and Pabbaraju, 2023 goes through a similar sample compression technique; this understanding is from their section 7.2; I did not check their appendix B. I get that, in the case that the learner is simple, proper, or close to proper, the reduction presented in the paper will output something simple. However, from my understanding, the cool thing about this paper is that it captures cases with infinite label spaces that the work of Hopkins et al. 2022 did not, e.g., multiclass classification as a prime example. Thus I don't get the motivation on page 2 first column line 055-073 about necessarily being related to simple learners. I apologize in advance if I have misunderstood some of the previous works, since they are not simple for me to grasp. I also understand that the motivation of a work is somewhat subjective, so, as written above, I am most likely misunderstanding something. 2. page 5 second column line 265-268 and page 6 first column line 287-292: in both places I think it would be worth mentioning the agnostic bounds obtained in, respectively, Brukhim et al. 2022 and Charikar & Pabbaraju, 2023, which from my understanding are a $ 1/\varepsilon $ factor smaller. I would like to keep my score.
I also hope that the authors will add a comment to Corollaries 4.3 and 4.4 about the worse sample complexity compared to these works, and that in these two cases they output the same as the sample compression scheme of David et al. 2016. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for dedicating their time to assess our work. In particular, we thank the reviewer for taking the time to verify that the work is technically sound. We are delighted that the reviewer found that we solved an interesting open problem in an elegant way, and moreover mentioned that our paper is well written. We will make sure to correct the typos and incorporate minor suggestions mentioned by the reviewer for the camera-ready version. Below, we address the major comments provided by the reviewer. **1. Other comments or suggestions:** * It is $\epsilon/(2 \mu^*)$. * There should be an $\epsilon/2$ on the right-hand side of equation 9. **2. Question about the representation-preserving property:** Thank you for this insightful discussion. Your understanding of the prior literature is accurate. For multiclass classification, if we use Algorithm 1 from Brukhim et al. (2022) as our input realizable learner, our approach will yield the same hypothesis class as presented in their paper. And indeed, since that algorithm is compression-based, the reduction of David et al. will be representation-preserving. However, a key advantage of our reduction algorithm lies in its flexibility. For instance, if a realizable learner with simpler hypotheses is available for a given problem, our Algorithm 1 can directly leverage it to produce a correspondingly simpler agnostic learner, which is not necessarily achievable with prior methods. As another interesting example, our reduction would be applicable, and representation-preserving, for realizable learners based on optimally orienting the one-inclusion hypergraph (which we generally would not expect to be compression-based), and in fact would return a predictor which itself is expressed as such an orientation using a subset of the data $V$.
The fact that we return a predictor produced by the realizable learner is itself another notion of simplicity (e.g., a predictor based on an orientation of the realizable-case one-inclusion hypergraph is simpler than majority votes of such predictors, and the simplicity is even more apparent compared to a predictor based on an orientation of the agnostic one-inclusion hypergraph).
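For readers following this thread, the subsample-and-validate reduction being discussed (split the sample into $V$ and $T$, run the realizable learner on every subset of $V$, then validate on $T$) can be sketched roughly as follows. This is an illustration with hypothetical names, not the paper's actual Algorithm 1, and it is exponential in $|V|$, exactly as in the analysis summarized above. The toy realizable learner always returns *some* function, matching the papers' definition of a learning algorithm.

```python
from itertools import chain, combinations

def empirical_error(h, sample):
    return sum(h(x) != y for x, y in sample) / len(sample)

def agnostic_from_realizable(A, S, split=0.5):
    """Run realizable learner A on every nonempty subset of the first part V
    of the data, then return the candidate hypothesis with the smallest
    empirical error on the held-out validation part T."""
    k = int(len(S) * split)
    V, T = S[:k], S[k:]
    subsets = chain.from_iterable(combinations(V, r) for r in range(1, k + 1))
    candidates = [A(list(sub)) for sub in subsets]
    return min(candidates, key=lambda h: empirical_error(h, T))

# Toy realizable learner for 1-D thresholds: returns the threshold at the
# smallest positively-labeled point (and always returns a function).
def threshold_learner(sample):
    ones = [x for x, y in sample if y == 1]
    t = min(ones) if ones else float("inf")
    return lambda x, t=t: 1 if x >= t else 0

S = [(0.1, 0), (0.2, 0), (0.3, 0), (0.6, 1), (0.7, 1),
     (0.8, 1), (0.15, 0), (0.25, 0), (0.65, 1), (0.75, 1), (0.9, 0)]  # last point is noise
h = agnostic_from_realizable(threshold_learner, S)
print(h(0.65), h(0.4))  # 1 0
```

Despite the noisy point, validation on $T$ selects a candidate close to the best threshold in the class, which is the behavior the sample-complexity analysis above formalizes.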
Summary: The authors study agnostic learning with black-box realizable learners, extending the work of Hopkins et al. (2022). They adapt the simple reduction from Hopkins et al. in a very general PAC learning setting (encompassing list learning and many more). They prove that their reduction algorithm achieves a sample complexity of roughly $\frac{\text{realizableAlgo}(\epsilon,\delta) + \log(1/\delta) }{\epsilon^2}$. They prove that the $1/\epsilon^2$ is in general unavoidable. Their algorithm resolves the open problem of Hopkins et al. of providing such a black-box reduction for the multi-class case with infinite labels. Furthermore, they also study particular noise settings. Claims And Evidence: Yes. Methods And Evaluation Criteria: N/A. Theoretical Claims: Proofs are mostly standard and correct. Experimental Designs Or Analyses: N/A. Supplementary Material: Proofs are mostly standard and correct. Relation To Broader Scientific Literature: Some clearer distinction with Hopkins et al. (2022) would be nice. See below. Essential References Not Discussed: All good. Other Strengths And Weaknesses: The paper is nice, continues an important line of work, and should be interesting for the theoretical ICML community. It is somewhat incremental and the new achievements compared to Hopkins et al. (2022) are not fully clear. Please see questions below. Some more discussion of other recent attempts at combining learners to achieve optimal learners would be nice, e.g., the cited Aden-Ali et al. paper and others. While not necessarily related to agnostic learning, the general theme of trying to combine basic learners in a simple way seems quite common nowadays (in contrast with boosting, sample compression, one-inclusion-graph-based, etc. techniques). Other Comments Or Suggestions: Please check your usage of \citep vs \citet. E.g., use "work of Hopkins et al. (2022)" instead of "work of (Hopkins et al, 2022)". Typos: * "an unified" --> "a unified" * " (Agnostic → Realizable )."
remove the space after "Realizable" * Inconsistent "class vs Class" in "Concept Class C, Hypothesis class H" in Algorithm 1. Questions For Authors: There seems to be a significant difference to Hopkins et al. (2022). Note how the reduction calls the algorithm $A(\cdot)$ with potentially non-realizable subsamples. Standard realizable algorithms cannot necessarily handle these. E.g., the Perceptron algorithm might not even stop when run on a non-realizable sample. In some sense you are assuming that the realizable algorithm can detect that the sample is not realizable and then just returns some default hypothesis. Please make this distinction clearer in the next version. Or is there an easy fix? E.g., in Hopkins et al. (2022) an unlabeled sample is used and labeled only with hypotheses, hence realizability is guaranteed. It is not really clear from the paper if the new results (e.g., the reduction for the $|Y|=\infty$ case) require the adapted black-box reduction or if a new analysis of the original algorithm of Hopkins et al. (2022) would suffice. Please clarify. Are the changes necessary? Also, Theorem 2.2 states that their reduction needs overall $1/\epsilon^2$ samples for VC classes (instead of the worst-case $1/\epsilon^3$). Is this also possible here? Corollary 4.3 here says that the new reduction requires $1/\epsilon^3$ overall, at least for the DS dimension case. What about finite or binary $Y$? The tightness proof is not very satisfactory. It only holds for the $O(1)$ vs $1/\epsilon^2$ case. In particular, it is not shown that $1/\epsilon^3$ can ever be required for this reduction (in some natural case like multi-class, preferably with finite labels). Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for dedicating their time to assess our work. In particular, we thank the reviewer for taking the time to verify that the work is technically correct. We are delighted that the reviewer found our work nice and important, and moreover mentioned that our paper should be interesting to the theoretical ICML community. We will make sure to correct the typos and incorporate minor suggestions mentioned by the reviewer for the camera-ready version. Below, we address the major comments provided by the reviewer. **1. Running realizable learner on unrealizable sets:** Thank you for pointing this out. In this work, our definition of a learning algorithm requires it to always return a function. However, for learners that only satisfy this for realizable input data sets, we can still apply our reduction by simply modifying line 2 of our algorithm to only run $A$ on all subsets of $V$ that are realizable by $\mathcal{C}$ (which could be identified using a "weak consistency" oracle for $\mathcal{C}$). The theorem and proof remain valid. **2. The algorithm of Hopkins et al. does not work for multiclass classification with an infinite effective label space:** Thank you for this insightful question. We address it by noting that the Hopkins et al. reduction fails for the well-studied "stars and sets" concept class, namely Example 4.1 of Daniely et al. (2011). Specifically, consider $\mathcal{X} = [0,1]$, and denote by $\mathcal{F}(\mathcal{X})$ the collection of all finite subsets $A\subseteq \mathcal{X}$. Let the label space be $\mathcal{Y} = \mathcal{F}(\mathcal{X}) \bigcup \lbrace\star\rbrace$, where $\star$ is a special label (not to be confused with "$*$" from the partial concept classes section). For every $A\subseteq \mathcal{X}$, define $f_{A}:\mathcal{X}\rightarrow\mathcal{Y}$ as follows: $$ f_{A}(x) = \begin{cases} \star & \text{if } x\in A \\\\ A & \text{otherwise}.
\end{cases} $$ Let the concept and hypothesis class be $\mathcal{C} = \mathcal{H} = \lbrace f_A: A\in \mathcal{F}(\mathcal{X}) \cup \lbrace\mathcal{X}\rbrace\rbrace$. Then, there is a realizable learner $\mathcal{A} _ {\text{good}}$ that returns $f_{\mathcal{X}}$ unless a label $A \in \mathcal{F}(\mathcal{X})$ appeared in the sample, in which case it returns $f_A$. Let $\mathcal{D}$ have marginal on $\mathcal{X}$ the uniform distribution over $\mathcal{X}$, and let the labels be always $\star$ (i.e., realizable with target $f_{\mathcal{X}}$). Now, consider the algorithm of Hopkins et al. with this scenario and realizable learner $\mathcal{A} _ {\text{good}}$. Let the unlabeled dataset be $S_U = \lbrace x_1, x_2,\cdots, x_n\rbrace$, and the labeled dataset be $S_L =\lbrace(x_{n+1},\star),(x_{n+2},\star),\cdots,(x_{n+m},\star)\rbrace$, and with probability one these $x$'s are all distinct. Denote $A = \lbrace x_{n+1},x_{n+2},\cdots,x_{n+m}\rbrace \in \mathcal{F}(\mathcal{X})$. Then their algorithm would run $\mathcal{A} _ {\text{good}}$ on all realizable labelings of $S_U$; in particular, one of these is $(S_U,f_A(S_U)) = \lbrace (x_{1},A),\ldots,(x_{n},A)\rbrace$, and the output hypothesis of $\mathcal{A} _ {\text{good}}(S_U,f_A(S_U))$ would be $f_A$. By the definition of $f_A$, we know that the empirical error of $f_A$ on $S_L$ is 0. Their algorithm then outputs any ERM on $S_L$ from these functions produced by $\mathcal{A} _ {\text{good}}$, which means their algorithm can output $f_A$. However, the true error rate of $f_A$ is 1, while the best error in the concept class is 0. Thus, their algorithm fails for this concept class (for essentially the same reason ERM fails for this concept class). In contrast, our algorithm returns $f_{\mathcal{X}}$, and hence achieves error 0. **3. Sample complexity when we have a finite effective label space:** Thank you for pointing this out.
For multiclass classification with a finite effective label space size $|\mathcal{Y}| _ {\text{eff}}$, the sample complexity of our reduction can be reduced to $\tilde{O}(1/\epsilon^2)$ rather than $\tilde{O}(1/\epsilon^3)$. More specifically, in this case learnability of $\mathcal{C}$ is equivalent to finite Natarajan dimension $d$, and we can use a similar argument as above, only running the realizable learner on the maximal realizable subsets of $V$, of which there are at most $|V|^d |\mathcal{Y}|^{2d}_{\text{eff}}$ using Natarajan's generalization of the SSP Lemma, so that this bounds the size of $\mathcal{H}_V$ (rather than $2^{|V|}$), leading to a refinement of the final bound (though this analysis then becomes much closer to that of Hopkins et al.). **4. Sample complexity lower bound:** We note that in the case of a finite effective label space, we are able to prove an improved sample complexity of $\tilde{O}(1/\epsilon^2)$. On the other hand, we agree with the reviewer that exploring the possibility of sharpness of the $\tilde{O}(1/\epsilon^3)$ bound in future work would be interesting.
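The failure mode described in point 2 above can be checked numerically. Below is a small self-contained simulation (a finite stand-in for $\mathcal{X} = [0,1]$; purely illustrative): both $f_A$ and the target $f_{\mathcal{X}}$ have zero empirical error on the labeled set $S_L$, so an ERM over the candidates produced from relabelings of the unlabeled data may return $f_A$, whose true error is nearly 1.

```python
import random

random.seed(0)
domain = [i / 1000 for i in range(1000)]   # finite stand-in for X = [0, 1]

# The true labels are always the special symbol "star" (target concept f_X).
points = random.sample(domain, 10)
S_L = [(x, "star") for x in points]        # the labeled validation set
A = frozenset(x for x, _ in S_L)           # the finite "bad" set from the construction

def f_A(x):   # outputs star on A, and the label A itself elsewhere
    return "star" if x in A else A

def f_X(x):   # the target concept: always star
    return "star"

def emp_err(f, sample):
    return sum(f(x) != y for x, y in sample) / len(sample)

# Both candidates have empirical error 0 on S_L, so ERM over the candidates
# may return f_A ... even though the true error of f_A is nearly 1:
true_err_fA = sum(f_A(x) != "star" for x in domain) / len(domain)
print(emp_err(f_A, S_L), emp_err(f_X, S_L), true_err_fA)  # 0.0 0.0 0.99
```

This mirrors the rebuttal's argument: validation on $S_L$ alone cannot separate $f_A$ from $f_{\mathcal{X}}$, which is why the subsample-based reduction is needed.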
Summary: This paper studies PAC learning for the problem of multiclass classification with unbounded numbers of labels. The primary contribution is a novel reduction from the agnostic learning setting to the realizable setting that preserves the structure of the output space, which resolves an open problem posed by Hopkins et al., 2022. The authors introduce "unified PAC learning", which encompasses multiclass, list, and multilabel PAC learning. Additionally, they explore reductions under Massart and Tsybakov noise and extend their results to partial concept classes. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: No. Experimental Designs Or Analyses: Yes. Supplementary Material: No. Relation To Broader Scientific Literature: It's very related. Essential References Not Discussed: No. Other Strengths And Weaknesses: The paper is well written. The mathematical development is clear and concise. Other Comments Or Suggestions: No. Questions For Authors: No. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for dedicating their time to assess our work. We are delighted that the reviewer found our algorithm novel, and mentioned that our paper is well written. We would be happy to provide additional clarification on any aspect of the paper that could help inform the review score.
Promoting Ensemble Diversity with Interactive Bayesian Distributional Robustness for Fine-tuning Foundation Models
Accept (poster)
Summary: The authors propose a new Bayesian inference framework called “Interactive Bayesian Distributional Robustness” (IBDR). IBDR is designed to improve the quality and diversity of model ensembles by modelling interactions between individual models in the ensemble in order to prevent them from collapsing into similar solutions. The proposed IBDR framework uses a dual optimization procedure that promotes both distributional robustness and model diversity. The authors demonstrate empirically that using the IBDR framework leads to improved performance on image classification and common-sense reasoning tasks, where IBDR outperforms baseline methods on the majority of tasks. ## update after rebuttal As I stated in my comment below, the authors sufficiently answered all of my questions in their rebuttal and provided additional analysis that I think will strengthen the paper. I have increased my score accordingly. Claims And Evidence: All claims made in the submission are supported by clear convincing evidence. In particular, the large number of tasks and baseline comparisons done in the experiments section provide clear empirical evidence of claims made regarding the performance improvements obtained by using IBDR. Methods And Evaluation Criteria: Yes, the authors selected a relevant set of tasks in image classification and common-sense reasoning, and compared IBDR directly to all relevant baselines in the literature that I am aware of. Theoretical Claims: The authors provide proofs of Theorem 4.1, Corollary 4.2, and Corollary 4.3 in the appendix (section C). I read these proofs thoroughly. As far as I am aware, the proofs are correct and accurately prove the three relevant claims. Experimental Designs Or Analyses: I did check the validity of the experimental design. 
In particular, the authors provided a link to their anonymized codebase which made it easy to check the validity of their experimental setup by looking directly at the code they ran to produce the results provided in the paper. As far as I am aware, the experimental setup is valid. Supplementary Material: I reviewed all supplementary material. Relation To Broader Scientific Literature: The authors show that IBDR improves performance of ensemble learning. This is relevant to the broader community because ensemble learning is very relevant in the literature, especially for applications such as fine-tuning foundation models. Additionally, mode collapse is a very well-studied and relevant problem in the literature. The authors propose a novel and interesting way to address this problem of mode collapse that I think will be of interest to the broader research community. Additionally, the authors demonstrate that IBDR outperforms relevant baselines in the literature on a variety of relevant benchmark tasks, providing motivation for researchers to use IBDR in practice. Essential References Not Discussed: There is no essential related work missing from the paper as far as I am aware. Other Strengths And Weaknesses: Main Strengths: 1. Novelty: IBDR is a novel Bayesian inference framework that introduces a novel approach to deal with the model collapse problem - one of the most relevant problems in this area in the literature. 2. Theoretical motivation: The authors provide strong theoretical evidence to motivate their IBDR framework (i.e. Theorem 4.1). 3. Empirical evidence: The authors provide compelling empirical evidence to demonstrate that IBDR outperforms other state of the art methods in the literature across a large number of benchmark tasks. Potential Weaknesses: 1. What is the limit on IBDR performance improvement as the number of particles increases? 
The authors mention that as the number of particles is increased, the ensemble quality is improved, thus improving performance. However, there is an inherent tradeoff here because increasing the number of particles also increases the runtime linearly. Tables 4 and 5 demonstrate this accuracy-runtime tradeoff with empirical results for 1, 2, 4, and 8 particles. The authors use this result to provide justification of their choice to use 4 particles to balance this tradeoff - which I do agree is a reasonable choice given these results. However, it is clear from Table 4 that, up to 8 particles, the performance continues to improve as the number of particles increases. This begs the question: what is the limit on the number of particles after which adding additional particles no longer improves performance? I.e., would accuracy continue to increase as we add up to 20 particles? Up to 100 particles? There must be some limit where adding more particles no longer continues to improve accuracy? I therefore think that it would be very interesting and informative if the authors were to extend the results in Tables 4 and 5 to include larger numbers of particles, ideally up to the point where this “limit” on continued accuracy improvement is reached and we see that the accuracy stops improving. This would be useful to demonstrate the best possible performance of IBDR in cases when practitioners are willing to pay higher runtime costs to achieve the best possible accuracy. 2. Runtime of IBDR compared to baselines? Table 5 provides runtimes per epoch for IBDR with different numbers of particles. However, I think it would also be good to include the runtime per epoch for all baseline methods compared in Tables 1-3. This would demonstrate the accuracy-runtime tradeoff when choosing between IBDR and these baseline approaches. Other Comments Or Suggestions: 1.
Typo in running title: “Promoting Ensemble Diversity with Interactive Bayesian Distributional s for Fine-tuning Foundation Models”? What is s? 2. Section title: “B.2. Data augmentations”, augmentations should be capitalized for consistency with other section titles in appendix. Questions For Authors: Please see the two specific questions listed above under “weaknesses”. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We would like to thank reviewer KcS3 for their supportive review and feedback. We address their questions and concerns as follows: 1. **What is the limit on IBDR performance improvement as the number of particles increases?** Thank you for the helpful suggestion. Following your comment, we conducted additional experiments using a greater number of particles. The results are reported in the table below for your convenience. As shown, the performance of our method generally improves as the number of particles increases. Notably, we observe a slight performance gain when using 12 particles. However, this appears to be the upper limit of improvement: using 16 particles yields comparable or even slightly lower performance than using 12 or 8 particles.

| Accuracy | Camelyon | EuroSAT | Resics45 | Retinopathy |
|---|---|---|---|---|
| 1p | 82.4 | 93.1 | 84.2 | 73.8 |
| 2p | 84.8 | 93.9 | 86.6 | 74.3 |
| 4p | 85.1 | 95.0 | 87.3 | 76.5 |
| 8p | 85.8 | 95.5 | 87.4 | 77.0 |
| 12p | 86.1 | 95.3 | 87.6 | 77.3 |
| 16p | 85.9 | 95.0 | 87.6 | 77.2 |

2. **The accuracy-runtime tradeoff when choosing between IBDR and other baseline approaches.** We thank the reviewer for the insightful suggestion. We have already conducted experiments comparing the runtime of our method with several baseline approaches. As shown in the table below, our method has a slightly higher runtime than some baselines such as SA-BNN and SVGD. Deep Ensemble, being the simplest among the compared methods, achieves the fastest runtime, which aligns with its relatively lower accuracy, as reported in Table 1 in our paper. These results highlight the performance-runtime trade-off among different methods, as the reviewer suggested. We will extend our runtime analysis to include all datasets and baselines, and incorporate the findings into the revised version.
| Model | Camelyon | EuroSAT | Resisc | Retinopathy |
|---|---|---|---|---|
| SA-BNN | 128 | 121 | 125 | 122 |
| SVGD | 124 | 118 | 121 | 120 |
| Deep Ensemble | 67 | 64 | 66 | 66 |
| IBDR (4 particles) | 158 | 149 | 156 | 151 |

3. **Regarding some typos in our paper** - Typo in **running title** and in **Section title: “B.2. Data augmentations”**: Thank you for the helpful suggestions. We will correct the typo in the running title by removing the extraneous "s". Additionally, we will capitalize “Augmentations” in Section B.2 to ensure consistency with the formatting of other appendix section titles. --- Rebuttal Comment 1.1: Comment: Thank you for providing the additional analysis to answer my two questions. I think that adding these two analyses will strengthen the paper. I have updated my score to reflect this. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate your recognition of our work and the time and effort you have dedicated as a reviewer. We will incorporate these analyses into the final version of our paper.
Summary: The authors introduce a method to encourage diversity in an ensemble of Bayesian neural-net particles. To achieve this, they combine results from distributional robustness and determinantal point processes to derive a PAC-Bayesian-style upper bound on their target objective. An approximation of this bound becomes their training objective. As the title already suggests, the paper focuses solely on finetuning existing models and is evaluated extensively on both image classification and LLM reasoning tasks. ___ _Post-rebuttal update: Switched from weak reject to weak accept given the rebuttal._ Claims And Evidence: The claims are broadly consistent and provided with sufficient evidence. Some restrictions are - Claims regarding uniqueness. l195 claims that "conventional Bayesian frameworks" can't enforce diversity (see comments on related work below) - Despite general derivations and theoretical applicability, the method is evaluated only for the subtask of fine-tuning and lacks a proper empirical evaluation of its general statements. (see below) Methods And Evaluation Criteria: Both, the methods and evaluation criteria make sense given the restriction the authors impose on themselves in the title, i.e., fine-tuning. However, the abstract and most of the theory are written and presented as a general new method for particle-based ensemble learning. The experimental setup is not able to evaluate that. Theoretical Claims: All theoretical claims are supported by detailed enough proofs whose correctness I checked. An exception is the step from theory to practice, which is not justified, i.e., from Corollary 4.3 to equation (5). While the authors claim this to be _"a minor relaxation"_ (l299) they lack arguments why (5) is still supposed to be a valid upper bound as it contains a multitude of changes. Additionally, the theory depends on the fact that the loss function has to be bounded. 
As far as I can see, this constraint is dropped completely during application, leaving the theoretical foundation of the approach rather weak. Experimental Designs Or Analyses: To evaluate fine-tuning results for classification and common-sense reasoning, the experimental design is adequate and extensive with numerous baselines and ablations. Supplementary Material: I reviewed the proofs and skimmed the remaining parts. Relation To Broader Scientific Literature: Except for the crucial omission discussed in the next section, the paper's relation to the broader literature is properly discussed. Essential References Not Discussed: Although the paper belongs to the field of particle-based Bayesian ensembles, its discussion of the field is mostly limited to varying MCMC methods and the claim that _"conventional Bayesian frameworks" (l198)_ lack the required mechanisms. Other Bayesian approaches, e.g., the literature which can be summarized under the keyword "repulsive ensembles" (D'Angelo & Fortuin, 2021), are completely omitted. _____ D'Angelo and Fortuin, Repulsive deep ensembles are bayesian, NeurIPS 2021 Other Strengths And Weaknesses: ## Strengths - clear and understandable writing style ## Weaknesses - Apart from those already discussed, the major weakness of the paper is its reference section. Many papers point to arxiv preprints instead of the published versions (starting, e.g., with Abbas et al., who published their work in ICML 2022). Others appear multiple times, e.g., Foret et al., Nguyen et al., Welling et al., etc. The most drastic case is the LoRA paper reference. It is referenced once as "Hu, E. and et al.", then once more with the complete author list "Hu, E. J., Shen..." and finally as Bartlett et al., 2023. The claimed author list is disjoint from the LoRA authors, and the arxiv reference finally points to a completely unrelated reinforcement learning paper.
Given that there seems to be a researcher called Lora Bartlett, that could mean that an LLM was used at least for some of the references. Other Comments Or Suggestions: - Please follow the ICML style guide when submitting to ICML and use `\citet` and `\citep` whenever appropriate - l370: _"According to Table 1, IBDR outperforms all baselines by large margins."_ -> That is not what the table shows; there are even situations where IBDR is outperformed by large margins. What it shows is that IBDR outperforms the baselines on average. ### Typos - l162 right column is missing a minus sign in the exponential - Running title is broken Questions For Authors: - Why are the baselines between the two experimental setups different? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback. We provide detailed responses to the main concerns as follows: 1. **l195: 'conventional Bayesian frameworks' can't enforce diversity** - Indeed, in Section 3.3, we introduce what we meant by the traditional Bayesian framework. Given a training set $S$, we have the closed form for the posterior $p(\theta|\mathcal{S})$. To handle this posterior, the variational approach aims to learn an approximate posterior $q(\theta)$ using the ELBO. At the end, we sample $K$ models $\theta_1,\dots,\theta_K$ from $q(\theta)$ to ensemble the predictions. Certainly, this framework does not allow for particle interaction. - Our main contribution is the joint distribution $Q^K$, which models interactions among particles and enables distributional robustness. This links to sharpness-aware minimization and allows flexible definitions of interaction (e.g., prediction divergence), unlike prior work like SVGD, wherein particle diversity is primarily enforced through similarity kernels over weight vectors. 2. **The method is evaluated only for the subtask of fine-tuning ...** - A key limitation of ensemble-based methods is the need to store and train multiple models, which becomes computationally expensive when dealing with large-scale models. Besides, in the current era, where fine-tuning pre-trained models is a dominant paradigm, our method becomes particularly advantageous. It mitigates the primary drawbacks of traditional ensemble approaches by requiring only the storage of multiple lightweight adapters rather than full models. - Following your suggestions, we have also conducted additional experiments involving training from scratch on CIFAR100 as follows:

| Accuracy | Resnet18 | Resnet34 |
|-|-|-|
| Standard Training | 76.29 | 77.31 |
| IBDR (4 particles) | 77.45 | 77.92 |

3. **... 
why (5) is still supposed to be a valid upper bound as it contains a multitude of changes** - Eq (5) is a relaxation of the one in Corollary 4.3 that keeps its spirit. Particularly, for the one in Corollary 4.3, given $\theta_{1:K}\overset{iid}{\sim}Q$, we need to find the perturbation $\theta_{1:K}^{'}$ all at once that maximizes the loss while staying close to $\theta_{1:K}$ via minimizing $\frac{\lambda}{K}\sum_{k=1}^{K}c(\theta_{k},\theta_{k}^{'})$. In the relaxed version, we isolate $\theta_k$ and $\theta_k^{'}$ in the sense that we find the perturbation $\theta_{k}^{'}$ around $\theta_k$ by maximizing the loss and minimizing the distance $c(\theta_k^{'}, \theta_k)$, while replacing $\theta_{-k}^{'}$ by the fixed $\theta_{-k}$. This is tolerable because $\theta_{-k}^{'}$ stays close to $\theta_{-k}$. Besides being easier to implement, this relaxation still preserves the spirit of the original one. 4. **The theory depends on the fact that the loss function has to be bounded...but this constraint is dropped completely during application...** - We do not break or violate the bounded-loss constraint in our application. In particular, we mainly focus on classification tasks with the CE loss, whose formulation can be simplified as $-\log p_y$ where $y$ is the ground-truth label of $x$ and $p_y$ is the model prediction. We acknowledge that $-\log p_y$ tends to infinity if and only if $p_y$ tends to 0. This is impossible for two reasons: - At the beginning of training, the model lacks prior knowledge and typically makes near-uniform predictions. This causes the predicted probability $p$ to fluctuate around $1/M$, where $M$ is the number of classes, rather than approaching zero. - During training, the objective is to increase the predicted probability for the correct class (maximize $p_y$). Therefore, the training process inherently pushes $p_y$ closer to 1, not 0. 
- In addition, we notice that in the SAM paper, the McAllester PAC-Bayes bound was used to develop its theory, which is only applicable to the 0-1 loss. In this paper, we leverage a more advanced PAC-Bayes theorem, leading to a less restrictive family of loss functions. 5. **..., paper's discussion of the field is mostly limited to varying MCMC methods. Other Bayesian approaches,... is completely omitted** - We will definitely discuss [D'Angelo & Fortuin, 2021] in the revised version. However, we notice that our approach is fundamentally different from these Stein approaches. 6. **The LoRA paper reference:** - Thanks for your comment. This is indeed an error in our bibtex. We will definitely fix it in the revised version. 7. **l370: "Table 1, IBDR outperforms all baselines by large margins..."** - We will change the wording. Indeed, Table 1 includes a column labeled AVG, and our method outperforms all other baselines by more than 2% in terms of average accuracy. 8. **Why are the baselines between the two experimental setups different** - We chose baselines closely related to our method for image classification, and followed BLoB's setup for commonsense reasoning, including comparisons with popular Bayesian methods in LLMs. --- Rebuttal Comment 1.1: Comment: Thank you for your rebuttal and the answers. The following are only clarifying comments from my side and do not require an answer from the reviewers (unless, of course, they want to). - **On fine-tuning**: I agree that fine-tuning is becoming an important task, yet disagree that it is _"a dominant paradigm"_, the field is still larger than foundational models. In my opinion, ensembles face two problems. (i) storage and computational resources, which you state, but also (ii) the potential of collapsing unto a subset of modes, i.e., losing their diversity. After reading the current work, the reader is left wondering, whether they can use it also in their shallower, more classical, approaches. 
Your preliminary CIFAR100 results suggest that that is the case, further strengthening the paper. - **valid upper bound**: A _relaxation_ would be to switch to a looser bound that might have some benefits, such as easier training, better numerical stability, etc. If I read the paper and your explanation correctly, what you are doing is an _approximation_ to the true bound. This is completely fine, but should be presented that way. - **_"However, we notice that our approach is fundamentally different from these Stein approaches."_** The point I tried to make was that reading the variational inference paragraph (starting l064) the existence of particle-based variational inference approaches, be they stein-based, repulsive BNNs, etc. is completely omitted. One remaining weakness of the work is that the method tends to struggle with calibration in certain regards. I encourage the authors to explore further improvements in that direction in future work. Given your overall rebuttal and the other reviews, I adjust my score accordingly. --- Reply to Comment 1.1.1: Comment: We appreciate the reviewer’s insightful feedback and valuable suggestions, which will undoubtedly enhance our paper. We will certainly incorporate them into the revised version.
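As a concrete illustration of the relaxation debated in this thread, here is a minimal sketch of the per-particle perturbation step: each particle receives a SAM-style ascent perturbation computed while the other particles are held fixed at their unperturbed values. The `grad_fn` interface and the radius `rho` are illustrative assumptions for this sketch, not the authors' exact implementation.

```python
import numpy as np

def per_particle_perturbation(particles, grad_fn, rho=0.05):
    """Sketch of the per-particle relaxation: particle theta_k gets a
    normalized ascent step of radius rho, with the other particles held
    at their unperturbed values.

    grad_fn(k, particles): gradient of the ensemble loss w.r.t. particle k,
    evaluated at the unperturbed particles (hypothetical interface).
    """
    perturbed = []
    for k, theta_k in enumerate(particles):
        g = grad_fn(k, particles)
        # normalized ascent direction, kept within a ball of radius rho
        perturbed.append(theta_k + rho * g / (np.linalg.norm(g) + 1e-12))
    return perturbed

# Toy check with loss L(theta_k) = ||theta_k||^2, so grad = 2 * theta_k:
particles = [np.array([1.0, 2.0]), np.array([-3.0, 0.5])]
grad_fn = lambda k, ps: 2.0 * ps[k]
perturbed = per_particle_perturbation(particles, grad_fn)
for orig, pert in zip(particles, perturbed):
    assert np.sum(pert**2) > np.sum(orig**2)  # ascent step increases the toy loss
```

Freezing the other particles makes each perturbation a standard one-model SAM step, which is what makes the relaxed objective easy to implement.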
Summary: The paper introduces Interactive Bayesian Distributional Robustness, a novel Bayesian inference framework designed to improve ensemble diversity and robustness in fine-tuning foundation models. The core idea of IBDR is to explicitly model interactions between multiple sampled particles in the Bayesian inference process. Unlike traditional Bayesian methods, which treat sampled models as independent, IBDR leverages a joint distribution and introduces a divergence loss to enforce diversity among the sampled models. Claims And Evidence: 1. IBDR enhances ensemble diversity compared to existing Bayesian methods -- Supported by experiments. 2. IBDR improves robustness through Wasserstein-based distributional optimization -- Supported by theoretical derivations and empirical results 3. IBDR generalizes across tasks, from vision (ViT) to language models (LLaMA-2) -- Supported by experiments Methods And Evaluation Criteria: The accuracy and Expected Calibration Error (ECE) metrics are appropriate for evaluating both predictive performance and model uncertainty. Theoretical Claims: The paper presents several theoretical claims, particularly Theorem 4.1 and Corollary 4.2, which establish an upper bound for the population loss under the IBDR framework. These claims appear mathematically sound and extend prior results in distributional robustness. Experimental Designs Or Analyses: The experimental design is well-structured and follows standard machine learning evaluation practices. Supplementary Material: NA Relation To Broader Scientific Literature: - Bayesian Neural Networks: Extends variational inference and Bayesian optimization techniques. - Sharpness-Aware Minimization: Connects to robustness-aware training methods. - Distributional Robustness Optimization: Uses Wasserstein-based robustness techniques. 
Essential References Not Discussed: NA Other Strengths And Weaknesses: While IBDR is evaluated on multiple datasets, computational overhead is not thoroughly analyzed. Given that the method involves interactive Bayesian sampling, a study on efficiency would be valuable. Other Comments Or Suggestions: See above. Questions For Authors: NA Ethical Review Concerns: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewer’s comments and respond to the key concerns as follows: **"While IBDR is evaluated on multiple datasets, computational overhead is not thoroughly analyzed. Given that the method involves interactive Bayesian sampling, a study on efficiency would be valuable."** Thank you for the insightful suggestion. We would like to point out that an analysis of performance and runtime with varying numbers of particles is already provided in Appendix A.1. For your convenience, we report the corresponding tables below.

| Accuracy | Camelyon | EuroSAT | Resics45 | Retinopathy |
|---------------|----------|---------|----------|-------------|
| 1p | 82.4 | 93.1 | 84.2 | 73.8 |
| 2p | 84.8 | 93.9 | 86.6 | 74.3 |
| 4p | 85.1 | 95.0 | 87.3 | 76.5 |
| 8p | 85.8 | 95.5 | 87.4 | 77.0 |

| Runtime (sec/epoch) | Camelyon | EuroSAT | Resics45 | Retinopathy |
|---------------------|--------------|-------------|-------------|--------------|
| 1p | 51 ± 1.8 | 50 ± 1.5 | 48 ± 1.7 | 51 ± 0.7 |
| 2p | 80 ± 2.1 | 83 ± 2.4 | 93 ± 2.1 | 85 ± 0.9 |
| 4p | 158 ± 4.3 | 161 ± 4.9 | 156 ± 4.1 | 151 ± 2.1 |
| 8p | 220 ± 6.2 | 230 ± 5.7 | 218 ± 7.3 | 246 ± 6.8 |
Summary: This paper introduces a distributionally robust method for Bayesian estimation, aimed primarily at fine-tuning foundation models. Central to the contribution is a term to promote particle diversity during optimization. Theoretical results of the proposed method are provided, and extensive fine-tuning experiments for vision transformers and language models are presented. ## Update after rebuttal The authors have committed to fixing the errors in the proof of Theorem 4.1 and clarified its claims. This was my primary concern in the submitted manuscript, and thus, I have raised my score from a 1 to a 3 post-rebuttal. Claims And Evidence: The empirical claims about the performance of IBDR seem well-evidenced. There are several claims about variational inference that I do not agree with (see the "essential references" below), but this is more minor. I also have significant comments about the theory (see below). Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense. Standard fine-tuning tasks are chosen for reasonable foundation models, with strong baselines. Theoretical Claims: I am not an expert in distributionally robust optimization in particular, and was not able to follow all technical claims; I believe the presentation can be improved. For example, the proof of Theorem 4.1 states "it follows from the PAC-Bayes theorem developed by (Alquier et al., 2016) [...]" -- but (Alquier et al., 2016) is a 40-page paper with several theorems. It appears that the authors are referring to Theorem 4.1, but then there is a missing logarithm for $1 / \delta$ (which is introduced later). The proof subsequently states "by choosing $\beta = $[...] in Eq. (6)", but there is no $\beta$ in Eq. (6). 
Assuming this is actually referring to the unlabeled equation above, the stated value of $\beta$ is not correct; unless I'm missing something, even after correcting $1 / \delta \rightarrow \log(1/\delta)$, the correct value of $\beta$ should actually be $$\beta = \frac{\sqrt{8N}}{L} \sqrt{D_{\text{KL}}(Q^K \parallel P^K) + \log(1 / \delta)}$$ for the algebra to work out. I did not attempt to verify further claims. Experimental Designs Or Analyses: The experimental design and analysis is appropriate as far as I can tell. As mentioned above, the methods and evaluation criteria are appropriate. Supplementary Material: I reviewed Appendix A, and some of Appendix C (as detailed above). Relation To Broader Scientific Literature: The proposed method relates to the current and relevant topic of fine-tuning foundation models, through an interesting lens of distributional robustness. Essential References Not Discussed: Several times it is claimed that variational inference methods lack a mechanism to promote diversity. However, approaches like nonlinear Stein VI [1] have very similar explicit forms, optimizing an objective that combines a loss function with a diversity penalty. One may also use mixture variational posteriors, which have recently been shown quite successful in deep learning tasks [2]. Other Strengths And Weaknesses: I think clarity can be improved, particularly in the theoretical section. Other Comments Or Suggestions: Some typographical comments: - (Line 16, Right Column) These citations should use `\citet{}`. - (Line 150, Right Column) There is a missing space in "cross entropy (CE)". - (Line 249, Right Column) Strictly speaking, $f(x; \theta_i)$ is generally not a "predictive probability." - (Line 321, Left Column) "Gradient descend" should be "gradient descent". - (Appendix C) The notations $D_\text{KL}(Q \parallel P)$ and $D_\text{KL}(Q , P)$ are inconsistently used. 
I think it would be interesting to compare results to LoRA with IVON [3], though this was only made publicly available in December and thus falls under "concurrent work." Questions For Authors: 1. What is the meaning of "non-maximal prediction probabilities"? 2. The nonlinear SVGD paper [1] found that the repelling regularizer could be quite important, and in particular settles on an entropy-based regularizer rather than a log-determinant-based one. Have the authors considered other similar choices? ## References [1] Wang, D., & Liu, Q. (2019). Nonlinear Stein variational gradient descent for learning diversified mixture models. In International Conference on Machine Learning (pp. 6576-6585). PMLR. [2] Shen, Y., Daheim, N., Cong, B., Nickl, P., Marconi, G. M., Bazan, C., ... & Möllenhoff, T. (2024). Variational learning is effective for large deep networks. arXiv preprint arXiv:2402.17641. [3] Cong, B., Daheim, N., Shen, Y., Cremers, D., Yokota, R., Khan, M. E., & Möllenhoff, T. (2024). Variational Low-Rank Adaptation Using IVON. arXiv preprint arXiv:2411.04421. Code Of Conduct: Affirmed. Overall Recommendation: 3
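For reference, the algebra behind the value of $\beta$ proposed in this review can be sketched as follows, assuming the bound takes the standard Alquier-type form in which $\beta$ trades off the complexity term against the moment term (a reconstruction for illustration, not the paper's exact statement):

```latex
f(\beta) \;=\; \frac{D_{\mathrm{KL}}(Q^K \,\|\, P^K) + \log(1/\delta)}{\beta}
             \;+\; \frac{\beta L^2}{8N},
\qquad
f'(\beta^*) = 0
\;\Rightarrow\;
\beta^* \;=\; \frac{\sqrt{8N}}{L}\,
              \sqrt{D_{\mathrm{KL}}(Q^K \,\|\, P^K) + \log(1/\delta)},
\qquad
f(\beta^*) \;=\; L\,\sqrt{\frac{D_{\mathrm{KL}}(Q^K \,\|\, P^K)
                                 + \log(1/\delta)}{2N}}.
```

Substituting $\beta^*$ makes the two terms of $f$ equal, which is why the optimized bound collapses to the single square-root expression quoted in this thread.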
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback and would like to address the concerns as follows: 1. **Regarding the Theoretical Claims:** - **Regarding the citation of prior work in the proof of Theorem 4.1**: Thank you for the suggestion. Our proof relies on Theorem 2.1 from the cited study. We will clarify this in the revision to better guide readers. - **Regarding the $\beta$ value in our proof:** We appreciate the reviewer for pointing out our typos. In line 858, ${1}/{\delta}$ should be $\log\frac{1}{\delta}$. In line 878, the correct value of $\beta$ should include a nested square root over the divergence and $\log\frac{1}{\delta}$, as the reviewer suggested. Also, “equation (6)” in line 878 should refer to line 858, which is equation (7). We carefully double-checked our proof and, except for these typos, it remains correct. Specifically, the RHS of equation (7) actually becomes $\sqrt{\frac{\text{D}_\text{KL}\left(Q^{K} \| P^{K}\right)+\log\frac{1}{\delta}}{2N}}\times L$ with the correct value of $\beta$ above. 2. **Regarding our claim about promoting diversity** - **Conventional Bayesian frameworks lack explicit mechanisms to model interactions between particles θ1:K during training:** Indeed, in Section 3.3, we introduce what we meant by the traditional Bayesian framework. Given a training set $S$, we have the closed form for the posterior $p(\theta|\mathcal{S})$. To handle this posterior, the variational approach aims to learn an approximate posterior $q(\theta)$ using the ELBO. At the end, we sample $K$ models $\theta_1,\dots,\theta_K$ from $q(\theta)$ to ensemble the predictions. Certainly, this framework does not allow for particle interaction. - **The novelty of our promoting-diversity mechanism:** Our main contribution is the joint distribution $Q^K$, which models interactions among particles and enables distributional robustness. 
This links to sharpness-aware minimization and allows flexible definitions of interaction (e.g., prediction divergence), unlike prior work like SVGD, wherein particle diversity is primarily enforced through similarity kernels over weight vectors. 3. **Comparison with nonlinear Stein VI approach**: - Thanks for pointing us to the nonlinear Stein paper. This is an interesting paper that extends Stein Variational Gradient Descent (SVGD). These two share the same form of the objective function $\max_{\rho}{ \left(F(\rho)+H(\rho)\right)}$ where $H\left(\rho\right)=-\int\log\rho d\rho$ is the entropy. As pointed out in Eq. (8) in Theorem 1 of the nonlinear Stein paper, the second term in this equation, involving $\nabla_{\theta}k\left(\theta,.\right)$, is known as the repulsive term derived from maximizing the entropy $H\left(\rho\right)$. This term encourages the particles to spread out to avoid mode collapse. - Differently, our **proposed approach** enables model interaction through $l_{div}\left(\theta_{1:K},x,y\right)$. Interestingly, we can define this term appropriately to encourage various kinds of diversity (e.g., diversity in the model span or diversity in model predictions). Evidently, in our practical method, motivated by the theory of Determinantal Point Processes, we propose a diversity loss to encourage diversity in model predictions that is proven to improve the ensemble performance. - Finally, we will cite and discuss the nonlinear Stein paper. 4. **Typographical comments**: We thank the reviewer for catching these typos. We will correct them in the revised manuscript. 5. **Comparison results with IVON LoRA:** As reported in **Section 5.2** of our paper, we conduct experiments on CommonSense Reasoning, which are also used in IVON-LoRA’s study. 
For the reviewer’s convenience, we present our results alongside theirs as follows:

| Accuracy | WG-S | ARC-C | ARC-E | WG-M | OBQA | BoolQ |
|-------------------|------|-------|-------|------|------|--------|
| IBDR (ours) | 72.51 | 70.56 | 86.95 | 76.46 | 84.60 | 86.89 |
| IVON-LoRA | 72.1 | 69.9 | 87.5 | 76.6 | 80.9 | 86.1 |

| ECE | WG-S | ARC-C | ARC-E | WG-M | OBQA | BoolQ |
|-------------------|------|-------|-------|------|------|--------|
| IBDR (ours) | 24.17 | 21.20 | 9.71 | 11.19 | 5.82 | 1.54 |
| IVON-LoRA | 27.5 | 25.8 | 10.1 | 23.0 | 11.2 | 5.6 |

6. **Other Reviewer's Questions** - **The meaning of "non-maximal prediction probabilities"?**: These are the predicted class probabilities excluding the one assigned to the ground-truth class. For example, if the true class is $y_3$ and the original model prediction probability is $[p_1, p_2, p_3, p_4]$ for classes $y_1, y_2, y_3, y_4$, then the non-maximal prediction is $[p_1, p_2, p_4]$. - **Regarding the question about the entropy-based regularizer**: In our proposed approach, we use the diversity loss $l_{div}\left(\theta_{1:K},x,y\right)$ to encourage the diversity of model predictions instead of the model-weight diversity obtained from entropy maximization. --- Rebuttal Comment 1.1: Comment: Thanks for the reply -- especially the commitment to fixing the errors in the theoretical results, which were my primary concern. I've raised my score accordingly. --- Reply to Comment 1.1.1: Comment: Thank you for recognizing the contributions of our work and for your constructive feedback. We will carefully incorporate your suggestions to strengthen the final revision.
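To make the prediction-diversity idea in this thread concrete, here is a minimal sketch (our own illustration, not the authors' code) of a DPP-style log-determinant score over the particles' non-maximal prediction vectors: identical predictions give a near-singular Gram matrix and a very negative score, while diverse predictions score higher.

```python
import numpy as np

def non_maximal_probs(probs, y):
    """Drop the ground-truth class probability, keeping the rest.

    probs: (K, M) predicted class probabilities for K particles.
    y: index of the ground-truth class.
    """
    return np.delete(probs, y, axis=1)

def logdet_diversity(probs, y, eps=1e-6):
    """DPP-style diversity score: log-determinant of the Gram matrix of the
    particles' normalized non-maximal prediction vectors.  Larger values
    indicate more diverse predictions; eps keeps the matrix non-singular."""
    V = non_maximal_probs(probs, y)                 # (K, M-1)
    V = V / np.linalg.norm(V, axis=1, keepdims=True)
    G = V @ V.T + eps * np.eye(V.shape[0])          # (K, K) Gram matrix
    _, logdet = np.linalg.slogdet(G)
    return logdet

# Two particles with identical predictions vs. two with diverse predictions
probs_same = np.array([[0.1, 0.2, 0.6, 0.1],
                       [0.1, 0.2, 0.6, 0.1]])
probs_div  = np.array([[0.1, 0.2, 0.6, 0.1],
                       [0.3, 0.05, 0.6, 0.05]])
assert logdet_diversity(probs_same, y=2) < logdet_diversity(probs_div, y=2)
```

Using only the non-maximal probabilities means the particles can disagree on how they distribute mass over the wrong classes without being penalized on the ground-truth class itself.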
How Transformers Learn Regular Language Recognition: A Theoretical Study on Training Dynamics and Implicit Bias
Accept (poster)
Summary: The work presents a theoretical analysis of how a single layer of a Transformer (more precisely, an attention layer with a linear layer on top) learns to solve "even pairs" and "parity check" - two regular language recognition tasks. The authors begin by analyzing the even pairs task, showing that the Transformer trained in two phases learns to solve the even pairs problem, and discuss the training dynamics. Next, the authors discuss how parity check can be solved with Chain-of-Thought (CoT), either by inference with a trained Transformer or by training with CoT data. The authors verify their results experimentally. Claims And Evidence: I think that the theoretical analysis of Transformers trained on the tasks studied in the paper is novel, and the results are interesting in the context of understanding training dynamics and capabilities of Transformers for solving language learning tasks. However, there are some flaws in the paper that I believe the authors need to address: - My main concern is that, beyond showing nice theoretical analysis and some study of training dynamics, it is unclear what the main takeaway from the paper is. What do these results teach us about what Transformers can do, or about how they learn, that we didn't know before? One conclusion that is addressed in the discussion is the importance of CoT for learning complex tasks like parity check. However, this seems to overlap with results shown in previous works, and it is unclear to me how the analysis of parity learning with CoT shown in this work differs from previous results of similar flavor. Another interesting result is that learning "even pairs" can help with learning parity check, but this is not shown directly and not discussed. I believe that stating clearly what the main novel conclusion from the theoretical analysis is, and what it informs us about learning with Transformers, can greatly improve the paper. 
- Related to the above, I believe that the relation between learning "even pairs" and "parity check" is interesting, but it is not studied in a setting that captures learning with language models. Specifically, the result for using transformers trained on even pairs to compute parity during inference seems particularly synthetic, and it is unclear whether this is just a way to introduce the next result in the section, or whether this result is interesting on its own. For the analysis of the second approach, adding the regularization with respect to the even pairs appears synthetic and it is unclear why it is needed. Can a similar result be shown by just changing the data mixture (e.g., training on a mixture of "even pairs" and "parity check" task, and testing on "parity check" with CoT)? How does the "even pairs" regularization improve training, given other results showing that parity learning is possible just with CoT? - The authors emphasize that learning happens in two phases, but it seems that the phases arise from the change of hyper-parameters at some point in training. While this is reasonable for the theoretical analysis, I believe that the authors should discuss whether the two-phase learning should happen with a standard learning-rate schedule. In particular, I think that experimental results in a standard setting (standard, maybe even deep, Transformer, standard learning rate schedule etc.) would be helpful. - Additionally, I think that connecting the theoretical conclusions to real-world problems (even somewhat synthetic problems) through additional experiments could be a good way to make this work more appealing to a broader audience. To summarize, I believe that the theoretical setting and results are interesting, but the bottom-line conclusion is unclear, and some of the results are perhaps too synthetic. Methods And Evaluation Criteria: See above. Theoretical Claims: See above. Experimental Designs Or Analyses: See above. 
Supplementary Material: No. Relation To Broader Scientific Literature: See above. Essential References Not Discussed: No. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: See above. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for the helpful comments. Please note that all our new experiment results (i.e., the figures we quote below) can be accessed via the link https://anonymous.4open.science/r/icml2025-4BCC/Figures%20for%20ICML.pdf Q1: Main takeaway about transformers: (i) How the analysis of parity learning with CoT differs from previous results; (ii) How learning "even pairs" helps with learning parity check. A1: **(i)** There are two key differences of our analysis from previous theoretical analyses [1,2] of parity check. *First*, [1,2] analyze only the first one or three gradient descent steps, not the entire training process. They do not prove the convergence of the training algorithm as the number of iterations grows large. In contrast, we characterize the dynamics of the entire training process, with the loss converging to the global minimum (nearly zero) (Theorems 5.1–5.3), which is a comprehensive analysis of the training process. *Second*, prior studies require all input sequences to have the same length, whereas our approach allows inputs to have different lengths -- a more realistic setting. Such a more general setting is much more challenging to analyze, because tokens appearing in different sequence lengths influence each other’s gradient updates. Our analysis explicitly captures these dependencies and provides a fine-grained analysis of individual token values, making our technique more generalizable to broader learning scenarios. **(ii)** To connect the two problems, the learning output of "even pairs" serves as the first step in CoT towards solving parity check. Hence, using "even pairs" as a regularizer in the parity check loss can provide an initial momentum to start the training of parity check. The reviewer can refer to lines 330–347 (right column) for a more detailed explanation of how the parameters update. 
**Further insights:** (a) Layer roles: Our analysis characterizes distinct roles of feed-forward and attention layers via their joint training dynamics. Specifically, the attention layer $W$ learns to capture token-level semantics (e.g., token equality), while the linear layer $u$ encodes positional information. These functional separations are expected to persist in deeper architectures, where different layers may specialize to recognize different information such as content or positional information. (b) The training process exhibits two phases: the first phase performs parameter alignment, and the second phase features fast growth of the parameter norm and fast decay of the loss. Such dynamics are also observed in real-world problems (see Figure 3). (c) Our developed techniques for analyzing the joint training of linear and attention layers can be useful for studying more general transformers. [1] Kim, J. and Suzuki, T. Transformers provably solve parity efficiently with chain of thought. ICLR 2025 [2] Wen et al. From Sparse Dependence to Sparse Attention: Unveiling How Chain-of-Thought Enhances Transformer Sample Efficiency, arXiv 2024 Q2: (i) It is unclear whether Algorithm 1 is interesting on its own. (ii) Why adding the regularization is needed, and how the "even pairs" regularization improves training. A2: **(i):** Our Approach 1 in Section 5.1 is new and of independent interest. Existing works studying CoT typically require training with CoT data [1,2]. In contrast, our Approach 1 shows that transformers trained on even pairs can solve parity check by simply using CoT at **inference time** without additional training. **(ii):** The output of the "even pairs" algorithm serves as the first step in CoT towards solving parity check. Hence, using "even pairs" as regularization provides initial momentum to the training process for parity check. 
Also note that such a regularization approach is equivalent to changing the data distribution, i.e., mixing the even pairs data and parity data together and training the transformer model. Q3: Does the two-phase learning happen with a standard learning-rate schedule? A3: Yes. For vanilla GD with a fixed stepsize throughout the training process, our new experiments (Figure 4) show that the training still exhibits a similar two-phase process to that characterized in our theorem. Q4: Connecting theoretical conclusions to real-world problems through additional experiments A4: We provide new experiments in Figure 3 on the real-world 'shakespeare' dataset with deeper and more realistic transformers (nanoGPT). It can be observed that training with the vanilla Adam optimizer also exhibits a two-phase learning curve, i.e., the parameter norm grows fast in phase 1 and slows down in phase 2. **Generalization:** Our analysis framework can be extended to study general regular language problems. More details are provided in A4 of our response to Reviewer TZ3J. Thank you again for your insightful comments. We hope our responses addressed your concerns and would greatly appreciate your kind consideration in increasing your score.
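To make the relationship between the two tasks in A1/A2 concrete, here is a minimal sketch. The encodings below — first/last-symbol agreement for even pairs, XOR-folding for the CoT steps of parity check — reflect the standard formulations of these tasks and are our illustration, not the paper's exact setup:

```python
def even_pairs(s):
    # Positive iff the number of "ab" and "ba" adjacent pairs is even,
    # which is equivalent to the first and last symbols agreeing.
    return s[0] == s[-1]

def parity_via_cot(s):
    # Chain-of-thought for parity check: fold in one symbol per step.
    # Each step is a single equality comparison -- the even-pairs-style
    # subproblem that serves as the first CoT step.
    state = s[0]
    for tok in s[1:]:
        state = 'b' if state == tok else 'a'  # XOR over {a, b}, 'b' acting as 0
    return state == 'b'  # True iff the count of 'a' symbols is even
```

In this reading, a model that has learned the single-comparison subproblem (even pairs) can solve parity by chaining that comparison at inference time, which is the intuition behind Approach 1.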
Summary: This paper presents a detailed theoretical analysis of how a one-layer transformer learns two sequence recognition/classification tasks: even pairs and parity check. The analysis decomposes the factors driving the attention weights and token score, with a discussion of the training dynamics in detail (e.g. attention weight reliance on different tokens, the linear layer implementing a max-margin solution). Claims And Evidence: They seem fine overall. Methods And Evaluation Criteria: The two tasks seem appropriate and standard from the mentioned related work. Theoretical Claims: I found sections 4 and 5 a bit hard to follow, see notes below. Experimental Designs Or Analyses: The experiment in Section 6 makes sense. Supplementary Material: No Relation To Broader Scientific Literature: This work extends prior theoretical work on the learning dynamics of transformers to a more representative architecture and multiple tasks. Essential References Not Discussed: Not to my knowledge Other Strengths And Weaknesses: This work is a bit outside of my expertise so I'll comment from a general perspective. The biggest concern I have is the accessibility of this work to a broader audience. While I get the gist of the ideas, I find the paper a bit hard to follow. Other Comments Or Suggestions: Related to the accessibility point above, I think it would be great if the authors could add more diagrams, or interleave the theoretical analyses with the empirical results. These can help a more general reader understand the work and in turn interest a broader audience. Questions For Authors: It was unclear to me if the two-phase phenomenon is entirely emergent or tied to the two-phase gradient descent schedule, which from Figure 3 I take to be the former (emergent). Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the helpful comments. Please note that all our new experiment results (i.e., the figures we quote below) can be accessed via the link https://anonymous.4open.science/r/icml2025-4BCC/Figures%20for%20ICML.pdf Q: Add more diagrams, or interleave the theoretical analyses with the empirical results. A: Thanks for the suggestion. We will add a diagram to illustrate what each training phase of the even pairs problem learns, and another diagram to illustrate how the CoT-based inference approach solves the parity check problem. We will also present the experiments on the learning phases for the even pairs and parity check problems right after the theorems for their training dynamics. Q: It was unclear to me if the two-phase phenomenon is entirely emergent or tied to the two-phase gradient descent schedule, which from Figure 3 I take to be the former (emergent). A: Yes. As demonstrated by our additional experiments (Figure 4), the two-phase phenomenon arises naturally even under a fixed stepsize for the entire gradient descent training process. Thank you again for your insightful comments. We hope our responses addressed your concerns and would greatly appreciate your kind consideration in increasing your score. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I would like to keep my score and recommend the area chair to consider the feedback provided by reviewers with more direct expertise when making the final decision. --- Reply to Comment 1.1.1: Comment: Thank you very much for taking time to read our rebuttal and for your thoughtful comment. We sincerely appreciate your positive opinion!
Summary: The authors theoretically study language recognition tasks with transformers. Formally, they study the training dynamics of transformers trained with gradient descent on the parity check and even pairs problems. Considering a single-layer simplified transformer, they first show that the even pairs problem can be solved directly, before showing how Chain-of-Thought is needed for the harder parity check problem, either at inference with a model pre-trained on even pairs or during the training itself. For both tasks, the authors identify two distinct phases of learning, where the attention grows in the first, mapping data to separable vectors. Then, once the attention module is stable, the linear layer grows and implements a max-margin decision boundary to separate the attention outputs between positive and negative samples. They provide a convergence rate for the training loss and empirically validate their findings on synthetic data. Claims And Evidence: The theoretical claims are well supported by clear and detailed proofs, and the authors also provide experimental validation of their theory in a controlled setting. Methods And Evaluation Criteria: The main contributions of this work are theoretical, and the authors provided detailed proofs and brief experimental computation to validate the theory in a synthetic setting. I believe the methods and evaluation criteria make sense for the problem. Theoretical Claims: The theoretical findings are very detailed and greatly explain the training dynamics. The proofs are well thought out, detailed, and clear. In my opinion, these are the main strengths of this work. Experimental Designs Or Analyses: The experimental setup is well explained and remains in a controlled setting. I believe the current submission would be strengthened with additional visualization to illustrate the different training dynamics and phases identified by the authors. 
Supplementary Material: I read the proofs and the experiments left in the appendix. Relation To Broader Scientific Literature: I find that related work and prior works are well introduced and compared. The submission's contributions are interesting and are part of a growing interest in the literature on the theoretical understanding of transformers. The novelty seems to come mostly from the slightly more general model considered and the study of language recognition tasks with a truncated CoT framework. The max-margin solution seems connected to [1], which showed how gradient descent on a softmax model converges in direction to a max-margin solution that separates locally-optimal tokens from non-optimal ones. [1] Tarzanagh et al. Max-Margin Token Selection in Attention Mechanism, NeurIPS 2023 Essential References Not Discussed: To the best of my knowledge, there are no essential references not discussed in the current submission. Other Strengths And Weaknesses: **Strengths** - The paper is very well-written and motivated - The problem tackled is of great interest - I appreciate that the model considered is (slightly) more general than prior works [1], which leads to a more involved analysis - The findings are interesting and well explained to grasp their impact on our understanding of the training dynamics - The proofs are clear and elegant **Weakness** I list below what I think are weaknesses, but I would be happy to be corrected if I misunderstood some important aspects of the authors' contributions. - While the authors' approach is indeed more general than [1] because of the linear layer, it is still less general than [2], since the linear layer is incorporated into the value matrix $W_V$ in the optimization, which amounts to simply considering an attention module and not a transformer: typically, feed-forward blocks have non-linearities, as was considered in [2]. 
- I find some parts of the setting oversimplified compared to more practical settings, and some of these simplifications are not well-motivated in my opinion (see questions). - The current submission lacks additional visualization or empirical validation, since the considered setting seems a little bit ad-hoc and remains simple. It would be interesting to see how the figures vary with other hyperparameters for the controlled setting. Overall, I find the paper interesting and the analysis well conducted. I believe this is valuable work that can be improved with more visualization and variety in the controlled setting to illustrate the theoretical findings better. This is the reason for my current score, but I remain open to modifying my score, provided the authors clarify the points mentioned in the weaknesses section. *References* [1] Kim, J. and Suzuki, T. Transformers provably solve parity efficiently with chain of thought. ICLR 2025 [2] Wen et al. From Sparse Dependence to Sparse Attention: Unveiling How Chain-of-Thought Enhances Transformer Sample Efficiency, arXiv 2024 **Update after rebuttal**: increased score from 3 to 4. Other Comments Or Suggestions: I list below some potential typos: - l. 121, second column: "Frobenious" --> "Frobenius" - l 400, first column: "thee" --> "the" Questions For Authors: 1) It seems that the embedding strategy described from l.171 only depends on the position in the sequence and not on the token value. Could the authors clarify this point? If this is the case, then I believe it is not very realistic, since for the tasks considered in the submission, the value of the token matters a lot (and it is, of course, the case in more practical settings). 2) In Theorem 4.1, the authors propose to choose $\lambda = \Omega(L_{max}^2)$ instead of the common $\sqrt{d}$ in the vanilla transformer (motivated to avoid small gradients when the magnitude of logits increases). 
However, the order of $L_{max}$ and $d$ in common transformers is typically $\approx 10^2$ (time series forecasting [1], ViT [2], BERT [3], etc.). It means replacing a scaling of order $10^1$ with a scaling of order $10^4$. I believe this will lead experimentally to shrinking the logits range and outputting almost uniform attention values. Could the authors elaborate on that? 3) Related to the point above, I notice that in their experiments, the authors take $L_{max} = 6$, which seems quite small, given again the type of sequences processed by transformers in more practical settings, and $\lambda=2$. This does not seem to match the predicted $\lambda=\Omega(L_{max}^2)$ of Theorem 4.1. Maybe I misunderstood the setting, but could the authors clarify that point? 4) I am intrigued by the gradient descent in two phases considered in the analysis. Why did the authors choose this framework instead of a more classical GD, SGD, or more adapted Adam (noting that adaptive optimizers are better for transformer and attention-based models [4])? 5) Related to the point above, how did the authors select the value $t_0$? It seems that the optimization can be heavily impacted because of that, especially given that the parameter $\lambda$ is used in front of the learning rate. Could the authors elaborate on that? *References* [1] Ilbert et al. SAMformer: Unlocking the Potential of Transformers in Time Series Forecasting with Sharpness-Aware Minimization and Channel-Wise Attention, ICML 2024 [2] Dosovitskiy et al., An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale, ICLR 2021 [3] Devlin et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, NAACL 2019 [4] Zhang et al. Why are adaptive methods good for attention models? NeurIPS 2020 Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the helpful comments. Please note that all our new experiment results (i.e., the figures we quote below) can be accessed via the link https://anonymous.4open.science/r/icml2025-4BCC/Figures%20for%20ICML.pdf Q1: It seems that the embedding strategy does not depend on the token value. A1: We would like to clarify that the embedding strategy does depend on both the token value and the position. To see this, suppose the token is at the $\ell$-th position in the sequence. Then if the token is symbol 'a', embedding vector $E$ has 1 at the $\ell$-th **odd** index, i.e., $E[2\ell-1]=1$; otherwise if the token value is 'b', the embedding vector has 1 at the $\ell$-th **even** index, i.e., $E[2\ell]=1$. Clearly, embedding also depends on the token value. Q2: In experiments, authors take $L_{\max}=6$ and $\lambda=2$, which does not match $\lambda=\Omega(L_{\max}^2)$ in Theorem 4.1. A2: We provide new experimental results (see Figures 1 and 2) under $\lambda=L_{\max}^2$, which exhibit similar training phases as our original experiments and validate our theory. We further note that $\lambda=\Omega(L_{\max}^2)$ of Theorem 4.1 is a sufficient condition to guarantee the convergence of the training theoretically, but may not be necessary. In practice, as demonstrated in our original experiment, a much smaller $\lambda=2$ can also lead to desired convergence even with a faster rate. Q3: Theorem 4.1 chooses $\lambda=\Omega(L_{\max}^2)$ instead of the common $\sqrt{d}$ in the vanilla transformer. Will this lead experimentally to shrinking the logits range and outputting almost uniform attention values? A3: As discussed in A2, $\lambda = \Omega(L_{\max}^2)$ achieves the same desirable output (non-uniform attention) as smaller $\lambda$, although the training becomes slower. 
We also provide new experiments on deeper and larger models (see Figure 3), and observe that choosing a large scaling factor may not always degrade the performance drastically. Several reasons can possibly explain this. (a) Deeper models have layer normalization and adaptive learning rates, which might compensate for smaller gradients. (b) The dynamics of feed-forward layers can help activate the softmax so that attention is not uniform. (c) The attention weights will learn to increase their values to mitigate the high scaling factor (as shown in Figure 3). We will include those experiments and a discussion on the choice of $\lambda$ in the paper. Q4: I am intrigued by the gradient descent in two phases considered in the analysis. Why did the authors choose this framework instead of a more classical GD, SGD, or more adapted Adam (noting that adaptive optimizers are better for transformer and attention-based models [4])? A4: Our two-phase learning-rate schedule serves as a simplified approximation of the decaying learning rate in Adam, which uses a high learning rate in early training and then a smaller learning rate in later training. See lines 182-191 (right column) for a more detailed explanation. Nevertheless, even for classical GD with a fixed stepsize throughout the training process, our new experiments (Figure 4) show that the training still exhibits a similar two-phase process to that characterized in our theorem. Q5: How did the authors select the value $t_0$? A5: In Theorem 4.1, we provide a parameter setting that guarantees a desirable training output, where $t_0 = 1/(\eta L_{\max})$, and $\eta = O(\min\{1/L_{\max}, 1/\lambda^{2/3}\})$. For example, if we choose $\eta=1/\lambda$ and $\lambda=L_{\max}^2$, making the learning rate at the first phase $O(1)$, then $t_0=O(L_{\max})$. This choice suggests how to set the duration of the warm-up phase in practice, which can be chosen based on the length of input sequences. 
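To make the embedding described in A1 above fully concrete, here is a minimal sketch. The function, its argument names, and its 0-indexed list output are our illustration, not the paper's code:

```python
def embed(token, pos, L_max):
    # Orthonormal one-hot embedding over 2*L_max dimensions.
    # Token 'a' at the pos-th position (1-indexed) sets E[2*pos - 1] = 1
    # (the pos-th odd index in A1's 1-indexed notation); token 'b' sets
    # E[2*pos] = 1 (the pos-th even index).
    E = [0] * (2 * L_max)
    idx = 2 * pos - 1 if token == 'a' else 2 * pos  # 1-indexed entry
    E[idx - 1] = 1                                  # store 0-indexed
    return E
```

Distinct (token, position) pairs thus map to distinct standard basis vectors, so the embedding depends on both the token value and the position.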
Thank you again for your insightful comments. We hope our responses addressed your concerns and would greatly appreciate your kind consideration in increasing your score. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed answer and for the additional experiments. I appreciate the authors' efforts to address my concerns. I will consider that along with the other reviews (and their responses) for my final recommendation. **Update** --> After carefully reading other reviews and the authors' answers to them, I decided to increase my score, given that most of my concerns are addressed. Although the setting is simplified, the analysis is well done, and additional experiments on larger models have been conducted, showcasing the same two-phase pattern. --- Reply to Comment 1.1.1: Comment: Thank you very much for taking time to read our rebuttal and for your thoughtful reconsideration of our work. We sincerely appreciate your positive feedback and strong support!
Summary: This paper focuses on two typical regular languages: even pairs and parity check. The authors show that one-layer transformers can learn even pairs directly without CoT. For parity check, it is shown that one-layer transformers can learn it with CoT and with a small amount of even pairs data mixed in. Claims And Evidence: The theoretical claims are supported by clear theorems and proofs. Methods And Evaluation Criteria: / Theoretical Claims: Due to the large number of theoretical papers I have been assigned to review, I cannot verify every proof in full detail. The theorem statements look correct to me. Experimental Designs Or Analyses: The experiments on one-layer transformers are presented. The experimental designs are sound and valid. Supplementary Material: I did not have time to review the supplementary material in detail. Relation To Broader Scientific Literature: It is crucial to understand whether transformers can learn formal languages defined according to Chomsky's hierarchy, and this has been studied in many prior works empirically or theoretically. This paper contributes to this line of research by proving that one-layer transformers can learn even pairs and parity check, the two most basic regular languages, under a neat theoretical setting. Essential References Not Discussed: / Other Strengths And Weaknesses: Strengths: 1. The theoretical setting is very neat and clean. 2. Many prior theoretical works train the layer weights of transformers separately, which is not very natural. This paper directly analyzes the joint training of both layers. Weaknesses: 1. It is not clear how the results, or even how this line of research, can be extended to deep transformers. In fact, Deletang et al. (2023) (cited in the paper) showed that deep transformers are not able to learn even pairs (acc is not 100%). 
It might be the case that the setting studied in this paper is just a delicate setting that happens to enable transformers to learn these two regular languages. If this is the case, it would be important to discuss how generalizable the results are and whether there is something to learn from this paper for deep transformers. 2. Although I really appreciate that the authors made the setting clean and neat, I do feel that the proof may critically depend on some seemingly weak assumptions: (1) zero initialization; (2) the two-phase LR schedule; (3) merging $W_u W_v$ as $u$ and $W_kW_q$ as W. 3. The authors claim that one of their main technical contributions is to analyze the joint training of both layers. However, in the first phase of the two-phase LR schedule, the learning rates of the two layers are decoupled, which makes me suspect that the analysis may implicitly assume that one layer is trained much faster than the other one. In this case, the analysis is not really for the joint training of both layers. 4. I feel the analysis is too ad hoc and specific to the two regular languages studied in this paper. It is not clear how easily their analysis can be extended to other regular languages. Other Comments Or Suggestions: In the second approach to parity check, it would be better to phrase the method of adding even pairs as data mixing rather than as adding a regularization loss, since data mixing is a more intuitive term to describe this process. I agree that its effect is similar to regularization, though. Questions For Authors: 1. I would like to see the authors' thoughts on Weakness 1. I would like to raise my score if the authors could point out some useful insight that can be extended to deep transformers. 2. I would like to see the authors' thoughts on Weaknesses 2, 3, 4. I would like to raise my score if the authors could actually show that the proof is flexible enough to hold with big variations of the setting. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the helpful comments. Please note that all our new experiment results (i.e., the figures we quote below) can be accessed via the link https://anonymous.4open.science/r/icml2025-4BCC/Figures%20for%20ICML.pdf Q1: It is not clear how the results can be extended to deep transformers A1: We thank the reviewer for the thoughtful comment. We acknowledge that there are several settings that make our results seem better than empirical studies. E.g., we construct orthonormal embedding vectors, whereas Deletang et al. (2023) need to learn the embedding functions, which may lead to additional errors. However, our results still offer several key insights for deep transformers, as discussed below. **(i) Layer roles:** Our theoretical analysis uncovers distinct roles played by different components of the transformer during training. Specifically, the attention layer $W$ learns to capture token-level semantics—e.g. token equality—while the linear layer $u$ encodes positional information. These functional separations are expected to persist in deeper architectures, where different layers may specialize to recognize different information such as content or positional information. **(ii) From simple to complex tasks via CoT**: We theoretically justify how transformers trained on simple tasks can generalize to complex tasks using CoT (Algorithm 1). This helps explain the success of CoT in scaling model reasoning capabilities without additional training. **(iii) Stabilizing training with simple-task supervision:** We demonstrate that incorporating simple-task data (e.g., even pairs) improves the initial training of complex tasks (e.g., parity check) using CoT. This finding provides practical guidance for constructing datasets in deep transformers—including simpler examples can stabilize the training process. 
Q2: The proof may critically depend on some seemingly weak assumptions: (1) zero initialization; (2) two-phase LR schedule; (3) merging $W_u W_v$ as $u$ and $W_k W_q$ as $W$ A2: We would like to point out that these assumptions can be relaxed or justified as follows. (1) can be relaxed to random initialization such as Gaussian initialization. Then concentration inequalities can ensure that the initial parameters are relatively small with high probability. Then, similar techniques can be applied to analyze the training dynamics of transformers to show that GD updates will push the parameters onto the right track. (2) serves as a simplified approximation of the decaying learning rate in Adam, which uses a high learning rate in early training and then a smaller learning rate in later training. See lines 182-191 (right column) for a more detailed explanation. (3) can be removed by separately training $W_k, W_q$, $W_v$ and $u$. Our analysis can be extended to include analyzing the changes of more terms such as $W_k W_k^\top$, $W_q^\top W_q$, and $\langle x_\ell W_k , W_q x_L\rangle$ during the training. We expect that the core idea remains similar. Q3: The analysis may not be really for the joint training of both layers. A3: Our analysis is indeed joint training. While the learning rates of the two layers are different in the first phase, our analysis does not de-couple their changes into two time scales. Specifically, on pages 16-21, we explicitly analyze the effect of the gradients of $u$ and $W$ on the parameter updates at *each time step* and take into consideration how one parameter affects another. Q4: I feel the analysis is too ad-hoc to the two regular languages studied in this paper. A4: Thanks for the insightful question. While our analysis focuses on two specific regular languages, the core techniques for handling token dependencies and the role of CoT are generalizable via the following unified view of regular language tasks. 
Every regular language can be recognized by a finite-state automaton, which operates by updating a state $s$ based on an input symbol $w$ via a transition function $\delta(s,w)$. This abstraction aligns well with our framework, where even pairs compares the last token (state) with another token in the sequence, appending the result at the end, while CoT iteratively applies this process to determine the final answer. To generalize our approach to arbitrary regular languages, the key is extending from simple token-equality comparisons to a more general transition function $\delta(\cdot,\cdot)$. This can be done by finding an attention map that can approximate a general transition function $\delta$. The training dynamics analysis will leverage some key techniques that we develop here, such as handling the coupling of layers and the implicit bias. Q5: It would be better to phrase the method as data mixing rather than adding a regularization. A5: Thanks for the suggestion! We will make the change or add discussions. ------ Thank you again for your insightful comments. We hope our responses addressed your concerns and would greatly appreciate your kind consideration in increasing your score. --- Rebuttal Comment 1.1: Comment: Thanks for the response. Still, I'm not fully convinced that this paper can provide useful insights into deep Transformers, and I also don't see strong evidence that the proof is flexible enough to be applied to other settings. I would like to keep my score since my two main questions remain. --- Reply to Comment 1.1.1: Comment: We thank the reviewer very much for the response and further comments. We respect your decision, and are grateful that you are keeping a positive opinion of our paper. We would also like to take this opportunity to further elaborate on a few points raised in your feedback. 
**Q1:** Useful insights into deep transformers **A1:** We first re-iterate that our analysis uncovers the distinct roles played by different layers of transformers in the two phases of training. This behavior is also observed in our new experiments on **deep** transformers on real-world data (e.g., Shakespeare text generation in Figure 4 https://anonymous.4open.science/r/icml2025-4BCC/Figures%20for%20ICML.pdf). Regarding the technical analysis of deep transformers, we would like to emphasize that theoretical understanding of their training dynamics remains a significant challenge. Current analytical tools are not yet well-equipped to handle the complexity of deep transformer architectures. In this context, our work makes two key contributions to the existing toolbox. First, we extend existing tools to analyze the joint training dynamics of attention and feed-forward layers, capturing their interplay. This goes beyond most prior studies, which focus on only one attention layer. Second, we analyze the entire training process for **multi-step** Chain-of-Thought (CoT), capturing how CoT reasoning evolves throughout training and converges, whereas existing theoretical work captures only the first few steps of gradient descent, without addressing the full trajectory of training. We also note that, beyond the depth of transformers, there are other important directions for advancing the theoretical understanding of transformers. In our work, a key contribution lies in demonstrating how training on the simple language task of even pairs can benefit the more complex parity check task via the attention mechanism. Specifically, as shown in Approach 1 (Sec 5.1), transformers trained solely on even pairs (without CoT training) can solve parity check by simply applying CoT at inference time. This stands in contrast to existing works, where CoT inference typically depends on CoT-based training. 
Additionally, we show that the output from even pairs can serve as an effective regularizer during the training of parity check, providing a strong momentum to initialize the training. **Q2:** Whether the proof is flexible enough to be applied to other settings **A2:** Our framework supports two key generalizations, as we elaborate below. **Generalization to other regular languages:** Using the FSA framework described in A4 of our first rebuttal, the transition function $\delta$ of parity and even pairs is equivalent to an XOR operator, because it examines whether two symbols are equal. However, our techniques can handle more general operations, such as when $\delta$ is AND, OR, or even a combination of these operators through CoT. For example, let us consider the case when $\delta$ is an OR function. Mathematically, if we equate 'a' with 1 and 'b' with 0, the OR function $\delta$ satisfies $\delta(b,b) = b$, $\delta(a,b) = \delta(b,a)=\delta(a,a)=a$. Then, using the rationale described in lines 256-274 (left column), we expect that training to learn such a function will satisfy \\[ \langle u_t, E_1^w \rangle > 0, \langle u_t, E_\ell^w\rangle <0,\quad \langle E_1^w - E_\ell^{w'}, W_t E_L^a \rangle > 0, \langle E_1^a - E_\ell^w, W_t E_L^b \rangle > 0, \langle E_2^w - E_1^b, W_t E_L^b \rangle >0 \\] In other words, if the last token and the first token are both 'b', then the transformer will assign more attention weight to the second token; otherwise, the transformer will assign more attention weight to the first token. Our second-phase analysis can also be applied to such a setting, since the max-margin problem solution will be determined by the initialization of the linear and attention layers. **Generalization to other settings such as Random Initialization and Separate Key/Query/Value Matrices:** We outline more details for these extensions. Our goal is to prove similar results where the token score grows and the attention score satisfies the desired property at a certain time step $t$. 
For example, we can show that there exists $t$ such that \\[ \langle W_o^t W_v^t, E_1^w \rangle > 0.\\] This is because sequences of length $L = 1$ (which always have positive labels) dominate the early training, and create an initial bias for the first token. Therefore, even if $W_o, W_v$ are initialized randomly, the gradient will always be negative, pushing $\langle W_o^t W_v^t, E_1^w \rangle$ to be a positive number. Then, using our argument starting in line 808, we can show that the attention layer also starts to assign more attention to the first token in positive samples. Finally, we sincerely thank the reviewer once again for the insightful discussions throughout the review process. We greatly appreciate the opportunity to engage with you on these important technical topics.
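The FSA view from A2 above can be sketched minimally as follows. The encodings of the transition functions are ours, matching the $\delta$ values stated in the discussion ('b' playing the role of 0):

```python
def run_fsa(s, delta, init='b'):
    # Recognize a regular language by folding the transition function
    # delta(state, symbol) over the input, as in the FSA view above.
    state = init
    for w in s:
        state = delta(state, w)
    return state

# XOR transition (underlying parity / even pairs): 'a' iff the inputs differ.
xor_delta = lambda st, w: 'a' if st != w else 'b'
# OR transition from the example above: delta(b,b)=b, otherwise a.
or_delta = lambda st, w: 'b' if (st, w) == ('b', 'b') else 'a'
```

Here `run_fsa(s, xor_delta)` returns 'b' exactly when `s` contains an even number of 'a' symbols, while `run_fsa(s, or_delta)` returns 'a' exactly when `s` contains at least one 'a'; generalizing the analysis amounts to approximating such a $\delta$ with an attention map.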
Think Twice, Act Once: A Co-Evolution Framework of LLM and RL for Large-Scale Decision Making
Accept (poster)
Summary: This paper introduces a novel framework, termed Agents Co-Evolution (ACE), which combines Large Language Models (LLMs) and Reinforcement Learning (RL) for large-scale decision-making in the context of power grid operations. In this framework, the Soft Actor-Critic (SAC) algorithm (Haarnoja et al., 2018) is first employed to learn an initial policy that maximizes both the expected return and policy entropy. The states from low-reward mini-batch samples in the replay buffer are then converted into natural language descriptions, which are provided to a pretrained LLM along with the relevant context. The refined actions are derived from the output of the LLM, which acts as the policy actor. Subsequently, the LLM is used as a value critic to re-evaluate the long-term impact of key decisions at the trajectory level. Additionally, the framework integrates reward-based prioritization and weighted policy update strategies to generate a high-quality replay buffer that combines experiences from both direct environmental interactions and LLM refinement. The performance of ACE is demonstrated via experiments on three power grid competitions. Claims And Evidence: yes Methods And Evaluation Criteria: yes Theoretical Claims: n/a Experimental Designs Or Analyses: Yes, I've checked the soundness and validity of experimental designs and related analyses. Supplementary Material: Yes, I reviewed the supplementary material, which includes some python codes. Relation To Broader Scientific Literature: This paper introduces a novel algorithmic framework that enables collaboration between Large Language Models (LLMs) and Reinforcement Learning (RL) for large-scale decision-making. While several collaboration frameworks have been proposed in the literature, to the best of my knowledge, this is the first paper to employ an LLM as both a policy actor and a value critic simultaneously. Essential References Not Discussed: I think the reference part of the paper is ok. 
Other Strengths And Weaknesses: The paper conducts an ablation study to assess the effectiveness of each component in the ACE framework. Additionally, Section 5 offers a detailed discussion, providing a deeper analysis. Both of these efforts contribute to a clearer understanding of the work for the reader. A weakness of this paper is the lack of a theoretical proof, or at least an analysis, of the convergence of the proposed method. Other Comments Or Suggestions: 1. In the experimental section, the method in this paper is compared with LLM4Teach, but it seems that the way LLM4Teach is used here differs from the original LLM4Teach paper. In the original paper, policy-level alignment is only the first step in training the RL model. The second step is to allow the agent to update and optimize the policy through further interactions with the environment. It is suggested to clarify this. 2. It is recommended to include additional evaluation metrics, such as the communication cost introduced by interacting with the LLM and the model sizes corresponding to different comparison methods. This would help readers gain a more comprehensive understanding of the practical significance of the proposed method. Questions For Authors: For ACE in the experiment, Qwen2-7B-Instruct and GPT-4o-0806 are used. Which LLM is used for the other comparison methods, say LLM4Teach? Code Of Conduct: Affirmed. Overall Recommendation: 3
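As a reading aid, the training loop summarised in the review above (SAC collects transitions; low-reward samples are refined by an LLM policy actor, validated against environment feedback, and reward-shaped by an LLM value critic into a separate buffer) could be sketched roughly as follows. All function names, thresholds, and the toy environment are hypothetical placeholders, not the authors' implementation:

```python
import random

# Illustrative stand-ins for the environment and the LLM calls described in the
# summary; these are hypothetical placeholders, not the paper's Grid2Op code.
def env_step(state, action):
    return state + 1, random.uniform(-1, 1)        # (next_state, reward)

def llm_refine_action(state, action):
    return (action + 1) % 4                        # LLM-as-policy-actor refinement

def llm_reshape_reward(reward):
    return reward + random.choice([-0.2, 0.2])     # LLM-as-value-critic reward shaping

def ace_sketch(num_steps=50, bad_case_threshold=0.0):
    rl_buffer, llm_buffer = [], []
    state = 0
    for _ in range(num_steps):
        action = random.randrange(4)               # stand-in for the SAC actor
        next_state, reward = env_step(state, action)
        rl_buffer.append((state, action, reward, next_state))
        if reward < bad_case_threshold:            # low-reward sample -> LLM refinement
            refined = llm_refine_action(state, action)
            _, new_reward = env_step(state, refined)  # validate via environment feedback
            llm_buffer.append((state, refined, llm_reshape_reward(new_reward), next_state))
        state = next_state
    # Policy updates would then sample from both buffers (reward-prioritized,
    # weighted); omitted in this sketch.
    return rl_buffer, llm_buffer
```

The sketch only illustrates the data flow between the RL replay buffer and the LLM-refined buffer; the actual refinement uses multi-round natural-language reasoning rather than the toy functions above.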
Rebuttal 1: Rebuttal: > LLM4Teach Thank you very much for your insightful question. You raise a valid point regarding the original version of LLM4Teach requiring environmental interaction after policy-level alignment. However, due to the significant time cost associated with LLM-environment interactions, we adopted a mode similar to ACE, where alignment is only enabled during the training phase, with Qwen2-7B-Instruct as the aligned model. We will provide a more detailed explanation of this modification in the revised version to ensure complete clarity. > Computational and Memory Evaluation Metrics Thank you for raising the important question regarding computational costs. We conducted experiments in the NeurIPS 2020 competition environment. For expert-guided RL, we trained for 100K timesteps with a total duration of 6h 4m 14s. Below are the computational costs and memory requirements of ACE (Qwen2-7B + SFT):

| Module | Count | Samples | Time |
| :-: | :-: | :-: | :-: |
| ACE-RL | - | ~40K | 3h 4m 41s |
| ACE-LLM Inference | 508 | 264 | 1h 48m 0s |
| ACE-LLM Sampling | 4981 | 32 | 59m 12s |
| ACE-LLM Training | 2 | 200 | 26m 10s |
| ACE-Total | - | ~40K | 6h 18m 3s |

- **Computational Overhead**: ACE's additional computation primarily comes from three modules: selective LLM inference, sampling, and training. With f_LLM and g_LLM query intervals set to 256 and 32, respectively, only 264 samples and 508 inferences were required (due to multi-round reasoning) for ACE training. The SFT module is performed just twice during co-training. Unlike traditional LLM-RL interactions, ACE can continue learning through sampling from the constructed LLM buffer even during update steps when f_LLM and g_LLM are not activated, significantly improving sample efficiency.
- **Memory Overhead**: The primary memory overhead comes from the base model (approximately 14GB for ACE (Qwen2-7B)). The memory required for the generated LLM buffer and SFT datasets is substantially smaller than the RL buffer. Moreover, model memory requirements become negligible when using the GPT-4 API instead of hosting the model locally.
Summary: This paper proposes the ACE framework, which co-evolves LLMs and RL agents for industrial-scale decision-making. ACE decouples high-level reasoning from fine-grained control by employing a "Think Twice, Act Once" strategy. The framework is evaluated on multiple power grid operation challenges from the L2RPN competitions, where it consistently outperforms both RL-only and LLM-only approaches. Claims And Evidence: The claims in the work are supported by experiments on three different L2RPN challenge environments, where ACE shows improvements in key metrics. Methods And Evaluation Criteria: The evaluation is carried out on power grid operation benchmarks with several metrics. Comparisons against multiple baselines (expert-guided RL, LLM-only, and LLM-guided RL) are made, ensuring that the evaluation criteria are well matched to the problem. Theoretical Claims: There are no theoretical claims that need to be checked in this work. Experimental Designs Or Analyses: I checked the soundness of the experimental design, including benchmarks, baselines, and ablation studies. The experimental design is robust and comprehensive. Supplementary Material: I briefly read Appendix B and C. Relation To Broader Scientific Literature: ACE builds upon and extends prior work in RL and LLM-guided decision-making. It relates to earlier approaches like LLM4Teach and RL-GPT. This work situates its contributions within the context of industrial control challenges and large-scale decision-making, addressing critical limitations of both RL (sample inefficiency and large action spaces) and LLMs (inability to maintain long-horizon consistency). Essential References Not Discussed: The key related works are well discussed in this work. Other Strengths And Weaknesses: Strengths: - Novel co-evolution framework that leverages the strengths of LLMs for offline reasoning and RL for online control. - Comprehensive experimental evaluations and detailed ablation studies. 
Weaknesses: - Experiments are confined to power grid control scenarios, leaving open questions about generalizability to other industrial domains. - Potential computational overhead due to the dual-system (LLM and RL) integration is not fully discussed. Other Comments Or Suggestions: 1. Figure 2 does not clearly depict the specific process; consider providing additional examples to illustrate how the dual-role refinement leads to improved credit assignment and decision making. 2. Discuss potential limitations in extending the framework to other domains and the associated computational trade-offs. Questions For Authors: 1. Have you considered applying ACE to other industrial control tasks beyond power grid management? How do you anticipate the framework will perform in domains with different dynamics and action space structures? 2. What are the specific computational and memory overheads introduced by the dual-system architecture of ACE compared to standard RL methods? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > Cross-Domain Applicability of ACE We appreciate the reviewer's question about ACE's generalizability. While our study focuses on power grid control, the core modules of ACE are domain-agnostic and not constrained by specific environment characteristics. We believe ACE's potential applications in other domains depend on two core mechanisms: (1) The **trajectory refinement module** effectively compresses and navigates large action spaces through LLM-based multi-round semantic reasoning, validated through environment interaction. The combination of **semantic abstraction with RL optimization** could benefit other discrete decision-making problems with large action spaces. (2) The **reward reshaping module** uses LLMs' causal reasoning to adjust reward signals based on trajectory-level analysis. This is particularly relevant for environments with **sparse rewards** or where reward functions are **difficult to specify explicitly**. The LLM's semantic understanding enables dynamic reward adjustment, offering an alternative to fixed rule-based reward designs. > Computational and Memory Overheads Thank you for raising the important question regarding computational costs. We conducted experiments in the NeurIPS 2020 competition environment. For expert-guided RL, we trained for 100K timesteps with a total duration of 6h 4m 14s. Below are the computational costs and memory requirements of ACE (Qwen2-7B + SFT):

| Module | Count | Samples | Time |
| :-: | :-: | :-: | :-: |
| ACE-RL | - | ~40K | 3h 4m 41s |
| ACE-LLM Inference | 508 | 264 | 1h 48m 0s |
| ACE-LLM Sampling | 4981 | 32 | 59m 12s |
| ACE-LLM Training | 2 | 200 | 26m 10s |
| ACE-Total | - | ~40K | 6h 18m 3s |

- **Computational Overhead**: ACE's additional computation primarily comes from three modules: selective LLM inference, sampling, and training. With f_LLM and g_LLM query intervals set to 256 and 32, respectively, only 264 samples and 508 inferences were required (due to multi-round reasoning) for ACE training. The SFT module is performed just twice during co-training. Unlike traditional LLM-RL interactions, ACE can continue learning through sampling from the constructed LLM buffer even during update steps when f_LLM and g_LLM are not activated, significantly improving sample efficiency.
- **Memory Overhead**: The primary memory overhead comes from the base model (approximately 14GB for ACE (Qwen2-7B)). The memory required for the generated LLM buffer and SFT datasets is substantially smaller than the RL buffer. Moreover, model memory requirements become negligible when using the GPT-4 API instead of hosting the model locally.
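The mixed-buffer update described in the rebuttal above (continuing to learn from the constructed LLM buffer between f_LLM/g_LLM activations) might look roughly like this. The fixed mixing ratio `llm_fraction` is an illustrative assumption, not a reported hyperparameter:

```python
import random

def sample_mixed_batch(rl_buffer, llm_buffer, batch_size=32, llm_fraction=0.25):
    """Draw a training batch mixing environment transitions with LLM-refined
    ones. llm_fraction is an illustrative assumption, not a paper value."""
    n_llm = min(int(batch_size * llm_fraction), len(llm_buffer))
    batch = random.sample(llm_buffer, n_llm)
    batch += random.sample(rl_buffer, min(batch_size - n_llm, len(rl_buffer)))
    random.shuffle(batch)
    return batch
```

This is only a sketch of the idea that LLM-refined experience keeps contributing to policy updates even when the LLM itself is not queried; the paper's actual scheme uses reward-based prioritization and weighted policy updates rather than a fixed fraction.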
Summary: The paper proposes Agents Co-Evolution (ACE), a synergistic framework that integrates Large Language Models (LLMs) and Reinforcement Learning (RL) agents to address challenges in large-scale decision-making problems. While LLMs struggle with long-sequence, real-time decision-making and RL faces inefficiency in vast action spaces, ACE combines the strengths of both. The framework employs a dual-role trajectory refinement mechanism, where LLMs act as both Policy Actors and Value Critics during RL training. The Actor refines suboptimal actions through multi-step reasoning, while the Critic handles temporal credit assignment and trajectory-level reward shaping. Meanwhile, RL agents improve LLMs' task-specific decision-making through prioritized experience replay. ACE is shown to outperform existing RL and LLM-based methods in experiments involving complex power grid operation tasks with large action spaces (over 60,000 discrete actions). Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes. The proposed method combines the LLM agent and RL agent through their policy and value models to benefit from both. The theoretical claims mainly concern how the two are combined, and they appear sound. Experimental Designs Or Analyses: Yes. The experiments are on L2RPN tasks, which are standard power grid challenges. This paper shows the learning effect of the proposed ACE method with different kinds of LLMs. Supplementary Material: Yes. It shows many design details of the proposed ACE method, including model structure, data types, and hyper-parameters. Relation To Broader Scientific Literature: This article proposes a method that utilizes reinforcement learning (RL) agents and large language model (LLM) agents in parallel to balance the training iteration speed of RL and the prediction accuracy of the LLM. 
There are many methods for jointly training the two, usually using a cascaded hierarchical reinforcement learning approach to divide the tasks at different granularities. The method proposed in this article introduces an innovation in the learning framework, presenting a new way to combine the two. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths 1. This paper proposes a new framework for training RL agents combined with LLM agents, which improves the iteration speed of training. 2. The proposed method can significantly enhance learning performance with a smaller sample size. 3. The proposed method is applicable to different language models and has a certain degree of scalability. Weaknesses 1. There is too much text in the images; the aesthetic could be optimized. 2. More types of LLMs, such as DeepSeek and Qwen, could be tried in the experiment. Other Comments Or Suggestions: None Questions For Authors: The experiment shows that Expert-guided RL needs more samples and achieves a worse survival rate. I wonder about the time cost comparison between the Expert-guided RL and ACE methods, since the LLM spends more time during sample generation. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: > Computational and Memory Overheads Thank you for raising the important question regarding computational costs. We conducted experiments in the NeurIPS 2020 competition environment. For expert-guided RL, we trained for 100K timesteps with a total duration of 6h 4m 14s. Below are the computational costs and memory requirements of ACE (Qwen2-7B + SFT):

| Module | Count | Samples | Time |
| :-: | :-: | :-: | :-: |
| ACE-RL | - | ~40K | 3h 4m 41s |
| ACE-LLM Inference | 508 | 264 | 1h 48m 0s |
| ACE-LLM Sampling | 4981 | 32 | 59m 12s |
| ACE-LLM Training | 2 | 200 | 26m 10s |
| ACE-Total | - | ~40K | 6h 18m 3s |

- **Computational Overhead**: ACE's additional computation primarily comes from three modules: selective LLM inference, sampling, and training. With f_LLM and g_LLM query intervals set to 256 and 32, respectively, only 264 samples and 508 inferences were required (due to multi-round reasoning) for ACE training. The SFT module is performed just twice during co-training. Unlike traditional LLM-RL interactions, ACE can continue learning through sampling from the constructed LLM buffer even during update steps when f_LLM and g_LLM are not activated, significantly improving sample efficiency.
- **Memory Overhead**: The primary memory overhead comes from the base model (approximately 14GB for ACE (Qwen2-7B)). The memory required for the generated LLM buffer and SFT datasets is substantially smaller than the RL buffer. Moreover, model memory requirements become negligible when using the GPT-4 API instead of hosting the model locally.

> Presentation We appreciate the reviewer's feedback regarding the visual presentation. We have addressed this concern by splitting Figure 2 into two separate figures to enhance clarity and readability: Figure 2a now presents the algorithmic framework, while Figure 2b demonstrates the prompt examples. > Experimental Expansion We appreciate the reviewer's suggestion regarding model diversity. We agree that testing additional models would help validate ACE's generalizability. We are exploring additional experiments with DeepSeek-R1-Distill-Qwen-7B as the base model, and we commit to including these results in the revised manuscript.
Summary: The paper introduces **Agents Co-Evolution (ACE), a framework that leverages Large Language Models (LLMs) to enhance the sample efficiency of large-scale Reinforcement Learning (RL) decision-making systems**. The core principle of ACE involves using the reasoning capabilities of LLMs to guide the RL training phase mainly by refining the replay buffer, a memory of state transitions, with transitions deemed more valuable for updating the policy. The ACE framework was evaluated on three power grid operation challenges (L2RPN competitions) with large action spaces. **ACE outperformed existing RL and LLM-based methods** on the chosen benchmarks, demonstrating significant improvements in sample efficiency compared to RL-based methods and faster decision-making speed compared to LLM-based methods. Furthermore, the study presents ablation studies to validate the effectiveness of the different LLM-based refinement strategies used in the pipeline. Claims And Evidence: The study claims that training an RL agent using trajectories from a replay buffer modified by an LLM enhances both performance and sample efficiency. However, the baseline models employed to support this claim appear insufficient—especially those based solely on an RL approach. Moreover, RL decision-making systems are known to be highly sensitive to hyperparameter choices [1]. The proposed method relies on numerous hyperparameters, as shown in Table 4 (e.g., the fine-tuning interval for the LLM and the number of transitions used during fine-tuning). Yet, there is little to no analysis of how these hyperparameters affect the system’s overall performance. 
[1] Adkins et al., A Method for Evaluating Hyperparameter Sensitivity in Reinforcement Learning, 2024 Methods And Evaluation Criteria: The proposed method makes sense, and the benchmarks derived from a series of competitions aimed at training an intelligent system to operate a power network align with the goal of improving RL systems for large-scale decision-making environments. Theoretical Claims: The study does not include traditional mathematical proofs for theoretical claims. Instead, the paper offers empirical evidence and ablation studies to support its claims. Experimental Designs Or Analyses: The main concerns regarding the soundness of the analyses are as follows: - The limited range of RL-based approaches evaluated. - The low number of seeded runs used for benchmarking each method in the study. - The unexpectedly high uncertainty observed in the performance plots (Figure 3 and Figure 4) across different episodes. Supplementary Material: Yes, particularly the section of the codebase that calls the LLM for trajectory refinements. Overall, the code provided is well structured and easy to understand. Relation To Broader Scientific Literature: The study addresses challenges encountered in decision-making systems with large action spaces, a long-standing issue in reinforcement learning (RL). One of the main difficulties is the sample inefficiency that arises when tackling such problems [1]. Industrial decision-making systems are often characterized by the need to make multiple decisions involving different components in a relatively short period of time, which exacerbates the problem in these scenarios [2]. As a result, these problems are known to involve high-dimensional action spaces, leading to high sample inefficiency when addressed with RL. Several techniques have been developed to tackle the problem of high-dimensional action spaces, ranging from value-based approaches like AE-DQN [3] to actor-critic frameworks such as Wolpertinger [4]. 
However, this paper focuses on approaches that leverage guidance from external sources to improve exploration, such as those used in imitation learning [5], [6], [7] and reinforcement learning with expert feedback [8], [9]. Furthermore, the study focuses on works that use Large Language Models (LLMs) to influence the RL system's training regime without involving them in decision-making. As a result, it examines sample efficiency from a novel perspective, exploring approaches that use LLMs to make the RL training pipeline more sample efficient, such as LLM4Teach [10]. \ \ \ [1] Sutton et al., Reinforcement learning: An introduction (2018)\ [2] Dulac-Arnold et al., Challenges of Real-World Reinforcement Learning (2021)\ [3] Zahavy et al., Learn What Not to Learn: Action Elimination with Deep Reinforcement Learning (2019)\ [4] Dulac-Arnold et al., Deep Reinforcement Learning in Large Discrete Action Spaces (2016)\ [5] Argall et al., A survey of robot learning from demonstration (2009)\ [6] Nair et al., Overcoming exploration in reinforcement learning with demonstrations (2017)\ [7] Torabi et al., Behavioral Cloning from Observation (2018)\ [8] Ouyang et al., Training language models to follow instructions with human feedback (2022)\ [9] Christiano et al., Deep reinforcement learning from human preferences (2017)\ [10] Zhou et al., Large Language Model as a Policy Teacher for Training Reinforcement Learning Agents (2023) Essential References Not Discussed: No, I believe that most key contributions related to the study have been cited, including: - Fully RL-based approaches to improve sample efficiency - Behavior cloning - Inverse reinforcement learning - Reinforcement learning with Human Feedback [1] - LM-RL integrations for decision-making systems However, one key contribution relevant to the Related Works discussion is missing: the research on replacing human feedback in RLHF with AI feedback [2]. 
Additionally, the omission of LiFT [3], which leverages foundation models to guide RL training through unsupervised learning, further limits the discussion. \ \ \ [1] Christiano et al., Deep reinforcement learning from human preferences (2017)\ [2] Bai et al., Constitutional AI: Harmlessness from AI feedback, (2022)\ [3] Nam et al., LiFT: Unsupervised Reinforcement Learning with Foundation Models as Teachers, (2023) Other Strengths And Weaknesses: ### Strengths - The proposed contribution is well understood and devoid of any ambiguity. - The experimental setup is well-organized and easy to follow. ### Weaknesses - **Major:** - The baseline methods used to represent RL-based approaches do not seem competitive enough given the breadth of work done to leverage expert knowledge in RL. - More time should be devoted to explaining how both the RL agent and LLM agent co-evolve/train and how the training frequency impacts the system's overall performance. - Some hyperparameters are introduced by the system (see Table 4), but the study does not present any analysis of how these parameters affect the overall training regime. - **Minor:** - Algorithm 1 outlines the overall framework and is a critical component of the study; therefore, it should be moved from the appendix to Section 3, which discusses the proposed approach. - The term “co-evolution” is ambiguous, as it may lead readers to assume that the study employs evolutionary strategies for large-scale decision making. - Additionally, while Table 1 provides a comparative analysis of ACE against various baselines—demonstrating its sample efficiency—it would be valuable to examine how its performance changes with different numbers of refined samples injected into the replay buffer. Other Comments Or Suggestions: - Figure 2, which illustrates the architecture of the proposed approach, appears to be overloaded with information. 
It would be beneficial to split it into two separate figures: one that outlines the overall pipeline, and another that focuses on the different prompting strategies used for the LLMs. - I’d suggest using terms like “dual training framework” or “multi-agent framework” instead of “co-evolution” to highlight the fact that the parameters of different agents are being updated through gradient-based strategies. Questions For Authors: - I would like to understand why the experimental analysis does not include additional expert-guided RL methods, despite their thorough discussion in the related work section. - Moreover, the ablation studies in Section 4.5 appear insufficient given the numerous hyperparameters introduced, as detailed in Table 4 of the appendix. - Lastly, while Section 3.2 outlines the various prompting strategies used to update the RL pipeline’s replay buffer, it remains unclear whether these strategies were designed based on theoretical principles, empirical evidence, or a combination of both. Can you provide more information about that? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > Baselines We sincerely appreciate your insightful suggestion. In response, we explain our baseline selection criteria and add new baselines for comparison: - **Original baseline**: We choose SMAAC because it is the winning solution in WCCI 2020 and relies less on predefined rules than other methods, offering better scalability across different environments. - **Additional baselines**: Given the extensive training scenarios in L2RPN competitions (about 32 years of grid data at 5-minute intervals), creating comprehensive human feedback is challenging. Therefore, we leverage Grid2Op's official rule-based solutions as expert knowledge and add two baselines: - ExpertAgent [1] as the official rule-based agent. - CurriculumAgent [2] as the imitation learning-based RL baseline. - **Result**: For the NeurIPS 2020 task, ExpertAgent and CurriculumAgent achieved survival rates of 46.2% and 80.7%, respectively, **both lower than ACE's performance**. Notably, although CurriculumAgent performs better than SMAAC, its expert experience comes from extensive environment simulations, resulting in significantly higher time costs. Moreover, since the core components of the ACE framework (the f_LLM and g_LLM) operate independently of RL training, interacting solely with the RL buffer, we will further explore the integration of ACE during CurriculumAgent's RL fine-tuning phase. [1] Marot et al., Expert system for topological remedial action discovery in smart grids. [2] Lehna et al., Managing through topology actions: A comparative study between advanced rule-based and RL agents. > Ablations To further investigate the details of ACE co-training, we conducted an in-depth analysis of three key hyperparameters as follows: - **LLM Activation Frequency**: We examine how different query intervals of f_LLM affect convergence and efficiency. We test intervals of 128, 256, and 512 steps while keeping the g_LLM query interval fixed at 32 steps. 
- **LLM Activation Conditions**: We explored different bad case thresholds {-0.3, 0, 0.3} to control the volume of samples refined by the LLM. - **LLM Training Frequency**: We investigated three LLM training modes for ACE (Qwen2-7B): No SFT, One-time SFT (at RL's 2000th epoch), and K-time SFT (once per 100 refined LLM samples). Based on the NeurIPS 2020 environment, we find: - **Higher activation frequency leads to faster initial RL learning**. A 256-step interval showed significant improvements over 512 steps. Interestingly, as training scenarios increased from 288 to 576 cases, the performance gap between different frequencies narrowed, suggesting we can reduce activation frequency for environments with sufficiently diverse scenarios while maintaining effectiveness. - **Extreme thresholds for filtering bad cases are suboptimal.** A threshold of 0.3 introduced 510 samples, 0 included 275 samples, and -0.3 allowed only 83 samples for refinement. In our experiments, constrained by the capability of the base model, introducing too many samples results in minimal refinement benefits while increasing LLM inference time by 46%. Conversely, too few samples led to slower convergence and approximately 6% performance degradation compared to the standard settings. Additionally, incorporating a planning perspective by considering cumulative multi-step reward thresholds for consecutive substation operations outperformed single-step thresholds. - **SFT directly impacts ACE performance.** Using f_LLM without SFT showed limited improvement after about 2000 epochs, with the survival rate stabilizing at 77.5%. Applying a single SFT at the 2000th epoch led to immediate improvement, reaching 78.5% in the first post-SFT evaluation step. Moreover, multiple SFT iterations demonstrated further enhancement over single SFT, ultimately achieving a survival rate of 84.8%. Thank you for providing these valuable references. We commit to including these ablation studies in the revised manuscript. 
> Prompting strategies Our approach combines theoretical principles with empirical evidence as follows: - Theoretical Foundation: - f_LLM leverages multi-step reasoning and counterexample analysis to refine suboptimal actions. The core mechanism relies on the LLM's semantic understanding of state-action pairs and environment feedback validation (Eq. 4). - Inspired by TD(λ)'s credit assignment principle, g_LLM replaces fixed temporal decay with trajectory-level causal reasoning. LLMs analyze delayed impacts and adjust rewards non-parametrically (Eq. 5), addressing non-uniform dependencies inherent in industrial control. - Empirical Validation: - As shown in Table 2, removing f_LLM reduces the WCCI 2020 task reward from 69.8 to 48.3, and the survival rate from 92.9% to 71.4%; removing g_LLM decreases the reward to 61.5. - Without multi-round reasoning validation and bad case reasoning, the reward drops to 65.7 and 60.2, respectively. Thank you again for your time and expertise. We hope our responses address your concerns. --- Rebuttal Comment 1.1: Comment: > Baselines Thank you for adding these baselines. Are the reported results from the competition, or are they based on the authors' own experiment runs? If they come from the competition, could you provide more details on the evaluation methodology used in the competition and whether it aligns with yours to confirm that the results are indeed comparable? > Ablations Thank you for these insightful analyses. Can you also report the results for some of the "LLM Collaboration Parameters" in Table 4, particularly the query interval for the LLM critic $g_{\text{LLM}}$ and the impact of the adjustment scale parameter $K$ on performance? I believe these are key parameters that directly influence the transitions used to train the RL agents. > Prompting Strategies - Theoretical Foundation - Unfortunately, simply providing equations to formalize the LLM prompting step does not constitute a solid theoretical foundation. 
This is especially true for LLMs, as they have been shown to be highly sensitive to the distribution of their input prompts [1]. - Empirical Validation - I am still concerned about some of the design choices made in the pipeline, given the large design space. For example, I would have liked to see more analyses supporting the choices made in the transition and trajectory selection criteria, such as whether alternative approaches were tested and what results were obtained. Some questions remain unanswered, such as why select trajectories with returns above a certain threshold rather than those below or farther from the median? At the very least, some of these design choices need to be well motivated against the large number of alternative options that exist. [1] Sclar et al., Quantifying Language Models' Sensitivity to Spurious Features in Prompt Design, or: How I Learned to Start Worrying About Prompt Formatting (2024) The experimental results still leave several unanswered questions that may arise while reviewing the work. For this reason, I have decided to maintain my current score. --- Reply to Comment 1.1.1: Comment: Thank you for raising the important question regarding $g_\text{LLM}$. We are very pleased to receive your further replies and questions. > Baselines For baseline comparisons, the new experimental results are derived from our experiment runs: - ExpertAgent: We directly test it using the official code since it requires no training. - CurriculumAgent: We re-implement and run it in our environment based on their official code. For the metrics, the survival rate metric we used is the standard evaluation metric from the official benchmark. We also employed the episode rewards metric from the expert-guided RL baseline's setting to evaluate each agent's performance. > Ablations We appreciate your careful review and questions. 
We are pleased to conduct in-depth research on $g_\text{LLM}$ from three aspects: - **$g_\text{LLM}$ Activation Frequency**: We test query intervals of 32, 128, and 512 steps to examine how different query intervals of $g_\text{LLM}$ affect convergence and efficiency. - **Key Trajectory Selection Criteria**: Given the extensive trajectory lengths (800-2000 steps), even with the SMAAC baseline's high-level planning to reduce RL decision frequency, using complete trajectories for LLM reward shaping is impractical. The key trajectory selection mechanism primarily serves to reduce LLM input tokens. We address this concern by using three alternative criteria: - **Reward-based**: We filter trajectories by setting boundaries on reward **absolute values**, capturing both high- and low-reward cases $|r_t| > K$. Since medium-reward trajectories constitute the majority, using them directly as key cases would significantly increase inference time. - **State-based ($\rho$)**: We identify trajectories where the maximum line flow change exceeds a threshold: $|\rho_{t} - \rho_{t-1}| > K'$, indicating either notably effective/ineffective control measures or significant environmental changes. - **Action-based (System Topology)**: Given that stable operation requires no action, we designate trajectories with topology changes $a_t \neq \\{\\}$ as key trajectories. - **Reward Shaping Candidate Sets**: With the current reward function ranging from -1 to 1, the reward shaping should be carefully set to prevent policy oscillation, as it directly influences Q-value estimation. Our default shaping candidate set includes ±0.2 and ±0.4. We conduct further ablation studies with K = {±0.1, ±0.2}, K = {±0.3, ±0.6}, K = {±0.4, ±0.8}, and K = {±0.5, ±1.0}, where the query interval of $g_{\text{LLM}}$ is 32. 
For computational efficiency, we fixed the $f_{\text{LLM}}$ query interval at 256 and conducted experiments with **3,000 epochs** in the WCCI 2020 training environment with 1 seed, testing on 10 scenarios with 3 seeds. We present the average evaluation results as follows:

| Ablation Type | Configuration | Description | Survival Rate | Episode Rewards |
| - | - | - | - | - |
| **Activation Frequency** | Interval=32 | High frequency queries | **93.98** | 68.09 |
| | Interval=128 | Medium frequency queries | 87.41 | 64.29 |
| | Interval=512 | Low frequency queries | 72.66 | 51.49 |
| **Key Trajectory Selection** | Reward-based | High/Low reward | **93.98** | 68.09 |
| | State-based | Line flow changes | 92.64 | 68.04 |
| | Action-based | Topology variations | 72.66 | 51.48 |
| **Reward Shaping Candidate Sets** | K={±0.1, ±0.2} | Conservative shaping | 83.85 | 60.61 |
| | K={±0.2, ±0.4} | Moderate shaping | **93.98** | 68.09 |
| | K={±0.3, ±0.6} | Moderate shaping | 87.52 | 64.26 |
| | K={±0.4, ±0.8} | Moderate shaping | **93.98** | **68.18** |
| | K={±0.5, ±1.0} | Aggressive shaping | 87.53 | 64.45 |

The experimental results reveal three key findings:

- Higher query frequency achieves the best performance. This is because $g_\text{LLM}$ only modifies a small number of samples, so too low an interaction frequency makes $g_\text{LLM}$'s influence almost negligible.
- Interestingly, both reward-based and state-based selection criteria achieve the best performance, while action-based selection shows significantly limited performance. This suggests that LLMs are more effective at extracting meaningful patterns from explicit information like performance indicators or state changes than from abstract topological changes.
- Moderate thresholds (K={±0.4, ±0.8} and K={±0.2, ±0.4}) yield the best results, outperforming both too conservative and too aggressive settings.
This indicates that K should be carefully tuned to provide sufficient guidance for policy improvement while avoiding Q-value estimation instability. We will organize and analyze the detailed ablation studies of $f_\text{LLM}$ and $g_\text{LLM}$ in a dedicated section **in our updated manuscript**. Thank you again for your time and expertise. We hope our responses address your concerns.
Importance Corrected Neural JKO Sampling
Accept (poster)
Summary: This paper presents a method to sample from a probability distribution known through its density, up to an unknown normalizing constant. The method follows the trend of neural parameterizations to solve the proximal steps of the JKO scheme to compute the Wasserstein Gradient Flow of the reverse Kullback-Leibler divergence. The neural parameterization is based on the Benamou-Brenier formulation of optimal transport, which allows every step of the JKO scheme to be written using a neural ODE and thus to parameterize the density at a given step via a continuous normalizing flow that is tuned to minimize the reverse KL divergence. A few theoretical properties of the resulting scheme are presented (mostly in the case of a log concave energy). Besides, to counter the fact that Neural JKO schemes essentially explore the energy landscape locally, the authors propose to introduce rejection steps to enhance sampling. They modify the classical importance sampling scheme using a proposal coming from the Neural JKO scheme (to mitigate the curse of dimensionality), which makes it possible to propagate the resulting probability density in order to resume subsequent JKO steps. The method is tested on a benchmark of several distributions with closed form densities with various properties (multimodal, with narrow high energy regions, various dimensions) and reports improvements over a number of competing methods (classical sampling methods, or more recent JKO, NF or diffusion based). The measures of quality of the generated samples are the energy distance to GT samples or KL divergence values.

## Update after Rebuttal

I thank the authors for their responses and maintain my positive opinion on the paper. I maintain my score of 4.

Claims And Evidence: The paper's main claim is the capacity of the proposed method to escape local maxima of the target density and prevent model collapse thanks to the proposed rejection scheme.
This is indeed supported by the experiments, where the method shows better capacity to sample from multimodal distributions. The proposed rejection scheme combines well with the neural JKO scheme because it allows densities to be propagated, which is essential to continue sampling using the JKO scheme.

Methods And Evaluation Criteria: The method is tested on several sampling tasks where the target distribution is known and enjoys tractable sampling for comparisons. The quantitative evaluation criteria make sense, i.e. using MMD for two sample hypothesis testing and using estimates of log normalization constants. The method is tested against several types of methods: classical sampling methods (MALA, HMC) that are known to have trouble handling multimodality, and more recent generative model based methods (DDS based on diffusion models, their own neural JKO without resampling), and lastly the CRAFT method based on sequential MC methods.

Theoretical Claims: The novel theoretical results are in Corollary 3.2, Thm 3.3, Thm 4.2 and Corollary 4.3. The first two results state results on the convergence of JKO steps, reformulated using a dynamical OT formulation, towards the WGF curve of the reverse KL divergence, in the log-concave case where the KL divergence is geodesically convex (which is not the case targeted by the method). This is not a new result per se, but the statement that the functional G (detailed in app F) is geodesically convex for small $\tau$ even when F is not is a strong motivation to use the JKO scheme instead of more direct minimization strategies. The other theorems showcase the good properties of the proposed rejection scheme and how they combine well with the JKO scheme. I only checked the proof of Thm 4.2.

Experimental Designs Or Analyses: While the combination of the JKO scheme and the rejection sampling outperforms all other methods, it would have been interesting to include in the comparison a few methods based on rejection schemes.
Naive ones (rejection sampling or importance sampling) are bound to perform poorly in the cases that are tested due to high rejection rates or the curse of dimensionality, but surely there must be more recent methods using similar concepts? Namely, the idea of using proposal distributions that are less naive than the default choice and are guided by a generative model has been used in some works recently.

Supplementary Material: I reviewed parts of the supplementary material, namely App B1, C1, D, E and F. App E is essential to fully understand the experiments, and app F1 presents interesting considerations on neural JKO schemes.

Relation To Broader Scientific Literature: The paper seems mostly well positioned within the related literature on WGF, sampling algorithms, continuous NF. I still have two small concerns:
- A number of papers on neural versions of the JKO scheme are cited, but the actual approach taken by the authors is not clearly positioned relative to these works: what is the originality (if any) of the proposed neural parameterization (CNF version, Benamou-Brenier dynamical view on transport) compared to these studies? Are there already, to the authors' knowledge, papers that use JKO schemes specifically for sampling?
- As in my comment above, the combination of rejection schemes using generative models as proposal distributions and more classical methods has been explored recently; can the authors say more about the relation between their work and those methods? I think for instance about Gabrié et al. 2022 (and others cited in the introduction), which the authors cite but do not discuss much.

Essential References Not Discussed: See my comment above.

Other Strengths And Weaknesses: The theoretical analysis of WGF is interesting and gathers a number of results that are spread in the literature, which is nice, but only applies in the ideal log-concave case, which is precisely the one that the authors aim at extending with their method.
This is a minor criticism, as the analysis is much harder in the non log concave case and the authors give convincing arguments about the well-behavedness of the JKO scheme in the nonconvex case (app F). The rejection scheme seems efficient and, importantly, is not at odds with the need to propagate densities in the JKO scheme, which is a nice result.

Other Comments Or Suggestions: There are a number of typos in the paper that I did not list (e.g. Lebesgue, not Lebesque) but should not survive a careful proofreading of the paper.

Questions For Authors: See my questions above about 1) the existence of rejection schemes combined with generative models 2) the use of JKO for sampling and positioning the present instantiation of neural JKO within the literature.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the detailed and valuable feedback.

## Convexity Assumption

There appears to be a misunderstanding regarding the assumptions for the theoretical part: We **do not assume that the density is log concave**. Instead, Assumption 3.1 assumes that the functional is $\lambda$-convex along generalized geodesics. For the KL divergence, this corresponds to the assumption that **the target energy $-\log(q)$ is $\lambda$-convex** for some $\lambda\in\mathbb R$. We stress that this explicitly **includes the case of negative $\lambda$**. This assumption is **much weaker than log-concavity** and is automatically fulfilled if $-\log(q)$ is smooth (plus some asymptotics for $\\|x\\|\to\infty$). More intuitively, the condition can be rephrased as: $-\log(q)$ is $\lambda$-convex if and only if there exists some (possibly negative) $\lambda\in\mathbb R$ such that $-\log(q(x))-\frac{\lambda}{2}\\|x\\|^2$ is convex. Given that this misconception appeared in more than one review, we will include this discussion in the final version and highlight that $\lambda$-convexity is usually not an issue as long as the target distribution is smooth.

## Relation to other neural JKO Schemes

We stress that we consider the theoretical results (Cor 3.2, Thm 3.3) to be the main novelty of Section 3. As outlined in the beginning of Section 3, the implementation (Sec 3.2) is close to previous papers, specifically to (Vidal et al., 2023) and (Xu et al., 2024), who also use the dynamic formulation of the Wasserstein distance. But these papers consider the generative modeling setting instead of sampling. We are not aware of a reference which adapts the dynamic formulation for sampling; this adaptation entails some minor technical differences (e.g., another $\mathcal F$; we start with the latent distribution, while Vidal et al., Xu et al. start with the data distribution).
However, we stress that the main contributions of our paper are:
- the theoretical analysis (Cor 3.2, Thm 3.3) of neural JKO schemes,
- proposing importance-based rejection/resampling steps which can maintain the density of the generated samples (Section 4) and preserve the independence of the generated samples,
- combining both into an importance corrected neural JKO sampler, which achieves state of the art results.

In the introduction and in the beginning of Section 3 we already write that similar approximations of the JKO steps exist in the literature. We will add a reminder in the beginning of Section 3.2 (and specifically point to Vidal et al., 2023, Xu et al., 2024 for using the dynamic formulation).

## Literature on Rejection Schemes with Generative Models

Indeed there exist some approaches to combine rejection steps with generative models in the literature. Many of these approaches are based on sequential Monte Carlo techniques (e.g., CRAFT, which we used as a comparison, but also Arbel et al., Phillips et al.). These methods implement a reweighting step by first approximating importance weights and then sampling from the empirical distribution defined by these weights. However, for SMC-based methods the analytic evaluation of the arising density is usually not possible. Moreover, after SMC reweighting steps, there exist, with high probability, several samples at the exact same position. Finally, the generated samples are not exactly independent. The paper of (Gabrié et al.) proposes to iteratively train a normalizing flow for sampling by running a Langevin process, adding Metropolis steps with the current normalizing flow as proposal, and retraining the normalizing flow with the updated samples. In contrast to our paper, the rejection steps are not part of the model, but are rather used for training a normalizing flow.
In addition, they can't include these steps in the model, because this would require evaluating the density of these steps, which is not possible for the Metropolis algorithm. Since several papers from the literature [1] found that training normalizing flows to approximate probabilities with disconnected modes (like GMMs) or non-Gaussian tails (like funnel, mustache) is difficult, this might limit the expressiveness of the model. We will extend the discussion on these methods in the final version.

[1] https://arxiv.org/abs/1907.04481, https://arxiv.org/abs/2009.02994, https://arxiv.org/abs/2206.14476
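To make the duplication issue with SMC-style reweighting concrete, the following generic sketch (deliberately *not* our method, just standard multinomial resampling from the weighted empirical distribution, with toy Gaussian samples and weights) shows that a substantial fraction of the resampled particles coincide:

```python
import numpy as np

# Generic SMC-style reweighting sketch: resampling from the empirical
# weighted distribution typically leaves many duplicated particles,
# in contrast to the importance-based rejection steps of the paper.

rng = np.random.default_rng(0)

def multinomial_resample(particles, log_weights, rng):
    """Resample particles proportionally to their importance weights."""
    w = np.exp(log_weights - log_weights.max())  # numerically stabilized
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]

particles = rng.normal(size=(1000, 2))        # toy proposal samples
log_w = -0.5 * (particles ** 2).sum(axis=1)   # toy unnormalized log-weights
resampled = multinomial_resample(particles, log_w, rng)

# Far fewer than 1000 distinct positions survive the resampling step.
n_unique = len(np.unique(resampled, axis=0))
```

Sampling with replacement alone already duplicates roughly a third of the particles even under uniform weights; non-uniform weights only make this worse, which is exactly the "several samples at the exact same position" effect described above.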
Summary: This paper contributes a method called "Importance corrected neural JKO sampling", based on the well-established Jordan-Kinderlehrer-Otto (JKO) scheme. The method is constituted by a flow-based ordinary differential equation (ODE), which is parameterized by neural networks and learned using standard neural ODE optimization techniques. The authors approximate this solution by using a finite sequence of proximal maps; they show that, under some assumptions on the target distribution, the neural network that minimizes a given loss over the ODE will minimize the reverse KL w.r.t. the target, and produce samples via a flow mapping. In particular, the solution asymptotically approaches the Wasserstein gradient flow and approximates the flow discussed in the well-established theorem from Benamou and Brenier. However, the authors identify that this sampling procedure can be inefficient due to the behavior of the reverse KL divergence, used in variational inference techniques like neural ODEs; to ameliorate these inefficiencies, they adapt this neural JKO scheme by contributing a method hybridizing importance and rejection sampling. This method samples from a proposal distribution and uses a weighted rejection step, whose corrected distribution is consistent with the target and is closer to the target than the proposal in the prescribed reverse KL. Further, due to its construction, the *corrected* distribution's density is known analytically, which allows the user to know the normalized density when retrieving the target samples from this procedure. This procedure gives typical Monte Carlo convergence rates that are nominally dimension-independent. Finally, the authors provide numerical results demonstrating competitive results with several typical methods on many benchmark problems spanning low to high dimensions.

## Update after rebuttal

I thank the authors for their response. I maintain my score of 3.
Claims And Evidence: The claims in this submission are reasonably clear and well-supported, though their consequences are a little under-discussed. A mild criticism is that the provided theoretical results largely hinge on log-densities that are $\lambda$-concave, with discussion of the non-convexity of the reverse KL and its tendencies to seek modes; however, the results for the importance correction show improvement in this reverse-KL regime. While such a descent property is comforting, it seems undercut by the qualifications the authors lay out regarding the loss the authors choose to descend.

Methods And Evaluation Criteria: The methods employed and evaluation metrics seem well-founded. If these results are published with further details, one suggestion would be to report results using MMD with various kernels, rather than relying solely on the energy distance.

Theoretical Claims: The theoretical proofs were only briefly considered. The simpler proofs, i.e., Corollary 3.2, Theorem 4.2 (i), and Corollary 4.3 were checked, and Theorem 4.2 (ii) was also considered informally. The remaining proofs were not.

Experimental Designs Or Analyses: The provided experiments largely demonstrate a comprehensive approach to the problem at hand. A slight shortcoming is that, since the importance/rejection step is not tethered to the gradient flow regime, it would be worth seeing whether that step performs any differently when used in conjunction with Langevin or Hamiltonian dynamics. This would allow the reader to see how much the error decreases due to a good importance correction versus the quality of the JKO. For instance, if importance corrected Langevin dynamics outperformed MALA substantially due to sample independence, then this would be a notable result in and of itself.

Supplementary Material: The supplementary material included code to reproduce the experiments. It was not run by the reviewers, though it was briefly skimmed to ensure that the code looked feasible.
Relation To Broader Scientific Literature: This reiterates several results and ideas in neural ODEs (e.g., [Chen et al., 2018] in NeurIPS) and JKO sampling literature (e.g., [Jordan et al., 1998] for its introduction, as well as, e.g., [Salim et al., 2020] in NeurIPS for the discussed connection to proximal algorithms). Further, its proposed methods augment typical variational inference techniques, e.g., [Marzouk et al. 2016] in Handbook of UQ, [Lambert, 2022] in NeurIPS. The discussion of functional geometry over the reverse KL divergence echoes prior literature, e.g., [Marzouk et al. 2016] and [Grenioux et al., 2023] in ICML, but is clear and reasonably concise in connecting the prior literature to the current methods. Regarding neural JKO details, the authors provide a convincingly comprehensive collection of works reflecting progress towards neural network-based JKO sampling schemes, e.g., [Altekrüger et al., 2023], [Mokrov et al., 2021], [Onken et al., 2021], and [Xu et al., 2024] (publication venues provided in submission). Each of these seems to tackle varying problems of using JKO for sampling unnormalized densities. For instance, [Altekrüger et al., 2023] seems to tackle a flow using MMD-based discrepancies, [Mokrov et al., 2021] seems to minimize the reverse KL divergence (the expression of the functional is in (5), where the constant $\beta$ seems to be chosen according to (13)), but does so using an explicit construction of transport maps via a discrete sequence of input-convex neural networks, and [Onken et al., 2021] seems to work very similarly to this paper using neural ODEs but omits the regularization term this submission's authors denote $w_\theta$ (which corresponds to the Wasserstein proximal mapping).
The authors do not seem to articulate the differentiation of their submission's neural JKO step from [Xu et al., 2024] in terms of approaching the JKO from a neural ODE perspective; the reviewer could not distinguish between the two in the time allotted. In particular, (8) from [Xu et al., 2024] seems virtually identical to (7) in this submission. Essential References Not Discussed: The importance-correction rejection steps closely mimic the accept-reject step in typical Metropolis-Hastings algorithms, where the density described in (4.2) seems to be akin to a density derived from the Markov transition kernel, e.g., [Andrieu et al., 2003] in Machine Learning. The divergence seems to be that MH accept/reject steps act locally, whereas this is a globalized accept/reject procedure; critically, the reject step still "accepts" a change of position according to the proposal distribution to make such steps global regardless. However, while this superficially seems like a "Metropolized" transport algorithm in the vein of, e.g., [Parno and Marzouk, 2018] in SIAM Journal on UQ and [Gabrié et al., 2022] in PNAS, the globalization differentiates itself from such algorithms. It would behoove the authors to investigate parallels in, e.g., "global" MH transition kernels (see discussion in, e.g., section 3.3 in [Andrieu et al., 2003]) and differentiate themselves for the benefit of the reader who believes this to be such a Metropolization of Neural JKO. Moreover, the acceleration of Langevin sampling using birth-death processes, proposed in [Lu et al., 2023], was overlooked. Finally, there is little to no mention of Stein variational gradient descent (SVGD) [Liu and Wang, 2016] in NeurIPS, despite its popularity as a competing algorithm. 
SVGD approximates a gradient flow under a kernelized KL-loss ([Liu, 2019] in NeurIPS) or a $\chi^2$-gradient flow ([Chewi et al., 2020] in NeurIPS), and the recently introduced noisy SVGD variant ([Priser et al., 2024] in ICLR), which addresses some drawbacks of the vanilla SVGD. However, these developments are notably absent from the relevant literature section. Other Strengths And Weaknesses: The major weakness of this article is that it has a very inconsistent writing style and does not seem well-contained. In particular, many of the theoretical background and discussion seemed at-length with little intent; Theorem 2.2 is included from prior literature but only invoked once and hardly discussed for the reader's benefit. Many terms stand undefined (e.g., lsc/lower semi-continuous, coercive, $\lambda$-convex, etc.). Further, it does lack the discussion of some vital assumptions. In particular, there is some missing context regarding the assumption of $\lambda-$convexity of the target distribution $q$. Of course, there will always be assumptions for theoretical results, but such an assumption is particularly strong seeing as this is not satisfied by most distributions considered in the numerical results. Perhaps such an assumption is "softened" by the rejection steps, or just analytically convenient and stronger than necessary in practice. With careful consideration of the typical ICML reader, the analytical and theoretical discussion could be made significantly more accessible and parochial by focusing on the actual methods and ideas vital to the method. Finally, there is a connected issue that the paper is not well self-contained, where it's unclear what theory, exactly, the authors contribute to the field and what is rehashed from prior literature. In particular, the paper does not differentiate its core neural JKO scheme very well from similar neural JKO approaches. 
With these shortcomings in mind, the paper does invariably provide strong theoretical guarantees, particularly for smooth and log-concave targets. Despite some issues with the presentation of guarantees and terminology, the compact presentation of Benamou & Brenier, its connection to the algorithm, and the discussion of neural ODEs is reasonably clear and helpful. To add to the theoretical insights, the rejection procedure seems method-agnostic and might be of great benefit to other inference techniques, particularly other variational inference schemes. If anything, the authors undersell the intrigue of such an idea. The numerical results demonstrate an undeniably strong algorithm, with a stark comparison against unadjusted neural-JKO. The independence of the corrected samples (which is not proclaimed until the conclusion) is actually quite remarkable, especially compared with the sequential nature of the correction of Metropolis algorithms. Additionally, Appendix B.1 is well written considering how formal the discussion of the theory assumptions is within the paper.

Other Comments Or Suggestions:
- A discussion of the nominal dimension-free ideas in Corollary 4.3 could be nice:
  - The dimension-free guarantee applies to $\alpha$, so what makes the problem hard? Presumably it has to do with when the target is non-log-concave, which will be exacerbated by dimension, when performing the neural-JKO scheme.
- If there is sufficient space, it might be interesting to include more explicit insight on how the importance rejection step compares to typical SMC methods as well as, e.g., [Midgley et al., 2023] in ICLR 2023, or the CRAFT algorithm discussed.
  - This is partially elaborated in the related work section, but it would be more helpful after the explanation of the algorithm.
- If space, it might be nice to just remark in the text that 4.3 simply comes from an application of Hoeffding's inequality, as it is unclear why it is a corollary at first glance.
- While *annealed* importance sampling was largely popularized by [Neal, 2001], the idea of importance sampling dates back significantly further.
- Similarly, while (Rezende & Mohamed, 2015) certainly introduced the "CNF" term, it should be noted that the "concept" of coupling a complicated target and a simple reference dates further back, even within generative modeling communities.
- Visual accessibility:
  - In addition to the highlighting of tabular metrics, the numbers should be bolded for visual accessibility.
  - The coloring in Figures 2/3 should probably be adjusted for visual accessibility. Instead of red/green consider using a visually accessible color palette (a little over four percent of people are red/green colorblind).
- Style/grammar comments:
  - The introduction would benefit from more careful attention to its language and style.
  - There should be appropriate capitalization in the bibliography.
  - The letter $\lambda$ is used both for the Lebesgue measure and convexity.
  - The intention of the word "ration" on line 322, page 6 is unclear; perhaps the authors meant "ratio"?
  - In the left column of line 416 on page 8, the phrasing "We can see, that importance..." is quite awkward.
  - Remark 4.5 is clearly crucial, but difficult to parse. What is "moderate base of $1+r$"? The entire remark could benefit from clearer writing.
  - The appendices, especially F, should be reviewed for style/grammar (e.g., line 662 "We give some more backgrounds..." in appendix A, repeated use of "Lebesque" instead of "Lebesgue" in appendix B, the sentence in 1303-1305, line 1442 "...this package does not rely backpropagation...", line 1476 "...the evaluation can be cheaper and the density evaluation since these...", line 1476 "Residual architectures is at least....", line 1477 "...they are very expansive to train..." etc.).

Questions For Authors: 1) Does the (uncorrected) neural JKO (N-JKO) scheme described align exactly with prior N-JKO schemes? Is there a difference?
What does this paper contribute over, e.g., [Xu et al., 2024]? A different method or just theory on an equivalent method? - The paper is unclear as to whether such an uncorrected scheme is previously introduced. For example, it cites many papers using JKO-like schemes via W2 proximal mapping iterations, but does not delineate very well from them. Being clearer here would help the paper stand out and better define the contributions to the field, where a reader could clearly see that this is not just adding importance/rejection sampling to a pre-existing neural ODE scheme. 2) Do Cor. 3.2 and Thm 3.3 have anything to do with the rejection steps? - As suggested in previous comments, it seems like this submission attempts to contribute two different methods that work in tandem. The theoretical extent that the importance correction contributes to _specifically_ the JKO scheme is not well described. It is certainly acceptable that the answer to this theoretical question is "they are two different items that work well together", but such a disclaimer would highlight exactly what the authors can say about their work. 3) What are the practical and theoretical parallels with MCMC schemes? Why not just use a "Metropolis Adjusted Neural JKO" scheme? Should I regard the result as a CRAFT-like approach to unadjusted Langevin dynamics (ULA)? - While the guarantees of JKO are great in practice, the uncorrected N-JKO does not seem too different from Langevin. There is unfortunately no comparison to a Metropolis adjustment to neural JKO, nor is there a comparison of the uncorrected N-JKO scheme to ULA. Again, it is imperative that the methods are appropriately benchmarked and indeed that the fact this paper incorporates two different methods is justified. 4) Practically, are there any remarkable features regarding N-JKO's sensitivity to hyperparameters? Time discretizations? Adjoint solves? etc. 
- In contrast to MALA or HMC/NUTS, there is a functional optimization loop inside the body of the methods. This is admittedly discussed in part within appendix F, i.e., that the required coupled forward/adjoint can be quite costly, that they use a classic adaptive explicit Runge-Kutta scheme, and that "choosing larger architectures [than what they have] does not bring significant advantages". However, in contrast to MALA, HMC, and (unmentioned) SVGD, the proposed methods do not have any asymptotic or mean-field guarantees for a nonasymptotic architecture/function class; therefore, it is certainly worth noting how much effort regarding heuristics and hyperparameter tuning it may take to achieve the results given. 5) It is stated in F.2 that the evaluation of the neural IC only uses a total of five Rademacher vectors for estimating the Jacobian trace. If it's a Monte Carlo estimator, why would this be enough? Would increasing this make the performance better or worse? Could this poison the error metrics in any way? - While it is entirely acceptable to use just a few evaluations for _training_ the neural networks, it is imperative that the authors are clear that any additional refinement of this discretization works in their favor and not against it. To be more explicit, a case where _higher_ variance in the trace estimator coincidentally improves results over more exact trace estimators would raise questions of method validity. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Many thanks for the detailed and thoughtful review. Please find our answers below. For the final version, we will additionally correct the typos, extend the literature part (e.g. with SVGD), improve the visual accessibility based on your comments and add the definitions of lsc/coercive/$\lambda$-convexity in the appendix.

## On the $\lambda$-Convexity

We stress that for the theoretical results we only require that the negative log-density of the target is $\lambda$-convex for some $\lambda\in\mathbb R$, **which explicitly includes negative $\lambda$**. This assumption is very weak and **automatically fulfilled when $q$ is smooth enough** (plus some asymptotics for $\\|x\\|\to\infty$). In particular, the theoretical results are **also applicable for target densities which are not log-concave**. We will add this explanation after Assumption 3.1 in the paper.

## Questions

1. We stress that in Section 3 we consider the theoretical results (Cor 3.2, Thm 3.3) to be the main novelty of our paper. The actual neural JKO steps are indeed very similar to approaches in the literature, particularly to the papers of (Vidal et al., 2023; Xu et al., 2024), which also rely on the dynamic formulation of the Wasserstein distance. Since these papers consider the generative modeling setting, there are some small technical differences (e.g., we start with the latent distribution while Vidal et al., Xu et al. start with the data distribution; moreover, they only consider the Gaussian as target $g$, which makes the whole objective convex regardless of $\tau$, but is not useful for the sampling application). In the final version, we will clarify the relation to these approaches in more detail at the beginning of Section 3.2, where we derive the neural JKO step.

2. The statements of Cor 3.2 and Thm 3.3 are not directly related to the rejection steps in Section 4.
However, in order to be able to apply the importance-based rejection steps in an iterative manner, we need the following requirements on our sampling model: evaluate the density, sample independently, start at an arbitrary latent distribution, avoid mode collapse. Since not many generative models (or sampling methods) fulfill these properties at the same time, we focused on the neural JKO scheme. We will clarify this in the introduction. 3. It is not surprising that the neural JKO scheme and Langevin sampling produce very similar results, since both approximate the same Wasserstein gradient flow with respect to the KL divergence in the limit. The main difference is that we **can evaluate the density of the distribution generated** by the neural JKO scheme, while we **can't do that for the distribution generated by Langevin sampling**. Thus, we can combine the neural JKO scheme with our importance-based rejection resampling steps, while we can't do that with the Langevin steps. We briefly outlined this relation in the introduction (l 47--50 right side), but will add a reminder to that in the numerical results. Regarding ULA vs. neural JKO: When running our experiments, we also ran a plain ULA sampling (without Metropolis correction). The results are mostly similar to the MALA results, although the Metropolis correction helps a little to fill out the narrow tails of the funnel/mustache distribution. Therefore, we omitted ULA in the paper. 4. The most important hyperparameters are the initial step size $\tau$ and the size of the network architecture, which require some tuning. If those parameters are chosen inappropriately, the model might miss some modes (which matches the theory, since we lose convexity of the JKO steps in this case). We outline a possible tuning strategy in the answer to Reviewer BUrR (Question 2). 
But most hyperparameters of the neural JKO steps are quite standard choices (the velocities are dense 3-layer neural networks, we use the `torchdiffeq` library with rather standard parameters, and the training parameters like batch size, learning rate etc. are also quite standard choices for CNFs). 5. We think that we can resolve this confusion: when computing the Jacobian trace, we draw **5 new Rademacher vectors in each time-discretization step** of the ODE. Since the ODE trajectories are almost straight, the Jacobian only varies slightly over the different time steps. Consequently, the effective number of considered Rademacher vectors is **5 times the number of time discretization steps**, which is (depending on the example) on the order of 20 to 100. Therefore, the errors mostly cancel out over the whole solution of the ODE. Indeed, we observed that the variance of the estimator over the whole solution of the ODE is already very small for 5 Rademacher vectors per time step and taking more does not have much of an effect. Taking fewer Rademacher vectors causes a bias in the importance weights and lowers the quality of the results. We will add this explanation to the final version of the paper.
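For concreteness, the Rademacher-probe trace estimation described above can be sketched as follows. This is a minimal numpy illustration with an assumed toy Jacobian, not the paper's implementation; the probe count of 5 per step mirrors the number quoted in the answer.

```python
import numpy as np

def hutchinson_trace(jvp, d, n_probes=5, rng=None):
    """Hutchinson's estimator with Rademacher probes:
    trace(J) = E[z^T J z] for z with i.i.d. +/-1 entries."""
    rng = np.random.default_rng() if rng is None else rng
    est = 0.0
    for _ in range(n_probes):
        z = rng.choice([-1.0, 1.0], size=d)
        est += z @ jvp(z)  # only Jacobian-vector products are needed
    return est / n_probes

# Toy check: a symmetric matrix with known trace stands in for grad v.
rng = np.random.default_rng(0)
d = 10
A = rng.standard_normal((d, d))
J = A + A.T
estimates = [hutchinson_trace(lambda z: J @ z, d, n_probes=5, rng=rng)
             for _ in range(2000)]
print(np.trace(J), np.mean(estimates))  # the estimator is unbiased
```

In the CNF setting, `jvp` would be a vector-Jacobian product of the velocity network, and the per-step errors average out along the ODE trajectory as described above.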
Summary: This paper applies the Wasserstein Gradient Flow (WGF) framework to the sampling problem, i.e. sampling from a given target distribution. The proposed approach consists of two key stages: **Stage 1**: JKO Steps with Continuous Normalizing Flows (CNFs) - Given a terminal density, the authors perform Jordan–Kinderlehrer–Otto (JKO) steps, each of which corresponds to an ODE-based control problem. - These steps are learned through Continuous Normalizing Flows (CNFs), which parameterize the velocity field. - However, a notable drawback of this approach is its slow and suboptimal convergence, primarily due to the complexity of solving high-dimensional JKO problems. **Stage 2**: Importance Rejection Sampling Enhancement - To address the inefficiency of pure JKO-based updates, the authors incorporate a rejection sampling scheme that alternates with JKO steps. - Due to the resampling, the method converges faster, requiring only a few steps until convergence. Moreover, it converges to a better solution. - The resulting framework iterates between JKO-based updates and importance sampling, aiming to improve convergence speed and sample quality. The proposed method is evaluated on several sampling benchmarks including LGCP and funnel energy functions. Claims And Evidence: The idea of combining importance sampling with a Wasserstein Gradient Flow framework is novel and conceptually interesting. Theoretical claims are correct. However, I see major computational challenges in the proposed method: A critical aspect of this method is that it heavily relies on accurate trace estimation of the gradient of the velocity field in CNFs, which is required for both the JKO update and the importance sampling correction. I have strong concerns regarding the computational feasibility of the proposed approach: - To obtain a sample from $\mu_k$, an initial sample $x$ must be passed through $k$ different CNF models $v^i_\theta$ ($i = 1, 2, \dots, k$). 
Furthermore, it is necessary to estimate the trace of the Jacobian $Tr(\nabla v^k_\theta (x))$ along the whole trajectory to perform importance sampling. - Updating $v^{k+1}_\theta$ requires drawing a fresh batch of samples from $\mu_k$ at every iteration, leading to excessive computational overhead. - The training of CNFs itself is demanding since it involves the integration of the trace $Tr(\nabla v^k_\theta (x))$, which further exacerbates the computational burden. - Even the evaluation procedure still requires integrating the trace. Moreover, due to the importance sampling, samples must be drawn in large batches. Due to this computational burden (in both training and evaluation) and the heavy reliance on importance sampling, I suspect that JKO-IC (the proposed method) will have scalability issues. Methods And Evaluation Criteria: My major concern is its computational efficiency, as discussed in the previous section "Claims and Evidence". Accordingly, I don't see the potential for application to more realistic datasets. To verify its applicability, I would like to ask the authors to include the following: - A comparison of training time, evaluation time, and GPU memory with other benchmark approaches. - A discussion of the advantages of the proposed method over existing importance-weighting-based schemes like [1, 2, 3]. - I believe there should be a comparison with [1] and [2] for several datasets (e.g. funnel, LGCP). These are also importance sampling based methods. - I feel that there are no sharp or realistic energy function benchmarks discussed in the paper. Could the authors run on the 40-mode GMM implemented in [3] or [4], or the more realistic datasets DW4 or LJ13 discussed in [5]? (The implementation of the GMM in this paper differs from the original 40-mode benchmark.) Theoretical Claims: I’ve checked all the theories and proofs. I believe the claims are all correct. 
Experimental Designs Or Analyses: As aforementioned, I would encourage the authors to verify the following: - Computational efficiency study. - Discussion of related works including [1], [2], and [3]. - Comparison with [1] and [2]. - Comparison on additional benchmark datasets. Supplementary Material: Yes, I checked the theory, algorithms, details of the benchmark data, evaluation metrics, and time/GPU consumption. Relation To Broader Scientific Literature: The sampling problem can be reformulated in various ways. This paper suggests a new sampler by incorporating rejection sampling into the original JKO scheme. I believe this idea is very interesting and worth further investigation to improve its efficiency. Essential References Not Discussed: I believe a discussion and direct comparison with [1] and [2] is necessary to further improve the paper. **References** [1] Phillips, Angus, et al. "Particle Denoising Diffusion Sampler." ICML, 2024. [2] Chen, Junhua, et al. "Sequential Controlled Langevin Diffusions." ICLR, 2025. [3] Albergo, Michael S., and Eric Vanden-Eijnden. "NETS: A Non-Equilibrium Transport Sampler." Preprint, 2024. [4] He, Jiajun, et al. "No Trick, No Treat: Pursuits and Challenges Towards Simulation-free Training of Neural Samplers." Preprint, 2025. [5] Akhound-Sadegh, Tara, et al. "Iterated Denoising Energy Matching for Sampling from Boltzmann Densities." ICML, 2024. Other Strengths And Weaknesses: Strengths - The paper is well-organized and easy to follow. - Integrating importance sampling with JKO updates is a fresh idea. - The paper is well-written and provides a rigorous theoretical foundation. Other Comments Or Suggestions: None. Questions For Authors: - Did the authors observe any instability or divergence issues when alternating between JKO steps and importance sampling? - Ablation Studies: What happens if we reduce the batch size (which affects importance rejection sampling)? 
Also, could the authors provide ablation studies on the number of flow steps (longer flows)? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the detailed evaluation of our paper. Please find our comments below. ## Literature Research in this area is very active, and many interesting contributions are published frequently. However, we want to kindly point out that, in accordance with the ICML guidelines, **comparisons to very recent preprints, which are not published yet (like [3,4]) or published a week before the submission deadline (like [2]) or haven't even been available as a preprint until the submission deadline (like [4]) cannot be expected.** We do our best to provide timely comparisons and have now even added a comparison to [2] (details below), but we **can't compare to methods which appeared on arxiv shortly before or even after the ICML submission deadline**. Of course, we are happy to include all papers mentioned by the reviewer in the "related work" section (where [1,3] are already cited). ## Computational Cost We respectfully disagree with the reviewer's opinion that our method is computationally infeasible. Specifically, we would like to stress the following points: - The trace $\mathrm{trace}(\nabla v_t^k(x))$ is estimated by a Hutchinson trace estimator ($\mathrm{trace}(A)=\mathbb E[z^TAz]$) for $z$ with zero mean and identity covariance matrix. - During the training of the CNF a **single sample of $z$ is enough** in the Hutchinson estimator to approximate the trace (since we only need an unbiased estimate of the loss for training the CNF). For the evaluation we take 5 samples of $z$ for each time step. We find that this already leads to a small variance of the resulting densities, while the cost remains comparably low (see also the answer to Question 5 of Reviewer Eq2H). - When training $v_\theta^{k+1}$, we do not draw in each training step "a fresh batch from $\mu_k$". Instead, we maintain during training a dataset of 50000 samples of $\mu_k$. 
During training of $v_\theta^{k+1}$ we then draw batches from this dataset. After the training of $v_\theta^{k+1}$ is completed, we generate a dataset of $\mu_{k+1}$ by applying the CNF onto the samples from $\mu_k$. We will add this detail to the numerical details. - During evaluation we **don't need to draw a large batch**. At test time the samples don't interact with each other (see also our reply to the part "Batch size" in the paragraph "Questions" below). - Please note that we **already discussed the computational aspects of CNFs** in Appendix F.2 and list the resulting **training and evaluation times including the required GPU memory in Table 5 in the appendix**. We can see that they remain moderate for all examples considered in the paper. Nevertheless, as already discussed in the limitations paragraph, lowering the computational cost is part of our current and future research. In particular, the use of supervised trained generative models as intermediate surrogates for several steps in our model is future work and goes beyond the scope of the paper. ## Other Benchmark Problems and Comparisons We believe that our test problems are standard among the community and respectfully disagree with the reviewer's opinion that "no sharp benchmarks" were used. However, upon the reviewer's request, **we now ran our method on the GMM40 problem in $d=50$**. For the sake of availability of comparisons, we adopt **the same setting as Table 3 in [2]** and also evaluate the Sinkhorn distance. We can see in the table below that the neural JKO IC outperforms DDS, CRAFT and SCLD on this example.

| | DDS | CRAFT | SCLD [2] | Neural JKO IC (ours) |
| :---: | :---: | :---: | :---: | :---: |
| Sinkhorn distance | $5435.18$ | $28960.70$ | $3787.73$ | $3154.91$ |

## Questions 1. **Stability:** In the case that hyperparameters are chosen inappropriately, the method might miss some modes of the target distribution, but we never observed divergence issues. 
Once the hyperparameters are chosen appropriately, it consistently produces the same results. 2. **Ablations:** The paper **already contains an ablation study** with respect to the number of steps in Figure 10, where we plot the error measures over the number of steps. Since our model is trained iteratively, the plots show how the model would behave if we stopped the training earlier, and we can see that the error measures saturate. 3. **Batch size:** We would like to clarify that at test time **the sampling procedure is independent of the batch size**. The only quantity in the rejection step that depends on more than one sample is $\mathbb E[\alpha(X_k)]$ (it is estimated within the training phase and fixed afterwards). For the evaluation **the samples do not interact with each other** and are consequently independent. --- Rebuttal Comment 1.1: Comment: I thank the authors for the detailed response. Moreover, I appreciate the authors correcting some points that I had misunderstood. My concerns are adequately addressed. I'll raise the score to 3.
Summary: This paper proposes to sample from an unnormalized probability density via a sequence of interleaved continuous normalizing flows (CNFs) and importance accept/reject steps. The CNFs, which are penalized with a velocity norm regularizer as in OT-Flow (Onken et al. 2021), are interpreted as Wasserstein proximal mappings applied to the reverse KL divergence loss functional by replacing the static form of the $W_2$ distance typically used to define the proximal mapping with its equivalent dynamic formulation. The authors use this interpretation to show convergence of their OT-regularized CNF velocity fields to the velocity field corresponding to the Wasserstein gradient flow of the reverse KL divergence as the proximal mapping step-size $\tau\to0$ and that, for nonzero $\tau$, the velocity fields at each step correspond to the OT velocity fields between the starting and ending measures at each step. The CNF scheme is implemented by representing the velocity fields as neural networks and optimizing the parameters of the networks to minimize the dynamic OT-regularized CNF loss; hence the scheme is termed “neural JKO.” To address the issue of slow or incorrect convergence known to plague CNFs when the target density is multimodal, the authors propose to insert “importance-based rejection steps” in between some of the CNF/Wasserstein proximal steps. These rejection steps consist, essentially, of one step of rejection sampling: each particle $X$ from the current ensemble (resulting from previous CNF and rejection steps) is rejected with probability $1 - \alpha(X)$, where $\alpha(X) = \min \left\{ 1, \frac{g(X)}{cf(X)} \right\}$, $g(X)$ is the unnormalized density of the target measure, $f(X)$ is the unnormalized density of the current ensemble, and $c > 0$ is a tuning parameter. Thus, each particle has a lower chance of getting rejected if the importance weight $\frac{g(X)}{f(X)}$ is large. 
If the particle is rejected, it is replaced by repeating the entirety of the previous CNF/rejection procedure (starting from the reference density) to generate a new sample from the current particle distribution. The authors show that it is possible to write the new density of the particle ensemble after it has been transformed by this one-step rejection procedure and that the rejection sampling step decreases the KL divergence to the target. The density information is carried through the rest of the CNF/rejection sampling procedure, and as such allows the procedure to be used for density estimation as well. Numerically, the method is exercised on a variety of target distributions between two and 1600 dimensions and shown to generate samples which result in lower energy distance and higher estimates of the “log normalizing constant” in comparison to MALA, HMC, DDS, CRAFT, and Neural JKO. The experiments in the paper all, in the end, generate an ensemble of 50,000 samples, and the quality metrics for the samples are likewise computed over 50,000 samples. ## update after rebuttal I thank the authors for their response. I have a positive view of this paper and will thus maintain my score. Claims And Evidence: All claims are sound and substantiated by theoretical and empirical results. Methods And Evaluation Criteria: The proposed method on the whole is sensible in the context of similar, previously successful methods for sampling from unnormalized probability densities (e.g., continuous normalizing flows and OT-regularized variants, annealed flow transport Monte Carlo, etc.). In order to use the method, one must have access to the unnormalized density of the target measure and its score. One potential methodological issue, which is briefly mentioned in remark 4.5, is that the rejection steps rely on the generation of new samples from the current particle distribution. 
In the context of the proposed method, this generation implies re-sampling from the reference distribution and re-simulating the previous sequence of CNF and rejection layers. The CNF layers shouldn’t pose too much of an issue in themselves, as they involve integration of previously identified ODEs with relatively straight trajectories, but the previous rejection layers have the potential to dramatically increase the time required to regenerate another sample. That is, if a sample is rejected at any layer, we must start over at the reference distribution and re-do all of our previous CNF/rejection layers. If the acceptance probability is $1 - r$, then the probability of successfully passing through $n$ previous rejection layers is $(1 - r)^n$. This feature may result in long runtimes, as we see for the higher dimensional test distributions in Table 5. In the exposition of the paper it is not made explicitly clear how this resampling is performed in the rejection steps -- i.e., the rejection procedure presented in section 4 is general and never specified for the method at hand. The reader must, therefore, put the pieces together on their own. I think it would be helpful to explicitly outline how one resamples in the context of this neural JKO-IC method so as to give a clearer picture of what is at stake in the rejection steps. A diagram of an example sample trajectory might even be helpful for this purpose (perhaps with a background similar to that of the plots in Figure 10, but with arrows indicating the procession of one particle through the layers, with at least one rejection step depicted). The quality of the samples generated is evaluated using the energy distance (MMD with the negative distance kernel) and by estimation of the log normalizing constant. These evaluation metrics are consistent with those used for similar sampling methods in the literature. 
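The energy distance mentioned above admits a very compact empirical estimator; the following is a minimal numpy sketch of the standard formula $2\,\mathbb{E}\|X-Y\| - \mathbb{E}\|X-X'\| - \mathbb{E}\|Y-Y'\|$ (the sample sizes and Gaussian test case are assumptions for illustration, not the paper's code):

```python
import numpy as np

def energy_distance(x, y):
    """Empirical energy distance between sample arrays x (n, d) and y (m, d):
    2 E||X - Y|| - E||X - X'|| - E||Y - Y'||."""
    def mean_pdist(a, b):
        diff = a[:, None, :] - b[None, :, :]
        return np.mean(np.sqrt(np.sum(diff**2, axis=-1)))
    return 2 * mean_pdist(x, y) - mean_pdist(x, x) - mean_pdist(y, y)

rng = np.random.default_rng(0)
same = energy_distance(rng.standard_normal((500, 2)),
                       rng.standard_normal((500, 2)))
shifted = energy_distance(rng.standard_normal((500, 2)),
                          rng.standard_normal((500, 2)) + 3.0)
print(same, shifted)  # near zero for equal distributions, large for shifted
```

As noted, with 50,000 samples per ensemble the Monte Carlo error of such estimators is small, so the reported differences should mostly reflect method bias.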
In the appendix there are plots of 2D marginals of the samples generated by the various methods, which are also helpful. The method is primarily compared to the Metropolis-Adjusted Langevin Algorithm (MALA), Hamiltonian Monte Carlo (HMC), the Denoising Diffusion Sampler (DDS), Continual Repeated Annealed Flow Transport Monte Carlo (CRAFT), and Neural JKO (equivalent to the proposed method without the rejection steps). MALA and HMC are standard MCMC approaches, while DDS, CRAFT, and Neural JKO are all, to some extent, based on dynamic measure transport. This collection of comparison algorithms is reasonable, and in the appendix there are additional  comparisons to un-regularized continuous normalizing flows (CNFs). Theoretical Claims: I checked the proofs of Corollary 3.2, Theorem 3.3, Theorem 4.2, and Corollary 4.3 at a high level. My only quibble is that in the proof of Theorem 4.2, the density $\tilde p$ is interpreted as a probability, i.e., $\tilde p(x) = \mathbb{P}(\tilde X = x)$,  which isn’t quite right. However, I suspect that if the proof were repeated working with $\mathbb{P}(\tilde X \in A)=\int_A\tilde p(x)\,\mathrm{d}x$ for some measurable set $A$ the result would still emerge. Experimental Designs Or Analyses: The experimental design on the whole seems sound, but I think it would be interesting to compare the proposed Neural JKO-IC method to the competing methods for varying ensemble sizes. As far as I can tell, all experiments performed involved the generation of $N = 50,000$ samples, which is quite large and should render any Monte Carlo estimation error in the losses for the various methods small. Thus, the error metrics reported in the results are likely dominated by the biases of the various methods. What happens if the ensemble size is decreased, for example, to 50, 500, or 5,000 samples? Does Neural JKO-IC also have low variance relative to the competing methods at these smaller ensemble sizes? 
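To make the earlier point about restarts concrete, one importance-based rejection layer can be sketched as follows (a hedged illustration: the Gaussian `f`/`g` pair and `c = 2` are assumptions, and in the actual method `sample_upstream` would re-run all previous CNF/rejection layers, which is exactly where the $(1-r)^n$ restart cost enters):

```python
import math, random

def rejection_layer(sample_upstream, f, g, c, rng):
    """Accept x with probability min(1, g(x) / (c * f(x))); on rejection,
    restart the whole upstream pipeline to draw a fresh proposal."""
    while True:
        x = sample_upstream()  # stands in for re-running all previous layers
        alpha = min(1.0, g(x) / (c * f(x)))
        if rng.random() < alpha:
            return x

rng = random.Random(0)
f = lambda x: math.exp(-x * x / 2.0)           # current (unnormalized) density
g = lambda x: math.exp(-(x - 1.0) ** 2 / 2.0)  # target (unnormalized) density
draw = lambda: rng.gauss(0.0, 1.0)
xs = [rejection_layer(draw, f, g, c=2.0, rng=rng) for _ in range(20000)]
print(sum(xs) / len(xs))  # mean shifts from 0 toward the target mean 1
```

Note that one such step generally does not reach the target exactly (here $g/f$ is unbounded, so no finite $c$ gives exact rejection sampling); it only decreases the divergence to the target, consistent with the KL-decrease result described above.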
At the very least, the particle ensemble size should be listed in the main part of the paper (not just the appendix) to give proper context for the results. There is also no comparison or discussion of the *computational cost* among the various methods. Training and runtimes for Neural JKO-IC are given in Table 5, but no training or runtime information is available for the competing methods. Knowing the relative computational costs of each of the methods is important for contextualizing numerical performance and informing choice of method in application. Supplementary Material: In the supplement, I reviewed the proofs, algorithms, experimental details, and additional results. Relation To Broader Scientific Literature: The connection between OT-Flow (Onken et al. 2021) and Wasserstein proximal mappings was highlighted in (Vidal et al. 2023), and the Neural JKO scheme (absent the rejection layers) which forms the basis for the proposed method was also proposed in Vidal et al. 2023. Annealed Flow Transport Monte Carlo (AFT, Arbel et al. 2021) and Continual Repeated Annealed Flow Transport Monte Carlo (CRAFT, Matthews et al. 2022) are similar in spirit to the proposed method in that they involve the interleaving of normalizing flow layers with importance resampling and mutation steps. One big difference between AFT/CRAFT and the proposed method is that the former methods are designed to follow a specific annealing path of densities, while the latter seeks a sequence of densities corresponding to Wasserstein proximal mappings. Moreover, the importance resampling steps used in AFT/CRAFT fundamentally cannot alter the support of the empirical particle distributions, while the rejection sampling approach, while perhaps more costly, generally does modify the support, and evidently to good effect. Importance weights have also been combined with normalizing flows, e.g. in (Noé et al. 
2019), where normalizing flows are used to define a proposal distribution for importance sampling, but this approach is not iterative like AFT/CRAFT and the proposed method. ## References - Onken, D., Fung, S. W., Li, X., and Ruthotto, L. OTflow: Fast and accurate continuous normalizing flows via optimal transport. In AAAI Conference on Artificial Intelligence, volume 35, pp. 9223–9232, 2021. - Vidal, A., Wu Fung, S., Tenorio, L., Osher, S., and Nurbekyan, L. Taming hyperparameter tuning in continuous normalizing flows using the JKO scheme. Scientific Reports, 13(1):4501, 2023. - Arbel, M., Matthews, A., and Doucet, A. Annealed flow transport Monte Carlo. In International Conference on Machine Learning, pp. 318–330. PMLR, 2021 - Matthews, A., Arbel, M., Rezende, D. J., and Doucet, A. Continual repeated annealed flow transport Monte Carlo. In International Conference on Machine Learning, pp. 15196–15219. PMLR, 2022. - Noé, F., Olsson, S., Köhler, J., and Wu, H. Boltzmann generators: Sampling equilibrium states of many-body systems with deep learning. Science, 365(6457):eaaw1147, 2019. Essential References Not Discussed: No, all the works that came to mind when I was reading this submission were appropriately cited. Other Strengths And Weaknesses: ### Strengths - I was previously unaware of the connection between OT-regularized CNFs and Wasserstein proximal mappings, and I enjoyed learning about it by reading this paper. While this viewpoint was already advanced in (Vidal et al. 2023), the convergence results and the viewpoint of “piecewise geodesic interpolation” in this work are, to my knowledge, new. - While the neural JKO numerical approach already appeared in (Vidal et al. 2023), the addition of the importance-based rejection steps to the neural JKO scheme is new and does seem to improve sample quality significantly. - The rejection steps are relatively easy to add if the neural JKO steps have already been implemented. 
### Weaknesses - The numerical examples do not exercise the method across a range of sample sizes. - The rejection steps have the potential to cause long runtimes of this method. - The numerical results are difficult to quickly interpret due to the tabular format. Other Comments Or Suggestions: *(In this section I append the line numbers with “L” to indicate that the text in question is in the left column, and “R” to indicate that it is in the right column.)* The preliminary material presented in section 2, while providing a firm mathematical background for the methods presented, is a little bit technical, and I wonder if the level of technicality could be lightened in order to make the material more approachable to a broad machine learning audience. For example, on line 146 (R), the minimal norm velocity field is described as belonging to a “so-called regular tangent space.” While this statement is true, it would be more evocative and approachable to simply state that the velocity must be a gradient. Similarly, the definition of Wasserstein gradient flows could be explained in the main body of the paper without invoking reduced Frechet subdifferentials (velocity is the negative gradient of the first variation of $\mathcal F$…), with the more technical definition saved for the appendix. Level of technicality is of course a matter of taste, but I wonder if toning down some of the detail in the main body of this paper would help it achieve broader appeal. Presenting the numerical results in tables makes it difficult to quickly assess the relative performances of the methods, especially given that all of the metrics are in scientific notation. Consider using bar charts, box plots, or another suitable plot instead of tables for the numerical results; these plots would make it easy for the reader to quickly compare the performances of each method without having to squint at nine rows of seven exponents.   
There are some typographical errors or notational inconsistencies which need to be corrected: - Lines 152-153 L: the integral in the definition of $\mathcal{P}_2(\R^d)$ should be $\mathrm{d} \mu(x)$, not $\mathrm{d} x$ - Line 157 L and elsewhere: it is confusing to use $\bf \pi$ to denote the coupling between $\mu$ and $\nu$ and also to use $\pi_1$ and $\pi_2$ to denote projection of $\bf \pi$ onto the first and second coordinates. Consider using $\gamma$ or a different letter for couplings instead ($\gamma$ matches $\Gamma(\mu, \nu)$). - Line 184 R: Here the normalized target density is written $q(x) = Z_gg(x)$, but it should be $q(x) = g(x)/Z_g$ to be consistent with the setup in Section 1 (Line 43 L). - Line 190 R: It seems like $\propto$ is used to indicate equality up to an *additive* constant, but I think this notation is more commonly used for equality up to a multiplicative constant. Perhaps consider alternate notation, or at least specify that the constant is additive. - Line 246 R: should the stopping time be $\tau$ and not $T$ in $\mu_\tau^{k+1} = z_\tau^k(\cdot, T)_\sharp \mu_\tau^k$? - Line 276/Equation (7): the top row of the matrix on the RHS should be $v_\theta(z_\theta(x, t), t)$, not $v_\theta(x, t)$. This error is also in Algorithm 6. - Line 298 L: Should this read “this leads to slow *convergence* speeds”? - Line 311 L: The base/acceptance rate should be $1-r$, not $1+r$ There are also spelling and grammatical errors throughout the paper. Below is a list of the ones I noticed, but I would suggest performing a thorough check for others which I may have missed. 
- Line 78 L: “the rejection steps *readjust”* - Line 83 L: subject-verb agreement needed here: either “our methods *generate*… and *allow*” or “our *method* generates… and allows” - Lines 91-92 L: “the velocity fields… *converge*” - Lines 97-98 L: “we consider neural JKO schemes *in more detail”* - Line 132 L: “directly *following”* - Line 148 R: “*absolutely* continuous” - Line 322 R: “a constant *ratio”* - Line 364 L: “and *which* we can sample from” - Line 374 L: I’m not sure that “vice versa” is quite the correct term here. Maybe you are looking for “likewise” or “moreover” instead? - Line 380 R: “it is a kernel *metric”* - Line 422 L” “distance between *two* sets of…” - Line 1052: “Since it holds $\tilde X = X$ if $X$ *is accepted* and…” - Line 1256: “we evaluate the Wasserstein distance based on *fewer* samples” - Line 1372: “we run an independent chain for each generated *sample”* - Line 1452: “becomes *computationally* costly” - Line 1470: “*Brenier’s* theorem” - Line 1474: “We observed numerically that the expressiveness of discrete-time architectures *scales…* and *that* *these architectures* are less stable to train” - Line 1475: “The evaluation can be cheaper and the density evaluation” -- I’m not sure what this sentence is trying to say. Perhaps that “the sampling and density evaluation can be cheaper”? - Line 1476: “Residual architectures *are”* - Line 1477: “they are very *expensive*” - Line 1505: “On the other side” -- perhaps you mean “on the other *hand*”? Questions For Authors: 1. How does one choose the “schedule” of rejection layers and CNF layers? Would a heuristic like ESS, which is used in CRAFT/AFT, be useful here? 2. How sensitive is the method to the initial choice of step-size $\tau_0$ and step-size schedule? 3. Line 289 R: Is the curse of dimensionality in rejection sampling specific to use of a standard normal proposal distribution? Or would the issue occur with any fixed/non-tailored proposal? 
If it is the latter, you may want to emphasize this point to further highlight why the use of the tailored/CNF proposals is useful. Code Of Conduct: Affirmed. Overall Recommendation: 4
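The dimension dependence asked about in question 3 can be illustrated numerically (a hedged sketch with an assumed toy target $N(0, s^2 I_d)$ and fixed proposal $N(0, I_d)$; for $s < 1$ the optimal constant is $c = \sup q/p = s^{-d}$, so the acceptance rate $1/c = s^d$ decays exponentially in $d$):

```python
import numpy as np

def acceptance_rate(d, s=0.8, n=200_000, seed=0):
    """Empirical acceptance rate of exact rejection sampling for a
    N(0, s^2 I_d) target from a fixed N(0, I_d) proposal, using the
    optimal constant c = sup q/p = s^{-d} (ratio maximized at the origin)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n, d))
    # alpha(x) = q(x) / (c p(x)) = exp(-||x||^2 (1/s^2 - 1) / 2)
    log_alpha = -0.5 * (1.0 / s**2 - 1.0) * np.sum(x**2, axis=1)
    return np.mean(np.exp(log_alpha))

for d in (1, 5, 10, 20):
    print(d, acceptance_rate(d), 0.8**d)  # empirical rate matches s^d
```

The same exponential degradation occurs for any fixed proposal that does not track the target, which is why adaptive/CNF proposals help here.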
Rebuttal 1: Rebuttal: Thank you very much for your very detailed and thoughtful review. Please find the answers to your questions and comments below. Additionally, we will correct the typos and grammar errors. ## Questions 1. Our choice of the schedule is based on the following heuristic: Since the rejection layers are (in the long term) more expensive than the CNF layers, we start with using only CNF layers until the progress made by them becomes small. This can be detected without knowing the target distribution, because we have (approximately) access to the Wasserstein distance between input and output of the JKO layer via the norm of the velocity field. Afterwards, we alternate one CNF layer and three rejection steps. Numerically, the CNF layer only does minor local adjustments from this point, while the main work is then done by the rejection steps. 2. If the step size is too large, we might miss some modes of the target distribution (which matches the theoretical consideration that we lose convexity (with respect to the Wasserstein metric) of the loss in this case). If the step size is small, then the mapping learned by the CNFs is close to the identity and does not have a large effect. While the model is sensitive with respect to the first case, the second case only increases training and sampling time of the model (since more CNF steps are required), but does not much affect the quality of the result. A simple tuning heuristic for the step size can again be established based on the (approximated) Wasserstein distance: We start with a small step size and increase it as long as the Wasserstein distance between input and output of the next CNF step is smaller than a certain threshold. While we have not automated this procedure so far, we are quite sure that such an adaptive choice of the step size can be established. 3. The curse of dimensionality is indeed not specific to the normal distribution, but occurs for any proposal which is not close enough to the target. 
So in fact the use of tailored/CNF proposals can help to reduce or even completely avoid the curse of dimensionality. We will highlight this further. ## Number of Samples Our main intention behind the large number of samples is the following: since the samples generated by our model are independent, our main quantity of interest is how much the distribution of the generated samples differs from the target distribution (in MMD, Wasserstein, etc.). Of course we cannot measure this quantity directly, but only estimate it by taking a finite number of samples from both distributions. Using a small number of samples introduces an error in this estimator, which is related much more to the properties (sample complexity) of the evaluation metric than to the approximation quality of the model. Therefore, from our viewpoint, the number of samples should be chosen as large as possible. While the $\log(Z)$ estimation can be viewed in the same way (just with the KL divergence), we agree that it can also be viewed differently. If we are not interested in the KL divergence between the generated and ground-truth distributions (which was our viewpoint so far) but in the $\log(Z)$ estimate itself, we agree with you that reporting the errors for different numbers of samples could be useful. We will include such a study in the final version. In our first experiments with $5000$ samples, the overall picture is about the same as for $50000$ samples. ## Proof of Theorem 4.2 (i) Yes, you are right, the expression $P(X=x)$ is meant in a weak sense (where we integrate against all measurable sets $A$), and this part is written a bit sloppily. We will write it down more formally for the final version. However, neither the proof nor the result changes. ## Visualization etc. Thank you for the suggestions. We will add a flow-chart-like diagram describing the sampling process to the paper. 
For the numerical part, we think that box plots often make it harder to assess the details, even though we agree that they give a better first impression. We are happy to add a box plot for visualizing the numbers, but we prefer to keep the tables as well (if the space constraints allow it, we will keep both in the main part of the paper).
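The adaptive step-size heuristic sketched in the answer to question 2 could look roughly as follows (a minimal toy sketch in NumPy, not the authors' implementation: the `toy_layer`, the displacement-based Wasserstein proxy, and all thresholds are hypothetical stand-ins, since the actual CNF/JKO layers are not reproduced here):

```python
import numpy as np

def wasserstein_proxy(x_in, x_out):
    """Stand-in for the approximated Wasserstein distance between the input
    and output of a layer (the rebuttal uses the norm of the velocity field);
    here we use the root-mean-square particle displacement."""
    return np.sqrt(np.mean(np.sum((x_out - x_in) ** 2, axis=1)))

def tune_step_size(apply_layer, x, tau0=0.01, growth=2.0, threshold=0.5):
    """Start with a small step size and increase it as long as the proxy
    distance moved by the next layer stays below a threshold."""
    tau = tau0
    while True:
        x_next = apply_layer(x, tau)
        if wasserstein_proxy(x, x_next) >= threshold or tau > 1.0:
            return tau
        tau *= growth

# Hypothetical toy layer: a gradient step on the potential |x|^2 / 2,
# so the displacement grows linearly with the step size tau.
toy_layer = lambda x, tau: x - tau * x
x0 = np.random.default_rng(0).normal(size=(256, 2)) + 3.0
tau = tune_step_size(toy_layer, x0)  # doubles tau until the movement exceeds the threshold
```

The growth factor and threshold would of course need tuning in a real pipeline; the point is only that the proxy requires no access to the target density.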
Diffusion Sampling Correction via Approximately 10 Parameters
Accept (poster)
Summary: This paper proposes PCA-based Adaptive Search (PAS) to optimize the sampling process of diffusion probabilistic models (DPMs). The key idea of the method is leveraging Principal Component Analysis to identify a low-dimensional subspace for sampling correction. The method also includes an adaptive search strategy to reduce correction steps. PAS operates in a plug-and-play fashion, enhancing existing fast solvers without significant additional training costs. Experimental results show that PAS significantly improves sampling quality on diverse datasets. Claims And Evidence: The authors claim that PAS can correct the truncation errors and improve the sample quality. This claim is well-supported by the improvements of FID on multiple datasets. However, although the authors claim that PAS can enhance the sampling efficiency of existing fast solvers (requiring only a small number of learnable parameters and negligible training time), there is a lack of intuitive comparison with other methods regarding efficiency. Methods And Evaluation Criteria: The method is well-founded, using PCA on the sampling trajectory to extract low-dimensional basis vectors for sampling correction. The method is evaluated primarily using FID on standard image generation benchmarks. Comparisons with previous state-of-the-art fast solvers, as well as detailed ablation studies, ensure a comprehensive evaluation. Theoretical Claims: The derivations of PCA-based correction and adaptive search appear correct and well-grounded in numerical methods. The claim that sampling trajectories lie in a low-dimensional subspace is experimentally validated but could benefit from more theoretical justification. Experimental Designs Or Analyses: The experimental design is comprehensive, covering 5 datasets, multiple solvers, and different NFE values (5–10 steps). 
Comparisons with other trajectory-based methods (e.g., AMED, GITS) have only been done on DDIM, and it would be useful to compare and analyze the method in combination with other SOTA solvers (e.g., iPNDM). There is a lack of comparison and analysis of training efficiency between PAS and other methods. Supplementary Material: The authors have not provided supplementary material. Relation To Broader Scientific Literature: The paper is well-positioned in the broader context of fast DPM solvers (e.g., DDIM, DPM-Solver-2, DPM-Solver++, iPNDM) and trajectory-based acceleration methods (e.g., AMED, GITS). It builds on recent findings that sampling trajectories of DPMs lie in a low-dimensional subspace, extending this idea with a novel PCA-based correction approach. Essential References Not Discussed: The paper cites all major relevant works, including prior fast solvers, trajectory-based corrections, and low-cost training strategies. Other Strengths And Weaknesses: None Other Comments Or Suggestions: None Questions For Authors: Have you experimented with alternative dimensionality reduction techniques in place of PCA? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's recognition of our work and meticulous review. Please find below our responses to all the questions. We would greatly appreciate it if you could consider increasing the score if you are satisfied with our response. **Abbreviation: CaE (Claims And Evidence), EDoA (Experimental Designs Or Analyses)** >***CaE and EDoA2: The study lacks a comparison of training efficiency between PAS and other methods.*** **A**: We provide a comparison of PAS with the previously well-performing AMED and GITS methods, while also introducing the high-cost PD method as a contrast. The comparative experiments, based on sample quality and training cost on CIFAR10, are shown in the table below: FID↓(A100 hours↓) | Method\NFE | 1 | 2 | 4 | 8 | | ----------- | ---------- | ---------- | ---------- | ---------- | | DDIM+PD[R1] | 9.12(~195) | 4.51(~171) | 3.00(~159) | 2.57(~146) | | Method\NFE | 5 | 6 | 8 | 10 | | --------------- | ------------ | ------------ | ------------ | ------------ | | DDIM+AMED[R2] | \ | 25.15(~0.08) | 17.03(~0.10) | 11.33(~0.11) | | DDIM+GITS[R3] | 28.05(<0.01) | 21.04(<0.01) | 13.30(<0.01) | 10.37(~0.01) | | DDIM+PAS (Ours) | 17.13(~0.01) | 12.11(~0.02) | 7.07(~0.03) | 4.37(~0.04) | As shown in the table above, PAS demonstrates higher training efficiency compared to AMED, and better sample quality than GITS with negligible additional cost. In contrast, PD, which yields higher sample quality, incurs a significantly larger training cost. Notably, **PAS is theoretically orthogonal to PD, AMED and GITS**, and can further improve their sampling efficiency. >***Theoretical Claims: Lacking theoretical justification for sampling trajectories in a low-dimensional subspace.*** **A**: Inspired by [WV23] and [WV24], we have added more theoretical analysis to the paper (see **Reviewer pbLK, Expand theory**). 
In brief, since analyzing neural networks directly is inherently intractable, [WV23] and [WV24] approximate diffusion trajectories using Gaussian score structures, providing both theoretical and empirical insights. Furthermore, [WV23] offers a theoretical analysis in Sec. 4.2, suggesting that diffusion trajectories **resemble 2D rotations on the plane spanned by the initial noise ($x_T$) and the final sample ($x_0$)**. This offers a theoretical justification for why sampling trajectories lie in a low-dimensional subspace. It also explains why trajectories corresponding to different initial noises lie in different subspaces (see our Fig. 2b). >***EDoA1: Add a comparison with other trajectory-based methods (e.g., AMED, GITS) combined with other SOTA solvers (e.g., iPNDM).*** **A**: We did not include additional comparisons between PAS and other trajectory-based methods (e.g., AMED, GITS) primarily because the integration of PAS with DDIM already demonstrates significant improvements in sampling efficiency compared to AMED and GITS, thereby validating the effectiveness of PAS. Furthermore, PAS is theoretically orthogonal to methods like AMED and GITS, and can potentially be combined with them to further enhance sampling efficiency. Therefore, we believe that exploring how to better combine PAS with trajectory-based methods (e.g., AMED, GITS) is a more meaningful direction, which will be considered in our future work. >***Questions For Authors: Have you experimented with alternative dimensionality reduction techniques in place of PCA?*** **A**: We have experimented with both standard PCA (`torch.svd`) and low-rank or sparse matrix PCA (`torch.pca_lowrank`). Notably, PCA_lowrank (used in our paper) computes approximately 1.58× faster than standard PCA, with some loss in accuracy. 
We provide a comparative experiment on CIFAR10, as shown in the following table: PAS+DDIM, FID↓ | Method\NFE | 5 | 6 | 8 | 10 | | ----------- | ----- | ----- | ---- | ---- | | PCA | 16.38 | 13.08 | 7.01 | 4.39 | | PCA_lowrank | 17.13 | 12.11 | 7.07 | 4.37 | From the table, we can observe that PCA_lowrank and standard PCA yield **comparable results**. Upon further analysis, we believe this is because only around 4 principal components are sufficient to span the complete sampling trajectory space (which consists of 1000 vectors). Since extracting 4 principal components from the trajectory space is relatively straightforward, the choice of dimensionality reduction technique is not a critical factor in the effectiveness of the proposed PAS method. [R1] Salimans T et al. Progressive distillation for fast sampling of diffusion models. ICLR 2022. [R2] Zhou, Zhenyu et al. Fast ode-based sampling for diffusion models in around 5 steps. CVPR 2024. [R3] Defang Chen et al. On the Trajectory Regularity of ODE-based Diffusion Sampling. ICML 2024.
Summary: The paper proposes a novel PCA-based Adaptive Search (PAS) method to accelerate diffusion model sampling with minimal additional computational and parameter costs. The key idea of PAS rests on the observation that the sampling trajectory of a parameterized reverse ODE of diffusion model almost lies in a 3D subspace in a high-dimensional space. Thus, PAS involves leveraging Principal Component Analysis (PCA) to identify a small set of orthogonal basis vectors and learns low-dimensional coordinates corresponding to these vectors to correct sampling errors. The main results demonstrate substantial improvements in sampling efficiency and quality with negligible cost. Specifically, PAS reduces the FID score of DDIM from 15.69 to 4.37 on CIFAR10 with 10 sampling steps, using merely around 12 parameters and minimal training time. PAS shows similar enhancements across multiple datasets (e.g., CIFAR10, FFHQ, ImageNet, LSUN Bedroom) and various pre-trained diffusion models, consistently outperforming the state-of-the-art method. Claims And Evidence: Yes, the paper's main claims are convincingly supported through empirical evaluations and comparative analyses. The paper proposes utilizing PCA to identify a small number of basis vectors spanning the high-dimensional trajectory space of sampling paths. They substantiate this claim with empirical evidence showing that sampling trajectories reside primarily within a low-dimensional subspace. Clear experimental results confirm that correction along these PCA-derived directions is effective in reducing truncation errors. Methods And Evaluation Criteria: Yes, the paper proposes PAS, a lightweight, plug-and-play method to correct discretization errors during diffusion sampling. PAS leverages the inherent geometric property that sampling trajectories lie in a low-dimensional subspace, utilizing PCA to represent and correct sampling directions with minimal parameters efficiently. 
The evaluation utilizes standard datasets, such as CIFAR10, FFHQ, ImageNet, LSUN Bedroom, and Stable Diffusion v1.4. The evaluation metric FID objectively measures the quality of generated images. Comparisons against existing state-of-the-art methods (e.g., DPM-Solver++, iPNDM) provide strong evidence for PAS’s effectiveness in terms of improved sampling quality and efficiency at significantly lower parameter and computational costs. Theoretical Claims: This paper does not provide theoretical claims. Experimental Designs Or Analyses: Yes, the paper conducts extensive experiments to validate the effectiveness of the proposed method. The authors emphasize minimal computational costs by clearly reporting the training time (e.g., less than 2 minutes on CIFAR10) and the number of parameters (approximately 10). However, direct quantitative performance comparisons against existing training-based methods are lacking. Including such comparisons could better highlight PAS's advantage over a more costly training-based method. Supplementary Material: Yes, I reviewed the supplementary material. Part A provides a detailed discussion and comparison with related works. Part B includes comprehensive training details and further discussion of the evaluation metric. Part C presents additional experimental results supporting the claims and validating the robustness of the proposed method. Relation To Broader Scientific Literature: The key contributions of this paper respond to training-cost limitations in prior few-step generation works, notably the Progressive Distillation method (Salimans & Ho, 2022). While Progressive Distillation substantially improves sampling speed by training distilled models that drastically reduce sampling steps, it faces critical limitations including significant additional training costs and storage demands for the distilled model. 
This paper addresses these practical bottlenecks by leveraging PCA to identify a compact set of basis vectors spanning the sampling directions, thus requiring minimal retraining and alleviating memory constraints. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: 1. The paper introduces a highly original approach by leveraging PCA to significantly reduce the dimensionality of sampling corrections in diffusion probabilistic models, thus effectively addressing computational and storage bottlenecks prevalent in existing methods. 2. The explanations throughout the paper are clear and logically structured. Weaknesses: 1. While the paper extensively demonstrates empirical effectiveness, it could further clarify theoretical insights behind the PCA-based trajectory correction-particularly why sampling trajectories of all samples exhibit strong consistent geometric characteristics. 2. Discussion of potential limitations regarding the reliance on PCA and robustness to stochastic samplers, remains limited and could be expanded upon. Other Comments Or Suggestions: No. Questions For Authors: 1. The paper highlights the efficiency advantages of PAS over existing training-based methods. Could you provide a more detailed comparison—especially in terms of sampling quality and computational trade-offs—against widely-used training-based methods such as Salimans & Ho (2022)? 2. The PAS demonstrates promising results for deterministic solvers (e.g., DDIM). Could PAS be similarly effective if adapted to stochastic sampling methods? Have you conducted any preliminary experiments or analyses in this direction? Clarification on applicability or limitations would be valuable. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's efforts and valuable review. Below are our responses to all questions. We kindly hope you could consider increasing the score if you are satisfied. >**Experimental Designs Or Analyses: Direct performance comparisons with training-based methods are lacking, highlighting PAS's advantage.** **A**: We appreciate your suggestion! We did not include direct performance comparisons because high-cost training-based methods and PAS have their own application scenarios. Training-based methods can achieve one-step sampling but are expensive, making them difficult to apply to large models like Stable Diffusion. In contrast, PAS focuses on high-quality sampling under 10 NFE with negligible cost, making it a **more attractive and practical** solution. Furthermore, we added a comparison of parameter size and training time: Training-based one-step sampling methods require **at least 35.7M** parameters (DDPM, CIFAR10 model), while PAS uses **only ~10**. The table below summarizes training time↓ (A100 hours) on CIFAR10, using the estimation method and certain results from [R1]. **Note**: Training a CIFAR10 model from scratch takes ~200 A100 hours [R1]. |PD[R2]|Guided PD[R3]|CD[R4]|CTM[R5]|PAS(Ours)| |-|-|-|-|-| |~195|~146|~1156|~83|<0.04| >***W1: Theoretical explanation of why the sampling trajectories of all samples exhibit strongly consistent geometric characteristics.*** **A**: Inspired by [WV23] and [WV24], we have added more theoretical analysis to the paper (**Reviewer pbLK, Expand theory**). Briefly, since analyzing neural networks directly is inherently intractable, [WV23] and [WV24] approximate diffusion trajectories using Gaussian score structures, providing both theoretical and empirical insights. Specifically, [WV24] derives an analytical form of the EDM trajectory in Eq. 
15 (Gaussian structure): $x_t=\mu+\frac{\sigma_t}{\sigma_T} x_T^\perp +\sum_{k=1}^r \psi (t,\lambda_k)c_k(T)u_k$, which shows that $x_t$ is a linear combination of $x_T^\perp$ and $\mu, u_k$. Here, $\mu, u_k$ are the dataset-dependent mean and basis vectors that define the data manifold, while $x_T^\perp$ is an off-manifold component. Furthermore, given $x_T$, $c_k(T)$ is a constant, and the rate of change of the coefficients for the vectors $x_T^\perp, \mu$, and $u_k$ depends only on the dataset and the timestep $t$. For a fixed dataset, as $\sigma_t$ decreases during the sampling process, each sample is constrained by the same temporal decay scale, converging to the data manifold. This theory heuristically explains why all samples exhibit strongly consistent geometric characteristics. >***W2 and Q2: Discuss the limitations of the reliance on PCA and the robustness of PAS to stochastic samplers.*** **A**: The effectiveness of PAS relies on the accuracy of the basis derived from PCA, which may be influenced by noise from stochastic samplers. To further explore this, we added experiments using PAS on the stochastic sampler of EDM [R6] without the 2nd-order correction ([R6] Algorithm 2) on CIFAR10, with the following results: FID↓ |Method\NFE|5|8|10|20| |-|-|-|-|-| |Stochasticity|55.71|27.57|21.48|10.95| |+PAS|41.44|23.81|19.04|10.48| The table clearly shows that PAS is **equally effective** for the stochastic sampler, further highlighting the robustness of PAS. Nonetheless, in general, SDE has a higher quality upper bound, with 1000 NFE outperforming ODE. But for sampling with fewer steps, ODE performs better. Therefore, PAS with ODE would result in faster sampling speeds. 
>***Q1: A comparison of sampling quality and computational trade-offs with training-based methods like PD[R2].*** **A**: We have added some comparative results on the training cost and sample quality between PD and PAS, as shown in the following tables: FID↓(A100 hours↓) |Method\NFE|1|2|4|8| |-|-|-|-|-| |DDIM+PD[R2]|9.12(~195)|4.51(~171)|3.00(~159)|2.57(~146)| |Method\NFE|5|6|8|10| |-|-|-|-|-| |DDIM+PAS|17.13(~0.01)|12.11(~0.02)|7.07(~0.03)|4.37(~0.04)| |iPNDM+PAS|13.61(~0.01)|7.47(~0.02)|3.87(~0.03)|2.84(~0.04)| As shown in the tables above, PD requires ~146 A100 hours of training time for 8 NFE to achieve 2.57 FID, whereas PAS with iPNDM requires **only 0.04 A100 hours with 8 parameters** to achieve 2.84 FID at 10 NFE. Therefore, PD and PAS each have their own application scenarios. PAS achieves impressive performance with negligible cost, making it a **more attractive and practical** solution. [R1] Zhou Z et al. Simple and fast distillation of diffusion models. NeurIPS 2024. [R2] Salimans T et al. Progressive distillation for fast sampling of diffusion models. ICLR 2022. [R3] Meng C et al. On distillation of guided diffusion models. CVPR 2023. [R4] Song Y et al. Consistency models. ICML 2023. [R5] Kim D et al. Consistency trajectory models: Learning probability flow ode trajectory of diffusion. arXiv:2310.02279. [R6] Tero Karras et al. Elucidating the design space of diffusion-based generative models. NeurIPS 2022. --- Rebuttal Comment 1.1: Comment: Thanks the authors for the rebuttal. My concerns are addressed. I recommend acceptance as all reviewers agreed. --- Reply to Comment 1.1.1: Comment: We are so glad to hear that your concerns have been addressed! Thank you once again for recognizing our work!
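A minimal sketch of the correction idea discussed in this rebuttal (a hypothetical NumPy toy, not the authors' implementation: the synthetic trajectory, the "fine"/"coarse" states, and the injected truncation error are all stand-ins): project the truncation error of a coarse solver state onto the top principal components of the ongoing trajectory, so that only a few coefficients need to be learned and stored.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 256

# Hypothetical toy trajectory rotating in the plane span{x_T, x_0},
# mimicking the low-dimensional structure of diffusion sampling paths.
x_T, x_0 = rng.normal(size=D), rng.normal(size=D)
s = np.linspace(0.0, np.pi / 2, 50)[:, None]
traj = np.cos(s) * x_T + np.sin(s) * x_0

x_fine = traj[-1]                          # reference state (high-NFE solver)
x_coarse = x_fine + 0.3 * x_T - 0.1 * x_0  # coarse state with truncation error

# PCA basis of the ongoing trajectory; keep only k components.
k = 3
V = np.linalg.svd(traj - traj.mean(axis=0), full_matrices=False)[2][:k]

# The k "learnable parameters" are the coordinates of the error in that basis.
coeffs = V @ (x_fine - x_coarse)
x_corrected = x_coarse + V.T @ coeffs

err_before = np.linalg.norm(x_fine - x_coarse)
err_after = np.linalg.norm(x_fine - x_corrected)  # nearly zero in this toy
```

Because the injected error lies in the trajectory's own subspace, a handful of coefficients removes it almost entirely; in practice the error also has a component outside the subspace, which is one reason an adaptive search over correction time points matters.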
Summary: The authors leveraged the previous finding that diffusion sampling trajectories are low dimensional, and that part of each trajectory is more curvy. They then developed a method to learn the PCA basis of the current ongoing trajectory during sampling and learn coefficients to recombine the PC vectors to correct for the truncation errors in small-NFE solvers. They can train the PC correction coefficients efficiently on a bunch of “ground truth” trajectories with higher NFEs and obtain corrections for samplers with fewer NFEs, effectively distilling out a dataset-specific fast sampler based on the PC correction coefficients. The method is evaluated from CIFAR10 32x32 to Stable Diffusion 512 and showed effectiveness across the scales. ## Update after rebuttal The authors showed a close comparison with the Gaussian-approximation-based teleportation / acceleration method in [WV24], and an updated benchmark. The updated theoretical link between the low-dimensional trajectory and the Gaussian theory also made it stronger. The reviewer is happy to maintain the score as such. Claims And Evidence: Most claims are clear and supported. Some interpretation of why the method works could be made clearer. - **Minor point** “*Thus, we can infer that the sampling trajectory first appears linear, then transitions to a curve, and ultimately becomes linear again under the attraction of a certain mode.*” In Fig. 3a the authors interpreted the S-shaped truncation error as the error in the middle range being due to higher curvature. This is possible, but my interpretation is that the earlier trajectory is quite linear, and the later trajectory may not be linear but takes smaller steps, so the L2 error does not increase that much (see [KMTS22] Fig.3, [WV23] Fig.1). If the authors do want to consolidate the curvature intuition, you could directly perform an empirical analysis of how much the trajectory rotates at each part. 
- The failure of ablation study without Adaptive search is worth noticing, from the FID, it seems it failed spectacularly. I’m curious about the author’s interpretation of it (e.g. curvature). My interpretation is that the PCA basis you got from some part of the trajectory (e.g. see Fig. 18 in [WV23]) may be not good, or too contaminated with noise, so not useful to correct the trajectory. Thus those PC may not generally help correct for truncation error. I think a deeper analysis of that part may illustrate why this method may or may not work. [KMTS22] Karras, T., Aittala, M., Aila, T., & Laine, S. (2022). Elucidating the design space of diffusion-based generative models. Advances in neural information processing systems, 35, 26565-26577. [WV23] Wang, B., & Vastola, J. J. (2023). Diffusion models generate images like painters: an analytical theory of outline first, details later. Methods And Evaluation Criteria: The experimental set up and evaluations are quite standard and performed well! - There are some very recent results the authors didn’t include in the benchmark, which could have some shared mechanism with the paper. [WV24] (detailed in missing reference part.) - One aspects the authors didn’t mention is that even though training is super efficient, obtaining the sampling trajectory for training is likely not efficient. In your case obtaining 5k to 10k sampling trajectory with high NFE could take some time. The authors could mention the time of obtaining such trajectories in additional to training (around L311-318). This is actually a bit similar to [WV24], where their acceleration is parameter / training free, but to sample the data to compute PCA takes time, if they don’t have access to training set. [WV24] Wang, B., & Vastola, J. J. (2024). The Unreasonable Effectiveness of Gaussian Score Approximation for Diffusion Models and its Applications. arXiv preprint arXiv:2412.09726. Theoretical Claims: There is not much theoretical claims in this paper. 
The intuition of correcting the more curvy part of the trajectory is interesting; the authors could provide a slightly more formal treatment of it. More connection to some theoretical framework (e.g. [WV24]) could better frame or motivate the method. Experimental Designs Or Analyses: Standard design and analysis. Supplementary Material: Reviewed Appendix A and B. Relation To Broader Scientific Literature: See below. Essential References Not Discussed: **Methods** - Regarding the Figure 2 results on PCA of the sampling trajectory, one reference the authors missed is [WV23], where they also systematically studied the PC structure of sampling trajectories and provided a theoretical account explaining the low-dimensional (mostly 2D) nature of the sampling trajectory. Basically, they showed the low-dimensional subspace is the one spanned by the initial noise and the final image sample, which explains your Fig. 2b: since different trajectories start from different initial noise, they won’t have a shared subspace / low-dimensional structure. The authors of [WV23] also found that a better way to understand these trajectories is to look at the PCs of the projected outcome x0_hat, whose PCs should be closer to those of the image manifold. Fig. 2 in [WV23] and Fig. 1 in your paper seem to have interesting connections. The authors should mention these results in lines 156-160 “*This indicates that the entire sampling trajectory lies in a three-dimensional subspace embedded …*” - The theory and observation that the early diffusion trajectory is well approximated by the Gaussian score trajectory lead to some strong predictions about the PCs you are getting: they should be aligned with the PCs of the target data manifold [WV23]. So the authors could discuss those as further theoretical justifications of the method. 
**Results** - In the main benchmark Table 2, it seems the results the authors get from PAS acceleration are similar to or sometimes slightly worse than the results from a recent paper, [WV24] Fig. 15, which also achieves accelerated sampling via analytical teleportation, based on the PCA of the data manifold without any retraining. They also tested on EDM pretrained on CIFAR and FFHQ, AFHQ, so the NFE and FID numbers are directly comparable to those in your table. The authors could consider adding them to the benchmark. - I have some suspicion that the mechanism of acceleration may be shared, i.e. due to the Gaussian score structure in diffusion. Due to the Gaussian structure, the trajectory from any initial noise state can be analytically predicted based on the mean and PCs of the data, so they can teleport the solution. I guess the mechanism of your PAS acceleration is likely related to that. The authors might benefit from discussing the connections. - On the other hand, one difference is that their acceleration mainly happens in the earlier part of the trajectory (e.g. the high-noise regime), while Table 1 suggests PAS works by correcting more in the low-noise regime, suggesting a different correcting mechanism. - Mentioning this is not to reduce the contribution or novelty of the current result. I think their method would be hard to apply to Stable Diffusion or very high-res images, since they need to compute the PCA of the dataset. Trajectory-based PCA and training seem to circumvent this a bit. [WV23] Wang, B., & Vastola, J. J. (2023). Diffusion models generate images like painters: an analytical theory of outline first, details later.  [WV24] Wang, B., & Vastola, J. J. (2024). The Unreasonable Effectiveness of Gaussian Score Approximation for Diffusion Models and its Applications. arXiv preprint arXiv:2412.09726. Other Strengths And Weaknesses: - The method is simple to implement, parameter- and compute-efficient, and conceptually beautiful. 
- The illustrations and tables were well made, and the authors put effort into explaining their ideas. Other Comments Or Suggestions: - The Table 1 caption is not very clear; the authors could more clearly annotate what the list of numbers shown in the table is. Questions For Authors: N.A. Code Of Conduct: Affirmed. Overall Recommendation: 3
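The [WV23] picture invoked in this review (each trajectory rotating roughly in the 2D plane spanned by its own initial noise and final sample, with no subspace shared across trajectories, cf. Fig. 2b of the paper) can be sanity-checked on a synthetic example. A NumPy toy, where the cosine/sine schedule is a hypothetical stand-in for a real sampler:

```python
import numpy as np

rng = np.random.default_rng(2)
D = 1000

def toy_trajectory(rng):
    """Synthetic trajectory rotating in the plane span{x_T, x_0} ([WV23]-style)."""
    x_T, x_0 = rng.normal(size=D), rng.normal(size=D)
    s = np.linspace(0.0, np.pi / 2, 100)[:, None]
    return np.cos(s) * x_T + np.sin(s) * x_0

def explained(points, k):
    """Fraction of variance captured by the top-k principal components."""
    sv = np.linalg.svd(points - points.mean(axis=0), compute_uv=False)
    return (sv[:k] ** 2).sum() / (sv ** 2).sum()

t1, t2 = toy_trajectory(rng), toy_trajectory(rng)

var2_single = explained(t1, 2)                  # one trajectory: ~all variance in 2 PCs
var2_joint = explained(np.vstack([t1, t2]), 2)  # two trajectories: no shared 2D plane
```

A single trajectory is (by construction) essentially 2D, while stacking two trajectories with independent initial noise spreads the variance over more directions, mirroring the per-trajectory PCA observation discussed above.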
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's detailed review and insightful suggestions! **Abbreviation: CaE (Claims And Evidence), MaEC (Methods And Evaluation Criteria), ERND (Essential References Not Discussed)** >***CaE1: The later trajectory may not be linear but instead takes smaller steps.*** **A**: Your interpretation is reasonable. However, [KMTS22] and [WV23] only analyze the trajectory in 2D, which may lose some information. [R1] visualizes the trajectory in 3D ([R1] Fig. 4) and offers a "boomerang" explanation (linear-nonlinear-linear) in Sec. 3, showing that the later part is indeed linear. We have added a detailed analysis in the revised version. [R1] Chen D et al. On the trajectory regularity of ode-based diffusion sampling. ICML 2024. >***CaE2: The degradation without adaptive search may be due to the obtained PCs not being good.*** **A**: Notably, this paper has conducted a similar analysis in lines 377-384 (right): "The errors in the linear part are negligible; the correction does not further reduce errors and instead introduces noise", highlighting the **necessity and effectiveness of the adaptive search**. >***Expand theory (MaEC1, Theoretical Claims, and ERND_M1): The study misses citing [WV23] and [WV24], which could theoretically motivate Fig. 2b and the correction of the more curved parts (Fig. 1).*** **A**: We have carefully revisited [WV23] and [WV24], which approximate diffusion trajectories by a Gaussian score structure and provide both theoretical and empirical insights. These works are indeed inspiring and insightful! Specifically, [WV23] offers a theoretical analysis in Sec. 4.2, suggesting that diffusion trajectories resemble 2D rotations on the plane spanned by the initial noise ($x_T$) and the final sample ($x_0$). This provides a theoretical underpinning for our Fig. 2b, where trajectories corresponding to different initial noises lie in different subspaces. 
**Correction of the more curved parts**: [WV24] derives an analytical form of the EDM trajectory in Eq. 15 (Gaussian structure): $x_t=\mu+\frac{\sigma_t}{\sigma_T} x_T^\perp +\sum_{k=1}^r \psi (t,\lambda_k)c_k(T)u_k$, which shows that $x_t$ is a linear combination of $x_T^\perp$ and $\mu,u_k$. PAS is designed to learn statistical patterns of the dataset that are independent of the initial noise $x_T$, i.e., the coefficients of $\mu,u_k$. As $\sigma_t \to 0$, the influence of $x_T^\perp$ on $x_t$ diminishes. So, in the low-noise regime (i.e., the more curved parts), PAS better captures the PCs of the data ($\mu,u_k$), leading to more accurate correction. This offers a theoretical explanation for why PAS tends to work better in the more curved regions (Fig. 1). We have added relevant discussions in Sec. 3 and the Related Work of the revised manuscript. The alignment between PAS and the conclusions of [WV23], [WV24] is indeed intriguing! >***MaEC2: Discuss the time taken to obtain the sampling trajectory for training (similar to [WV24]).*** **A**: We did not discuss the cost because other training-based methods also require sampling full or partial trajectories for training. In the revised version, we added a discussion on the time taken: 3.26m for CIFAR10 and 1.79h for Bedroom256 with 50 NFE and 5k trajectories on an A100 GPU. >***ERND_M2 and ERND_R1-3: Discuss the connections and differences between PAS and [WV23], [WV24], and include them as a benchmark.*** **A**: The proposed PAS corrects sampling in the low-noise regime, while [WV24] accelerates sampling in the high-noise regime. So, combining [WV24] with PAS is indeed interesting. Theoretically, we can analytically warm up the PCs from the Gaussian structure and then **correct the sampling starting from the teleported solution** $x_{t'}$, further improving the FID in [WV24] Fig. 15. However, [WV24] is limited by the computational cost of estimating means and covariances on large pixel datasets. 
Furthermore, another key challenge is how to effectively combine the Gaussian (analytical but imprecise under low noise) and neural (precise but costly) structures to obtain more exact PCs at minimal cost. This will be a focus of our future work.

>***ERND_R4: The PAS is hard to apply to very high-res images.***

**A:** No, PAS can **be easily applied to high-res images**. This is because we use `torch.pca_lowrank` to decompose the trajectory matrix $\mathbf{X}^{\prime}\in \mathbb{R}^{(N-i+2)\times D}$ (Eq. 13), where $i\leq N$, $N\leq 10$, so its row dimension is low and the rank $r\ll D$. For Stable Diffusion (SD), $D$ = 64x64x4 from the latent space. In the table below, we compare the time for 1 PCA and 1 NFE; the PCA time is negligible.

Time per 128 samples↓ (s)

| |SD|Bedroom256|
|-|-|-|
|1 NFE|30.20|10.05|
|1 PCA|0.06|0.2|

>***Other Suggestions: The Table 1 caption needs clarification.***

**A**: Thanks for pointing out the issues; we have revised them. Briefly, the list of numbers denotes all the time points requiring correction from the adaptive search, with each element $i\in [N,1]$.

---

Rebuttal Comment 1.1: Comment: The authors' willingness to revisit [WV23] and [WV24] and clarify the connection between the current method and established theoretical results is much appreciated. This connection greatly benefits readers by providing deeper insight into the methodology. Thanks for mentioning the nice paper [R1], which also complements the view from [WV23]. I am particularly enthusiastic about the proposed future direction of using analytical teleportation as a warm-up phase, followed by PAS to correct the more curved segments of the trajectory. However, it might still be helpful to **include the teleportation method in the benchmark tables**, as its contribution to achieving low FID scores would offer readers a more comprehensive view. I agree that combining these techniques will lead to improved FID outcomes.
Additionally, I concur that PAS appears more scalable to high-resolution images on complex datasets. This scalability stems from its approach of factorizing only the current trajectory, in contrast to analytical teleportation, which requires `pca_lowrank` on a larger set of training images. Overall, the paper is clean and highly informative—kudos to the authors for their excellent work!

---

Reply to Comment 1.1.1: Comment: Thank you very much for your response and the recognition of our work! We are so glad to hear that the rebuttal helped clarify how the proposed PAS can be easily applied to high-resolution images on complex datasets. We also agree that combining analytical teleportation with PAS and including the results in the benchmark tables is beneficial. Per your suggestion, we have added additional experiments on CIFAR10, as shown below:

We use $\sigma_{skip} = 10.0$ for the teleportation step ([WV24] Algorithm 1): $\mathbf{x}_{t^{\prime}} \leftarrow \boldsymbol{\mu}+\frac{\sigma_{skip}}{\sigma_{\max}}\left(\mathbf{I}-\mathbf{U}\mathbf{U}^{T}\right)\left(\mathbf{x}_{T}-\boldsymbol{\mu}\right)+\sum_{k=1}^{r}\sqrt{\frac{\sigma_{skip}^{2}+\lambda_{k}}{\sigma_{\max}^{2}+\lambda_{k}}}\mathbf{u}_{k}\mathbf{u}_{k}^{T}\left(\mathbf{x}_{T}-\boldsymbol{\mu}\right)$.

CIFAR10, FID↓

| Method\NFE | 5 | 6 | 8 | 10 |
| ---------------------------- | -------- | -------- | -------- | -------- |
| DDIM | 49.68 | 35.63 | 22.32 | 15.69 |
| + teleportation | 24.50 | 18.41 | 12.04 | 8.78 |
| + PAS (**Ours**) | 17.13 | 12.11 | 7.07 | 4.37 |
| + teleportation + PAS (**Ours**) | **9.15** | **5.16** | **3.65** | **3.16** |
| | | | | |
| iPNDM | 16.55 | 9.74 | 5.23 | 3.69 |
| + teleportation | 7.25 | 4.89 | 3.08 | 2.49 |
| + PAS (**Ours**) | 13.61 | 7.47 | 3.87 | 2.84 |
| + teleportation + PAS (**Ours**) | **5.16** | **3.76** | **2.77** | **2.40** |

As shown in the table, PAS can **indeed** be combined with teleportation to **further improve FID** outcomes.
We observe that **iPNDM + teleportation + PAS** outperforms the results reported in [WV24] Fig. 15 for NFE $\in$ {5, 6, 8, 10} on CIFAR10. These results have been included in the benchmark tables (our Tab. 2). Your comments also motivate us to further explore the potential of the hyperparameter settings (e.g., $\sigma_{skip}$) in the combination of PAS and teleportation, as well as to evaluate their combination on more datasets and with more solvers (e.g., DPM-Solver-v3 [R2]). More results will be incorporated into the benchmark tables in the revised version of the paper. Thank you again for your meticulous review and insightful comments, which have greatly improved our paper!

[R2] Zheng K et al. DPM-Solver-v3: Improved Diffusion ODE Solver with Empirical Model Statistics. NeurIPS 2023.
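For readers who want to see the warm-up step concretely, the teleportation update from [WV24] Algorithm 1 quoted in the reply above can be sketched in a few lines of NumPy. This is a minimal illustration under our own assumptions, not the official implementation: the function name `teleport` and the toy dimensions are hypothetical, and `mu`, `U`, and `lam` stand for the data mean, the top-$r$ orthonormal principal directions, and their eigenvalues, which in practice are estimated from the training set.

```python
import numpy as np

def teleport(x_T, mu, U, lam, sigma_max, sigma_skip):
    """One analytical teleportation step (sketch of [WV24] Alg. 1):
    jump from noise level sigma_max directly to sigma_skip using the
    Gaussian (PCA) structure of the data.

    x_T       : (D,) initial noise sample
    mu        : (D,) data mean
    U         : (D, r) orthonormal principal directions
    lam       : (r,) eigenvalues of the top-r principal components
    """
    d = x_T - mu
    coeff = U.T @ d                  # (r,) projections onto the PCs
    d_perp = d - U @ coeff           # residual orthogonal to the PCs
    # per-component shrinkage sqrt((sigma_skip^2 + lam) / (sigma_max^2 + lam))
    scale = np.sqrt((sigma_skip**2 + lam) / (sigma_max**2 + lam))
    return mu + (sigma_skip / sigma_max) * d_perp + U @ (scale * coeff)
```

A quick sanity check of the formula: with `sigma_skip == sigma_max` every scale factor equals 1 and the step is the identity, while smaller `sigma_skip` shrinks both the orthogonal residual and the PC coefficients toward the data mean, which matches the intended jump to a lower noise level.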