title stringlengths 15 163 | paper_decision stringclasses 4
values | review_1 stringlengths 853 32.6k | rebuttals_1 stringlengths 0 15.1k | review_2 stringlengths 1.03k 35.6k | rebuttals_2 stringlengths 0 15.1k | review_3 stringlengths 807 27.4k ⌀ | rebuttals_3 stringlengths 0 15k ⌀ | review_4 stringlengths 780 22.2k ⌀ | rebuttals_4 stringlengths 0 15.1k ⌀ | review_5 stringclasses 171
values | rebuttals_5 stringclasses 166
values | review_6 stringclasses 25
values | rebuttals_6 stringclasses 24
values | review_7 stringclasses 4
values | rebuttals_7 stringclasses 4
values |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Zero-Inflated Bandits | Accept (poster) | Summary: This paper considers stochastic multi-armed and contextual bandits where the reward distributions are zero-inflated distributions. Formally, each reward observation is a draw of a product random variable $R_t = X_tY_t$ where $X_t$ is a distribution with mean $\mu$ and $Y_t$ is a Bernoulli distribution with parameter $p$. It is assumed that $\mu$ and $p$ are unknown and both may vary across arms.
The paper proposes UCB and TS algorithms for various versions of these problems, with accompanying theoretical and empirical analysis.
The main innovation in the design of UCB and TS algorithms is to not form a single confidence bound for $p\mu$ the expected value of $R_t$, or draw a single TS sample from it, but to form separate indices for $\mu$ and $p$ and combine these, since a quantities based on the entire zero-inflated distribution may scale unnecessarily in $\mu$. The more substantial challenge lies in the analysis, where handling a random number of observations of the non-zero component presents additional complications over the standard analysis used in parametric bandit proofs.
# POST REBUTTAL: Satisfied with proposed modifications and retain a positive score.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: I could not read the entirety of the supplementary material in the time available, but I checked the results leading to Lemma 2.2 in detail and did not find issues. I did however feel that the theoretical contribution could be better explained in the main text:
- Can you give a more detailed sense of how Lemma 2.2 and A.1 resolve the issues of handling the distribution of X in the main text? It seems this is one of the main contributions, and while we get a good sense of the challenge, and the fact that these lemmas resolve it, there isn’t a good sense of how within the main text.
- Similarly in Section 4.2, it’s hard to tell whether there is novelty in the proofs that are seconded to appendix, and if so how much. I think it would help readers to understand the theoretical contribution if this could be concisely expressed in the main text.
Experimental Designs Or Analyses: I felt the experiments were appropriate, and checked the specification in Appendix C, but had some observations about the presentation of results:
- There doesn’t appear to be an explanation of what the figures are showing in the Experiment section. Are the plotted lines means or medians, are the vertical lines plus/minus standard deviations, or quantiles, or max/mins? Why do some curves appear not to have error bars at all?
- There are elements of the experimental setup that are unclear. E.g. for the MAB problems, what are the parameters of the Gaussian/Mixed Gaussian/Exponential components, I could only find details of p.
Supplementary Material: Sections B, C, and E.
Relation To Broader Scientific Literature: I felt this was mostly done appropriately, it seems that
- Liu et al. (2023, arxiv:2311.14349) Thompson sampling for zero-inflated count outcomes with an application to the Drink Less mobile health study.
is worth mentioning but it was the only other relevant paper on zero-inflation in bandits I could find.
Essential References Not Discussed: See above.
Other Strengths And Weaknesses: The paper provides a thorough treatment of the zero-inflated bandit problem, covering MAB and contextual problems, TS and UCB-based approaches, and problem-dependent and problem-independent results. The experiments are sensible to evaluate these algorithms and the work is well motivated and connected to the literature and potential avenues for future work. Aspects of the theoretical and methodological contribution may indeed be translations of existing tools to this new setting, but I think there is sufficient innovation here to interest an ICML audience.
Other Comments Or Suggestions: Minor observations:
- L038: a chapter or page reference to Lattimore and Szepesvari would be helpful here, as done on L068 and L430.
- L154: clarify the context in which it is commonly assumed – I presume this means in design and analysis of bandit algorithms (but the rest of the paragraph was about defining sub-Weibulls so it’s not the clearest).
- L177: Error with the reference to Corollary “??”
- A lot of equations are squashed into in-line text, presumably to save vertical space in the 8 page template. I’d recommend using some of the post-acceptance additional allowance to remedy this so things are easier to read, especially on pages 4 and 5.
- L238: I’m not sure the less than equal to notation used in Theorem 4.1 is defined?
A proof read for grammar would improve the readability of the paper, e.g.:
- L048: distributions*
- L076: follow*
- L078: games*
- L115: follow*
- L146: prone to be heavily influenced by their estimation errors*
- L234-5: We note*
- L230: established used as an adjective here suggests they already exist and or not a new contribution, I think you mean that you establish these results in this paper?
Questions For Authors: Mostly repeating questions from above for ease of reference:
- Sections 1 and 2 refer to Figure 1 (a) and (b) but without a full explanation of what the figures show. For instance, it’s not clear how 1 (a) provides an example of bandit algorithms failing to utilize the distribution property, and how the confidence bounds in 1 (b) are constructed. Could this be rectified?
- Can you give a more detailed sense of how Lemma 2.2 and A.1 resolve the issues of handling the distribution of X in the main text? It seems this is one of the main contributions, and while we get a good sense of the challenge, and the fact that these lemmas resolve it, there isn’t a good sense of how within the main text.
- Similarly in Section 4.2, it’s hard to tell whether there is novelty in the proofs that are seconded to appendix, and if so how much. I think it would help readers to understand the theoretical contribution if this could be concisely expressed in the main text.
- There doesn’t appear to be an explanation of what the figures are showing in the Experiment section. Are the plotted lines means or medians, are the vertical lines plus/minus standard deviations, or quantiles, or max/mins? Why do some curves appear not to have error bars at all?
- There are elements of the experimental setup that are unclear. E.g. for the MAB problems, what are the parameters of the Gaussian/Mixed Gaussian/Exponential components, I could only find details of p.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely appreciate your thoughtful and encouraging review. Below, we provide point-by-point responses to your comments. We look forward to refining our manuscript to address these valuable suggestions.
*Theoretical Claims:*
We sincerely thank you for your thoughtful and encouraging comment. As you rightly pointed out, obtaining valid concentration bounds for the non-zero component $X$ is a core challenge in both algorithm design and regret analysis. Lemma 2.2 (sub-Weibull noise) and Lemma A.1 (heavy-tailed noise) are indeed key technical contributions that enable us to build confidence intervals. By decoupling the ZI mechanism from the tail structure of X, these lemmas preserve tightness without imposing overly conservative assumptions.
We also appreciate your suggestion regarding Section 4.2. Due to space limitations, we deferred many details of the regret analysis to the appendix, including several nontrivial proof techniques that are tailored to the ZI structure and product-form rewards. In the revision, we will explicitly outline in the main text which parts of the analysis are standard and which are novel, to help readers more clearly appreciate the theoretical contribution.
*Experimental Designs & Analyses:*
All plots show mean cumulative regret over 25 independent runs, with shading indicating $\pm 1 / 10$ standard deviation. Sometimes the shaded region is nearly invisible when variability is small. We will update figure captions and the main text to clarify this, ensuring the error bars' scale is apparent.
To improve reproducibility, we will explicitly detail our reward models:
- Gaussian: Nonzero parts from a $N(\mu_k, 1)$, with $\mu_k \sim U(0, 100)$;
- Mixed Gaussian rewards: The non-zero rewards from
$$
(1 - p_k) \times N \left( \frac{\mu_k}{2(1 - p_k)}, \sigma^2\right) + p_k \times N \left( \frac{\mu_k}{2p_k}, \sigma^2\right)
$$
which ensures that the overall mean of the non-zero component is $\mu_k$.
- Exponential: The non-zero rewards from an exponential distribution with mean $\mu_k$, where $\mu_k \sim U(0, 100)$.
We will include these details in Section 5 and Appendix C of our revised submission.
*Relation To Broader Scientific Literature:*
Thank you for bringing [1] to our attention. We will include this citation in our introduction, noting that it treats zero-inflated count outcomes via Poisson/negative binomial models. Our method accommodates more general real-valued ZI rewards (Gaussian, exponential, heavy-tailed, etc.). Despite this difference in scope, we recognize the close connection and will highlight its relevance.
*Other Comments & Suggestions:*
- L038: We will include the specific reference to Chapter 9 in [1], consistent with the other citations in the manuscript.
- L154: Yes, the assumption refers to a common modeling assumption for reward distributions in the design and analysis of bandit algorithms. We will clarify this in the revised version.
-L177: This was intended to refer to a corollary comparing constant terms in regret bounds between our method and the canonical UCB algorithm. We appreciate the reviewer catching this and will correct it in the final version.
- R238: The ``less than or equal to" symbol ($\lesssim$) is used to indicate inequality up to constants independent of bandit specific parameters.
- R230: We appreciate the clarification. Our intention was to say that these results are established within this paper. We will revise the sentence to avoid ambiguity.
We also appreciate your suggestions regarding inline equations, grammar, and formatting. We will use the post-acceptance phase to improve readability, split long inline equations, and proofread thoroughly.
*Question:*
We appreciate your helpful question. The purpose of Figure 1(a) is to present a motivating real-world example (introduced in [L81]) that illustrates the prevalence of zero rewards in practice. This highlights the limitations of traditional bandit algorithms that do not account for such structural sparsity and motivates the need for methods that explicitly model the zero-inflated nature of the reward.
Figure 1(b) compares illustrative confidence bounds under different algorithms. In addition to our proposed method and the Monte Carlo baseline (constructed from empirical samples), the other bounds correspond to those used by the UCB baselines described in Appendix C (Simulation Supplement). We acknowledge that these details were not sufficiently clarified in the main text, and we will revise the captions and discussion accordingly to make the interpretations of both Figure 1(a) and 1(b) more transparent in the updated manuscript.
*References:*
[1] Liu, X., Deliu, N., Chakraborty, T., Bell, L., & Chakraborty, B. (2023). Thompson sampling for zero-inflated count outcomes with an application to the Drink Less mobile health study. arXiv preprint arXiv:2311.14359.
[2] Lattimore, T., & Szepesvári, C. (2020). Bandit algorithms. Cambridge University Press.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response and congratulations of a well-written paper with positive reviews. I welcome the commitments to make improvements to the paper, and retain my accept score. | Summary: This paper “Zero-Inflated Bandits” focuses on the issue of sparse rewards in multi-armed bandit (MAB) and contextual bandit applications. The authors propose a zero-inflated bandit (ZIB) algorithm framework to enhance learning efficiency by leveraging the zero-inflated distribution structure.For zero-inflated multi-armed bandits, the paper presents a model that characterizes the reward distribution with parameters such as non - zero probability \(p_{k}\) and mean of the non - zero part \(\mu_{k}\). To address the shortcomings of naive approaches, the product method is introduced to construct more effective upper confidence bounds (UCB). The Thompson Sampling (TS) approach is also extended for this model. In the context of zero-inflated contextual bandits, the model is further extended, and UCB and TS algorithm templates are proposed. These algorithms construct confidence bounds for exploration and estimate parameters to optimize decisions.Theoretical analysis of the regret bounds for UCB and TS algorithms in both MAB and contextual bandits is conducted. Extensive experiments, including simulations in MAB and contextual bandits and a real - data application, demonstrate that the proposed UCB and TS algorithms consistently achieve lower sub - linear regrets, outperforming baseline methods that ignore the zero-inflated structure or directly quantify uncertainty.
Claims And Evidence: The independence assumptions in the TS algorithm proofs are too strong. In reality, factors affecting rewards are often interrelated. I suggest the authors list all assumptions in a dedicated section, discuss their implications, justifications, and potential consequences of violation to enhance the research's transparency.
Methods And Evaluation Criteria: The paper's omission of analyzing the time and space complexity of the proposed algorithms is a weakness. Understanding the computational resources required by these algorithms is crucial for practical applications, especially in large - scale and real - time scenarios. Without such analysis, it's hard to assess the algorithms' scalability and efficiency. I recommend the authors conduct and report this analysis.
Theoretical Claims: The paper briefly mentions the link to heavy tail and long tail bandit research. Since zero-inflated distributions can be a special case of heavy tail distributions, a more in - depth discussion is needed. This should cover how the proposed algorithms relate to existing ones in these areas and how the distributions' properties interact.
Experimental Designs Or Analyses: (a) There is an absence of comparison with heavy tail and long tail bandit algorithms in experiments. Given the relevance, such comparisons are crucial to assess the proposed algorithms' performance comprehensively. (b) The absence of comparisons with semi-parametric bandit algorithms in the experiments is a significant oversight. Given the relevance of semi-parametric bandit research to the topic of this paper, such comparisons are essential for a comprehensive evaluation of the proposed algorithms. (c) if the author could conduct a real online AB testing for the method, it would be better because the real environment will break assumptions normally.
Supplementary Material: Yes. The appendix of the paper is excessively long, which not only makes the paper cumbersome but also potentially deters readers from fully engaging with it. More importantly, amidst the voluminous content, the key points are not well - highlighted.
Relation To Broader Scientific Literature: Focusing on zero-inflated bandits is valuable for real - world applications with sparse rewards, and the proposed algorithms can enhance learning efficiency.
Essential References Not Discussed: There are some citation omissions. The work by Peng Y, et al. titled "A practical semi-parametric contextual bandit" presented at IJCAI in 2019 should be included in the references. This paper likely contributes to the semi-parametric bandit literature and its omission undermines the comprehensiveness of the paper's literature review. Another is Liu X, et al. Thompson sampling for zero-inflated count outcomes with an application to the Drink Less mobile health study, which is very related to this work.
Other Strengths And Weaknesses: None
Other Comments Or Suggestions: None
Questions For Authors: How can one pre-determine whether the current environment is long-tailed or zero-inflated before utilization?
Ethical Review Concerns: None
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your thorough and constructive feedback. Below, we respond to each of your points. We hope this addresses your concerns.
*Claims & Evidence:*
Our main independence assumption is that $X$ (nonzero rewards) and $Y$ (the indicator of a nonzero outcome) are independent in the decomposition $R = X \cdot Y$. We do not assume independence across arms or across time. This independence assumption is introduced mainly for notation simplification and analytical tractability, as discussed in lines L130–L140. We agree we should state this more explicitly in both the MAB and GLM contexts.
*Methods & Evaluation Criteria:*
We appreciate the suggestion to detail computational cost. We have found that our ZI-based approach retains the same big-O complexity as standard baselines for both MAB and GLM. The only difference is a small constant overhead from maintaining two estimators (one for the zero indicator, one for nonzero magnitude) instead of a single estimate of $R$. We will clarify this in our revision.
*Theoretical Claims:*
Thank you for your insightful suggestion. We agree that connections to heavy-tailed bandits [1,2,3] and asymmetric bandits [4,5,6] are relevant and worth highlighting. While our approach is specifically tailored to the ZI structure, it remains compatible with heavy-tailed settings (e.g., allowing only $(1+\epsilon)$-th moments as in [1,2,3]) and provides a structural alternative to modeling asymmetry, rather than relying on auxiliary tools like empirical quantiles or calibration, which increase computational complexity (e.g., [5] and [6]). We will incorporate these related works and clarify distinctions in our revision—thank you again for the valuable feedback.
*Experimental Designs & Analyses:*
Following your advice, we included Q-SAR [6] for MAB and SPUCB [7] for GLM as baselines in two uploaded anonymous figures ([Figure 1](https://anonymous.4open.science/r/ZIB_ICML-2535/MAB_extra_QSAR.pdf) and [Figure 2](https://anonymous.4open.science/r/ZIB_ICML-2535/GLM_CB_extra.pdf)). In zero-inflated regimes, Q-SAR's reliance on quantile updates can become unstable when zeros dominate; it tends to over/underestimate crucial quantiles. In contrast, our ZI-based methods explicitly model the sparse reward mechanism, yielding more stable learning and lower regret in both MAB and contextual settings.
We also evaluated a standard A/B testing baseline (uniform allocation) in a zero-inflated Gaussian environment with periodically drifting means, as shown in the [anonymous figure](https://anonymous.4open.science/r/ZIB_ICML-2535/AB_testing_new.pdf). This controlled synthetic setting introduces nonstationarity by perturbing each arm’s mean with Gaussian noise (standard deviation 5) every $T/3$ rounds. We also explored alternative drift magnitudes and observed qualitatively similar trends. Even under moderate nonstationarity, our UCB and TS methods adapt better than fixed allocation. We will include these comparisons in the revised appendix.
*Supplementary Material & Essential References Not Discussed:*
We agree the current appendix is lengthy. To address this, we will: (1) Add a summary table at the appendix start, mapping each section to its key results; (2) Reduce repetition and highlight core takeaways, ensuring a clearer structure. We will also reference in the related work and clarify the ties to zero-inflated approaches.
*Questions:*
Determining whether an environment is zero-inflated or heavy-tailed can be guided by domain knowledge (e.g., high-frequency ``no feedback" in ads or loan offers) and data diagnostics (e.g., moment tests [9], sub-G plots [10]). Though a rigorous classification lies beyond this paper’s scope, we consider it an important direction for real-world deployments.
References:
[1] Bubeck, S., Cesa-Bianchi, N., & Lugosi, G. (2013). Bandits with heavy tail.
[2] Zhang, J., & Cutkosky, A. (2022). Parameter-free regret in high probability with heavy tails.
[3] Cheng, D., Zhou, X., & Ji, B. (2024). Taming Heavy-Tailed Losses in Adversarial Bandits and the Best-of-Both-Worlds Setting.
[4] Zhang, M., \& Ong, C. S. (2021). Quantile bandits for best arms identification.
[5] Shi, Z., Kuruoglu, E. E., & Wei, X. (2022). Thompson Sampling on Asymmetric Stable Bandits.
[6] Zhang, M., & Ong, C. S. (2021). Quantile bandits for best arms identification.
[7] Peng, Y., Xie, M., Liu, J., Meng, X., Li, N., Yang, C., … & Jin, R. (2019, August). A practical semi-parametric contextual bandit.
[8] Liu, X., Deliu, N., Chakraborty, T., Bell, L., & Chakraborty, B. (2023). Thompson sampling for zero-inflated count outcomes with an application to the Drink Less mobile health study.
[9] Trapani, L. (2016). Testing for (in)finite moments.
[10] Zhang, H., Wei, H., & Cheng, G. (2023). Tight non-asymptotic inference via sub-Gaussian intrinsic moment norm. | Summary: The submission studies multi-armed bandits whose reward function follows the zero-inflated (ZI) distribution. The motivation is to investigate the advantages of distribution modeling and exploiting the problem specific structure. UCB and TS are modified to solve the MAB and the contextual bandit problems under the zero-inflated regime. The corresponding regret bounds and experimental verifications are provided.
## Update after rebuttal
The authors' reply addressed my concerns. Given that, I have revised my recommendation to weak accept.
Claims And Evidence: Theoretical claims are supported by proofs.
Experimental claims have less support. See Experimental Designs Or Analyses for more comments.
Methods And Evaluation Criteria: The notion of regret is an appropriate evaluation criterion for the theoretical results.
The synthetic datasets and the real-world dataset are suitable for evaluating the ZI methods.
Theoretical Claims: The submission provides matching regret bounds for the ZI methods. Due to the multiplicative nature of the reward function (equation (1)), the submission developed technical tools to facilitate the proof. These technical contributions are Lemmas E.2 [1391], E.3 [1480], E.4 [1502].
However, if the motivation of this submission is to investigate the benefits of exploiting the ZI structure, then proving matching regret bounds is simply a sanity check on the proposed methods. Ideally, the proofs should show when and how the regret bounds are improved by considering the ZI structure. Unfortunately, these results are lacking.
Experimental Designs Or Analyses: There is only one real-world dataset used to verify the proposed method. However, in Introduction, there are other practical scenarios that can be used to verify the proposed methods [054R–081L]. In addition, the baselines in the real-world experiment are UCB and TS, which are developed for MAB. Instead, appropriate baselines should be the contextual bandit algorithms.
One motivation for proposing the ZI model is to deal with sparse reward signals. Baselines aimed at tackling sparse rewards should be included in the synthetic experiments. Related papers are https://arxiv.org/abs/1706.01383 and https://proceedings.neurips.cc/paper_files/paper/2023/hash/9408564a4229f4a933ac9bd09a29ee96-Abstract-Conference.html. Moreover, the proposed method does not always perform the best in figures 5 [1032] and 7 [1131]. The pros and cons of the proposed method are not discussed in the main text.
Supplementary Material: I have read Appendices B, C, and E.
Relation To Broader Scientific Literature: The developed algorithms and the technical contributions provide bandit solutions for the ZI regime. It is, however, unclear if the technique of this submission can be generalized to other regimes.
Essential References Not Discussed: Literature with sparse rewards should be included as a part of related work. Related papers are https://arxiv.org/abs/1706.01383 and https://proceedings.neurips.cc/paper_files/paper/2023/hash/9408564a4229f4a933ac9bd09a29ee96-Abstract-Conference.html.
Other Strengths And Weaknesses: Weaknesses
The submission did not justify the rationale for the ZI model [118L]. Why is ZI a good model for sparse rewards while the others are not? What is the intuition behind $X$ and $Y$? Why does $X$ have to be defined by $\mu$ and $\epsilon$? Can’t $X$ just be a distribution? Why is $\epsilon$ restricted to sub-Weibull, sub-Gaussian, and heavy-tailed only?
There is no discussion of the applicability of ZI in realistic scenarios (applications are mentioned in Introduction, but it is difficult for a reader to connect the applications to the modeling part [110L–140L]).
What is the feedback signal? There are X, Y, R in the main text [110L–140L], but only R and Y in the algorithm.
There are unreferenced pointers ([177R] and [1254]).
Other Comments Or Suggestions: Please see the comments in the above parts.
Questions For Authors: Please see the questions in the above parts.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your detailed review. We greatly value your time and suggestions, and we hope the following clarifications and enhancements address your concerns. We respectfully ask you to consider revising your evaluation score if our replies resolve your reservations.
*Theoretical Claims:*
Our principal contribution lies not only in matching known regret bounds (a necessary sanity check) but in showing how and why leveraging the ZI structure yields performance gains. As discussed in Section 2 ([R110–R149]) and supported by Lemma D.1 [1278] and Figure 10 [1347], ignoring ZI can inflate variance estimates and produce looser concentration bounds. In contrast, identifying and estimating the nonzero reward part more accurately (rather than lumping all observations into a single variance term) avoids under-exploration. This insight is partly recognized in various works (e.g., [1], [2]) but had not been systematically applied to ZI bandits. We will revise the main text to highlight the link between better variance estimation (using ZI) and improved regret.
*Experimental Designs & Analyses:*
- In Section 5 ([R345–R356]), we indeed used contextual UCB and TS baselines that incorporate covariates via a GLM-based model (described in Appendix C.2). We will clarify this in the main text.
- Our chosen U.S. online auto loan dataset is both large and exhibits notable ZI properties (high reward sparsity, heterogeneous covariates). Additional datasets are desirable but space-limited, and we plan to expand this line of empirical validation in future work.
- We compare our UCB (or TS) method to other UCB (or TS) algorithms in each experimental setting. Occasional underperformance against certain proxy-base UCB methods occurs mainly with exponential rewards when $p_k \sim U[0.1, 0.3].$ However, as shown in Lemma D.1, such proxies can become unreliable under high variance or ZI. Our approach is generally more robust and practical due to directly modeling ZI.
We have also added two uploaded anonymous figures ([Figure 1](https://anonymous.4open.science/r/ZIB_ICML-2535/size_ratio_1.pdf) and [Figure 2](https://anonymous.4open.science/r/ZIB_ICML-2535/size_ratio_2.pdf)) to illustrate how large variance-to-mean ratios may briefly favor alternative baselines but ultimately highlight the value of stable ZI modeling.
*Essential References Not Discussed:* We appreciate you pointing us to [3] and [4]. These works consider sparsity in different senses (e.g., many arms having zero mean or zero losses in partial monitoring). Our ZI setting, by contrast, deals with a stochastic zero draw, even for arms with nonzero mean, leading to different concentration/variance behaviors. We will clarify these distinctions in our introduction and related work sections.
*Weaknesses:*
- Rationale: Our approach targets scenarios where actions yield zero reward with high probability (not merely zero mean). This modeling accurately depicts domains like loan offers, recommender systems, or online ads, where a user typically rejects or ignores an action, creating a structural zero.
- Decomposing Rewards: We define $R = X \times Y$, with $Y = 1(R \neq 0)$ and $X = 1(R \neq 0) \cdot R$. This decomposition (discussed in [L130–L140]) separates the high-probability zero event from the nonzero reward.
- Intuition Behind $X, Y, \mu$, and $\epsilon$: We let $\mu = E[X]$ and $\epsilon = X - \mu$ to center analyses on deviation from the mean. We classify $\epsilon$ into sub-Gaussian, sub-Weibull, or heavy-tailed categories, each with different tail decay properties (following [1], [2]). Handling these classes separately is standard in bandit theory. More extreme no-moment or adversarial settings (e.g., [5]–[8]) fall outside this paper’s scope, though we will mention them as possible extensions.
- Observed Variables: Only $R$ is directly observed during interaction. We define $X$ and $Y$ to analyze the structure of zero vs. nonzero rewards, but the algorithm indeed only sees $R$. We will clarify this point.
- Unreferenced Pointers: We have fixed [1254] (now pointing to Section 5) and [177R] (referring to a corollary comparing constants with classical UCB). These errors will be corrected in the revised manuscript.
References:
[1] Lattimore, T., & Szepesvári, C. (2020). Bandit Algorithms.
[2] Zhou, P., Wei, H., & Zhang, H. (2024). Selective Reviews of Bandit Problems in AI via a Statistical View.
[3] Kwon, J., Perchet, V., & Vernade, C. (2017). Sparse Stochastic Bandits.
[4] Tsuchiya, T., Ito, S., & Honda, J. (2023). Stability-Penalty-Adaptive FTRL: Sparsity, Game-Dependency, and Best-of-Both-Worlds.
[5] Yun, H., & Park, B. U. (2023). Exponential concentration for geometric-median-of-means.
[6] Bubeck, S., Cesa-Bianchi, N., & Lugosi, G. (2013). Bandits with heavy tail.
[7] Zhang, J., & Cutkosky, A. (2022). Parameter-free regret in high probability with heavy tails.
[8] Cheng, D., Zhou, X., & Ji, B. (2024). Taming Heavy-Tailed Losses
---
Rebuttal Comment 1.1:
Comment: Thank you for the reply. My concerns are addressed. I will revise my score.
---
Reply to Comment 1.1.1:
Comment: Thank you so much for your constructive and positive feedback, and appreciating our rebuttal!! | Summary: This paper considers a multi-armed bandit setting, where reward distributions are contaminated with a zero point mass with weight $p$. To accommodate this special reward structure, which returns rewards of 0 with probability $1-p$, and rewards distributed according to an arm-specific sub-Weibull distribution otherwise. The authors leverage a product trick, which uses a union bound over the uncertainty in parameter $p$ and the mean of the nonzero distribution to obtain product confidence intervals, which underlie the modified UCB and Thompson Sampling algorithms proposed in their work. The authors provide theoretical results that show their approach achieves known minimax lower bounds, and their empirical results demonstrate the benefits of their approach on both synthetic and real-world data.
Claims And Evidence: All claims and proofs are well supported, although the regret rates could be discussed more cleanly with respect to existing work.
For example, as long as the mixed distribution satisfies some subgaussian (or sub-Weibull) property, then minimax regret rates will be attained by specifying the correct subgaussian (or sub-Weibull) factor. The main gain (at least in terms of order optimality of regret) is only in the heavy-tailed case, which could be emphasized more cleanly.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria (for both the synthetic example and real-world experiment) evaluate the proposed method.
One interesting case to see (at least from an empirical standpoint) would be bounded rewards (rather than just sub-gaussian reward distributions), which often occurs in practice. It may be the case that modeling the nonzero distribution may be most beneficial when the nonzero distribution component is skewed far away from zero.
It could also be interesting to test larger values of $p$ - this setting could also be adversarial to the benefits of this approach.
Theoretical Claims: Proofs were briefly skimmed, but not evaluated in detail.
Experimental Designs Or Analyses: The experiments do correspond closely with the claims of the authors, and capture the multiple nonzero reward generating distributions that may be contaminated with a zero point mass.
It would be helpful to have higher simulation numbers - 50 and 25 simulations for figures 2 and 3 seems somewhat smaller than expected. Likewise, it would be helpful to get better intuition on why this approach works well for synthetic data, and works relatively worse on the real world data.
Supplementary Material: N/A
Relation To Broader Scientific Literature: The key results of this paper lies in its practical relevance - zero-inflated distributions are very common in settings such as digital advertisement. While this paper does not introduce novel tools, it provides a practical, simple solution for a setting that occurs in many practically relevant scenarios where bandits are applied.
Essential References Not Discussed: All relevant references seem to be present in this work.
Other Strengths And Weaknesses: We summarize the strengths and weaknesses of this paper below:
** Strengths **
* Most importantly, this work considers a common setup that occurs in practice, across many different fields. Zero-inflated distributions for arm rewards is a practically relevant setting to study.
* The authors offer a simple, computationally lightweight solution to this setup that requires little modifications to existing bandit algorithms.
* The method appears to perform empirically well, especially for heavy-tailed distributions.
** Weaknesses **
* For heavy-tailed distributions contaminated with a zero point mass, those distributions are no longer so heavy-tailed. It is unclear whether the modeling of zero inflation or a poorly specified scale factor is the cause of increased performance.
* The authors rely on a union bound, which is sufficient for order optimal regret. One wonders if there could be a better way to split the confidence (even if union bounding) than $\alpha/2$, or to avoid wasteful union bounds all-together.
Other Comments Or Suggestions: There seems to be a reference error in Line 176 of the manuscript (end of Section 2.1).
Questions For Authors: Beyond zero inflation, are there other reward models that are best captured with this hierarchical approach? The product method for constructing confidence intervals appears to be a general approach for more complicated reward generation (i.e., we don't necessarily need to fix one distribution to be Bernoulli). Are there other settings where this could be done?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely thank you for your thoughtful and encouraging feedback. Below, we respond to your comments point by point, and we hope our replies provide clear and satisfactory answers to your questions.
*Claims & Evidence:*
While sub-Gaussian or sub-Weibull assumptions allow existing algorithms to attain minimax rates, ZI introduces unique challenges. In particular, a preponderance of zeros can inflate variance estimates and skew concentration. By explicitly modeling and leveraging the ZI structure, our approach preserves minimax guarantees across a wide array of distributions, including those with heavy-tailed nonzero components. We will revise Lemma 2.1 and related text ([L161–L164]) to emphasize these benefits more clearly.
*Methods & Evaluation:*
We tested our approach with bounded, skewed Beta rewards ([anonymous figure](https://anonymous.4open.science/r/ZIB_ICML-2535/MAB_for_Beta.pdf)) of the form $X_k = 2\mu_k Beta(p_k, p_k)$ so that $E[X_k] = \mu_k$. Our UCB method outperforms baselines in most settings except when zero-inflation is minimal, where an exact Hoeffding-based UCB can be slightly better (though it relies on unavailable knowledge). Even at high $p_k \sim U(0.75, 0.95)$ Low ZI ([anonymous figure](https://anonymous.4open.science/r/ZIB_ICML-2535/MAB_for_large_p.pdf)), our approach remains strong under Gaussian and Exponential components. We will add these experiments to the appendix.
*Experimental Designs & Analyses:*
We agree that increasing simulation replications to 100 will yield more reliable comparisons, and we have begun doing so for both MAB and GLM contextual bandits. The performance gap between synthetic and real data likely stems from unobserved confounders in the real-world dataset; still, under the same ZI assumptions, our UCB and TS methods outperform other baselines. We will clarify these observations in the revision.
*Weakness 1:*
Our discussion of heavy-tailed rewards refers to the conditional distribution of nonzero outcomes, since the zero mass makes the entire distribution no longer heavy-tailed in the strict sense. By modeling the zero mechanism separately, our approach avoids erroneously inflating or deflating uncertainty due to frequent zeros, enhancing robustness across diverse reward distributions.
*Weakness 2:*
As highlighted in [L421–R423], our current allocation of the failure probability between the concentration bounds for $X$ and $Y$ can be conservative. A more refined analysis, possibly peeling confidence sets for $X$ and $Y$ individually, could yield tighter bounds and smaller constants, aligning with advanced concentration methods (e.g., Section 9.3 of [1], Section 1.2 of [2]). We will mention these refinements in the revision.
*Other Comments & Suggestions:*
We have fixed the reference error at [176R]. Furthermore, as noted in our Broader Implications [R424–R436], our product-form confidence intervals naturally extend to hierarchical or multi-layer reward structures. By decomposing the variance of each sub-component and applying Freedman or Bernstein-type bounds, one can tackle even more complex reward mechanisms. This direction may be especially relevant for recommendation systems or multi-stage decision-making.
For instance, consider a reward model of the form
$$
(Y_1, \ldots, Y_m) \sim \operatorname{Multi} (1; p_1, \ldots, p_m), \qquad R = \sum_{j = 1}^m X_j Y_j,
$$
where the reward is determined by sampling one of the $X_j$’s with probability $p_j$. This model captures structured reward uncertainty arising from latent selection or allocation mechanisms. In such settings, the reward inherits a mixture structure that can be decomposed, and concentration bounds can be constructed component-wise. Specifically, Freedman's inequality or Bernstein-type bounds for martingales can be adapted to leverage variance information
$$
P(S_n - n r > t) \leq P (S_n - nr \geq t, V_n \leq v) + P (V_n > v),
$$
where $S_n$ is the cumulative reward and $V_n$ is its empirical variance. The first term allows for tighter bounds via Bernstein-type inequalities when variance is controlled, while the second term can be analyzed similarly to our ZI setting, noting that
$$
V =\sum_{j = 1}^m p_j E [X_j]^2 - \bigg( \sum_{j = 1}^m p_j E [X_j] \bigg)^2.
$$
This decomposition lends itself naturally to algorithm design, in which estimates and confidence bounds are tracked separately for each layer of the reward model. We believe such hierarchical formulations represent a promising direction for future research beyond ZI, especially in domains like recommendation systems, multi-stage decision-making, or online pricing, where reward generation often involves latent stochastic mechanisms.
*References:*
[1] Lattimore, T., & Szepesvári, C. (2020). Bandit Algorithms. Cambridge University Press.
[2] Ren, H., & Zhang, C. H. (2024). On Lai’s Upper Confidence Bound in Multi-Armed Bandits. arXiv preprint arXiv:2410.02279. | null | null | null | null | null | null |
Measuring Diversity in Synthetic Datasets | Accept (poster) | Summary: The paper introduces DCScore, a novel method for measuring diversity in synthetic datasets from a classification perspective. From my analysis, the key innovation is reformulating diversity evaluation as a sample classification task, where each sample should be distinguishable enough to form its own class. The authors demonstrate this approach satisfies important theoretical properties while outperforming existing metrics across multiple datasets and evaluation criteria.
Claims And Evidence: The central claims about DCScore's effectiveness are well-supported through extensive experiments across different evaluation scenarios. From my experience, the reported correlations (Spearman's ρ > 0.96) with multiple diversity pseudo-truths are quite strong. However, it would be better to add more investigation of failure cases and limitations
Methods And Evaluation Criteria: From my analysis, the validation of theoretical properties (effective number, symmetry, etc.) strongly supports the method's soundness. However, it would be better to add more discussion of the hyperparameter sensitivity of the classification temperature τ.
Theoretical Claims: I have carefully reviewed the theoretical foundations and proofs in Section 4.2 and Appendix B. The axiomatic guarantees (effective number, identical samples, symmetry, monotonicity) are mathematically sound and well-proven. The complexity analysis comparing with VendiScore is thorough.
Experimental Designs Or Analyses: From my perspective, a key strength is using multiple correlation measures (τg, human, LLM) as pseudo-truths. However, it would be better to add diversity evaluations on more real-world synthetic datasets beyond those augmented by LLMs.
Supplementary Material: I thoroughly reviewed the appendices containing implementation details, proofs, and additional experiments. The material substantially strengthens the paper's claims, particularly Appendix B's theoretical proofs.
Relation To Broader Scientific Literature: The work builds meaningfully on existing diversity metrics while making novel contributions. From my experience, it would be better to add discussion of connections to other classification-based metrics in machine learning beyond just diversity measurement.
Essential References Not Discussed: The paper should discuss recent work on classification-based evaluation metrics, particularly "Beyond Accuracy: Behavioral Testing of NLP Models with CheckList" (Ribeiro et al., 2020) which provides relevant insights about classification-based evaluation.
Other Strengths And Weaknesses: The key strength is the novel classification perspective that provides both theoretical guarantees and strong empirical results. The main weakness is limited discussion of the method's applicability to non-text modalities.
Other Comments Or Suggestions: NA
Questions For Authors: How does the method perform on multi-modal synthetic data where text is just one component?
What is the sensitivity of the method to the choice of classifier architecture beyond the examined options?
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: We appreciate the reviewer's recognition of our work. We respond to the reviewer’s question as follows. **Limited by the space, we present our additional experiments in an anonymous URL** (https://anonymous.4open.science/r/ICMLRebuttal_DCScore).
---
>Q1: It would be better to add more investigation of failure cases and limitations
R1: Thank you for your suggestive comment. **One of the primary limitations of DCScore is its inapplicability to multimodal data**. This limitation arises from the challenges associated with feature extraction and alignment across different modalities, which can affect the calculation of the classification probability matrix in DCScore. **We will include a detailed discussion of this limitation in future versions of our paper and explore potential solutions in our future research.**
>Q2: It would be better to add more discussion of the hyperparameter sensitivity of the classification temperature τ.
R2: We would like to clarify that we **have conducted sensitivity experiments on the classification temperature $\tau$ in Section 5.4** of our paper. These experiments explore how different values of $\tau$ influence the classification resolution. Specifically, a lower $\tau$ enhances the discriminative ability of DCScore to different samples.
>Q3: It would be better to add diversity evaluations on more real-world synthetic datasets beyond those augmented by LLMs.
R3: Thank you for your suggestion. We have conducted several experiments across different existing text and image datasets. Specifically, in **Figure 4 of the anonymous URL**, we present diversity scores for the AGNews/SST/Yelp_A.P. datasets augmented by AttrPrompt[1]. In **Figure 2 of the anonymous URL**, we show results for image data. Overall, DCScore shows strong correlation with baseline methods.
Reference:
[1] Large language model as attributed training data generator: A tale of diversity and bias, Neurips 2023.
>Q4: From my experience, it would be better to add discussion of connections to other classification-based metrics in machine learning beyond just diversity measurement.
R4: Thank you for your thoughtful review comment. We will include a discussion on classification-based metrics in machine learning in future versions of our paper. To the best of our knowledge, we **have identified classification-based methods primarily within the domain of metric learning**. To further enhance our work, we would greatly appreciate any additional guidance or references within this topic.
>Q5: The paper should discuss recent work on classification-based evaluation metrics, particularly "Beyond Accuracy: Behavioral Testing of NLP Models with CheckList" (Ribeiro et al., 2020) which provides relevant insights about classification-based evaluation.
R5: Thank you for your suggestion. This paper (Beyond Accuracy: Behavioral Testing of NLP Models with CheckList) primarily focuses on guiding users in conducting Behavioral Testing of NLP Models, but it appears to not mention classification-based evaluation metrics. However, we recognize the relevance of this work to our paper and **will include a discussion of its insights in the related work section of the future version.**
>Q6: The main weakness is limited discussion of the method's applicability to non-text modalities.
R6: Thank you for your valuable review. We conduct experiments on the image modality (colored mnist dataset), please refer to **Figure 2 of the anonymous URL**. We follow the setting of [1] and observe that **DCScore presents higher correlation with the label number compared to VendiScore.**
Reference:
[1] Ospanov et al., Towards a Scalable Reference-Free Evaluation of Generative Models, NeurIPS 2024.
>Q7: 1.How does the method perform on multi-modal synthetic data where text is just one component? 2.What is the sensitivity of the method to the choice of classifier architecture beyond the examined options?
R7: For 1, our method has limitations in evaluating multi-modal synthetic data, which is an area we plan to explore further. Specifically, **the effectiveness of DCScore in accurately evaluating multi-modal data depends on the extraction of multi-modal representations and the alignment of different data modalities**. We believe this is a very worthwhile research direction and appreciate your guidance on this matter.
For 2, in addition to the factors we have already explored, we believe **the sensitivity of the classifier architecture lies in its ability to effectively distinguish differences between samples**. The fundamental requirement for our method to function correctly is that the classifier can accurately identify distinct samples. | Summary: - The paper introduces DCScore, a novel method for measuring diversity in synthetic datasets generated by large language models (LLMs).
- Key innovation: DCScore formulates diversity evaluation as a sample classification task, leveraging mutual relationships among samples, rather than using traditional n-gram statistics or reference-based methods.
- Main contribution: A principled diversity evaluation metric that satisfies important diversity-related axioms (effective number, identical samples, symmetry, and monotonicity).
- Technical approach: The method maps diversity-sensitive components into a representation space, computes pairwise similarities through a kernel function, and summarizes diversity through classification probabilities.
- Experimental validation: DCScore demonstrates stronger correlation with diversity pseudo-truths (like generation temperature) and human judgment compared to baseline methods.
- Computational efficiency: Both theoretical analysis and empirical results show DCScore has lower computational costs compared to existing approaches, particularly for non-linear kernels.
Claims And Evidence: - The theoretical claims about DCScore satisfying the four axioms (effective number, identical samples, symmetry, monotonicity) are well-supported with formal proofs in the paper.
- The computational efficiency claims are supported by both theoretical complexity analysis (Table 2) and empirical measurements (Figure 4), though the advantage varies by kernel type.
- The claim about DCScore's correlation with human judgment is supported (Table 4), but would be strengthened with more details on the human evaluation protocol and inter-annotator agreement.
- The claim about correlation between DCScore and downstream task performance is supported by limited evidence (Table 7) that could benefit from more extensive experimentation. Additionally, it seems like the number of epochs was not fixed for this experiment.
Methods And Evaluation Criteria: - Using Spearman's ρ to measure correlation with diversity pseudo-truths is appropriate for evaluation.
- The selection of diversity pseudo-truths (generation temperature, human judgment, GPT-4 evaluation) is sensible and provides multiple validation angles.
- The choice of baseline comparison methods (Distinct-n, K-means inertia, VendiScore) covers multiple baseline approaches.
- The evaluation on both self-generated and publicly available datasets is appropriate, though more domain diversity would strengthen claims of generalizability.
- Testing across multiple embedding functions and kernel types demonstrates robustness, a valuable evaluation approach.
- The computational efficiency evaluation is practical and relevant, especially for large synthetic datasets.
- The axiomatic analysis provides theoretical validation for why the method works, strengthening the methodology.
- A comparison with reference-free diversity metrics from other domains (e.g., ecology) could have further contextualized DCScore's advantages.
Theoretical Claims: I did not check the correctness of proofs very carefully. Overall, they seem mostly correct. They could better explore potential limitations in edge cases, such as what happens with extremely imbalanced datasets or outliers.
Experimental Designs Or Analyses: - The correlation analysis with generation temperature (τg) is sound, using appropriate statistical measures (Spearman's ρ) across a well-distributed range of temperatures (0.2-1.2).
- The experimental setup for human evaluation lacks some important details - specifically how agreement was measured across evaluators and why only 3 annotators were used.
- The downstream task training experiment provides valuable real-world validation, but could be strengthened with more diverse task types beyond text classification.
- The batch evaluation protocol (averaging results across batches generated from the same context) is appropriate but could introduce bias if contexts vary significantly in diversity potential.
- The ablation studies for embedding functions and kernel types are methodologically sound, though the paper could better explain the criteria for selecting the specific functions tested.
- The comparison against baseline methods is fair, with appropriate implementation details provided for each method.
- The paper appropriately tests on both self-generated and publicly available datasets, but the dataset sizes (100 samples per sub-dataset in some experiments) are somewhat limited for diversity evaluation.
- Statistical significance testing is notably absent from the correlation analysis, which would strengthen confidence in the reported differences between methods.
Supplementary Material: I did not review the supplementary material.
Relation To Broader Scientific Literature: - The paper connects to recent LLM synthetic-data generation work by Ye et al. (2022) and Abdullin et al. (2024), providing evaluation metrics for these generative approaches.
- The computational efficiency focus addresses challenges with other methods (such as VendiScore).
- The evaluation methodology connects to research on human-aligned evaluation metrics by Holtzman et al. (2019).
- The temperature-based diversity relation builds on sampling strategy research by Caccia et al. (2018) and Chung et al. (2023).
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: ## Strengths
- The paper addresses a practical need in LLM-generated datasets, making it timely and relevant to current research directions.
- The method is adaptable across embedding functions and kernel types, demonstrating flexibility for different applications.
- The computational efficiency advantages make DCScore practically applicable to large-scale dataset evaluation.
## Weaknesses
- Limited experimental evaluation across diverse domains beyond text classification and story completion.
- Human evaluation is sparse.
- Limited discussion of potential failure cases or situations where DCScore might not accurately capture diversity.
- Computational results focus on small to medium datasets (up to 64k); scalability for extremely large datasets remains unproven.
Other Comments Or Suggestions: - Figure 4 is very small and is quite hard to read.
Questions For Authors: 1. Can you provide more details about the human evaluation protocol? Specifically, how was inter-annotator agreement measured, and what were the specific instructions given to evaluators?
2. Have you tested DCScore on non-basic-text modalities (e.g., code, mathematical expressions, structured data)? If not, what adaptations would be necessary to apply it to these domains?
3. How does DCScore perform on datasets with substantial outliers or on highly imbalanced datasets where certain types of content dominate?
4. Have you explored whether DCScore can be used for diversity-aware sampling from generative models, beyond just evaluation of already-generated datasets?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for reading our paper. We respond to the reviewer’s question as follows. **Limited by the space, we present our additional experiments in an anonymous URL** (https://anonymous.4open.science/r/ICMLRebuttal_DCScore).
---
>Q1: The claim about DCScore's correlation with human judgment would be strengthened with more details on the human evaluation protocol and inter-annotator agreement.
R1: Thank you for your valuable comment. **We will update the appendix in the subsequent version of our paper to include details of the human evaluation protocol**. Human evaluation data were generated at six temperatures, with five results per context (prompt) in each subset. During the evaluation, annotators were asked to select the more diverse subset from pairs of subsets. Across six temperatures, this resulted in 15 comparisons, and with three annotators, a total of 45 judgments were made. Subsets were ranked by the frequency of being chosen as more diverse. This process was repeated five times with different contexts to derive the final human diversity ranking.
>Q2: The claim about correlation between DCScore and downstream task performance could benefit from more extensive experimentation. Additionally, it seems like the number of epochs was not fixed for this experiment.
R2: We** have included some downstream task experiments in Appendix E.2. In Table 7**, except for the scenario with $\tau_{g}=1.2$(360 epochs), all other results were obtained with 120 epochs. **The higher dataset diversity, e.g. datasets with $\tau_{g}=1.2$, required more epochs for model convergence**, with details in Appendix E.2. **Figure 8 also presents results for $\tau_{g}=1.2$ at 240 and 120 epochs**.
>Q3: A comparison with reference-free diversity metrics from other domains (e.g., ecology) could have further contextualized DCScore's advantages.
R3: Our method is primarily concerned with evaluating the diversity of synthetic datasets. We believe that **comparisons with methods from more closely related domains might carry more conviction**. The axiomatic requirements for diversity evaluation can vary significantly across different fields, and using metrics from ecology to assess text or image datasets might result in an unfair comparison. Additionally, **the main idea behind the Vendi score method originates from ecology, which can be considered an indirect comparison with metrics from other fields.**
>Q4: Limited discussion of potential failure cases or situations where DCScore might not accurately capture diversity.
R4: The main limitation of our method is that **it is not applicable to multimodal data**. Due to space constraints, please refer to **R1 of the Response to Reviewer QL32** for more details.
>Q5: The batch evaluation protocol is appropriate but could introduce bias if contexts vary significantly in diversity potential.
R5: The batch evaluation protocol was chosen to ensure that the diversity pseudo-truths are accurately reflected when generating multiple samples from the same context. This approach helps maintain consistency in evaluating the diversity of generated datasets. We adopted this evaluation strategy to conduct correlation experiments, and under this setting, bias is not introduced.
>Q6: The paper appropriately tests on both self-generated and publicly available datasets, but the dataset sizes (100 samples per sub-dataset) are somewhat limited for diversity evaluation.
R6: As shown in Q8, we conducted experiments on datasets with **sizes reaching up to 64k samples.** The smaller datasets, consisting of 100 samples per sub-dataset, were specifically designed for batch evaluation protocols (**Please refer to R5** for more details).
>Q7: Limited experimental evaluation across diverse domains beyond text classification and story completion.
R7: Thank you for your suggestion. We provide experimental results on image data (colored mnist dataset), please refer to **Figure 2 of anonymous URL**. DCScore presents a higher correlation with the label number compared to VendiScore.
>Q8: Computational results focus on small to medium datasets (up to 64k); scalability for extremely large datasets remains unproven.
R8: We provide experimental results on extremely large datasets (up to 120k) in **Figure 6 of the anonymous URL. Notably, DCScore exhibits similar changing trends with the baseline methods across these large-scale datasets**.
>Q9: Have you explored whether DCScore can be used for diversity-aware sampling from generative models, beyond just evaluation of already-generated datasets?
R9: Our method is similar to approaches like the Vendi score, thus **it can be applied to diversity-aware sampling from generative models**. However, we are more focused on scenarios of synthetic data diversity evaluation. We also provide a detailed discussion of potential application scenarios for DCScore in Appendix A.2. | Summary: The paper introduces DCScore, a novel metric for measuring diversity in synthetic datasets. Unlike traditional methods (e.g., Distinct-n, VendiScore), DCScore models diversity as a classification task and uses semantic embeddings to compute pairwise similarity among samples. It leverages a softmax-based classification probability matrix to quantify dataset diversity. The authors provide theoretical guarantees for DCScore, showing that it satisfies fundamental diversity axioms. Empirical results demonstrate that DCScore correlates strongly with human judgments while being more computationally efficient than VendiScore.
Claims And Evidence: This paper assumes that classification probability correlates directly with diversity, but it does not sufficiently justify why treating each sample as a separate category is a robust measure of diversity.If classification probability is used as a proxy for diversity, the paper should provide empirical or theoretical validation, such as, showing that classification-based diversity correlates well with entropy-based or clustering-based diversity measures.
The paper correctly highlights that VendiScore requires O(n3) complexity due to eigenvalue decomposition, while DCScore reduces this to O(n2).However, VendiScore can be optimized to O(d 2n)using low-rank approximations, which the authors do not address.
Methods And Evaluation Criteria: The method of treating diversity evaluation as a classification problem is interesting, but it is not fully justified.
The method assumes that classification probability correctly represents diversity, but this assumption is not tested against other diversity definitions (e.g., entropy-based or clustering-based measures).
No test on datasets with strong semantic similarity (e.g., paraphrased text or redundant images), which would be important to validate the method.
Theoretical Claims: The paper propose DCScore,but if classification probability does not fully represent diversity, then the theoretical guarantees may be incomplete.
Experimental Designs Or Analyses: There is no ablation study on how different embedding functions affect DCScore, the author should analyze how different embeddings (e.g., SBERT, CLIP) affect DCScore.
Supplementary Material: The supplementary material includes additional experimental details but lacks deeper theoretical analysis.The author should add mathematical proofs for why classification probability is a good diversity metric
Relation To Broader Scientific Literature: The paper discusses N-gram-based, reference-based, and transformation-based diversity metrics.However, it does not mention entropy-based diversity measures, which are commonly used in active learning and generative models.
Essential References Not Discussed: The paper does not discuss entropy-based diversity metrics, which have been used in representation learning and active learning.
Other Strengths And Weaknesses: Strengths
1.Proposes a novel classification-based diversity metric.
2.Improves computational efficiency over VendiScore.
Weaknesses:
1.Lacks theoretical justification for using classification probability.
2.Highly sensitive to how categories are defined.
3.Does not compare with entropy-based or contrastive learning-based diversity metrics.
Other Comments Or Suggestions: Please see the above weakness
Questions For Authors: Please see the above weakness
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank the reviewer for reading our paper. We respond to the reviewer’s question as follows. **Limited by the space, we present our additional experiments in an anonymous URL** (https://anonymous.4open.science/r/ICMLRebuttal_DCScore).
---
>Q1: it does not sufficiently justify why treating each sample as a separate category is a robust measure of diversity. the paper should provide that classification-based diversity correlates well with entropy-based or clustering-based diversity measures.
R1: Thank you for your insightful comments. **We have indeed provided both experimental and theoretical validation to support DCScore as a robust measure of diversity.** We offer empirical validation demonstrating a strong correlation with the entropy-based method, as shown in **Figure 5 at the anonymous URL**.
Theoretically, we have proven that DCScore satisfies several axioms that an ideal diversity evaluation method should meet, as detailed in Section 4.2 and Appendix B of our paper.
Empirically, we have demonstrated that DCScore has a high correlation with multiple diversity pseudo-truths, such as generation temperature $\tau_{g}$, human evaluation, and LLM evaluation, as shown in Section 5.2. We believe these pseudo-truths more accurately reflect the true diversity of datasets.
>Q2: VendiScore can be optimized to O(d 2n)using low-rank approximations, which the authors do not address.
R2: As shown in Table 2 of our paper, **VendiScore can achieve $\mathcal{O}(d^{2}n)$ complexity when using a linear kernel**, such as Inner Product. We have provided a detailed analysis of this in Section 4.3. However, in practical evaluation scenarios, the diversity of synthetic datasets often requires more complex kernels beyond linear ones.
>Q3: 1.Classification probability correctly represents diversity, but this assumption is not tested against other diversity definitions (e.g., entropy-based measures). 2.No test on datasets with strong semantic similarity.
R3: For 1, DCScore and entropy-based or clustering-based measures operate on fundamentally different principles, **making it infeasible to validate one method's assumptions within the framework of another**. Limited by the space, please refer to R4 for more details.
For 2, as shown in **Figure 1 of anonymous URL**, we conduct evaluation experiments on datasets with strong semantic similarity and **observe that DCScore performs well in this scenario**.
Reference:
[1] Ospanov et al., Towards a Scalable Reference-Free Evaluation of Generative Models, NeurIPS 2024.
>Q4: If classification probability does not fully represent diversity, then the theoretical guarantees may be incomplete. The author should add mathematical proofs for why classification probability is a good diversity metric
R4: Thank you for your suggestion. For diversity metrics, **there is no ground truth or golden rule, only characterization through axioms**. Therefore, in Section 4.2, we demonstrate that DCScore satisfies these axioms, indicating that it is a good diversity evaluation metric. From the theoretical perspective, Diversity evaluation aims to capture dataset richness by identifying sample relationships, which aligns with the classification perspective of DCScore. Thus, **a sample-level classification probability can fully represent diversity**.
>Q5: There is no ablation study on how different embedding functions affect DCScore, the author should analyze how different embeddings (e.g., SBERT, CLIP) affect DCScore
R5: We would like to clarify that **we have conducted experiments on the impact of embedding functions**, and the results are presented in **Appendix E.3 and Table 11**. DCScore demonstrates strong performance across various embedding functions.
>Q6: The supplementary material includes additional experimental details but lacks deeper theoretical analysis.
R6: Thank you for your valuable feedback. In **Appendix B** of our paper, we provide a theoretical analysis of the properties satisfied by DCScore, demonstrating that it adheres to the axioms expected of an ideal diversity evaluation method. Additionally, in **Section 4.3**, we offer a theoretical analysis of the computational complexity of DCScore.
>Q7: The paper does not mention entropy-based diversity measures, which are commonly used in active learning and generative models.
R7: Thank you for your suggestion. We believe **the omission may stem from our different taxonomy of existing methods**. According to our classification, entropy-based diversity measures fall under Transformation-based Methods. We will clarify this distinction in future versions of our paper and include a more detailed discussion of entropy-based methods.
>Q8: Weaknesses: Highly sensitive to how categories are defined
R8: We would like to clarify that **DCScore is not sensitive to category definitions**. It doesn't require predefined categories and the category count is the dataset sample size. | Summary: The paper introduces a classification-based evaluation metric, DCScore, for assessing the diversity of synthetic datasets. The authors address computational challenges while satisfying axiomatic requirements and providing a holistic analysis. Additionally, they evaluate the diversity of generated datasets across various scenarios.
---
## update after rebuttal
I thank the authors for their response. I believe the authors provided more detailed insights into the advantages of their work during the rebuttal, such as the optimization stability or computational efficiency of DCScore compared to Vendi Score in some cases.
While I appreciate the authors' response, I remain concerned that several claims lack sufficient explanation or need additional experiments, hence, I will keep my score. Below are key points that, in my view, must be addressed to strengthen the manuscript:
- **Convergence of DCScore (R9)**: One important part that is missing is the convergence analysis of DCScore. The authors indeed provided additional experiments with 2D Gaussian mixtures, such analysis must also be demonstrated on complex datasets to validate the method's convergence. Also, the authors mentioned their task is distinct from generative model evaluations because the target scenario lacks ground truth values, but I believe convergence is important in their task too and even in generative models' evaluation, we don't have ground truth values.
- **Evidence of why DCScore is closer to the essence of diversity evaluation. (R4)**: While the classification-based approach is novel, the authors must provide a more rigorous comparison and evidence demonstrating why DCScore fundamentally captures the essence of diversity better than clustering-based methods like Vendi Score.
- **Fair comparison between Vendi Score with DCScore (R1, R5, R8)**: The authors claimed that DCScore is *Suitable for highly diverse scenarios*, or *In datasets with distinctive samples, Vendi Score fails, but DCScore succeeds*. However, I argue these limitations could be addressed in Vendi Score through proper hyperparameter selection. Since DCScore has a temperature hyperparameter ($\tau$), the authors should comparably test Vendi Score using a Gaussian kernel (not cosine similarity) since Gaussian kernel has bandwidth parameter ($\sigma$) that could make it equally adaptable to such scenarios.
Claims And Evidence: The paper proposes DCScore, a classification-based diversity metric for synthetic datasets. I am not convinced that using a classification method can provide new insights or even achieve results comparable to eigendecomposition methods in all cases. Using the entropy of eigenvalues can reveal the number of modes in the dataset and their underlying relationships, whereas relying solely on the trace of P may obscure important structural details.
For example, consider the Monotonicity axiom in a scenario where a large percentage of the dataset consists of duplicate samples. The kernel matrix develops a block structure with just a few dominant eigenvalues. Using an entropy based metric on eigenvalues could capture the low diversity, whereas a trace-based approach might mistakenly indicate high diversity. This is just a simple example, however, I expect a more in-depth analysis between Vendi Score and DCScore.
Methods And Evaluation Criteria: The problem formulation of evaluating synthetic datasets indeed makes sense and the authors extensively experimented with different scenarios to evaluate the score.
Theoretical Claims: Proofs of Properties of DCScore are correct and straightforward.
Experimental Designs Or Analyses: The problem formulation indeed makes sense and the authors extensively experimented with different scenarios to evaluate the diversity of the synthetic dataset.
Supplementary Material: Yes, I reviewed Additional related works, Proofs of properties, and Additional experiments.
Relation To Broader Scientific Literature: This paper uses a classification perspective to evaluate the diversity, which is quite new in the diversity evaluation literature. They also study the diversity evaluation of synthetic datasets.
Essential References Not Discussed: There are several related works on reference-based methods that seem to be missing such as [1, 2]. The paper notes that DCScore is similar to the Vendi Score and compares their computational complexities in Section 4.3. While DCScore reduces the complexity from $\mathcal{O}(n^3)$ to $\mathcal{O}(n^2d + n^2)$ (except for the cosine similarity kernel), there are additional works addressing this challenge that are not cited. For instance, in [3] (see Corollary 2), it is shown that the RKE (Vendi$_{\alpha=2}$) score can be computed in $\mathcal{O}(n^2)$ for all kernels. Additionally, [4] presents an approach that estimates the Vendi score with a computational complexity of $\mathcal{O}(n)$. I believe these papers are highly correlated and should be mentioned when claiming to improve the computational complexity.
I also believe that the authors need to mention the similarity of Equation 4 (the definition of the classification probability matrix $P$), as it is similar to the contrastive learning literature. For example, in Equation 1 of the SimCLR framework [5], the authors used the same function for contrastive learning of visual representations.
[1] Naeem et al., Reliable Fidelity and Diversity Metrics for Generative Models, ICML 2020.
[2] Kynkäänniemi et al., Improved Precision and Recall Metric for Assessing Generative Models, NeurIPS 2019.
[3] Jalali et al., An Information-Theoretic Evaluation of Generative Models in Learning Multi-modal Distributions, NeurIPS 2023
[4] Ospanov et al., Towards a Scalable Reference-Free Evaluation of Generative Models, NeurIPS 2024
[5] Chen et al., A Simple Framework for Contrastive Learning of Visual Representations, ICML 2020
Other Strengths And Weaknesses: **Strengths:**
- Introduce new settings for synthetic data evaluation using LLMs.
- This paper uses a classification perspective to evaluate the diversity, which is quite novel.
**Weaknesses:**
- The classification-based approach may primarily offer improved sample complexity compared to Vendi Score, without clearly demonstrating how it provides new insights into synthetic data diversity. The paper should clarify whether the classification perspective yields benefits beyond computational efficiency.
- It is not well established when or why DCScore outperforms Vendi Score. The paper needs to illustrate scenarios where Vendi Score fails to capture diversity correctly, but DCScore succeeds (Beyond computational gains)
- Although the paper emphasizes the reduction in computational complexity, it lacks a comparison with recent works that also offer efficient computations of Vendi Score. Without this, the claimed advantages of DCScore remain somewhat unsubstantiated.
Other Comments Or Suggestions: I believe the authors should revise the title of each paper, as they currently use the template title “Submission and Formatting Instructions for ICML 2025.”
Questions For Authors: I believe the idea of evaluating the diversity of synthetic data is crucial, however, I am concerned about the novelty and the contribution of the paper. I have the following questions from the authors. Answering these can elaborate the contribution of this paper.
1. DCScore proposes a synthetic diversity metric with a classification perspective. Could the authors elaborate on what new insight this perspective can bring? Is it only improving the sample complexity compared to Vendi Score, or using this perspective will better match the task of diversity evaluation of synthetic datasets?
2. Using entropy of eigenvalues as a diversity measure makes sense to me as it shows how many significant directions of variation exist in the dataset, but only using the trace of the probability matrix is not very intuitive as it highly relies on individual datapoints. Could authors provide a more detailed explanation of why such a choice is a suitable metric of diversity? (One case of failure I imagine is when we add replicas already existing samples in the dataset. Could authors compare the effect of replicas on Vendi Score and DCScore?)
3. What is the advantage of DCScore over Vendi Score? Can you provide a case when DCScore behaves as we expect and Vendi score fails to do that?
4. Can the authors provide a computational complexity comparison between DCScore and the mentioned related works [3, 4] as they also provide an efficient computation of Vendi Score? Are there some cases where DCScore outperforms compared to the mentioned papers that I'm missing?
[3] Jalali et al., An Information-Theoretic Evaluation of Generative Models in Learning Multi-modal Distributions, NeurIPS 2023
[4] Ospanov et al., Towards a Scalable Reference-Free Evaluation of Generative Models, NeurIPS 2024
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank the reviewers for providing detailed review on our submission. We respond to the reviewers’ concerns and questions one by one. **Limited by the space, we present our additional experiments in an anonymous URL** (https://anonymous.4open.science/r/ICMLRebuttal_DCScore).
---
>Q1: Relying solely on the trace of P may obscure important structural details. I expect a more in-depth analysis between Vendi Score and DCScore.
R1: A more in-depth analysis is as follows:
- **Computational Efficiency**: As shown in Table 2, with a general kernel, DCScore has a complexity of $\mathcal{O}(n^2)$ during summarization, while VendiScore has $\mathcal{O}(n^3)$, making DCScore more efficient.
- **Optimization Stability**: As an optimization target, the trace of **P** (DCScore) provides simpler gradients, enabling more stable optimization. In contrast, Vendi Score can have gradient issues due to identical eigenvalues.
- **Suitable for highly diverse scenarios**: DCScore highlights distinct samples (Please refer to R5), while Vendi Score may underestimate diversity. Thus, our approach aligns with the trend of generating diverse data.
**For more advantages of our method, please refer to R4.**
>Q2: There are several related works on reference-based methods that seem to be missing such as [1, 2]. There are additional works [3, 4] addressing complexity challenges that are not cited.
R2: Thank you for your suggestion. **We will include the discussion of the related works you pointed out in the revised manuscript.** For paper 3, the complexity of $\mathcal{O}(n^2)$ only considers the calculation of the Frobenius norm and excludes the specific kernel calculation due to varying complexities across different kernels (e.g. RBF kernel:$\mathcal{O}(n^2d)$, Graph Kernels (random walk):$\mathcal{O}(n^3)$).
>Q3: The authors need to mention the similarity of Equation 4 , as it is similar to the contrastive learning literature, such as Equation 1 of the SimCLR.
R3: Thank you for your suggestion. The core concept of contrastive learning—distinguishing samples—is similar to DCScore's classification perspective, reflecting a common idea across fields. We will highlight this in the revised manuscript.
>Q4: The paper should clarify whether the classification perspective yields benefits beyond computational efficiency. Could the authors elaborate on what new insight this perspective can bring?
R4: As noted in Papers 3 and 4, **entropy-based metrics like Vendi Score face high computational costs, underscoring the need for efficiency**. Additionally, DCScore offers some new insights:
- A novel perspective closer to the essence of diversity evaluation (identifying sample differences).
- More stable optimization due to its simpler gradient, unlike Vendi Score's instability from eigenvalue calculations [1].
- A clearer interpretation of sample uniqueness compared to entropy-based metrics.
- Sensitivity to unique samples via Softmax and Trace operations, aiding outlier detection and reducing model impact.
Reference:[1] Eigenvalue optimization[J]. Acta numerica, 1996.
>Q5: The paper needs to illustrate scenarios where Vendi Score fails to capture diversity correctly, but DCScore succeeds.
R5: Thank you for your suggestion. **In datasets with distinctive samples, Vendi Score fails, but DCScore succeeds**. In this regard, eigenvalue distributions of the kernel matrix can be highly uneven (for example, some eigenvalues are much larger than others), leading to a significant reduction in Shannon entropy. Consequently, the Vendi score yields a diversity estimate much smaller than the actual sample size. In contrast, DCScore accurately estimates the diversity of a dataset based on classification probabilities.
>Q6: 1. Could authors provide a more detailed explanation of why such a choice is a suitable metric of diversity? 2. Could authors compare the effect of replicas on Vendi Score and DCScore?
R6: For 1, as noted in **R1, R4, and R5**, DCScore is more computationally efficient and serves as a more stable optimization target. As theoretically shown in Section 4.2, DCScore satisfies the axioms required for an ideal diversity evaluation method.
For 2, **DCScore does not fail in the presence of repeated samples**. According to Section 4.2 (Effective Number), we provide an proof regarding the evaluation under repeated samples. Furthermore, we provide an evaluation of repeated samples in **Figure 1 of anonymous URL**.
>Q7: Can the authors provide a computational complexity comparison between DCScore and the mentioned related works [3, 4] as they also provide an efficient computation of Vendi Score?
R7: We present a comparison in **Figure 3 at the anonymous URL. DCScore outperforms efficient methods in terms of computation time on the AGNews and SST datasets**. Notably, the efficient computation method from Paper 4 is incompatible with non-shift-invariant kernels, which don’t meet diversity evaluation requirements.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their hard work and for providing additional numerical results in a short time. I have carefully read the authors' response and believe it provided additional insights into their work.
**R1: In-depth analysis between Vendi Score and DCScore**
- I completely agree that the paper's method is more stable for optimization compared to Vendi Score (in case where $\alpha \ne 2$). When $\alpha=2$ (RKE), it can be computed using Frobenius norm and it is stable.
- Suitable for highly diverse scenarios: May I ask for more clarification on why the authors claim that the Vendi score failed in the replica experiment because the "diversity estimate was much smaller than the actual sample size"? Replicating exact samples does not add diversity (just imagine a model that outputs replication of the same image), and the Vendi score captured this scenario. Also, choosing the kernel bandwidth plays a significant role here (just like the temperature hyperparameter in DCScore).
**R4: New insights of DCScore**
I thank the authors for providing these insights. It will definitely improve the draft by supporting these numerically.
**R5: Scenarios where CDScore has more advantages**:
As I mentioned in R1, I did not quite understand why Vendi failed and DCScore succeeded. Also, this raises a concern regarding the **sample convergence** of DCScore. How many samples do we need for DCScore to converge? (I do not expect numerical results for this concern at this point and just want to emphasize the importance of addressing this point.)
**R7:** Could the authors clarify what settings they used (e.g., how many RFFs they used)? I appreciate the point about the summerization part complexity, but isn't the bottleneck for the Vendi score?
**R7: non-shift-invariant kernels don’t meet diversity evaluation requirements**
I would like to show my concern regarding this statement. I believe shift-invariant kernels are necessary for diversity evaluation.
When the kernel is not shift-invariant, the diversity score changes when you shift the data, even though the actual diversity of the data doesn't change. That's why shift-invariant kernels are necessary for proper diversity evaluation. Could the authors elaborate on why this fails to meet diversity evaluation requirements?
---
Reply to Comment 1.1.1:
Comment: Thank you for your comments. We respond to the reviewer’s question as follows. **Due to differences in the experimental setup, we spent some time conducting sample convergence experiments**. We would appreciate it if you could take our responses into consideration when making the final evaluation of our work.
---
>Q8: May I ask for more clarification on why the authors claim that the Vendi score failed in the replica experiment because the "diversity estimate was much smaller than the actual sample size"?
R8: There seems to be a misunderstanding. **We do not claim that the Vendi Score fails in the replica experiment.** Perhaps the reviewer intended to ask **why the Vendi Score fails in highly diverse scenarios**. We provide a detailed explanation below:
- As indicated in R5, highly diverse scenarios involve more distinctive samples (dissimilar samples), which can lead to uneven eigenvalue distributions (where some eigenvalues are much larger than others). The Vendi Score is based on Shannon entropy, which is used to measure system uncertainty. In scenarios with uneven eigenvalue distributions, the system has more information in certain feature directions, while the information in other directions is relatively small. Consequently, the overall uncertainty of the system is reduced, leading to a significant decrease in Shannon entropy. As a result, the Vendi Score underestimates the diversity of datasets with distinctive samples.
>Q9: How many samples do we need for DCScore to converge?
R9: Thank you for your question and for providing additional insights. The convergence setting differs from our targeted scenarios. **Our proposed method aims to evaluate already generated synthetic datasets**, such as text datasets from LLMs, which is distinct from generative model evaluations (Because our target scenario lacks ground truth values). To clarify our approach, we conducted a sample convergence experiment comparing it to Vendi Score. **We used diversity evaluation methods on datasets generated by WGAN-GP**, following the WGAN-GP [1] settings on 8, 25 Gaussian toy datasets. We present the experimental results in **Figures 7, 8 at our anonymous URL** (https://anonymous.4open.science/r/ICMLRebuttal_DCScore). Specifically, we observe that DCScore **requires 300 samples to converge** and achieves a diversity score that is closer to the actual mode number.
Reference:
[1] Improved Training of Wasserstein GANs, NIPS 2017.
>Q10: (1) Could the authors clarify what settings they used (e.g., how many RFFs they used)? (2) I appreciate the point about the summerization part complexity, but isn't the bottleneck for the Vendi score?
R10: Thank you for your questions.
(1) **We set RFFs (rff_dim) to 768**, which matches the embedding dimension of BERT (the model used for extracting embeddings is bert-base-uncased). Additionally, we set the sigma parameter and batch size to 20 and 512, respectively, with the batch size being consistent with that of DCScore.
(2) As shown in Table 2 of our paper, the Vendi Score has **a high complexity** ($\mathcal{O}(n^3)$) when using general kernels. This is due to the computational requirements for calculating eigenvalues. Therefore, we believe this is one of the key points where the Vendi score can be improved.
>Q11: Could the authors elaborate on why this (non-shift-invariant kernels) fails to meet diversity evaluation requirements?
R11: Thank you for your question. We agree with the necessity of shift-invariant kernels in diversity evaluation. However, we would clarify our claim for three following reasons:
- A general diversity evaluation method should ideally accommodate various scenarios, including those involving non-shift-invariant kernels, to ensure broader applicability.
- Scenarios that may require non-shift-invariant kernels include the evaluation of diversity in image modalities, which often necessitates capturing higher-level features such as textures and edge information. In such cases, more complex kernels (e.g., polynomial kernels) are often needed to capture these nonlinear features.
- **Certain non-shift-invariant kernels exhibit robustness to noise and outliers**[1], ensuring that the diversity evaluation results are not unduly affected by a small amount of anomalous data. This robustness is also essential for effective diversity evaluation.
Reference:
[1] Kernel approximation using analogue in-memory computing, Nature 2024.
---
We once again thank you for your highly constructive comments, which have been extremely helpful in improving the quality of our work. | null | null | null | null | null | null |
Convergence Analysis of Natural Gradient Descent for Over-parameterized Physics-Informed Neural Networks | Reject | Summary: This paper considers the convergences of a certain class of PINNs with 2 layers in the overparametrization regime (NTK) and makes two contributions: 1) improves the convergence of GD (conditions for LR) and 2) shows quadratic convergence of natural gradient descent. Specifically for 1) the LR dependency on the smallest eigen value of the gram matrix (fisher information matrix) is removed and for 2) it considers a different stability criterion for the jacobian for the derivation.
Claims And Evidence: The claims are justified via theoretical derivations.
Methods And Evaluation Criteria: There was no evaluation as this is a purely theoretical paper
Theoretical Claims: Theoretical claims are built on top of previous works and appear correct. I could largely follow the arguments, however, I didn't check for the correctness of derivations rigorously.
Experimental Designs Or Analyses: N/A
Supplementary Material: Did not check
Relation To Broader Scientific Literature: The main focus is on the niche domain of PINNs and the paper is targeted towards such a niche audience. In fact, the paper assumes familiarity of the previous theoretical work and reuses notations (even in the abstract without defining them properly until in the method section). The paper may not be easily accessible for a general audience, even though the topic (convergence of NGD) might be interesting to them.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: ## Strengths
1. The contributions are interesting and will be useful to the theoretical community, and it may be useful in practice. The proof ideas appear novel.
## Weaknesses
1. The paper jumps to the derivations straightway, but it might be better to contextualize it first. This would make the paper more accessible to a general audience. also, please define the nations when they were introduced.
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our article and for your insightful comments. We apologize for the oversight in notations. We agree that adding some context before diving into derivations would improve readability, especially for a broader audience. In the revised version, we will make sure to clarify the definitions of the notations when they are first introduced and add more introductory context.
In this work, for gradient descent, we improve both the requirements for the learning rate $\eta$ and the width $m$. Furthermore, we establish convergence guarantees for natural gradient descent (NGD) in training over-parameterized two-layer PINNs, demonstrating that NGD achieves: (1) an $\mathcal{O}(1)$ learning rate, and (2) faster convergence compared to gradient descent.
Although this is a purely theoretical paper, we can provide some experiments to justfy our theoretical results. When we are able to revise our paper, we will add the detailed experimental results as a separate section in the manuscript, and **the code to reproduce the experiments will be added to the Github**.
We conduct experiments on three problems, the 2D Poisson equation with reference solution $u _{ref}=\sin(\pi x)\sin(\pi y)$, the 1D Heat equation with reference solution $u _{ref}=e^{-\frac{\pi^2t}{4}}\sin(\pi x)$ and the 2D Helmholtz equation with wave number $k=4$, and the reference solution $u _{ref}=\sin(\pi x)\sin(k\pi y)$.
All codes are conducted by the Pytorch framework. The configurations used in these examples are listed in Table 1. We report the relative $L^2$-error of the NGD optimizer, the SGD optimizer, the Adam optimizer, and the L-BFGS optimizer in Table 2.
The relative $L^2$-error is defined as follows:
$$ \frac{||\hat{u}-u _{ref}|| _2}{||u _{ref}|| _2},$$
where $\hat{u}$ denotes the predicted solution and $u _{ref}$ represents the reference solution. $N _f$ is the number of interior sampling points, and $N _b$ is the number of boundary sampling points.
**Table 1: Configurations of Different Equations**
| | $N _f $ | $N _b$ | batch size | hidden layers | hidden neurons | activation function |
|---|---|---|---|---|---|---|
| 2D Poisson | 1000 | 200 | 100 | 1 | 128 | tanh(·) |
| 1D Heat | 1000 | 200 | 100 | 1 | 128 | tanh(·) |
| 2D Helmholtz | 1000 | 200 | 100 | 1 | 128 | tanh(·) |
**Table 2: Relative $L^2$-error of Different Optimizers**
| | SGD | Adam | L-BFGS | NGD |
|---|---|---|---|---|
| 2D Poisson | 1.45e-01 | 5.32e-03 | 3.17e-03 | **1.12e-04** |
| 1D Heat | 5.43e-01 | 6.91e-03 | 4.98e-03 | **3.42e-04** |
| 2D Helmholtz | 8.48e+00 | 1.06e+00 | 3.35e+00 | **6.67e-03** |
In all the experiments,we run the NGD and L-BFGS method for 500 epochs, while the SGD and Adam are trained for 10, 000 epochs. The loss decay during training demonstrating that the NGD method converges significantly faster than other optimization methods.
Table 3 presents the convergence performance of the NGD method with different learning rates on the 2D Poisson equation. The experimental results demonstrate that NGD maintains stable convergence across a wide range of learning rates without significant degradation in final accuracy.
**Table 3: Relative $ L^2 $-error Comparison Across Different Learning Rates for NGD method**
| Learning Rate | 0.5 | 0.1 | 0.05 | 0.01 | 0.005 | 0.001 |
|---|---|---|---|---|---|---|
| Relative $L^2$-error | 1.18e-03 | 3.24e-04 | 1.87e-04 | 1.12e-04 | 1.22e-04 | 1.68e-04 |
In addition, a comparative analysis of the model performance is performed with progressively increasing network widths. Table 4 presents the variation of $L^2$-error with respect to network width for 1D Poisson equation with $u_{ref}=\sin(4\pi x)$. The results demonstrate that increasing network width leads to accuracy improvements.
**Table 4: Relative $ L^2$-error Comparison Across Different Network Width for NGD method**
| Width $m$ | 20 | 40 | 80 | 160 | 320 | 640 | 1280 | 2560 |
|---|---|---|---|---|---|---|---|---|
| Relative $L^2$-error | 1.59e-03 | 7.21e-04 | 5.18e-04 | 3.8e-04 | 3.08e-04 | 2.76e-04 | 1.78e-04 | 7.05e-05 |
From an experimental perspective, NGD demonstrates rapid convergence during the training process. Compared to other optimization algorithms, it requires significantly fewer epochs to converge. Furthermore, the experimental results illustrate the strong robustness of the NGD method with respect to hyperparameter selection. Therefore, the empirical findings validate our theoretical conclusions. | Summary: The manuscript concerns convergence results for PINNs for shallow neural networks with $\operatorname{ReLU}^3$ (or certain smooth) activation functions in the overparametrized setting. Both gradient descent and natural gradient descent are considered. The considered model PDE is a heat equation.
## update after rebuttal
I maintain my score. The authors claimed that the natural gradient they are considering is different from Gauss-Newton's method, which is incorrect. I pointed this out, but never received an answer.
Claims And Evidence: Full proofs for every statement are provided.
Methods And Evaluation Criteria: Not applicable.
Theoretical Claims: I checked the proof strategy but I am not familiar enough with the mathematical machinery to certify the correctness of the proofs.
Experimental Designs Or Analyses: Not applicable.
Supplementary Material: Not applicable.
Relation To Broader Scientific Literature: The discussion of the recent literature on convergence results in the NTK regime seems reasonable, although I am no expert in the area. In the broader context, some contextualization may help:
- The form of natural gradient descent considered here is exactly the classical Gauss-Newton method. Including a short remark reminding readers of this fact should be considered.
- The analyzed method agrees with energy natural gradients as proposed by Müller and Zeinhofer **in the case of linear PDEs**. As this is also the setting considered in this paper it can be mentioned.
Essential References Not Discussed: I am not aware of any.
Other Strengths And Weaknesses: The article would greatly benefit from simulations, especially for the natural gradient part. From a practioners point of view, it is important to know to what extent theory and practice meet. Even simple experiments in one spatial dimension would already be interesting (this keeps the scaling with the dimension d under control). Examining equation (23) shows that the cost of NG is manageable: The Jacobians are of size $n \times p$, where $n = n_1 + n_2$ is the sample size and $p = m(d+2)$ is the number of trainable parameters. Consequently, assuming $n > p$ the dominating cost is $\mathcal O(n^2p)$ to compute $J\cdot J^\top$ which should be feasible for $p$ in the millions and $n$ in the hundreds — which is already quite interesting.
Other Comments Or Suggestions: My main concern with the article is the lack of simulations.
Questions For Authors: The authors frequently compare to Gao et al. when discussing their results and the improved constants. The cited results in Gao do not contain a dependence on the dimension $d$ of the computational domain. Can the authors comment on this? Is this dependence hidden in the constants of Gao et al?
Can the authors comment on the dependence of the results on the norm of the right-hand side $f$? A complicated right-hand side — for example oscillatory — leads to a complicated solution. It is well known that PINNs can struggle in such situations.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer's time and valuable feedback on our manuscript. Let us address your questions point by point.
**Q1: Relations to other methods.**
**A1**: We apologize for the insufficient background discussion of Natural Gradient Descent (NGD) in our work. Although NGD shares some similarities with the classical Gauss-Newton method, key differences exist. For instance, in our problem, the Gauss-Newton iteration is given by
$$w(k+1)=w(k)-(J(k)^T J(k))^{-1} J(k)^T u(k),$$
whereas the NGD iteration follows
$$w(k+1)=w(k)-\eta J(k)^T (J(k) J(k)^T )^{-1} u(k).$$
Müller and Zeinhofer proposed ENGD for PINNs and empirically demonstrated its ability to yield highly
accurate solutions. However, NGD formulation differs from ENGD, and they did not provide theoretical convergence guarantees.
**Q2: Lack of simulations.**
**A2**: Thanks for the suggestion. We provide some experiments to validate our theoretical results. In the revised manuscript, we will add a separate section presenting detailed experimental results, and **the code to reproduce the experiments will be added to the Github**.
The configurations used in these examples are listed in Table 1. We report the relative $L^2$-error of NGD, SGD, Adam and L-BFGS optimizers in Table 2. $N _f$ is the number of interior sampling points, and $N _b$ is the number of boundary sampling points.
**Table 1: Configurations of Different Equations**
| | $ N _f$ | $N _b$ | batch size | hidden layers | hidden neurons | activation function |
|---|---|---|---|---|---|---|
| 2D Poisson | 1000 | 200 | 100 | 1 | 128 | tanh(·) |
| 1D Heat | 1000 | 200 | 100 | 1 | 128 | tanh(·) |
| 2D Helmholtz | 1000 | 200 | 100 | 1 | 128 | tanh(·) |
**Table 2: Relative $L^2$-error of Different Optimizers**
| | SGD | Adam | L-BFGS | NGD |
|---|---|---|---|---|
| 2D Poisson | 1.45e-01 | 5.32e-03 | 3.17e-03 | **1.12e-04** |
| 1D Heat | 5.43e-01 | 6.91e-03 | 4.98e-03 | **3.42e-04** |
| 2D Helmholtz | 8.48e+00 | 1.06e+00 | 3.35e+00 | **6.67e-03** |
In all the experiments,we run the NGD and L-BFGS method for 500 epochs, while the SGD and Adam are trained for 10, 000 epochs. The loss decay during training demonstrating that the NGD method converges significantly faster than other optimization methods.
Table 3 presents the convergence performance of the NGD method with different learning rates on the 2D Poisson equation. The experimental results demonstrate that NGD maintains stable convergence across a wide range of learning rates without significant degradation in final accuracy.
**Table 3: Relative $ L^2 $-error Comparison Across Different Learning Rates for NGD method**
| | | | | | | |
|-------|-------|-------|-------|-------|-------|-------|
| Learning Rate | 0.5 | 0.1 | 0.05 | 0.01 | 0.005 | 0.001 |
| Relative $L^2$-error | 1.18e-03 | 3.24e-04 | 1.87e-04 | 1.12e-04 | 1.22e-04 | 1.68e-04 |
In addition, a comparative analysis of the model performance is performed
with progressively increasing network widths. Table 4 presents the variation of $L^2$-error with respect to network
width for 1D Poisson equation with $u_{ref}=\sin(4\pi x)$. The results demonstrate that increasing network width leads to accuracy improvements.
**Table 4: Relative $ L^2$-error Comparison Across Different Network Width for NGD method**
| | | | | | | | | |
|------|-------|-------|-------|------|-------|-------|--------|--------|
| Width $m $ | 20 | 40 | 80 | 160 | 320 | 640 | 1280 | 2560 |
| Relative $L^2 $-error | 1.59e-03 | 7.21e-04 | 5.18e-04 | 3.8e-04 | 3.08e-04 | 2.76e-04 | 1.78e-04 | 7.05e-05 |
**Q3: Dependence on $d$.**
**A3**: The dimension-related terms in Gao's results are implicitly hidden in their outcomes. However, our results outperform theirs. For instance, when estimating the initial value $L(0)$, for one of the terms $||w|| _2^2\sigma(w^Tx)$, Gao employed the bounded differences inequality, specifically applying $| ||w|| _2^2\sigma(w^Tx)|\lesssim ||w|| _2^3\sim d^{3/2}$. This leads to their estimate of $L(0)$ being at least $L(0)=\Omega(d^3)$. In contrast, our analysis, by leveraging Lemma C.2, achieves a better bound of $L(0)=\mathcal{O}(d^2)$. In their analysis, similar bounding techniques were employed in other parts as well, thus our results concerning the dimension $d$ are superior to theirs.
**Q4: Dependence on $f$.**
**A4**: From the analysis, we can see that the initial value $L(0)$ depends linearly on $f$, and the requirement for the width $m$ is $m=\Omega(L(0))$. Therefore, a complicated $f$ leads to a higher requirement on $m$, making convergence more challenging. | Summary: The paper investigates the theoretical convergence of natural gradient descent for overparameterized physics-informed neural networks under NTK regime. The paper extends the previous works on gradient descent to natural gradient descent, with improved bounds. The improvement is based on some inequality techniques adopted and the inherent benefits of natural gradient descent over the vanilla gradient descent. The mathematical proof is overall correct, and the study is meaningful for understanding and improving the training of PINNs.
## update after rebuttal
I expect authors can include experiments/implementation details, discussions, and strong motivations of the study, since NGD is not popular in PINNs training.
Since my score has already been positive, I will keep it. I think the paper should be accepted.
Claims And Evidence: Yes
Methods And Evaluation Criteria: The performance of natural gradient descent for PINNs has not been well investigated. Researchers prefer to use gradient-based methods (e.g., gradient descent, SGD, Adam) or quasi-Newton's methods (e.g., LBFGS) to train PINNs. The natural gradient descent is rarely adopted in PINNs literature. Therefore, I am concerned about the applicability of the optimization algorithm studied in this paper.
Theoretical Claims: The proof should be correct.
Experimental Designs Or Analyses: No experiments in the paper. But I think experimental investigations on natural gradient descent for PINNs is necessary. Otherwise, the analysis conducted in the paper does not make sense.
Supplementary Material: Yes. I review the appendix of detailed proofs.
Relation To Broader Scientific Literature: NA
Essential References Not Discussed: NA
Other Strengths And Weaknesses: The motivation for the analysis is not strong, since the algorithm (natural gradient descent) is never adopted for training PINNs. I think experimental investigations on natural gradient descent for PINNs is required to support your motivation and analysis. For example, natural gradient descent is exactly outperforming other methods, which is somewhat supported by your theoretical results. Does the convergence results improve with wider networks? Therefore, I think this paper's motivation and theoretical results are not supported and accompanied by some experimental evidence.
Other Comments Or Suggestions: NA
Questions For Authors: Why do you consider natural gradient descent, which is rarely applied for training PINNs? Why not consider LBFGS, which I think is more interesting.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer's time and valuable feedback on our manuscript. Let us address your questions point by point.
**Q1: Experimental investigations on NGD for PINNs.**
**A1**: Thanks for the suggestion. We provide some experiments to validate our theoretical results. In the revised manuscript, we will add a separate section presenting detailed experimental results, and **the code to reproduce the experiments will be added to the Github**.
The configurations used in these examples are listed in Table 1. We report the relative $L^2$-error of NGD, SGD, Adam and L-BFGS optimizers in Table 2. $N _f$ is the number of interior sampling points, and $N _b$ is the number of boundary sampling points.
**Table 1: Configurations of Different Equations**
| | $ N _f$ | $N _b$ | batch size | hidden layers | hidden neurons | activation function |
|---|---|---|---|---|---|---|
| 2D Poisson | 1000 | 200 | 100 | 1 | 128 | tanh(·) |
| 1D Heat | 1000 | 200 | 100 | 1 | 128 | tanh(·) |
| 2D Helmholtz | 1000 | 200 | 100 | 1 | 128 | tanh(·) |
**Table 2: Relative $L^2$-error of Different Optimizers**
| | SGD | Adam | L-BFGS | NGD |
|---|---|---|---|---|
| 2D Poisson | 1.45e-01 | 5.32e-03 | 3.17e-03 | **1.12e-04** |
| 1D Heat | 5.43e-01 | 6.91e-03 | 4.98e-03 | **3.42e-04** |
| 2D Helmholtz | 8.48e+00 | 1.06e+00 | 3.35e+00 | **6.67e-03** |
In all the experiments,we run the NGD and L-BFGS method for 500 epochs, while the SGD and Adam are trained for 10, 000 epochs. The loss decay during training demonstrating that the **NGD method converges significantly faster than other optimization methods.**
Table 3 presents the convergence performance of the NGD method with different learning rates on the 2D Poisson equation. The experimental results demonstrate that **NGD maintains stable convergence across a wide range of learning rates without significant degradation in final accuracy**.
**Table 3: Relative $ L^2 $-error Comparison Across Different Learning Rates for NGD method**
| | | | | | | |
|-------|-------|-------|-------|-------|-------|-------|
| Learning Rate | 0.5 | 0.1 | 0.05 | 0.01 | 0.005 | 0.001 |
| Relative $L^2$-error | 1.18e-03 | 3.24e-04 | 1.87e-04 | 1.12e-04 | 1.22e-04 | 1.68e-04 |
In addition, a comparative analysis of the model performance is performed
with progressively increasing network widths. Table 4 presents the variation of $L^2$-error with respect to network
width for 1D Poisson equation with $u_{ref}=\sin(4\pi x)$. The results demonstrate that **increasing network width leads to accuracy improvements** .
**Table 4: Relative $ L^2$-error Comparison Across Different Network Width for NGD method**
| | | | | | | | | |
|------|-------|-------|-------|------|-------|-------|--------|--------|
| Width $m $ | 20 | 40 | 80 | 160 | 320 | 640 | 1280 | 2560 |
| Relative $L^2 $-error | 1.59e-03 | 7.21e-04 | 5.18e-04 | 3.8e-04 | 3.08e-04 | 2.76e-04 | 1.78e-04 | 7.05e-05 |
**Q2: The influence of width for convergence results.**
**A2:** Theoretically, the algorithm achieves guaranteed convergence once the network width exceeds a sufficient threshold, with no further improvement in convergence rate from additional width increases. This fact is consistent with established results on gradient descent for PINNs and regression problems. Empirically, wider networks exhibit lower training and test errors.
**Q3: Why consider NGD?**
**A3**: From a theoretical perspective, existing convergence analyses of optimization algorithms for PINNs have primarily focused on gradient descent. However, as shown in this paper's improvements on gradient descent, gradient descent imposes relatively stringent requirements on certain hyperparameters (e.g., learning rate) and exhibits slow convergence rates. Therefore, we turned our attention to NGD, which we think may account for curvature information of the function. Theoretically, NGD has more relaxed requirements for the learning rate and achieves faster convergence rates. Experiments also demonstrate that NGD converges more rapidly during training.
As for why L-BFGS was not considered, it is because for non-convex optimization problems like PINNs, L-BFGS is more complex and harder to analyze compared to NGD. Given that recent studies have shown quasi-Newton methods (e.g., SOAP [1]) perform well in optimizing PINNs, analyzing the convergence of such algorithms for PINNs will be an important future direction. Additionally, more efficient implementations or variants of NGD also represent a key area for future research.
[1]: Vyas N, Morwani D, Zhao R, et al. Soap: Improving and stabilizing shampoo using adam[J]. arXiv preprint arXiv:2409.11321, 2024.
---
Rebuttal Comment 1.1:
Comment: Thank you so much for adding experiments comparing Adam, LBFSG, and NGD. The experimental results show the superior performances of NGD, which supports the motivation of this paper to investigate the theoretical convergence of NGD for PINNs. I am just curious about one more thing: how do you deal with the inverse of the Gram matrix? It should be extremely high-dimensional and singular. I don't think the vanilla NGD works well in practice. What kind of practical tricks do you adopt? Thank you.
---
Reply to Comment 1.1.1:
Comment: Thanks for the very timely comment. We can understand your concern about the computing of the Gram matrix.
However, for the Gram matrix in NGD, we should note that it differs from the classical Gauss-Newton method.
**Practical tricks can make computing the inverse of the Gram matrix manageable and stable**. We explain it as follows.
In our problem, the classical Gauss-Newton iteration is given by $$w(k+1)=w(k)-(J(k)^T J(k))^{-1} J(k)^T u(k),$$ whereas the NGD iteration follows $$w(k+1)=w(k)-\eta J(k)^T (J(k) J(k)^T )^{-1} u(k).$$
Here $\eta$ is the learning rate, the Jacobian $J(k)\in R^{n\times p}$, where $n=n_1+n_2$ is the sample data size and $p=m(d+2)$ is the neural network parameters, as given by Equation (23) in our paper.
The dominating computing cost is the inverse of $J\cdot J^T \in R^{n\times n}$, which can be manageable and stable for small $n$ and large $p$ (it is also noticed by Reviewer 4Wzk).
In practice, we apply **stochastic batch technique to choose small sample size $n$ in every iteration, and large network papameters $p$ to make $J\cdot J^T$ invertible** (we also use torch.pinverse($J\cdot J^T$) in the code). $$
For example, for the 2D Poisson equation in above Table 2, the size in $J$ is $n=100$ and $p=128*(2+2)=512$, and it is enough to make the computing stable. Much larger $p=m(d+2)$ is also given in above Table 4, shows accuracy improved when network parameter $p$ incresed.
We note that for the classical Gauss-Newton iteration, the Gram matrix is $J^T\cdot J \in R^{p\times p}$.
For larger network parameters $p$, the Gram matrix of Gauss-Newton method will be extremely high-dimensional and singular as you said.
However, this does not occur in our NGD method.
Finally, we believe that practical implementation techniques for NGD in PINNs—similar to the classical Newton method's evolution into L-BFGS—require further investigation and will be a focus of our future work.
We would be most grateful for your input if any additional revisions are needed during the remaining time of the discussion period. | Summary: The paper investigates the convergence properties of gradient descent (GD) and natural gradient descent (NGD) for training two-layer Physics-Informed Neural Networks (PINNs). The authors improve the learning rate of GD from $\mathcal{O}(\lambda_0)$ to $\mathcal{O}(1/\|H^{\infty}\|_2)$, where $\lambda_0$ is the least eigenvalue of the Gram matrix $H^{\infty}$, leading to faster convergence. They also establish the positive definiteness of Gram matrices for various smooth activation functions, such as logistic, softplus, and hyperbolic tangent, applicable to a wide range of PDEs. The paper demonstrates that NGD can achieve a learning rate of $\mathcal{O}(1)$, resulting in a convergence rate independent of the Gram matrix, with quadratic convergence for smooth activation functions. By introducing a new recursion formula for GD, the authors reduce the requirements on learning rate and network width, improving convergence results. The study highlights the advantages of NGD over GD, showing faster convergence rates and less stringent network width requirements, making it a promising approach for efficiently training PINNs in scientific computing applications involving PDEs.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: N/A
Theoretical Claims: Yes.
Experimental Designs Or Analyses: N/A.
Supplementary Material: Yes, I carefully checked all the supplementary material.
Relation To Broader Scientific Literature: The key contributions of the paper are closely related to several strands of prior research in the broader scientific literature, particularly in the fields of optimization, neural networks, and PINNs.
The paper builds on prior work that demonstrates the convergence of GD for over-parameterized neural networks. Specifically, it extends findings from Du et al. (2018, 2019) and Gao et al. (2023), which show that GD can achieve zero training loss under over-parameterization. The authors improve upon these results by providing a better learning rate and milder requirements on network width.
Essential References Not Discussed: No specific related work not discussed.
Other Strengths And Weaknesses: The current paper is limited to a convergence analysis, based on the improvement proposed in the paper, it may conclude a method to guide the training of PINNs in practice. However, we don't see such a method or guidance.
Other Comments Or Suggestions: No.
Questions For Authors: Could the authors propose a method that helps the design of PINNs?
Can the authors conduct experiments to verify the correctness in practice?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for their valuable time and constructive suggestions on our work.
**Q1: Methods to help the PINNs' design.**
**A1:** Thanks for the kind suggestion. The results of the paper motivate us to establish "good" Gram matrix $H^{\infty}$ to reduce the strictly learning rate requirement for GD optimizers, and this can be achieved by properly design the loss function, the network architecture, and so on.
For example, since the learning rate requirement for GD is related to $\mathcal{O}(1/||H^{\infty}||_2)$, existing works [1] have adaptively adjusted the weight in the different loss components of PINNs, to improve the eigenvalue distribution of the Gram matrix $H^{\infty}$, thus we can use a normal learning rate to accelerate PINNs' training and convergence.
[1]Wang S, Yu X, Perdikaris P. When and why PINNs fail to train: A neural tangent kernel perspective[J]. Journal of Computational Physics, 2022, 449: 110768.
**Q2: Add experiments to verify theoretical findings.**
**A2:** Thanks for the suggestion. Here, we provide some experiments to validate our theoretical results. In the revised version, we will add the detailed experimental results as a separate section, and **the code to reproduce the experiments will be added to the Github**.
We conduct experiments on three problems, the 2D Poisson equation with reference solution $u_{ref}=\sin(\pi x)\sin(\pi y)$, the 1D Heat equation with reference solution $u_{ref}=e^{-\frac{\pi^2 t}{4}}\sin(\pi x)$, and the 2D Helmholtz equation with wave number
$k=4$, and the reference solution $u_{ref}=\sin(\pi x)\sin(k\pi y)$.
All codes are conducted by the Pytorch framework. The configurations used in these examples are listed in Table 1. We report the relative $L^2$-error of the NGD optimizer, the SGD optimizer, the Adam optimizer, and the L-BFGS optimizer in Table 2.
The relative $L^2$-error is defined as follows:
$$ \frac{||\hat{u}-u _{ref}|| _2}{||u _{ref}|| _2},$$
where $\hat{u}$ denotes the predicted solution and $u _{ref}$ represents the reference solution. $N _f$ is the number of interior sampling points and $N _b$ is the number of boundary sampling points.
**Table 1: Configurations of Different Equations**
| | $ N _f$ | $N _b$ | batch size | hidden layers | hidden neurons | activation function |
|---|---|---|---|---|---|---|
| 2D Poisson | 1000 | 200 | 100 | 1 | 128 | tanh(·) |
| 1D Heat | 1000 | 200 | 100 | 1 | 128 | tanh(·) |
| 2D Helmholtz | 1000 | 200 | 100 | 1 | 128 | tanh(·) |
**Table 2: Relative $L^2$-error of Different Optimizers**
| | SGD | Adam | L-BFGS | NGD |
|---|---|---|---|---|
| 2D Poisson | 1.45e-01 | 5.32e-03 | 3.17e-03 | **1.12e-04** |
| 1D Heat | 5.43e-01 | 6.91e-03 | 4.98e-03 | **3.42e-04** |
| 2D Helmholtz | 8.48e+00 | 1.06e+00 | 3.35e+00 | **6.67e-03** |
In all the experiments,we run the NGD and L-BFGS method for 500 epochs, while the SGD and Adam are trained for 10, 000 epochs. The loss decay during training demonstrating that the NGD method converges significantly faster than other optimization methods.
Table 3 presents the convergence performance of the NGD method with different learning rates on the 2D Poisson equation. The experimental results demonstrate that NGD maintains stable convergence across a wide range of learning rates without significant degradation in final accuracy.
**Table 3: Relative $ L^2 $-error Comparison Across Different Learning Rates for NGD method**
| Learning Rate | 0.5 | 0.1 | 0.05 | 0.01 | 0.005 | 0.001 |
|---|---|---|---|---|---|---|
| Relative $L^2$-error | 1.18e-03 | 3.24e-04 | 1.87e-04 | 1.12e-04 | 1.22e-04 | 1.68e-04 |
In addition, a comparative analysis of the model performance is performed with progressively increasing network widths. Table 4 presents the variation of $L^2$-error with respect to network width for 1D Poisson equation with $u_{ref}=\sin(4\pi x)$. The results demonstrate that increasing network width leads to accuracy improvements.
**Table 4: Relative $ L^2$-error Comparison Across Different Network Width for NGD method**
| Width $m$ | 20 | 40 | 80 | 160 | 320 | 640 | 1280 | 2560 |
|---|---|---|---|---|---|---|---|---|
| Relative $L^2$-error | 1.59e-03 | 7.21e-04 | 5.18e-04 | 3.8e-04 | 3.08e-04 | 2.76e-04 | 1.78e-04 | 7.05e-05 |
From an experimental perspective, NGD demonstrates rapid convergence during the training process. Compared to other optimization algorithms, it requires significantly fewer epochs to converge. Furthermore, the experimental results illustrate the strong robustness of the NGD method with respect to hyperparameter selection. Therefore, the empirical findings validate our theoretical conclusions. | null | null | null | null | null | null |
How Far Is Video Generation from World Model: A Physical Law Perspective | Accept (poster) | Summary: This paper investigates whether state-of-the-art video generative models can learn fundamental physical laws from purely visual data. Inspired by the vision of video models as “world simulators” (e.g. OpenAI’s Sora), the authors conduct a systematic study using a controlled 2D physics environment. They construct a simulation testbed (based on Box2D) where simple geometric objects move and collide according to known classical mechanics laws (uniform motion, elastic collision, parabolic motion). Using this testbed, they generate large training datasets (up to 3 million videos) and train diffusion-based video generation models (with a VAE + transformer architecture similar to Sora) to predict future frames from initial conditions. The goal is to evaluate if these models, given enough data and scale, can infer and obey the underlying physical laws without explicit supervision.
Claims And Evidence: The authors claim that current video generation models fail to infer universal physical laws and instead generalize by referencing similar training cases. This is supported by experiments showing that, while models can perfectly generalize within the training distribution, they break down on novel scenarios. The structured scaling analysis provides convincing evidence: even large diffusion models trained on massive data cannot correctly predict physics in unseen setups, underscoring that scaling alone is insufficient. The claim is well-supported by quantitative metrics (e.g. high error in predicting object velocity in OOD tests) and qualitative observations of physically implausible generated motions.
Methods And Evaluation Criteria: The methods and evaluation criteria for the problem are well-chosen. The 2D simulator approach is an excellent strategic decision to tackle an otherwise intractable evaluation problem. The use of a strong diffusion model ensures the study tests the frontier of what’s possible. The experiments are structured to answer specific questions (ID vs OOD vs combinatorial generalization), and the metrics directly measure success on those terms. I also appreciate that the authors validated their VAE wasn’t a bottleneck – they report in the appendix that VAE reconstructions of videos have minimal error, so any mistakes are from the diffusion model learning, not from lossy compression. This attention to detail in evaluation bolsters confidence. Overall, the methods are appropriate and quite thorough for this study.
Theoretical Claims: This work is primarily empirical.
Experimental Designs Or Analyses: The experimental design is well-structured. The authors systematically scale up training data size and model capacity to test how generalization improves (or doesn’t) with scale. They evaluate model performance on: (1) the training distribution, (2) held-out but similar (in-distribution) scenarios, (3) genuinely novel (OOD) scenarios that involve new object properties or dynamics, and (4) combinatorial cases that mix seen components in new ways. This comprehensive coverage provides a strong empirical foundation to analyze generalization. The results show perfect in-distribution generalization (models can interpolate within the training range) and some combinatorial generalization (performance improves gradually with scale for mixtures of seen concepts). Crucially, they demonstrate complete failure in true OOD generalization, even for the largest models.
Supplementary Material: The supplementary material includes additional implementation details, model architectures, and further quantitative results. It also provides extra visualization samples of the generated videos, which help in assessing qualitative performance. These additions enhance transparency and reproducibility of the research. However, the supplement could be improved by including more qualitative examples of failure cases in OOD or combinatorial scenarios. For instance, showing a side-by-side comparison of ground-truth vs. generated trajectories in challenging cases would illustrate the model’s shortcomings more concretely and help readers visually grasp how the model deviates from true physics.
Relation To Broader Scientific Literature: This paper builds upon prior work in video generation and world models, engaging with the question of whether large generative models can learn physics without explicit supervision. It connects to a growing literature on using foundation models as simulators for real-world processes, referencing studies that scale video models (like OpenAI’s Sora) and those investigating physical common sense in AI. By demonstrating the limits of current diffusion models, the paper contributes to ongoing discussions about the necessity of structure and inductive biases in AI for learning physics. A point that could strengthen the literature comparison is a deeper discussion of alternative approaches such as physics-informed neural networks, symbolic regression of physical laws, or structured simulation engines. Contrasting the diffusion model’s performance with these could highlight what is missing (e.g., explicit enforcement of Newtonian mechanics or object permanence) and emphasize that certain insights from physics-based learning might be needed in conjunction with data-driven models.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: While I like the paper overall, I don't know how I feel about the paper's overall intuition. Of course, videos models cannot model physics OOD, but what we want is that scaling could make a large-scale pretrained video model to be a sufficient approximation of a real-world projection. The authors primarily conduct experiments by training on smaller domains physics informed data (with some further tests on CogVid and SVD), and conclude the VDMs do not model OOD physical scenarios --- well, I personally just think that is too obvious, and isn't the whole point of scaling is to model more things to be ID? Nevertheless, I think the paper could be accepted, but will not object if it is rejected.
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your thorough review and for appreciating our method design. Please find our responses to your questions below.
`1. More qualitative examples of failure cases in OOD or combinatorial scenarios`
Thank you for the helpful suggestion. In Appendix A.8, we have included failure cases with side-by-side comparisons in both the OOD (Figure 20) and the Combinatorial Generalization setting (Figure 22).
For OOD setting, Figure 20 already illustrates all representative failure modes, as failure in this setting primarily involves inaccuracies in velocity prediction.
For Combinatorial setting, We are now providing additional 8 failure cases via the anonymous links below. (In accordance with the ICML rebuttal policy, we include images only, not videos.) These examples will be added into the revision. We will also include many demo **videos** on our project webpage once the paper is published.
https://ibb.co/Pv5qgSfL
https://ibb.co/F9G5nSX
`2. discussion of alternative approaches such as PINN, symbolic regression of physical laws, or structured simulation engines`
We appreciate your suggestion to expand the discussion to compare video diffusion model (VDMs) with alternative approaches for physics modeling and to highlight what may be missing in purely data-driven models.
1. **Physical Consistency and Explicit Inductive Biases**: Current VDMs lack explicit representations of physical laws and instead rely on statistical correlations learned from data. As our findings show, this can result in failure cases under conditions such as unseen object velocities or mass values. In contrast, approaches like PINN, symbolic regression, and structured simulation engines encode or recover governing equations, offering stronger guarantees of physical consistency and better extrapolation to unseen velocity and mass values [1, 2].
2. Structured methods often lack **visual fidelity and scalability**. For example, PINNs are typically tailored to a single equation and require retraining when parameters or initial conditions change [3]. Most existing work also focuses on small-scale, low-dimensional problems, limiting applicability to realistic video generation. In contrast, VDMs generate high-fidelity visuals and scale more effectively across diverse scenarios.
3. **Complementarity, Not Competition**:
These observations point toward a promising direction: combining the strengths of both methods. For instance, physics engines or PINNs could be used to predict future physical states, while VDMs handle rendering and visual synthesis. Such hybrid systems could preserve both physical accuracy and visual realism.
We will incorporate the discussions into the revision.
[1] PINN: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations.
[2] Genesis: A Universal and Generative Physics Engine for Robotics and Beyond
[3] Learning the solution operator of parametric partial differential equations with physics-informed DeepOnets.
`3. the conclusion that the VDMs do not model OOD physical scenarios is too obvious, and isn't the whole point of scaling is to model more things to be ID?`
Thank you for raising this important point. While the failure of current VDMs to model OOD physics may seem expected, our paper provides deeper insight:
1. **Generalization-by-memorization mechanism**: Despite optimism that scaling VDMs enables generalization to complex, unseen scenarios [4,5], our experiments show that VDMs often rely on retrieving patterns from similar training examples rather than learning underlying physical principles. This generalization-by-memorization mechanism, not been clearly articulated prior to our work, underscores the limitations of VDMs and the need for structural priors and inductive biases in physical modeling.
2. **The Limits of Turning OOD into ID — and Actionable Insights**: While we agree that the goal of scaling is to absorb more variation into the in-distribution regime, **real-world video data is vast, continuous, and high-dimensional, making it more difficult to fully cover than pure language**. For example, in robotics, variables such as object velocity, joint configurations, camera angles, noisy backgrounds, and task goals vary across a continuous and combinatorially large space.
Our paper contributes actionable insights to this challenge: We demonstrate that **scaling combinatorial diversity** in the training data—rather than simply increasing dataset size—is significantly more effective for improving physical video modeling.
We hope this helps clarify the intuition and contributions of our work.
[4] OpenAI. Sora Technical Report: Video Generation Models as World Simulators.
[5] X.AI. 1X World Model. https://www.1x.tech/discover/1x-world-model
`Summary`
We hope our responses have addressed your concerns. If you have any further questions, please feel free to reach out. | Summary: This paper explores whether scaling video generation models enables them to learn physical laws. It first provides a thorough problem definition and then evaluates video generation models under three scenarios: in-distribution, out-of-distribution, and combinatorial generalization. The authors develop a 2D simulation testbed that simulates three fundamental physical principles. Through comprehensive experiments on simulated data, they conclude that scaling video generation models alone is insufficient for effective world modeling.
Claims And Evidence: The central claim of this paper is that scaling video generation models is insufficient to uncover physical laws. This claim is well-supported by extensive experimental validation.
A secondary claim suggests that video models prioritize different factors when referencing training data. However, this conclusion—particularly the ranking of these factors—is drawn from a single scenario, uniform linear motion, which raises concerns about its generalizability to other physical contexts.
Methods And Evaluation Criteria: The paper introduces three physical scenarios to assess the model’s ability to infer physical laws. This approach is a reasonable simplification, as the selected rules are fundamental and representative of broader physical principles.
Theoretical Claims: To the best of my knowledge, the theoretical claims appear to be correct.
Experimental Designs Or Analyses: 1. In the Combinatorial Generalization setting, the paper selects eight types of objects but does not provide a rationale for this choice.
2. Additionally, in Combinatorial Generalization, the evaluation excludes velocity as a metric and instead relies on FVD, SSIM, PSNR, and LPIPS. However, these are image/video metrics and do not guarantee physical correctness.
3. As mentioned in Section 3.2, the model fails to improve performance as data or parameters scale up. However, Figure 3 shows that as the training region expands, accuracy improves for OOD data. How should this discrepancy be interpreted?
Supplementary Material: There is no supplementary material.
Relation To Broader Scientific Literature: This paper is related to machine learning, particularly in video generation and world modeling.
Essential References Not Discussed: None.
Other Strengths And Weaknesses: This paper focuses on an important problem—whether scaling video generation models can enable world modeling. I believe the insights provided will be beneficial to the broader research community.
Other Comments Or Suggestions: Minor Issues:
- Incorrect characters in Line80 ("color ¿ size ¿ velocity ¿")
- Some sections lack textual content, e.g., A.4.1, A.4.2, and A.4.4.
Questions For Authors: Line 258: It is mentioned that only the first frame is used as a condition. How does the model infer the physical attributes of objects and accurately predict subsequent events?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: `1. The prioritization conclusion is drawn from single uniform linear motion, rasing concerns about its generalizability to other physical contexts.`
Thank you for your thoughtful comment. We selected the uniform linear motion scenario as it provides **a highly representative and clean setting**—velocity is easy to measure, and with only one object, the pixel regions corresponding to attributes such as size, color, and speed are not affected by interference from other objects
To verify the robustness of our conclusion, we also ran experiments on **parabolic motion and collisions**. **The factor prioritization remained consistent**, supporting the generalizability of our findings. We will include these additional results in the revision.
`2. a rationale for selecting eight types of objects`
Thank you for your question. To evaluate combinatorial generalization, we define a "combination" as the physical interaction between different object types. The Phyre simulator we use provides a total of eight distinct object types that enable a wide range of physical interactions. We include **all** of them to create as complex a combinatorial space as possible for evaluating combinatorial generalization. We appreciate your suggestion and will clarify this rationale in the revision.
`3. in Combinatorial Generalization, the evaluation **excludes velocity as a metric** and instead relies on FVD, SSIM, PSNR, and LPIPS. However, these are image/video metrics and do not guarantee physical correctness.`
Thank you for your valuable comment. Please allow us to clarify why velocity was excluded as a metric in the Combinatorial Generalization setting, unlike in the ID/OOD setting.
In the ID/OOD setting, scenes are simple (e.g., 1–2 colored balls on a plain background), enabling reliable position and velocity estimation via pixel averaging and frame differencing (line 203). In contrast, the Combinatorial setting includes many irregularly shaped and similarly colored objects, making pixel assignments ambiguous and position estimation unreliable. We also tested methods such as Hough circle detection and pretrained object detectors, but they resulted in significant errors.
Given these challenges, we excluded velocity as a metric and instead relied on a combination of objective video fidelity metrics and human evaluation:
1. **FVD, SSIM, PSNR, and LPIPS** measure fidelity of generated videos to ground-truth videos ones governed by physical laws. While not exlicitly designed for physical correctness, they reflect plausibility to some extent by measuring consistency and realism with the groundtruth —an approach consistent with prior work like Genie [1].
2. We also conducted **human evaluation**, where each evaluator was specifically instructed to **focus on assessing violations of physical laws**.
We believe this combined approach offers a comprehensive evaluation for physical correctness in the complex setting. We will clarify this in the revised paper.
[1] Bruce, Jake, et al. "Genie: Generative interactive environments." ICML 2024, Best paper.
`4. How should the discrepancy between Section 3.2 and Figure 3 be interpreted?`
Thank you for your insightful question. **The difference arises from the type of generalization being evaluated**.
In Section 3.2, the evaluation is **strictly OOD extrapolation**. For example, if the training velocity range is [1.0,4.0], then the test set contains velocities outside that range, such as [0.0,0.8]∪[4.5,6.0]. In this setting, simply increasing the amount of data or model size does not significantly improve performance due to the difficulty of truly grasping the physical law and true extrapolation.
In contrast, Figure 3 reveals **a transition from extrapolation to interpolation**, where the test set lies between two disjoint training subsets, more like an interpolation scenario. As the gap between the two training regions narrows, model performance on the test set improves, reflecting the model's stronger ability to interpolate.
We will clarify this distinction in the revision.
`5. no supplementary material`
We have well-organized code and datasets and will make them public once the paper is officially published to support future research.
`6. With only the first frame as input, how does the model infer object properties and predict future events?`
In the Combinatorial Generalization setting, each object has a **distinct visual appearance and color**, allowing the model to** infer object type and physical attributes by learning associations** from training data.
As all objects **start static with zero velocity**, the model can use these inferred attributes to predict future events.
`7. Minor Issues`
Thank you for pointing out these minor issues. We have corrected them in the revision.
`Summary`
We hope our responses have addressed your concerns and strengthened confidence in our paper. | Summary: The authors create a benchmark to evaluate the physical understanding of large video models at scale. Specifically, they measure the generalization performance of the model under variations of physically meaningful quantities such as color, shape, size and velocity.
Claims And Evidence: They claim that the model does not integrate fundamental physical laws which they demonstrate by showing that scaling the dataset size and parameter number enables the model to generalize inside the training distribution but not outside. They claim that the model has a priority order when generalizing color > size > velocity > shape which they demonstrate by switching attribute between training and testing.
Methods And Evaluation Criteria: The author build there own benchmark. The benchmark is a class of 2D physics environments of multiple objects colliding and parameterized by shape, color, size and velocity which enables to isolate the evaluation of the physics understanding.
Theoretical Claims: N/A
Experimental Designs Or Analyses: They use a significant number of 100 to 1000 test cases per experiment.
Supplementary Material: All parts.
Relation To Broader Scientific Literature: Evaluating the adherence of large models to physical laws is a very important direction, given the wide adoption of these models in the industry.
Essential References Not Discussed: There are several previous work evaluating the physical understanding of large video models that are not cited:
1) Videophy: Evaluating physical commonsense for video generation
2) T2V-CompBench: A Comprehensive Benchmark for Compositional Text-to-video Generation
3) Devil: A comprehensive benchmark for dynamics evaluation in video generation
Furthermore, benchmark evaluating large video models more broadly:
1) Vbench: Comprehensive benchmark suite for video generative models.
2) Evalcrafter: Benchmarking and evaluating large video generation models
Other Strengths And Weaknesses: Strength: the paper makes an effort to investigate why they observe such results which is unlike many benchmark paper on physical accuracy of video generative models
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your thorough review and for supporting the acceptance of our work.
We appreciate you pointing out these relevant works on evaluating the physical understanding of large video models. We agree that incorporating them will help position our contributions within the broader landscape of physical evaluation in video generation research. In the revised version of the paper, we will include a discussion of these works—Videophy, T2V-CompBench, Devil, VBench, and EvalCrafter—in the Related Work section under the “Video Generation” subsection. | Summary: This paper evaluates if scaling video generation models enables learning physical laws. Using a 2D simulator for object motion governed by classical mechanics, experiments reveal: (1) near-perfect in-distribution (ID) generalization with scaling, (2) failure in out-of-distribution (OOD) scenarios despite scaling, and (3) improved combinatorial generalization via increased data diversity. Models exhibit "case-based" generalization, prioritizing attributes (color > size > velocity > shape) rather than abstract rules. Key contributions include systematic scaling analysis and insights into model biases.
Claims And Evidence: Claims are largely supported. OOD failure is evidenced by consistent high errors across scaling levels. Case-based generalization is validated through experiments with flipped training data and attribute conflicts. However, the prioritization hierarchy (color > size > velocity > shape) is tested only in controlled scenarios; broader validation (e.g., real-world textures) is needed to confirm universality.
Methods And Evaluation Criteria: The 2D simulator effectively isolates physical variables, and velocity error metrics align with the goal of assessing physical law adherence. Human evaluations for combinatorial cases add robustness. However, pixel-level metrics (SSIM/PSNR) may not fully capture physical plausibility, and fixed VAE usage limits exploration of end-to-end training benefits.
Theoretical Claims: No formal theoretical claims are made. The framework for evaluating generalization (ID/OOD/combinatorial) is conceptual but well-defined. The empirical focus aligns with the paper’s goals.
Experimental Designs Or Analyses: Scaling experiments (model/data size) are rigorous, and controlled attribute comparisons clarify prioritization. However, testing only three physical scenarios (linear motion, collision, parabola) limits generalizability.
Supplementary Material: I've reviewed all parts.
Relation To Broader Scientific Literature: Connects strongly to prior work on world models and LLM memorization. Challenges assumptions in video generation (e.g., Sora’s physical reasoning claims) by demonstrating OOD limitations. Advances understanding of scaling’s role in combinatorial generalization, aligning with trends in foundation models.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Novel insights into case-based generalization; actionable scaling guidelines for combinatorial diversity; clear challenge to prevailing narratives about video models as world simulators. Weaknesses: Limited scenario diversity (2D, synthetic data); over-reliance on human evaluation for combinatorial cases; minimal discussion of real-world applicability.
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thanks for supporting the acceptance of our work. Please find our responses to your questions below.
`1. pixel-level metrics (SSIM/PSNR) may not fully capture physical plausibility; over-reliance on human evaluation for combinatorial cases`
Thank you for your comment. We agree that pixel-level metrics alone are insufficient to fully capture physical plausibility. However, we relied on a combination of objective video fidelity metrics and human evaluation:
1. **FVD, SSIM, PSNR, and LPIPS measure fidelity of generated videos to ground-truth videos** governed by physical laws. While not exlicitly designed for physical correctness, they reflect plausibility to some extent by measuring consistency and realism with the groundtruth —an approach consistent with prior work like Genie [1].
2. As you pointed out, fidelity alone may not fully represent physical plausibility. To address this, we also conducted a human evaluation, where each evaluator was specifically instructed to **focus on assessing violations of physical laws**.
We believe this combined approach offers a comprehensive evaluation for physical correctness in the complex setting.
[1] Bruce, Jake, et al. "Genie: Generative interactive environments." ICML 2024, Best paper.
`2. fixed VAE usage limits exploration of end-to-end training benefits`
Thanks for your question. Here we explain why we chose to fix the VAE and why doing so does not limit the performance of the diffusion model in our setting:
1. **Training stability**: During diffusion training, the VAE is used to define the training objective. Updating the VAE during diffusion training makes the latent space unstable, slowing convergence. Hence, widely used architectures like Stable Diffusion 3, DALL·E 3, and Hunyuan Video pretrain and fix the VAE.
2. **The VAE is not a bottleneck in our setup**. As shown in Appendix A.3.2, we validate that VAE reconstructions of the input videos exhibit minimal error, indicating that most inaccuracies arise from the diffusion model rather than the VAE. This was also positively noted by Reviewer hEH6.
We hope this explanation clarifies our rationale and addresses your concerns about fixing the VAE.
`3. The prioritization hierarchy (color > size > velocity > shape) is tested only in controlled scenarios; testing only three physical scenarios (linear motion, collision, parabola) limits generalizability; Limited scenario diversity (2D, synthetic data); broader validation (e.g., real-world textures) is needed to confirm universality;`
Thank you for your valuable insights. We use simplified synthetic scenarios for the following reasons:
1. **Abundant and Controllable Data**: Synthetic settings enable large-scale, controlled data generation, allowing systematic study of specific physical principles. **Defining settings like ID/OOD or combinatorial generalization is challenging in real-world datasets**.
2. **Isolated Physical Laws**: Each synthetic scenario is governed by a single, well-defined kinematic law. In contrast, real-world videos often involve multiple entangled factors (e.g., unknown friction, unobservable forces), making it hard to attribute behavior to specific laws.
3. **Measurable Physical Quantities**: In our controlled setup, physical quantities like velocity and mass can be reliably extracted from video frames. In real-world scenarios, such values are often unobservable, making it hard to verify whether generated videos obey physical laws.
By simplifying the rendering process, we isolate core challenges in learning physical dynamics, making our experiments quantitatively tractable and our findings interpretable.
However, we agree that broader validation with realistic data is important for future work. This would require great effort in collecting and curating controllable real-world data, and developing new metrics for evaluating physical consistency.
We appreciate your suggestions and welcome further discussion.
`4. minimal discussion of real-world applicability.`
Our work focuses on scientific insights, particularly regarding the underlying mechanisms of generalization in physical video modeling. The insights from the paper can inform such real-world scenarios:
Real-world video data is vast, continuous, and high-dimensional, making it more difficult to fully cover than pure language. For example, In real-world robotics, variables such as object velocity, joint configurations, camera angles, noisy backgrounds, and task goals vary across a continuous and combinatorially large space.
**Our paper contributes actionable insights to this challenge**: We show that **scaling combinatorial diversity** in the training data—rather than simply increasing dataset size—is significantly more effective for improving physical video modeling. And it also implies that Model Scaling is Effective When Supported by Diversity.
We will include this discussion in the revision.
`Summary`
We hope our responses improved your confidence in our paper. | null | null | null | null | null | null |
Efficient Robotic Policy Learning via Latent Space Backward Planning | Accept (poster) | Summary: The authors introduce LBP (Latent space Backward Planning), a novel approach for robotic planning. LBP works by grounding tasks into final latent goals and recursively predicting intermediate subgoals backward toward the current state. The authors evaluate LBP on simulation benchmarks and real-robot environments, demonstrating its performance over existing methods for long-horizon, multi-stage tasks.
Claims And Evidence: The experimental validation supports the claims by the authors. LBP achieves 82.3% success rate on LIBERO-LONG, outperforming baselines like MPI (77.3%) and Seer (78.6%). Ablation studies validate the contribution of each component, showing significant performance drops when removing key elements.
Methods And Evaluation Criteria: The methods are appropriate for the problem.
Theoretical Claims: N/A
Experimental Designs Or Analyses: Experimental designs are comprehensive, evaluating on 10 LIBERO-LONG tasks and 4 real-world tasks against multiple baselines. Ablation studies effectively isolate component contributions.
Supplementary Material: I watched the videos on the companion website.
Relation To Broader Scientific Literature: Ok
Essential References Not Discussed: References are complete. Perhaps there may be some marginal connections with Hindsight Experience Replay.
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely appreciate your positive feedback and recognition of our work! If you have any further concerns or questions related to LBP, we would be happy to discuss them. | Summary: In this work, the author proposed a robotic manipulation method called LBP. This method first grounds the task into final latent goals and then recursively predicts the intermediate subgoals closer to the current state. Compared to previous fine-grained approaches, LBP is more lightweight and less prone to accumulating inaccuracies. For implementation, the goal predictor and subgoal predictor of LBP only use two-layer MLPs and use a cross-attention block to realize the goal-fusion model. The effectiveness of LBP is proven by the experiments on both the LIBERO-LONG benchmark and four real-world long-horizon tasks.
## Update after rebuttal
The generalization capabilities of LBP are demonstrated by the additional results of shifting cups. And, my misunderstandings about the baseline selection have been well addressed. However, the real-world task settings are still very simple, I'm prone to maintain my score.
Claims And Evidence: Yes, the claims made in the submission are supported by clear and convincing evidence.
Methods And Evaluation Criteria: The proposed methods make sense, and the evaluations contain both simulation and real-world experiments, which makes the evaluation results comprehensive.
Theoretical Claims: I have checked the soundness of LBP’s theoretical claims, especially for the derivative of Eq (3,4,5) in the Sec 4.2 “Predicting Subgoals with a Backward Scheme” part.
Experimental Designs Or Analyses: - In the LIBERO-LONG benchmark, two versions of LBP have achieved almost state-of-the-art performance. However, on certain tasks such as task 6 and 7, LBP still has a large gap behind the best method. Overall, LBP’s average performance is the strongest.
- The presentation of real-world results is great. Figure 4 delivers a direct impression of each method’s performance at each stage. However, the long-horizon tasks involved in the real-world experiments are simple pick-and-place or stacking tasks. It could be better if the real-world experiments involve more contact-rich tasks, e.g. articulated object manipulation.
- Besides, the baselines selected in real-world experiments are not strong enough. I would like to recommend adding R3M[1], VC-1[2], DP[3], or other policies into your comparisons.
[1] Suraj Nair, Aravind Rajeswaran, Vikash Kumar, Chelsea Finn, and Abhinav Gupta. R3M: A universal visual representation for robot manipulation. In CoRL, 2022.
[2] Arjun Majumdar, Karmesh Yadav, Sergio Arnaud, Yecheng Jason Ma, Claire Chen, Sneha Silwal, Aryan Jain, Vincent-Pierre Berges, Pieter Abbeel, Jitendra Malik, Dhruv Batra, Yixin Lin, Oleksandr Maksymets, Aravind Rajeswaran, Franziska Meier. Where are we in the search for an Artificial Visual Cortex for Embodied Intelligence? In arXiv, 2023.
[3] Cheng Chi, Zhenjia Xu, Siyuan Feng, Eric Cousineau, Yilun Du, Benjamin Burchfiel, Russ Tedrake, Shuran Song. Diffusion Policy: Visuomotor Policy Learning via Action Diffusion. In RSS, 2023.
Supplementary Material: I have reviewed the supplementary materials, including the implementation details, benchmark details, and additional results.
Relation To Broader Scientific Literature: Previous methods usually predict consecutive frames to model future outcomes, which could bring the propagation of inaccuracies. LBP is a new scheme that aims to achieve the balance between efficiency and long-horizon processing capabilities.
Essential References Not Discussed: No
Other Strengths And Weaknesses: No generalization experiment results are provided to prove LBP’s robustness.
Other Comments Or Suggestions: No
Questions For Authors: No
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks for the reviewer's positive feedback and recognition of our work! Below are our responses to the concerns raised.
## Experimental Designs Or Analyses
>The presentation of real-world results is great...It could be better to involve more contact-rich tasks.
Thanks for your suggestion! We are willing to try different classes of robotic tasks in future version, including contact-rich tasks.
> The baselines selected in real-world experiments are not strong enough. I would like to recommend adding R3M, VC-1, DP, or other policies into your comparisons.
+ In our experiments, **the LCBC baseline is actually implemented using Diffusion Policies (DP) [1] with language instructions**. More details can be found in Appendix A of our paper.
+ R3M [2] and VC-1 [3] primarily focus on representation learning, **while LBP is a planning framework that allows flexible representation choices**. For planning in latent space, we adopt DecisionNCE [4] and SigLIP [5], two recent strong methods in robotic representation learning. As shown in [4], DecisionNCE outperforms R3M, making it a sufficiently strong choice for our experiment. SigLIP has been widely adopted in many robotic frameworks like OpenVLA [6].
## Other Strengths And Weaknesses
> No generalization experiment results are provided to prove LBP’s robustness.
+ We test LBP on the longest real-world task `shift cups` with different backgrounds and distracting objects and find that **LBP maintains robust performance in these complex scenarios**, still outperforming the strongest baseline LCBC in base setting. The corresponding videos have also been updated on our website `(Click the link at the end of our abstract)`.
+ In our planning framework, the generalization capability also depends on the selected latent space. If adopting stronger latent spaces, the generalization capability of LBP can be further improved.
||shift cups|||||
|-|-|-|-|-|-|
||stage 1|stage 2|stage 3|stage 4|stage 5|
|LCBC (Base setting)|85.0|55.0|48.3|20.8|0.0|
|LBP (Distracting objects)|87.5|75.8|48.3|35.0|9.0|
|LBP (Different backgrounds)|91.6|84.1|55.8|37.5|13.3|
|LBP (Base setting)|97.5|87.5|74.1|50|26.6|
[1] Chi, et al. Diffusion Policy: Visuomotor Policy Learning via Action Diffusion. RSS 2023.
[2] Nair, et al. R3M: A universal visual representation for robot manipulation. CoRL 2022.
[4] Majumdar, et al. Where are we in the search for an Artificial Visual Cortex for Embodied Intelligence? NeurIPS 2023.
[4] Li, et al. DecisionNCE: Embodied Multimodal Representations via Implicit Preference Learning. ICML 2024.
[5] Zhai, et al. Sigmoid Loss for Language Image Pre-Training. ICCV 2023.
[6] Kim, et al. OpenVLA: An Open-Source Vision-Language-Action Model. CoRL 2024.
---
Rebuttal Comment 1.1:
Comment: The generalization capabilities of LBP are demonstrated by the additional results of shifting cups. And, my misunderstandings about the baseline selection have been well addressed. However, the real-world task settings are still very simple, I'm prone to maintain my score.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate your positive recognition of our work. Let us address your concerns on our real-world task settings.
> On the real-world task settings
+ Due to the limited timeframe of rebuttal, we are unable to include additional real-world tasks. However, we are fully committed to exploring other tasks in the future as suggested by the reviewer.
+ Nevertheless, we would like to emphasize that the core contribution in our work is proposing a general efficient planning framework, LBP, that provides a recursive backward subgoal planning scheme for long-horizon tasks.
+ Thanks to this scheme, **even lightweight MLP-based planners can outperform significantly larger models, as we have already observed in our experiments**. This demonstrates the efficiency of LBP and indicates its potential scalability to more complex tasks. We believe this would be greatly inspiring in a period when most of the field is dominated by scaling up models to improve long-horizon planning performance. | Summary: This paper focuses on latent space planning to accomplish robotic tasks. It breaks down a long horizon language conditioned manipulation task into predicting the final goal. Then using the final-goal to predict sub-goals moving from the goal state to the initial state. Once these have been learned a sub-goal/final-goal conditioned context policy is learned. During inference time, the approach generates the final subgoal and then other sub-goals, which are used to predict the action which is then rolled out. Experiments are performed on Libero-Long benchmark and shows some improvments over baselines.
Claims And Evidence: Yes the claims seem reasonable.
Methods And Evaluation Criteria: Yes
Theoretical Claims: none
Experimental Designs Or Analyses: yes, the libero long experiments is a decent choice since the dataset is focused on long horizon tasks. However, there is nothing in the method which is unique to manipulation, hence other common long horizon tasks could also have been used for comparison (e.g. different variants of AntMaze etc). The real-world experimental design seems good. Finally, the ablation analysis makes sense and the main components of the approach have been validated.
Supplementary Material: no
Relation To Broader Scientific Literature: Many prior approaches have focused on sub-goal generation and then using goal-conditioned supervised learning, which is what the proposed approach is doing. The only big difference is how sub-goal generation happens. The paper claims following a final-goal to initial state approach should be better. This has also been applied
Essential References Not Discussed: see below. Also, classical works such as doing backwards chaining using skill trees should also be cited [1]. There is a large body of work around this which predates a lot of modern deep learning based approaches. None of these approaches are cited or discussed.
[1] Konidaris et al. Robot Learning from Demonstration by Constructing Skill Trees
Other Strengths And Weaknesses: **Pros:**
The paper focuses on an important problem. Developing robust planning approaches would be super useful for robot tasks. Overall, the paper is also well written (although some details are missing see below).
**Cons:**
**Few baselines:** There has just been a tremendous amount of work on heirarchical approaches for control tasks. Many papers have tried techniques similar to the ones proposed in the paper. But these approaches have not been compared against, some of them have not been cited at all [1, 2, 3]. I think some of these alternative approaches which do sub-goal generation differently should be compared against and properly discussed.
Another interesting baseline would ideally be using denser language labels for the entire task. Here the need for sub-goals is motivated by the language labels not being dense enough for the task and not providing enough semantic value (Line 180 — language descriptions often reduce to task identifiers …). However, given improved understanding of large multi-modal models it may be possible to zero-shot get denser labelings for a task from a long horizon video. In case the policy performance with dense language relabeling is worse then it is possible to conclude that latent image goal/sub-goal embeddings are indeed crucial but it is unclear if this the case with the current set of experiments.
**Fixed recursive time for sub-goal generation:** This seems like a very big assumption. For many tasks the challenging part of the task might be much shorter than the other parts, in this case a fixed approach might just miss the right sub-goal for the task. This will always be a problem in the proposed approach since it apriori relies on no information to do sub-goal generation. This makes the proposed approach quite unscalable to more challenging and interesting tasks.
**Inference time action selection:** How does action selection happen at inference time. Once all the sub-goals are selected does the policy use the first sub-goal to generate action and rollout. When does the policy switch to next sub-goal?
[1] Hierarchical reinforcement learning with timed subgoals
[2] Zhang et al. Generating adjacency-constrained subgoals in hierarchical reinforcement learning
[3] Lei et al. Goal-conditioned Reinforcement Learning with Subgoals Generated from Relabeling
Other Comments Or Suggestions: please see above
Questions For Authors: please see above
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: ## **Experimental Designs Or Analyses**
>Other long horizon tasks for comparison (e.g. AntMaze etc).
Thanks for the positive comments to our experimental designs. While other long-horizon tasks like variations of AntMaze exist, we excluded them from our primary benchmark suite for specific reasons aligned with the focus of our work.
- Firstly, our method is explicitly designed and evaluated for language-guided robotic control, as detailed in the introduction of our paper. AntMaze, lacking language instructions, does not allow for the validation of language-driven task execution, which is a key application of our approach.
- Furthermore, its low-dimensional state space and explicit coordinate goals make it significantly simpler than the complex image-based tasks that are the current focus of modern robotic research, as evidenced by [1, 2, 3, 4].
[1] Black, et al. Zero-Shot Robotic Manipulation with Pretrained Image-Editing Diffusion Models. ICLR 2024.
[2] Tian, et al. Predictive Inverse Dynamics Models are Scalable Learners for Robotic Manipulation. ICLR 2025.
[3] Nair, et al. R3M: A universal visual representation for robot manipulation. CoRL 2022.
[4] Chi, et al. Diffusion Policy: Visuomotor Policy Learning via Action Diffusion. RSS 2023.
## **Essential References Not Discussed**
>Classical works on backward chaining with skill trees should also be cited.
Thanks for the suggestion! We will discuss these relevant references in the final version.
## **Other Strengths And Weaknesses**
> Lacking baselines
We wish to emphasize that LBP's primary contribution lies in enhancing language-guided robotic control, a significant challenge in the field. Our experimental evaluation includes comparisons against the most relevant SOTA robotic methods, such as SUSIE, Seer, and OpenVLA.
1. Other hierarchical approaches for control tasks
The hierarchical RL methods mentioned by the reviewer are not appropriate baselines for LBP due to their fundamentally different application scope. They are tailored for simpler, often customized environments (such as the AntMaze benchmark) and, crucially, do not support language-conditioned tasks, making them incomparable to LBP's core functionality.
Although they are not suitable as baselines, **we will include a discussion of these methods in the related work of the final version**.
2. Another interesting baseline is to use denser language labels generated by large multi-modal models.
- As far as we know, **benchmarks with dense language labels are rare in the robotics community**, as collecting reliable and sufficient language annotations is both costly and impractical.
- Moreover, such methods **typically require significantly larger models** to process diverse and dense language inputs while also handling out-of-domain scenarios at test time.
In contrast, our LBP method offers a more robust, efficient, and lightweight approach for subgoal specification, which can take advantage of rich observation data without dense language labels, as is the common case.
> **Fixed recursive time for sub-goal generation (Q1) & Inference time action selection (Q2).**
We would like to address a potential misunderstanding concerning how LBP is used for action selection.
- **LBP is designed to predict (update) future subgoals at every step of the task execution** as we describe in lines 272-274. This dynamic planning scheme ensures that all the parts in task horizon would be covered during planning, thus addressing the "challenge part" (Q1).
- At test time, we rollout action based on the fusion of all the subgoals generated at this time, as we describe in Section 4.3 (Q2). | Summary: To enable real-time planning for long-horizon and multi-stage tasks, the paper proposes LBP, a backward planning scheme in the latent space. By eliminating the need for pixel-level generation, the proposed scheme significantly improves inference speed while alleviating compound errors. Additionally, it enhances on-task planning through guidance from the final goal. The evaluation is conducted on LIBERO-LONG and real-world setups.
Claims And Evidence: The main idea of the paper is backward planning in the latent space. However, it fails to provide two key ablations: forward planning and parallel planning. The absence of these comparisons makes it difficult to conclude the superiority of backward planning. In fact, it is possible that the planning order has minor effects and the performance gain is attributed to the informative subgoals in the SigLIP and DecisionNCE latent spaces.
Methods And Evaluation Criteria: The proposed methods are aligned with the motivation and evaluated using appropriate criteria.
Theoretical Claims: I have checked the theoretical claims in this paper.
Experimental Designs Or Analyses: The paper provides extensive experiments with sufficient details.
Supplementary Material: I have reviewed all appendices.
Relation To Broader Scientific Literature: Recent goal-conditioned robot planning typically uses generative models for goal prediction. Unlike these approaches, the paper suggests that predicting latent goals enhances computational efficiency and achieves better performance. Although the proposed backward planning scheme sounds technically novel, I have concerns about its superiority due to a lack of ablation studies.
Essential References Not Discussed: As far as I know, all closely related works are cited appropriately.
Other Strengths And Weaknesses: W1) **Missing key ablations.** The proposed backward planning is not compared to forward planning and parallel planning using the same latent subgoals, which compromises the validity of the main contribution. In some cases, leaving causal uncertainty while determining closer steps first could be beneficial for decision making [1].
W2) **Predicting final goals directly may lead to large errors.** There is no evidence suggesting that distant final goal prediction is easier than progressive prediction.
W3) **Inference speed is not reported.** Since the authors highlight the efficiency over previous generative planners, it would be beneficial to report the inference frequency.
[1] Diffusion Forcing: Next-Token Prediction Meets Full-Sequence Diffusion. Boyuan Chen, et al.
Other Comments Or Suggestions: Please see the questions below.
Questions For Authors: Q1) **A little confusion.** I am confused by the sentence on Line 220-221. It states that the proposed mechanism suffers from less compounding error because it is completely supervised with ground truth data. My question is: if ground truth is not used, then what kinds of supervision could be?
Q2) **Differences to DiffuserLite.** The proposed method reminds me of DiffuserLite [2], which also introduces an efficient coarse-to-fine planning process that transits from long horizon to short horizon. Could you elaborate on the differences?
[2] DiffuserLite: Towards Real-Time Diffusion Planning. Zibin Dong, et al.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your efforts and valuable feedback!
# Claims and Evidence
> **Missing ablations to forward planning**
We add an ablation study comparing LBP to latent forward planning. The results demonstrate that LBP significantly outperforms the forward planning paradigm in both **subgoal predicting accuracy** and **final policy performance**. `The updated results of prediction errors are visualized on our project page, linked at the end of our abstract`.
- **LBP obtains substantially lower prediction error**.
- **Train forward planner**: We learn a forward planner in latent space for our real-robot tasks, which predicts the subgoal 10 steps ahead, similar to SuSIE [1]. At each step, the forward planner autoregressively generates latent subgoals towards the final goal.
- **Evaluate planning accuracy**: We randomly sample 3000 data points as the current state from our real-robot datasets and compute the mean square errors (MSE) between predicted subgoals and their corresponding groundtruths.
- **Visualization of prediction error results**: Please refer to Figure 5 on our website, which illustrates that *forward planning struggles with long-horizon subgoal prediction due to rapid error accumulation*. Given that long-horizon tasks often span hundreds of frames, this error compounding makes forward planning impractical. In contrast, *LBP consistently produces accurate subgoals with significantly lower error magnitude, maintaining reliability throughout planning horizon.*
- **LBP obtains substantially stronger long-horizon performance**.
The tables below show that LBP significantly outperforms latent forward planning in all the long-horizon real-robot and simulation tasks, benefiting from the recursive backward planning scheme and its subgoal generation accuracy. Note that all settings remain the same to ensure a fair comparison.
||stack 3 cups|||stack 4 cups|||
|-|-|-|-|-|-|-|
||stage 1|stage 2||stage 1|stage 2|stage 3|
|latent forward planning|78.3|6.7||71.6|21.6|5.0|
|LBP (ours)|**94.1**|**75.0**||**96.6**|**77.5**|**43.3**|
||move cups|||shift cups|||||
|-|-|-|-|-|-|-|-|-|
||stage 1|stage 2||stage 1|stage 2|stage 3|stage 4|stage 5|
|latent forward planning|43.3|5.0||95.0|65.0|11.6|0.0|0.0|
|LBP (ours)|**90.0**|**65.8**||**97.5**|**87.5**|**74.1**|**50.0**|**26.6**|
||libero-long|
|-|-|
|LCBC|73.0|
|latent forward planning|73.6|
|LBP (ours)|**82.3**|
**Lastly, we are unsure which specific approach the reviewer refers to as "parallel planning". We would greatly appreciate any further clarification and descriptions on this. We would be happy to explore the comparison if time allows.**
# Weaknesses
> "Predicting final goals directly may lead to large errors."
- As shown in the above prediction error results, while predicting final goals may introduce some errors, they are negligible compared to the accumulated errors in forward (progressive) planning.
- Grounding the task objective in final goals also stabilizes subgoal predictions along the horizon, keeping subgoal prediction errors low and demonstrating the effectiveness of error control in our recursive backward planning scheme.
- Predicting the final goal not only plays a key role in LBP but also is not as difficult as it seems, as it is relatively deterministic given the current state and task description.
> "It would be beneficial to report the inference frequency."
We present the inference time of LBP and a competitive generative planner, SuSIE. Other baselines either adopt large VLA models or are not planning methods, which are not meaningful to compare on inference latency. **The results show that LBP is significantly more efficient than SuSIE.** Each model is tested on a single GPU.
||Inference time|
|-|-|
|SuSIE|28.13s|
|LBP|0.013s|
# Questions
>A little confusion on line 220-221.
We apologize for confusing the readers. We meant to say:
- This recursive mechanism suffers from considerably fewer compounding errors, as the λ-recursion effectively reduces the number of planning steps, and *the training of $f_w$ incorporates supervision (of groundtruths) in every recursion level*.
>Differences to DiffuserLite.
- *LBP is based on real-world task settings*: Unlike LBP, DiffuserLite cannot handle language-conditioned tasks and struggles in high-dimensional spaces due to high computational cost of diffusion.
- *LBP enjoys simplicity in design*: DiffuserLite trains separate diffusion models at each level, while LBP trains a single MLP to recursively predict subgoals.
- *LBP enjoys computational efficiency*: DiffuserLite only uses the first prediction in trajectories for next-level trajectory generation, which results in many redundant computations. In contrast, LBP predicts one subgoal at each level with MLP, which is more efficient and proves effective for policy guidance.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' efforts to address my concerns.
For "parallel planning", I meant a variant that predicts all subgoals simultaneously instead of a progressive manner. This can be achieved through multi-round joint refinement. Since this formulation always accounts for task completion, it also suffers less from compounding errors.
Due to the limited time, I will not request this ablation during the rebuttal period. As mentioned by reviewer sA8h, the proposed backward planning introduces a strong assumption of the task horizon. Different tasks across different benchmarks have various intervals for critical goals, and a completely recursive scheme may not be adaptive enough.
---
Reply to Comment 1.1.1:
Comment: Thanks for the response and the clarification for the "parallel planning" baseline.
> Comparisons to parallel planning.
We add an ablation study comparing LBP to the parallel planning baseline. To ensure a fair comparison, all experimental settings for the parallel planning baseline are strictly aligned with those of LBP, including rough model size and hyperparameter setups of the MLP-based planner.
- We report MSE between predicted subgoals and corresponding groundtruths as below. **Notebly, LBP consistently produce more accurate and reliable subgoals in various horizons.** While parallel planning does not accumulate error, it tends to predict inaccurate subgoals throughout the planning horizon. This can be attributed to the challenging training objective that requires supervision for all the subgoals simultaneously, which demands higher model capacity and significantly increased computational cost. Besides, we update Figure 5 with parallel planning errors in [our website](https://lbp-authors.github.io/).
||Subgoal errors on `stack 3 cups`|||||Subgoal errors on `move cups`||||
|-|-|-|-|-|-|-|-|-|-|
|task progress|0.125|0.25|0.5|1.0||0.125|0.25|0.5|1.0|
|latent forward planning|**0.015±0.001**|0.027±0.002|0.050±0.008|0.098±0.025||0.039±0.004|0.105±0.037|0.224±0.066|0.353±0.236|
|latent parallel planning|0.369±0.288|0.250±0.123|0.102±0.166|0.226±0.276||0.027±0.100|**0.018±0.141**|0.091±0.131|0.082±0.018|
|LBP|0.018±0.002|**0.018±0.003**|**0.016±0.002**|**0.014±0.003**||**0.024±0.013**|0.044±0.024|**0.036±0.011**|**0.020±0.004**|
||Subgoal errors on `stack 4 cups`|||||Subgoal errors on `shift cups`||||
|-|-|-|-|-|-|-|-|-|-|
|task progress|0.125|0.25|0.5|1.0||0.125|0.25|0.5|1.0|
|latent forward planning|0.015±0.000|0.036±0.009|0.154±0.035|0.489±0.064||0.173±0.031|1.580±0.313|5.934±0.158|4.292±0.355|
|latent parallel planning|0.086±0.286|0.073±0.410|0.135±0.360|0.294±0.055||1.074±0.659|0.850±0.331|0.575±0.195|0.636±0.050|
|LBP|**0.009±0.001**|**0.014±0.003**|**0.016±0.002**|**0.014±0.001**||**0.085±0.013**|**0.223±0.035**|**0.202±0.079**|**0.319±0.106**|
- We further evaluate the policy performance of latent parallel planning on both real-world and simulation benchmarks. The results show that **LBP achieves significantly better performance across all those long-horizon tasks**.
||stack 3 cups|||stack 4 cups|||
|-|-|-|-|-|-|-|
||stage 1|stage 2||stage 1|stage 2|stage 3|
|latent forward planning|78.3|6.7||71.6|21.6|5.0|
|latent parallel planning|75.0|10.0||75.0|30.0|10.0|
|LBP (ours)|**94.1**|**75**||**96.6**|**77.5**|**43.3**|
||move cups|||shift cups|||||
|-|-|-|-|-|-|-|-|-|
||stage 1|stage 2||stage 1|stage 2|stage 3|stage 4|stage 5|
|latent forward planning|43.3|5.0||95.0|65.0|11.6|0.0|0.0|
|latent parallel planning|55.0|6.7||96.6|48.3|8.3|0.0|0.0|
|LBP (ours)|**90.0**|**65.8**||**97.5**|**87.5**|**74.1**|**50.0**|**26.6**|
||libero-long|
|-|-|
|latent forward planning|73.6|
|latent parallel planning|76.6|
|LBP (ours)|**82.3**|
> "This recursive scheme may not be adaptive enough."
There appears to be a misunderstanding about how LBP operates during inference and we have provided an explanation in the response to reviewer sA8h. We wish to emphasize that **the subgoals planned by LBP are highly adaptive rather than fixed, allowing the model to effectively capture future guidances across various horizons** for reasons below:
- **Adaptive training**: We train the planner in varying horizons and with $\lambda$-recursion subgoal supervisions as in Eq.5. $\lambda$-recursion scheme allows the planner to predict subgoals adaptively according to the rest of task progress instead of in fixed planning steps. Sampling from trajectories in varying horizons helps it generalize across different temporal contexts at inference time. More details can be found in Section 4.2 and Section 4.4.
- **Adaptive inference**: *LBP replans at each step*, enabling the generated subgoals to **dynamically cover the entire task horizon** and provide sufficient guidance for policy extraction.
- **More adaptive than existing works**: Compared to recent planning methods that rely on fixed planning steps [1,2] or lack the ability to replan [3], LBP can update subgoals adaptively according to task progress in every action step, attributing to the strong performance on long-horizon tasks.
For further clarification, we provide **an illustrative video** on [our website](https://lbp-authors.github.io/), to demonstrate how the subgoals update adaptively at inference time.
[1] Black, et al. Zero-Shot Robotic Manipulation with Pretrained Image-Editing Diffusion Models. ICLR 2024.
[2] Tian, et al. Predictive Inverse Dynamics Models are Scalable Learners for Robotic Manipulation. ICLR 2025.
[3] Du, et al. Learning Universal Policies via Text-Guided Video Generation. NeurIPS 2023. | null | null | null | null | null | null |
Re-ranking Reasoning Context with Tree Search Makes Large Vision-Language Models Stronger | Accept (spotlight poster) | Summary: This paper introduces a novel framework to enhance Large Vision Language models (LVLMs) capability for visual question answering. Two major contributions are demonstrated in this paper. First is about creating a comprehensive knowledge base enriched with automatically generated reasoning contexts. Second is about employing a tree search-based re-ranking mechanism called MCTS-HR, which strategically orders retrieved examples to improve the accuracy of the LVLMs' responses.
The experiments demonstrate that RCTS outperforms other methods on various visual reasoning datasets by better guiding the LVLMs to understand and utilize contextual information. Further, ablation studies on the proposed methods also validate the contributions of its key components.
Claims And Evidence: From the motivation, method and experimental results of this paper, there are three major claims being made:
1. The proposed RCTS achieves state-of-the-art performance acorss the board: The author shows how the proposed method performs on multiple reasoning VQA benchmarks, like ScienceQA, MMMU, MathV, etc. For example, on ScienceQA, RCTS achieved a 78.99% accuracy with Qwen2-VL (2B), surpassing Zero-Shot by +11.81% and Vanilla-RAG by +7.05%.
However, from the major Table 2, it is not clear why for InternVL-2 method, the Vanilla-Rag performs worse than the Zero-shot on three benchmarks. Is that because of the quantization method for models over 7B? The author needs to add explanations on this.
2. MCTS-HR effectively re-ranks retrieved examples: By re-ranking the retrieved samples, RCTS ensures that LVLMs leverage high-quality contextual reasoning. In Figure 6, the author shows the comparison between Vanilla RAG and RCTS.
3. Hybrid rewards in MCTS-HR are beneficial: Figure 5 shows that using hybrid rewards in MCTS-HR leads to the best performance compared to using either self-reward or mutual-reward alone.
Methods And Evaluation Criteria: Methods:
The overall methods are reasonable. One concern after reading the paper, which is also missed in the experiments, is about the RAG cost. I understand the knowledge base could be built beforehand. But it is also important to show the database building cost, retrieving cost, and the reranking cost to help the readers better understand the proposed method.
Criteria:
The paper follows standard evaluation criteria with other LVLMs and RAG methods, which makes sense to the problem. Also, the author includes both reasoning and non-reasoning VQA benchmarks to show the advancement of the proposed method.
Theoretical Claims: There is no theoretical claims in this paper.
Experimental Designs Or Analyses: As I stated in the Claims And Evidence section, my concern is majorly about why, in Table 2 for InternVL-2 method, the Vanilla-Rag performs worse than the Zero-shot on three benchmarks? I could not find the discussion about this in the paper. And does this finding consistant with the other RAG papers using InternVL-2?
Supplementary Material: Yes, the supplementary provides the detailed steps of the proposed methods, the experiments setup details and qualitative cases.
Relation To Broader Scientific Literature: The paper is related to LVLMs, multimodal information retrieval and RAG. These fields have been discussed in the related works.
Essential References Not Discussed: Not aware of important missings in literature.
Other Strengths And Weaknesses: 1. One thing not clear is about the retrieving key of the method. The embedding of the query image and text might not have very high similarity with correlated information in the database, especially when the query requires long reasoning chains. For example, the information in the database is not word-similar to the query question but might be an important thinking step to respond to the query. It is not clear how this has been considered in the paper.
2. I could not find the discussion of the option of N in the experiments. Will the method still work when N is very small, and will it bring marginal benefits when increasing it?
Other Comments Or Suggestions: No, I do not have other comments.
Questions For Authors: No, I do not have other questions for the authors.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks for your constructive reviews and address your concerns as follows.
**Q1**: From the Table 2, it is not clear why for InternVL-2, the Vanilla-Rag performs worse than the Zero-shot. Is that because of the quantization method for models over 7B?
**A1**: Thank you for pointing out this issue. Actually, we initially had similar concerns and conducted multiple experiments, obtaining consistent results. Here are some possible reasons we analyzed:
1. This issue may not be caused by model quantization, since we use the same quantized models for all experiments. The performance degradation is consistent across all methods.
2. Since InternVL-2 has been trained on ScienceQA and MathQA datasets, this might be because ICL alters the distribution of the model's outputs, resulting in failed predictions.
3. Compared to Qwen2-VL, we observe that Qwen2-VL has better performance in multiple rounds of interaction capabilities, while our RCTS implemented ICL through multi-turn conversations. This may be the reason why vanilla-RAG is inferior to zero-shot in InternVL-2.
4. As for why RCTS did not encounter this issue, this might be attributed to the fact that MCTS-HR incorporates heuristic rewards, which helps to validate the predicted results and can therefore mitigate the impact of ICL on the model distribution to a certain extent.
**Q2**: It is important to show the database building cost, retrieving cost, and the reranking cost to help the readers better understand.
**A2**: Regarding the cost issues of our model, all cost experiments are evaluated on an A800 GPU, using vllm 0.6.4, with Qwen2-VL-7B-GPTQ-Int4. Additionally, we randomly select 100 samples from ScienceQA and MathV datasets for cost comparison, and repeat tests 5 times to mitigate hardware fluctuations.
1. KB Construction Cost. The time required to construct the KB is positively correlated with the difficulty of the problems. This is because once the Score (L202) reaches the maximum value, the current reasoning context is returned as the optimal reasoning path. The cost-time is shown below:
|Cost-Time (seconds)|ScienceQA|MathV|
|-|-|-|
|KB Construction (per)|0.94±0.06|4.28±0.22|
2. Hybrid Retrieval Cost. As mentioned in Q4 by Reviewer wbuQ, we employ Bert-Base + ViT-L (422M parameters) to extract text and image embeddings. Besides, we pre-store the features from KB using Faiss for fast retrieval. The average retrieval time per sample ranges from 5–30 ms, which is significantly shorter than the inference time required for MCTS we discuss below.
3. MCTS Inference Cost. MCTS inherently requires more simulations due to its rollout mechanism, thus consuming more cost. To enable faster responses for simpler questions, we introduced an early-stopping strategy based on answer consistency (L642–645), i.e., if the initial branch and the greedy retrieval branch yield the same answer, the result is returned, bypassing MCTS’s multi-round simulations. Therefore, our inference cost also varies depending on the difficulty of the problem. The cost time of MCTS-Reranking is shown as below:
|Cost-Time (seconds)|ScienceQA|MathV|
|-|-|-|
|MCTS-Reranking (per)|29.55±4.5|62.32±8.6|
**Q3**: One thing not clear is about the retrieving key of the method.
**A3**: Thank you for pointing this out. Actually, we have observed this issue in L60 under the "challenge ii)". Specifically, our hybrid retrieval method computes text embeddings using the user's question and the KB questions, without including the long reasoning contexts and corresponding answers (L187-191, L211-213). This design ensures efficient retrieval of the most similar image-text pairs from the KB that align with the user’s query.
However, relying solely on feature similarity is insufficient, as the information in the KB which is not similar to the query might be an important thinking step to respond to the query. For this issue, we propose a tree search approach with Heuristic Rewards, termed MCTS-HR, which dynamically re-ranks and selects the most related samples rather than similar samples. By evaluating candidate samples through heuristic rewards, MCTS-HR identifies the most pertinent samples (i.e., those beneficial for addressing the user's question) from the candidate feature-similar samples (L209-211).
**Q4**: The discussion of the option of N in the experiments.
**A4**: Thank you for pointing this out. We take more ablation studies on N with Qwen2VL-7B, conducted on ScienceQA and MathV dataset.
It can be observed that a smaller N degrades performance, since the reduction in candidate similarity samples narrows the action space of MCTS, limiting its ability for re-ranking. On the other hand, selecting an excessive number of N introduces more noise into the MCTS action space, i.e., samples that are neither similar nor particularly helpful. Taking these into account, we set N to 20.
|N|ScienceQA|MathV|
|-|-|-|
|5|89.4|26.0|
|10|90.6|27.0|
|15|90.7|27.6|
|20|91.4|29.0|
|30|91.2|26.3|
|50|91.3|27.6| | Summary: This paper focuses on RAG for VQA tasks. The authors propose constructing a reasoning-based KB with examples of successful reasoning and then introduce an MCTS-based method for finding the best set of ICL examples, motivated by the fact that existing models can only take a fairly small number of ICL examples (compared to what is retrieved).
The approach first constructs a KB of reasoning that leads to correct outputs. The main contribution is the MCTS search method, which helps choose which example to select by using the consistency of the answer (were an example to be selected) as heuristic reward, combined with a mutual consistency reward. Here, the authors formulate retrieval as a sequential decision-making task, where the action space consists of selecting examples from the KB. They start by retrieving a set of relevant examples with vector similarity and then rerank according to their heuristic tree-search algorithm.
The approach is evaluated on multiple datasets: ScienceQA, MMMU, MathV, VizWiz, VSR-MC and across two strong VLMs. The results unequivocally demonstrate improvements over examples retrieved with vanilla RAG and randomly-retrieved examples.
The authors ablate key parts of their method, including their rewards, showing that both rewards are required for strong performance.
## update after rebuttal
The rebuttal has clarified some of the questions I had. I will maintain my positive recommendation.
Claims And Evidence: - the claims are clear: the authors are claiming that by selecting examples for ICL via their method they can make better use of their generated KB
- the evidence largely points to this, however, I think the setup for the Vanilla-RAG baseline is not completely clear. It's later improved by Figure 6 but this information comes too late and it is still not completely clear to me what the source of the examples used in Vanilla RAG is.
Methods And Evaluation Criteria: - The evaluation domains make sense, and the method is evaluated on a range of datasets and across two recent models.
- The proposed method performs strongly but the description of the method is not very clear. There are a few parts that were unclear to me:
1. L200-201: what is the score being used here? Why do you need SC if you have ground-truth answers?
2. L256-259: the purpose of these rewards (especially the mutual consistency reward) should be made clear earlier on.
3. How are images/text embedded? Which model is used?
4. Which model is used to construct the KB?
Overall, the methods section could benefit from more sign-posting on why particular decisions are being made. Right now it reads like a sequence of somewhat arbitrary decisions; the ablations later show that these decisions help but there is little intuition given on why they should help.
Theoretical Claims: No theoretical claims made.
Experimental Designs Or Analyses: Experimental design is sound.
Supplementary Material: I looked through the whole appendix
Relation To Broader Scientific Literature: The contribution of the paper relates more broadly to the question of data selection under a budget. The idea of using MCTS for ICL selection in this way seems novel and could be applied to other domains as well. In general, picking the right examples from a superset is an important problem.
Essential References Not Discussed: - https://arxiv.org/pdf/2307.07164 uses a reward model to learn to retrieve ICL examples
- https://arxiv.org/pdf/2402.07812 seems to also use MCTS combined with retrieval for guiding model outputs, along with a proxy reward.
Other Strengths And Weaknesses: Strengths:
- overall the results of the paper are strong, with consistent gains across domains
- the idea of using self-consistency as a signal for MCTS this way is widely applicable
Weaknesses:
- It's not very clear what the action space is or why this needs to be a sequential problem. In this case, the authors are selecting a set of ICL examples. It doesn't seem to me like order matters here, i.e. does selecting a different example first makes a difference? If not, why is it being modeled as a sequential task? An ablation here on how much order matters would be helpful.
- Computational cost is mentioned in the limitations, it would be good to have a number to put to this (as the MCTS method seems like it could involve a high number of calls)
Other Comments Or Suggestions: I have a few small quibbles about the language in the paper:
- The "known" vs "understood" claim is not clear and in my opinion doesn't add much to the introduction, the authors should be clearer on what they mean by understanding, which is a pretty hazy concept
- L162 "humans always learn by examples": this is a big claim and not at all settled fact, it either needs more evidence or should be hedged.
Typos:
- L211: "solely on single-modal." -> "solely on a single modality"
- typo in fig 6 caption (retrievd)
- L609: Native, retrieved
- L630: Formally
- L736: backwards quotes (also in the rest of appendix)
Questions For Authors: - What dataset is Vanilla RAG retrieving from? Is the info coming from the same KB but without MCTS reranking?
- L262 "employs QA pairs retrieved... as candidate actions", does this mean that you are treating selecting a QA pair is an action?
- (from methods:) How are you embedding images and text? Which model is used?
- Which model is used to construct the KB?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank you for your insightful and valuable reviews and address your concerns as follows.
**Q1**: What dataset is Vanilla RAG retrieving from? Is the info coming from the same KB but without MCTS reranking? And I think the setup for the Vanilla-RAG is not clear. It's later improved by Fig. 6 but this information comes too late.
**A1**: Thank you for pointing this out. Yes, the retrieved examples come from the same knowledge base, excluding the reasoning context, as shown in Table 1. Besides, Vanilla-RAG uses the same hybrid retrieval module, which is the same as our RCTS. The difference is that it only relies on the feature similarity without MCTS reranking. And we will revise this explanation earlier about Vanilla-RAG in Section 3.1 and Section 4.2.
**Q2**: L200-201: what is the score being used here? Why do you need SC if you have ground-truth answers?
**A2**: As shown in Fig. 3(b), the score is defined as the ratio of correctly predicted answers to the total number of predicted answers, computed as: $\text{Score} = \frac{N_{\text{correct}}}{N_{\text{total}}}$, where $N_{\text{correct}}$ = number of correct responses and $N_{\text{total}}$ = total number of responses.
For subsequent question, SC serves to build a VQA knowledge base with reasoning contexts. While we possess both questiones and ground-truth answers from the knowledge base, the associated reasoning contexts are unavailable. Additionally, since the model-generated reasoning paths are not entirely reliable, we use the ground-truth to verify and score these reasoning contexts. Table 6 further demonstrates the reliability of our reasoning contexts.
**Q3**: The purpose of rewards (especially the mutual consistency reward) should be made clear earlier on.
**A3**: Thank you for pointing this out. We conduct more discussions about the purpose of rewards in Reviewer eQW2 **Q3**. And we will add this explantion earlier on Introduction (L88).
**Q4**: How are you embedding images and text? Which model is used?
**A4**: Following Lin et al. [1], we employ the Bert-base model (110M parameters) as our text encoder for generating text embeddings. For extracting image embeddings, we utilize a ViT-L coupled with a 2-layer MLP, totaling 312M parameters. We will add this in Section 4.2.
**Q5**: Which model is used to construct the KB?
**A5**: Thank you for pointing this out. We use Qwen2-VL (7B) to construct the KB. We will add this in L290.
**Q6**: L262, does this mean that you are treating selecting a QA pair is an action? And it's not clear what the action space is or why this needs to be a sequential problem. In this case, the authors are selecting a set of ICL examples. It doesn't seem to me like order matters here, i.e. does selecting a different example first makes a difference?
**A6**: As illustrated in Eq. 5, our action space comprises multiple retrieved QA pairs, where each QA pair represents a distinct action. Besides, this can be treated as a sequential problem because ICL performance depends critically not only on the quality of the retrieved samples but also on the their orders. Accordingly, our RCTS treats different example orders as sequential decision-making problems within a set of actions (L236). Furthermore, through comprehensive ablation studies, we systematically investigate and demonstrate the importance of example orders in Reviewer **x7Ua Q1**.
**Q7**: Computational cost is mentioned in the limitations, it would be good to have a number to put to this.
**A7**: We discuss the computational cost at Reviewer **yd6M Q2**.
**Q8**: Essential references should be discussed.
**A8**: Thank you for pointing these important references, we will cite them and add discussions in the revised version. We summarize the differences as follows:
1. LLM-R [2] introduces an iterative training framework to retrieve high-quality in-context examples for large language models. In contrast, our method employs a training-free MCTS-based reranking approach, offering greater generalization compared to trained methods.
2. RATP [3] leverages MCTS + RAG to enhance the self-reflection and self-critique capabilities across numerous private healthcare documents. While sharing a similar MCTS + RAG concept, there are differences in design details, such as the setup of proxy rewards and our heuristic rewards, as well as variations in the action space design. Additionally, RATP’s knowledge base consists of document-style data, whereas our method focuses on example pairs with reasoning contexts.
**Q9**: A few small quibbles and typos about the language in the paper.
**A9**: Thank you again for pointing this out, we will fix them in our revision.
**Reference**
[1] Lin et al., 2024, PreFLMR: Scaling up fine-grained late-interaction multi-modal retrievers
[2] Wang et al., 2023, Learning to Retrieve In-Context Examples for Large Language Models
[3] Pouplin et al., 2024, Retrieval Augmented Thought Process for Private Data Handling in Healthcare | Summary: This paper presents a method to refine the selection of retrieved examples for multimodal language model in-context learning. It has two key components. The first component is to ask LLM to generate a set of rationale/reasoning contexts given a QA pair and select the context that has the highest probability of generating the answer given the question and context. The second component is to refine the selection of K candidates from an initial retrieved candidate set. It first select candidates based on the distribution from similarity scores, and then gradually update the distribution by checking 1) consistency between the answer generated from a selected candidate with the question and the actual answer from the selected candidate and 2) whether the answer generated from a selected candidate with the question can positively contribute to the prediction other questions. They conduct experiments on different benchmarks and find that the proposed method outperforms methods without retrieval, vanilla in-context learning with random examples, and vanilla retrieval.
**Update after Rebuttal.** The authors addressed most of my concerns, but my main concern about the knowledge base still partly remains because this work considers training data the knowledge base instead of traditional external knowledge. Although the authors use arguments such as dynamical change of the knowledge or generalization to different knowledge, these arguments are a bit weak considering that the knowledge candidates are large-scale training data. So I believe it is appropriate to keep my current score.
Claims And Evidence: One major concern I have for this paper is the assumption of this paper. Unlike other KB-VQA work that mostly focuses on an external knowledge base (e.g., Wikipedia), the knowledge base in this paper is a large number of question-answer pairs for each question. Based on the large number of question-answer pairs, the authors propose a training-free algorithm to use those question-answer candidates as in-context examples to enhance open-source LLMs such as Qwen2-VL and InternVL-2. But it is not quite reasonable to me to not do some fine-tuning (either full parameter fine-tuning or PEFT), given the large number of examples for each task, while merely using them as candidates for in-context learning. I would expect the authors to provide more explanations on the motivation.
This paper can be much stronger if there is another setting to compare methods that are trained on this dataset. The authors can leverage this inference algorithm to select candidates on top of those fine-tuned models while showing it can still benefit from it.
Methods And Evaluation Criteria: The methods of generating reasoning context make sense of this problem, and the motivation of using reranking methods to refine the candidates for answer generation is clear. The benchmarks are also legitimate.
But I would expect the authors to provide more explanations on the choices of the two heuristics rewards. For self-consistency rewards, it promotes the selection of QA pairs that the answer matches the generated answer. But what is the rationale behind this heuristics?
Theoretical Claims: N/A
Experimental Designs Or Analyses: The experimental designs are legitimate. It clearly demonstrates the effectiveness of the proposed methods compared to methods without retrieval, vanilla in-context learning with random examples, and vanilla retrieval. And the ablation study also demonstrates the effectiveness of the use of MCTS and reasoning context.
Supplementary Material: No
Relation To Broader Scientific Literature: This paper can provide a paradigm for improving the quality of multimodal examples when conducting multimodal in-context learning.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: This paper is well-structured and easy to follow.
Other Comments Or Suggestions: No
Questions For Authors: No
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank you for your constructive reviews and address your concerns as follows.
**Q1**: One major concern is the assumption of this paper. Unlike other KB-VQA that mostly focuses on an external knowledge base, the knowledge base in this paper is a large number of QA pairs for each question.
**A1**: In RCTS, we propose a novel paradigm with reasoning VQA samples serves as the knowledge base, different from traditional RAG that relies on external sources (e.g., Wikipedia or Online Search), as detailed in L34-40.Specifically, our approach leverages reasoning QA pairs to explicitly model ICL, enhancing the model's ability to solve complex problems.
To address potential concerns about the source of KB, we leverage multiple open-source VQA datasets (e.g., ScienceQA, OK-VQA, VizWiz) and further employ an automated reasoning construction framework to build a high-quality reasoning KB.
In addition, regarding concerns about the assumptions in this scenario, RCTS aims to transition VLMs’ responses from merely known to better understood (know-how reasoning) by prepending it with retrieved reasoning examples, extensive experiments (Section 4.3) demonstrate the effectiveness of our approach, achieving a significant improvement on complex reasoning benchmarks compared to Vanilla-RAG baselines.
**Q2**: It is not quite reasonable to me to not do some fine-tuning, given the large number of examples for each task, while merely using them as candidates for in-context learning. I would expect the authors to provide more explanations on the motivation.
**A2**: Thank you for raising this important concern. Actually, the core innovation of our paper lies in exploring a method that leverages massive reasoning QA pairs as a knowledge base for performance improvement without requiring training. We will provide more explanations motivated by three key considerations:
1. Generalization Capability
Although fine-tuning methods (SFT/PEFT) are effective, the debate between SFT and ICL hinges on a trade-off between specialization and generalization. While SFT offers more tailored and often higher performing models for specific tasks, it can lead to loss of the model’s generalization abilities, as discussed by Chen et al.[1]
2. Flexible Knowledge Base Construction
Compared to fine-tuning, our framework is training-free and can be adaptively extended to multiple domains by simply expanding the knowledge base, as mentioned in L161-164, offering greater universality. Additionally, we have added this explanation in our Introduction to further emphasize our motivation.
3. Experiment Validation
We follow the suggestions and explicitly compared RCTS vs. fine-tuning variants. Specifically, we take VQA pairs from the knowledge base corresponding to MathV as training samples and use Llama-Factory[2] for SFT on Qwen2-VL-2B. As shown below, the results showed that while fine-tuning improved accuracy on our test set, it caused a significant drop on out-of-domain benchmarks, validating our design choice. We believe this trade-off favors applications requiring broad adaptability.
|Method|MathV|ScienceQA|
|-|-|-|
|Zero-Shot|18.75|67.18|
|Fine-tuning-on-MathV-1epoch|22.69|44.56 (OOD)|
|Fine-tuning-on-MathV-3epoch|23.03|43.67 (OOD)|
|RCTS (ours)|22.04|78.99|
**Q3**: Expect the authors to provide more explanations on the choices of the two heuristics rewards. For self-consistency rewards, it promotes the selection of QA pairs that the answer matches the generated answer. But what is the rationale behind this heuristics?
**A3**: Regarding the self-consistency reward, we primarily leverage the self-consistency property of VLM models (Wang et al.[3]), i.e., 'Self-consistency leverages the intuition that a complex problem typically admits multiple different ways of reasoning path leading to its unique answer.' Thus, the essence of selecting generated answers $\{A_i^{(n)}\}$ (L265) that match the predicted answers $\tilde{A}_i$ (L266) lies in choosing responses where the predicted answers and the predicted reasoning paths remain consistent.
For the mutual heuristic reward, we posit that if the answer to one question is correct, it will positively contribute to reasoning for other related questions, and vice versa (L277-280). Specifically, the reward is based on whether the reasoning context generated by MCTS-HR can generalize effectively to other questions (from KB), thereby selecting robust and transferable response.
Besides, we comprehensively take into account the two aforementioned rewards and conducted thorough ablation experiments to verify their effectiveness as shown in table 5.
**Reference**
[1] Chen et al., 2020, ACL, Recall and Learn: Fine-tuning Deep Pretrained Language Models with Less Forgetting
[2] Zheng et al., 2024, ACL, LlamaFactory: Unified Efficient Fine-Tuning of 100+ Language Models
[3] Wang et al., 2023, ICLR, Self-Consistency Improves Chain of Thought Reasoning in Language Models
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' comments. But I still have concerns about the fine-tuning and knowledge utilization parts.
The authors explained that their method demonstrates better generalization across different domains. Although I agree with the authors that this training-free methods show good performance on both MathV and ScienceQA in the above table, I cannot ignore that the cost here is that we have a large "training set" merely for retrieval purposes. When we have a large training set, it kind of weakens the argument regarding the out-of-distribution. Because we just have a lot of in-domain data that could have been used to fine-tune the model rather than just serving as the candidates.
---
Reply to Comment 1.1.1:
Comment: We appreciate the reviewers' thoughtful feedback and acknowledging our perspective about generalization capabilities. We will reply to your outstanding concern in the order as follows:
1. Regarding the concern of the fine-tuning parts, actually, the decision to fine-tune a model depends significantly on the suitability of the approach for specific application scenarios. In cases where the knowledge base is static and abundant Visual Question Answering (VQA) samples are available, fine-tuning the model with the knowledge base proves to be a better strategy. This is supported by the empirical results presented in our earlier response. Nevertheless, our approach, which takes advantage of retrieved reasoning contexts, can be applied to solve more flexible and open-ended scenarios, i.e., those characterized by limited training resources and the need for frequent, dynamic updates to the knowledge base. These scenarios demand a more adaptable solution that can meet evolving requirements without relying on extensive retraining.
2. Regarding the unknown distribution of the test data, where it is impossible to predict the specific problems a model may encounter, our approach provides a more practical solution for domain-specific needs. Specifically, one can construct a personalized knowledge base within their target domain and then leverage frozen Vision-Language Models (VLMs) to improve response reasoning and overall performance. Moreover, as our RCTS operates without the need for additional fine-tuning, it is particularly suited for customized RAG applications. This leads to a key practical advantage our approach: it enables multiple users to share a single deployed vision-language model. By maintaining a personalized knowledge base for each user, our method achieves rapid customization without incurring additional training and deploying costs. This not only reduces computational overhead but also significantly cuts down on deployment costs.
3. Regarding the reviewers' concerns about knowledge utilization and the associated cost, actually, our method emphasizes reasoning-context knowledge rather than an reliance on excessively large knowledge base. By employing our MCTS-HR re-ranking strategy, we could achieve stronger performance by focusing on the patterns of "relevant" examples. Specifically, "relevant" here is distinct from mere "similar". The key to our RCTS lies in identifying contextually appropriate knowledge, ensuring that the focus remains on quality and relevance rather than relying on the scale of the knowledge base.
We hope this clarification helps to better align our method focus with the reviewers’ expectations, and we will provide a more detailed explanation of the motivations behind our proposed approach in the revision. | Summary: This work proposed a multi-modal RAG method to retrieve reasoning context examples from the knowledge base. The main components of this work are (1) a CoT knowledge base, (2) knowledge retrieval metrics with hybrid vision-language embeddings, and (3) Monte Carlo Tree Search (MCTS) to retrieve the most related examples. Experiments on several benchmarks and ablation studies demonstrate the effectiveness of the components and the pipeline.
Claims And Evidence: The claims are supported by clear and convincing evidence.
Methods And Evaluation Criteria: The evaluation makes sense.
Theoretical Claims: This work does not include theoretical proofs.
Experimental Designs Or Analyses: The overall experimental designs look good to me.
Supplementary Material: I mainly reviewed Case Example of RCTS and Prompts in Experiment.
Relation To Broader Scientific Literature: I think no.
Essential References Not Discussed: Currently no.
Other Strengths And Weaknesses: Strengths:
* The performance of the proposed framework looks strong. It outperforms baseline methods by large margins on several benchmarks. The ablations on the components are relatively comprehensive.
* The overall writing of this paper is clear. The figures and tables are well organized and designed.
* The ideas of reasoning knowledge base and hybrid embeddings for retrieval make sense.
Weaknesses:
* My biggest concern is on the complexity of the framework, especially the application of MCTS. In this work, MCTS is used to select most related examples. However, it is not clear to me why in-context example retrieval needs such a complex tree-based search strategy, especially for an ordered chain of examples. Does the order of retrieved examples matter? Are there any observations on the chain structure, like how the model selects the examples in such orders? The application of MCTS looks effective, but the inspiration behind the application is not strong to me. Also, the time computation cost is not provided, and it is not clear whether the overall framework has a heavy time cost.
* Although the other two components, reasoning knowledge base and hybrid embeddings, make sense and work positively to the final performance, the novelty and differences of these two modules compared to the references are not clear.
--------------
The rebuttal addressed my major concern on the impact of example order and the necessity of MCTS. Therefore, I am happy to increase my score.
Other Comments Or Suggestions: Please see Weaknesses.
Questions For Authors: Please see Weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank you for your constructive reviews and address your concerns as follows.
**Q1**: Why in-context example retrieval needs such a complex tree-based search strategy? Does the order of retrieved examples matter?
**A1**: As demonstrated by Tan et al.[1] and Liu et al.[2], the order of retrieved in-context examples has an important effect on the generative model's response due to path dependency properties, i.e. different example orders implicitly guide the model's attention toward distinct reasoning patterns. Recognizing both the importance of example order and the combinatorial complexity in identifying optimal order, we employ a tree-based search strategy for re-ranking. This approach effectively balances exploration of potential orders with exploitation of high-performing orders through its hierarchical search structure. Besides, we further conduct experiments with shuffle/ordered examples on ScienceQA dataset to empirically validate the impact of order, with the results shown below:
|Model|Retrieval-Order|Retrieval-Shuffle|RCTS-Shuffle|RCTS (ours)|
|-|-|-|-|-|
|Qwen2-VL-2B|71.94|71.26|73.90|78.99|
|Qwen2-VL-7B|86.68|87.95|88.23|91.44|
|InternVL2-8B|93.00|92.57|93.16|94.20|
Where, 'Retrieval-Order' refers to the top-3 samples sorted by similarity (i.e., Vanilla-RAG), 'Retrieval-Shuffle' denotes randomly reordered top-3 retrieval samples. 'RCTS-Shuffle' represents the shuffled orders of our RCTS re-ranking. The results demonstrate that different orders exert influence on model's performance.
**Q2**: The complexity of the framework, especially the application of MCTS. And the inspiration behind the application of MCTS.
**A2**: We justify our adoption of MCTS from the following aspects:
1. As mentioned in Q1, the order of the examples is important, which motivates us to optimize the example order. Similar to path planning (Eiffert et al.[3]), this problem can be modeled as a sequential searching problem, which is very suitable to be solved using MCTS.
2. MCTS effectively balances exploration of potential orders with exploitation of high-performing orders through its hierarchical search structure, making it more efficient than brute-force-search. Besides, our approach produce substantial performance gains, clearly justifying the computational investment in MCTS.
Therefore, we adopt MCTS over conventional retrieval methods to transition from exploiting "similar examples" to "relevant examples" (L28).
**Q3**: Are there any observations on the chain structure, like how the model selects the examples in such orders?
**A3**: Through observation of some examples of our RCTS in supplementary materials , we summarize some patterns below:
1. In the initial expansion of RCTS, RCTS prioritizes expanding from the most similar examples. When the reward value Q is high, RCTS assigns higher priority (Fig. 15). However, if the similar example face significant degradation (i.e., have very low reward values), RCTS will exclude it (Fig. 14).
2. During the expansion process of RCTS, as shown in Fig. 11 and Fig. 12, valuable examples (i.e., those with higher reward values) are repeatedly explored and utilized, even if they appear in different orders. In contrast, examples with low reward values are only used once and are not revisited afterward.
3. After RCTS reaches the maximum number of simulation rounds, it selects the branch with the highest cumulative reward value Q as the final result, even though other branches may also contain correct answers.
Note: For all examples in the supplementary materials, the branch expansion follows a left-to-right sequence.
**Q4**: The time computation cost.
**A4**: We discuss this concern at Reviewer **yd6M Q2**.
**Q5**: The novelty and differences of reasoning knowledge base and hybrid embeddings compared to the references.
**A5**: 1. For reasoning knowledge base, while both our approach and AutoCoT[4] automate the generation of reasoning chains, our key novelty lies in proposing self-consistency strategy to verify the correctness of the reasoning context, making it suitable for RAG applications. We elaborate on this motivation in L185–190. 2. For hybrid embeddings, we directly adopt the retrieval module (frozen) from PreFLMR[5] in L211. The core contributions of this work lie in making them suitable with reasoning knowledge base and an innovative tree-based search strategy for re-ranking examples.
**Reference**
[1] Tan et al., 2023, TACL, Lost in the Middle: How Language Models Use Long Contexts
[2] Liu et al., 2024, Order Matters: Exploring Order Sensitivity in Multimodal Large Language Models
[3] Eiffert et al., 2020, ICRA, Path Planning in Dynamic Environments using Generative RNNs and Monte Carlo Tree Search
[4] Zhang et al., 2023, ICLR, Automatic chain of thought prompting in large language models
[5] Lin et al., 2024, ACL, PreFLMR: Scaling up fine-grained late-interaction multi-modal retrievers | null | null | null | null | null | null |
PhantomWiki: On-Demand Datasets for Reasoning and Retrieval Evaluation | Accept (poster) | Summary: The paper proposes PhantomWiki, a benchmark that dynamically generates a fictional universe for evaluating retrieval and multi-hop reasoning. The proposed benchmark can be generated on-demand and is free of data leakage because the fictional events are independent with the real-world.
The PhantomWiki data generation pipeline first generates a random universe of n characters, as well as their social relationships and personal facts, e.g., date of birth, job, and hobby. The facts are then converted into articles by filling pre-defined templates. The questions are also generated by templates and answers derived using Prolog by inferring from the rules of the universe.
Experiments are carried out with varying difficulty levels and universe sizes: 1/ in-context prompting that the entire universe is placed in the context window, 2/ RAG prompting where a neural retriever is used to search for top-4 most relevant documents, 3/ Agentic prompting where tool use is enabled.
Claims And Evidence: This is a dataset paper. I think the main claim could be viewed as "PhantomWiki is a scalable and data leakage-resistant framework for disentangled evaluation of reasoning, retrieval, and tool-use abilities." I think this claim is supported because the dataset is generated from a fictional and controlled environment.
Methods And Evaluation Criteria: ### Strength
Overall, I liked the considerate design of the PhantomWiki data generation pipeline.
1. It generates a knowledge graph of a universe and uses logical programming to provide answers to multihop questions.
2. The dataset is fictional and free of data leakage.
### Weakness
Article content and questions lack realism. The articles are generated using templates, are very short and lack realism for real-world retrieval augmented generation applications. The multi-hop questions are highly contrived; they are unlikely to appears in real-world situations.
Theoretical Claims: The paper does not make theoretical claims.
Experimental Designs Or Analyses: The experiments are technically sound. My main concern is on bias in retrieval evaluation:
The dataset is not well-suited for evaluating retrieval. Although the paper claims to evaluate retrieval, the questions and articles are not appropriate for neural retrieval. For instance, a question like "What is the job of the father of the uncle of the sister of {personA}?" requires iterative bridge entity resolution, making neural retrieval ineffective. In addition, in the retrieval experiment, RAG prompting only retrieved 4 most relevant documents, which are likely insufficient for many multi-hop questions.
Supplementary Material: No.
Relation To Broader Scientific Literature: Prior work on RAG evaluation often confounds with parametric knowledge. This work create fictional world that decouples the impact of data leakage.
Prior work on reasoning with synthetic data does not consider the retrieval setup where the content may not fit in context window as task complexity grows.
Essential References Not Discussed: This is another related work that evaluates reasoning using synthetic data.
Levy, M., Jacoby, A., & Goldberg, Y. (2024, August). Same Task, More Tokens: the Impact of Input Length on the Reasoning Performance of Large Language Models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (pp. 15339-15353).
Other Strengths And Weaknesses: No. Please refer to my comments in the above sections.
Other Comments Or Suggestions: It would be helpful to define how F1 is calculated in the appendix.
Questions For Authors: 1. Is there any mechanism to resolve conflicts or ambiguous questions? Does all templates lead to answerable questions?
2. If the dataset is focused on context free grammar, is it easy to learn after fine-tuning on the same distribution because CFG is well defined?
3. Is this dataset intended for any training or evaluation only?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive feedback on the design of PhantomWiki and for acknowledging its novelty in providing a scalable and data leakage-resistant evaluation framework. We address your concerns in turn.
**1. Using templated articles**
> Article content and questions lack realism. The articles are generated using templates, are very short and lack realism for real-world retrieval augmented generation applications.
We agree that the current articles and questions are intentionally stylized and minimalistic due to their templated construction. To explore how LLMs can be leveraged to improve the realism of PhantomWiki articles, we **add new experiments using Llama-3.3-70B-Instruct to rephrase the articles**. We prompt the model in two ways (see https://imgur.com/a/ffvugvg): the “short” prompt instructs the LLM to paraphrase the templated articles, while still retaining all factual information; the “long” prompt permits the LLM to expand on the articles, without contradicting existing facts. We experimented with multiple temperature and top-p settings to mitigate hallucinations while encouraging creative outputs (see e.g., https://imgur.com/a/wFvjAFP).
Additionally, we quantify the effect of using these rephrased articles by reporting F1 scores of Llama-3.3-70B-Instruct with ZeroShot and CoT prompting: see https://imgur.com/a/DQuvQIA.
Remarkably, we find **similar trends in performance regardless of whether we use LLM-generated or templated articles**. Thus, **templated articles** provide **four key benefits**: (1) they are cheap—no GPU or API costs, (2) fast—no latency from querying LLMs, (3) 100% factually consistent and (4) they allow for larger universe sizes in limited context windows. We leave as future work how to ensure that question-answer pairs are consistent with rephrased articles without relying on human intervention.
**2. Use case of PhantomWiki and evaluating LLMs fine-tuned on PhantomWiki data**
> If the dataset is focused on context free grammar, is it easy to learn after fine-tuning on the same distribution because CFG is well defined?
> Is this dataset intended for any training or evaluation only?
These are great questions! To assess the viability of PhantomWiki for training language models, we **add new fine-tuning experiments**: see https://imgur.com/a/QHftpCM. Specifically, we perform full fine-tuning of Qwen2.5-0.5B-Instruct and parameter-efficient fine-tuning of Qwen2.5-3B-Instruct (via LoRA applied to all linear layers). We also experiment with two popular training algorithms: Group Relative Policy Optimization (GRPO) and supervised fine-tuning (SFT). For Qwen2.5-0.5B, we find that GRPO and SFT both improve F1 compared to prompting-based methods, likely due to improved ability to output the proper answer format. For Qwen2.5-3B, we find that GRPO improves F1 slightly, whereas SFT worsens F1, likely due to overfitting on the training samples. These experiments **show that further advances beyond fine-tuning are needed to truly close the gap on PhantomWiki**. Please see also our rebuttal to reviewer YauR for full experiment details.
**3. Evaluating retrieval**
> the questions and articles are not appropriate for neural retrieval.
> a question like "What is the job of the father of the uncle of the sister of {personA}?" requires iterative bridge entity resolution, making neural retrieval ineffective.
> RAG prompting only retrieved 4 most relevant documents, which are likely insufficient for many multi-hop questions.
To address all three concerns, we **include new multi-hop RAG baselines** (namely, IRCoT and Self-Ask) using BM25 (top-k=5) as the retriever and Llama-3.3-70B-Instruct as the generator model: see https://imgur.com/a/OMjtf3t. We find that these baselines **outperform** the original ZeroShot-RAG and CoT-RAG baselines on **low-difficulty** questions, but **similarly struggle** on questions with **high difficulty**.
**4. Ambiguous questions & Calculation of F1 score**
PhantomWiki questions always have at least one correct answer and often have multiple correct answers, e.g., “Who is the friend of X?” if X has several friends, or “What is the hobby of the sister of X?” if X has multiple sisters. Furthermore, we explicitly instruct the model to return all possible answers (see prompts in App. C). We compare the LLMs’ predicted answer list against the ground-truth list to compute precision, recall, and F1 per question. Final F1 scores (e.g., in Table 2) are averages over all 500 questions in a PhantomWiki instance. **We will include this clarification in Section 4.3.**
**5. Additional references**
We will include your suggested references on synthetic data generation in our final manuscript.
We hope that these **new experiments** and clarifications have significantly strengthened our message and empirical results, and if so we would like to politely ask you to consider raising your score. Thank you! | Summary: This paper presents a solution for creating a high-quality benchmark to evaluate the RAG and reasoning abilities of LLMs. Specifically, the proposed method, PhantomWiki, introduces a novel pipeline for generating unique, factually consistent document corpora with diverse question-answer pairs for evaluation. PhantomWiki generates a new instance on demand for each evaluation, effectively mitigating data leakage and inflated performance issues. By varying the difficulty of questions and the size of the corpus, the framework disentangles reasoning from retrieval capabilities.
Through experiments involving various document sizes, question difficulties, and frontier long-context LLMs, PhantomWiki demonstrates itself as a challenging benchmark for state-of-the-art models. It offers a scalable and leakage-resistant evaluation method for assessing reasoning, retrieval, and tool-use abilities.
Claims And Evidence: Please see the “Other Strengths And Weaknesses” section below.
Methods And Evaluation Criteria: Please see the “Other Strengths And Weaknesses” section below.
Theoretical Claims: Please see the “Other Strengths And Weaknesses” section below.
Experimental Designs Or Analyses: Please see the “Other Strengths And Weaknesses” section below.
Supplementary Material: There is no supplementary material available.
Relation To Broader Scientific Literature: This paper relates to retrieval-augmented generation and reasoning abiltiies of long-context LLMs. It also relates to dataset and benchmark construction and evaluation.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: **Strengths:**
1. The proposed benchmark framework, PhantomWiki, is free from data leakage and can be adjusted to control reasoning complexity and retrieval difficulty. The authors also ensure that the generated documents are accurate and error-free, making the dataset high quality.
2. The details of the dataset construction are well-documented, accompanied by solid experiments with various complexity and context lengths, and analysis for evaluating reasoning and retrieval abilities under different conditions.
3. If publicly released, the benchmark framework could benefit the community in evaluating future generations of language models.
**Weaknesses:**
1. This paper could be improved by incorporating more recent baselines. The reason RAG performs poorly in Figure 3 is that it retrieves documents only once. However, several recent methods ([1][2][3], etc.) allow for multiple retrievals, significantly enhancing answer accuracy.
2. While using a context-free grammar and Prolog to generate question-answer pairs is intriguing, the concept of generating questions beyond the pre-training set of LLMs is not novel. For instance, [4][5][6] also generate questions that are resistant to data leakage and are dynamic rather than fixed. The innovation of this paper lies primarily in proposing an unreal universe setting and the ability to dynamically generate questions and corpus sizes of varying difficulty, which may be limited.
3. The domain of the generated questions is limited to the universe. While this helps prevent data leakage, it also restricts the comprehensive evaluation of LLMs. The generated questions tend to be less diverse, primarily focusing on reasoning between personal relationships and objects. In contrast, the questions in [4] are more diverse, covering temporal, spatial, mathematical, social, scientific, and personal contexts.
[1] Enhancing Retrieval-Augmented Large Language Models with Iterative Retrieval-Generation Synergy
[2] Interleaving Retrieval with Chain-of-Thought Reasoning for Knowledge-Intensive Multi-Step Questions
[3] Adaptive-RAG: Learning to Adapt Retrieval-Augmented Large Language Models through Question Complexity
[4] ToolQA: A Dataset for LLM Question Answering with External Tools
[5] RealTime QA: What's the Answer Right Now?
[6] FreshLLMs: Refreshing Large Language Models with Search Engine Augmentation
Other Comments Or Suggestions: Please see the “Other Strengths And Weaknesses” section.
Questions For Authors: Please see the “Other Strengths And Weaknesses” section.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful and encouraging assessment of PhantomWiki. We are pleased that you find PhantomWiki a high-quality benchmark for fine-grained evaluation of reasoning and retrieval. We are also delighted to hear that our work is **well-documented**, and presents **solid experimentation and analysis**. We address your concerns below and outline how we plan to revise the paper accordingly.
**1. Multi-hop RAG baselines**
> This paper could be improved by incorporating more recent baselines. e.g. IRCoT
We add a **new experiment** with results of IRCoT from Trivedi et al. (2022) and Self-Ask from Press et al. (2022) in comparison to ZeroShot-RAG and CoT-RAG: see https://imgur.com/a/OMjtf3t. We would like to point out that **IRCoT and Self-Ask continue to struggle on questions with high difficulty**, measured by number of reasoning steps. We will include a more detailed comparison of multi-hop RAG to Agentic prompting techniques in our main paper.
**2. Comparison to other existing benchmarks**
> While using a context-free grammar and Prolog to generate question-answer pairs is intriguing, the concept of generating questions beyond the pre-training set of LLMs is not novel.
We appreciate the reviewer’s observation regarding prior benchmarks that also generate dynamic or leakage-resistant question-answer pairs. However, we respectfully argue that **PhantomWiki is fundamentally different** in motivation, construction, and evaluative capabilities compared to these works:
- ToolQA [4] focuses on tool-use evaluation, involving curated tools and questions that explicitly require external tools (e.g., calculators, databases). While it minimizes data overlap, its scope is about measuring LLMs’ ability to use predefined tools. In contrast, PhantomWiki evaluates **reasoning and retrieval in a self-contained world** and no external tools are assumed. The choice of external tools is left to the LLM and the prompting method.
- REALTIME QA [5] and FreshQA [6] both target dynamic, real-world, up-to-date question answering, emphasizing temporal freshness and world knowledge. These benchmarks evaluate how well LLMs adapt to new information by querying real events or fast-changing facts. Our work takes the opposite approach: we **intentionally avoid real-world knowledge to enable clean disentanglement of reasoning and retrieval**.
The core novelty of PhantomWiki lies **not just in generating unseen questions**, but in generating **entire fictional universes**, complete with articles, knowledge graphs, and questions. As you also point out, our datasets are **coherent**, and allow for **fine-grained control** over question difficulty and retrieval complexity—these functionalities distinguish PhantomWiki from [4,5,6]. LLMs can thus be evaluated on-demand on their ability to navigate and reason over an unseen yet structured universe.
We appreciate the reviewer’s framing—you're right that our main contribution lies in the fine-grained controlled evaluation that fictional synthetic universes offer. We’ll make this positioning clearer in the revised manuscript and expand the related work section to reflect this comparison.
**3. On Question Diversity and Domain Scope**
> The generated questions tend to be less diverse, primarily focusing on reasoning between personal relationships and objects.
We appreciate the reviewer’s point about the limited scope of the current question types, which primarily focus on relationships and attributes within a fictional universe. This design was intentional for the initial version of PhantomWiki, as it allowed us to **carefully control the reasoning complexity** and ensure data consistency and leakage resistance.
That said, we completely agree that increasing the diversity of question types—especially to include temporal, causal, and dynamic aspects of the universe—would significantly expand the benchmark’s utility and better reflect real-world reasoning demands.
Due to space constraints in this paper we are planning future work on extending PhantomWiki to support temporal components, such as characters aging, changing jobs, or forming new relationships over time. This will enable questions involving temporal reasoning, event sequences, and dynamic changes in the knowledge graph. These additions would naturally move PhantomWiki closer to the evolving and dynamic nature of benchmarks like REALTIME QA [5] and FreshQA [6], while still retaining the **key benefits of synthetic control and contamination resistance**.
We’ll highlight these future directions more clearly in the discussion section of the revised manuscript, and we appreciate the reviewer for raising this valuable point.
Thank you again for your detailed review, and please let us know if you have any more questions or concerns that could help improve the paper further!
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for their efforts and thoughtful responses.
I appreciate the inclusion of additional experiments on multi-hop RAG baselines. Indeed, when multiple retrievals are allowed, multi-hop RAG demonstrates slightly better performance than naïve RAG. This enhancement contributes to a more comprehensive evaluation of the proposed benchmark.
I agree that RealTimeQA and FreshQA focus on dynamic world knowledge. While they also provide non-contaminated data for evaluation, their purpose differs from that of PhantomWiki. However, ToolQA evaluates not only LLMs’ ability to use tools but also their reasoning capabilities, as LLMs must determine which tools to utilize for different questions.
While PhantomWiki generates entire fictional universes to enable fine-grained control over reasoning complexity, the current version's questions are primarily limited to relationships and attributes. Additionally, its approach to "controlling reasoning complexity" is based solely on recursion depth, which may not align with real-world scenarios. In reality, it is rare for questions to require reasoning over more than three or four chained relationships (in the paper, the maximum depth is 20). In contrast, the questions generated by ToolQA are more diverse and realistic, better reflecting real-world reasoning challenges.
---
Reply to Comment 1.1.1:
Comment: Thank you for your comments! We are happy to hear that the **new multi-hop RAG experiments have helped us address the first point in your initial review and strengthen our results**, and that **PhantomWiki differentiates itself well** from RealTimeQA and FreshQA through deliberate evaluation of **external knowledge-independent** reasoning abilities.
We also agree with you that ToolQA has many strengths that complement those of PhantomWiki. Would like to expand on these two works and highlight the differences.
**ToolQA**:
* ToolQA has been initially designed to evaluate whether a model answers questions using tools or recalls a memorized answer. Its main strengths include a range of application domains, the use of (combinations of) tools, and diverse question templates (albeit in a relatively small number per application domain).
* With one exception, ToolQA's underlying datasets are **static and memorizable**. At the time of publication (2023), the datasets were contemporary and therefore suitable for assessing tool use vs memorization. However, this selection was **short-term** and is now **at risk of becoming obsolete** for this purpose. For example, ToolQA's math questions were based on the "error cases made by ChatGPT" on GSM8K; it has since been demonstrated that commercial LLMs might be memorizing GSM8K's test case answers (cf. L44-49 of manuscript). We also believe that criteria like "information is too detailed for LLMs’ internal knowledge" (AirBnB; B.1 of ToolQA) will soon (if not already) become obsolete and in itself a **weak protection from data leakage**. Since ToolQA's datasets have been **manually curated**, any updates of ToolQA for next-generation LLMs is a nontrivial and yet short-term solution.
* The only dataset underlying ToolQA that could in principle be non-static (Agenda) is generated using LLMs. As shown in our updated experiments, even rewriting a fixed set of facts can introduce inconsistencies which **need to be manually verified**; in their OpenReview submission, the authors of ToolQA mention reviewing the dataset for "3-4 rounds… to guarantee the validity", and that "it is quite **difficult to guarantee question quality**".
**PhantomWiki**:
* We instead focus on a more limited, but **more comprehensive and long-lasting methods for evaluation of retrieval and reasoning**.
* We focus on making our benchmark **more resistant to updated LLM knowledge cutoffs** by introducing **on-demand, automatic** generation of new dataset versions **without any human involvement**.
* We **intentionally stress-test** LLM reasoning and retrieval abilities through arbitrarily difficult reasoning questions. Just like we would expect a math LLM to be able to add **any** two numbers (not just the "real-world" two numbers, or a "real-world" number of sequential additions), we expect a true reasoning LLM to be able to do **multi-branch multi-step reasoning regardless of the number of steps**—as straightforward logic programs can answer these questions trivially. We would also like to emphasize that we control difficulty not just through the number of steps, but also by testing on **all possible solutions** to a question ("multi-branch reasoning"). We will make these points more clear in the revised manuscript.
* We demonstrate that **frontier LLMs already struggle with PhantomWiki, despite its limited universe setting and despite its tasks being easily solvable** (for instance by logic programs). We appreciate and fully agree with your suggestions that PhantomWiki can be made even more diverse and realistic by, e.g., augmenting the universe with spatio-temporal data, introducing new types of entities and relationships (even arbitrary ones with *no* real-world meaning to completely disentangle internal knowledge from reasoning), and extending the context-free grammar to have an even larger diversity of verifiable QA templates. Due to space constraints and a substantial increase in methodological complexity of these additions—and PhantomWiki's value to the community even in its current version—we decided to postpone them to future work. | Summary: The manuscript introduces PhantomWiki, a generator for fictional universe of characters in the form of a fandom wiki. The knowledge graph for characters, their relations, and facts about them, are generated by sampling simple distributions. Articles are generated from these facts using templates. Questions are generated using templates, and answers are generated using Prolog solver on the original facts. Different size of fictional wiki can be generated using different numbers of characters, and easier/harder questions may be considered in terms of the number of "hops" required to answer them. PhantomWiki thus generates benchmarks involving novel "facts" to evaluate different language models and techniques in different regimes.
Claims And Evidence: The manuscript does not provide an explicit list of claims, so I identified some myself.
### Claim 1: PhantomWiki can be tuned to benchmark different aspects/regimes
The benchmark can be made smaller or larger than a model's context length, and the questions can be made to require more or less documents to be answered. Varying these parameters yield consistent results on In-Context, RAG and Agentic strategies. Ignoring eventual Claim 3 issues (below), I judge that Claim 1 is supported by evidence.
### Claim 2: PhantomWiki is resistant to data contamination
By construction, models evaluated on PhantomWiki cannot memorize the exact answer to a question, nor any of the key information required in the intermediate steps, because this data is generated on-the-fly, immediately before evaluation.
However, once PhantomWiki is release in the wild, models can be trained, fine-tuned, etc. to perform better on PhantomWiki. Indeed, the probability distribution over the different generated datum can be learned, and learning how to best extract key information pieces from template-generated documents is easy for LLMs.
My assessment is that, while there exists a weaker claim that is supported, the claims made in the current manuscript are too broad. This is the topic of my Question 1 below.
### Claim 3: (Implicit) PhantomWiki is a good/useful benchmark that assesses aspects that other benchmarks don't assess
Assuming that there is no bug in the code, there should, by construction, be valid answers(s) planted in the dataset for each question (and the manuscript mentions that the case where there are multiple correct answers is handled properly). In a sense, the benchmark should be "more valid" than one that relies on human or LLM annotations.
However, although the manuscript presents experimental results for combinations of LLMs and reasoning/retrieval paradigms (In-Context, RAG, Agentic), PhantomWiki **itself** is not subjected to any direct evaluation of fitness for a purpose, nor are its scores related to existing benchmarks for consistency and/or redundancy assessment, nor is the score variability between two instances of PhantomWiki assessed.
My Question 2 below pertains to this topic.
Methods And Evaluation Criteria: The results presented make sense on their own. However, the purpose of the manuscript is to introduce a benchmarking method, and that benchmarking method itself is not directly evaluated. See my comments on Claim 3 and Question 2.
Theoretical Claims: The description of how the original graph/data is generated makes sense, though is quite simplistic. The use of a single template to generate all entries is very disputable (and part of my concerns in Claim 3), especially considering that this template basically amounts to concatenate sentences of the form "The X of Y is Z.". The generation of questions and answers make sense, though I did not verify in depth the grammars and templates.
Experimental Designs Or Analyses: The experimental design uses the introduced benchmark to assess different LLMs in different retrieval/reasoning modes. For that purpose, the experimental design appears decent to me. However, as mentioned before, there are no direct experiments evaluating the benchmark itself.
Supplementary Material: I browsed it but didn't delved in depth.
Relation To Broader Scientific Literature: The related work section focuses on agent and/or tool-use evaluation benchmarks, but neglects "long-context" ones. Some suggestions are provided in the next section.
> Importantly, none of these benchmarks creates the underlying corpus, a limitation which we bridge in this work.
∞Bench (Zhang et al., 2024) uses a key entity replacement strategy to this end. However, it is subjected to the "Beethoven might have met Mozart" failure mode mentioned by the authors. RepLiQA (Monteiro et al. 2024) have humans generate the corpus. However, the questions can all be answered using a single document.
This work could be framed as the generation and verbalization of a knowledge graph. Some relevant works are listed below.
Essential References Not Discussed: Some long-context benchmarks:
- Hsieh et al. RULER: What’s the Real Context Size of Your Long-Context Language Models? COLM 2024. https://openreview.net/pdf?id=kIoBbc76Sy
- Wang et al. Leave No Document Behind: Benchmarking Long-Context LLMs with Extended Multi-Doc QA. EMNLP 2024. https://aclanthology.org/2024.emnlp-main.322.pdf
- Zhang et al. ∞Bench: Extending Long Context Evaluation Beyond 100K Tokens. ACL 2024. https://aclanthology.org/2024.acl-long.814.pdf
On creating the corpus:
- Monteiro et al. RepLiQA: A Question-Answering Dataset for Benchmarking LLMs on Unseen Reference Content. NeurIPS 2024. https://proceedings.neurips.cc/paper_files/paper/2024/file/2b23626015b6311369e95a70735cbb72-Paper-Datasets_and_Benchmarks_Track.pdf
Knowledge graphs:
- Agarwal et al. Knowledge Graph Based Synthetic Corpus Generation for Knowledge-Enhanced Language Model Pre-training. ACL:HLT 2021. https://aclanthology.org/2021.naacl-main.278.pdf
- Ye et al. Generative Knowledge Graph Construction: A Review. EMNLP 2022. https://aclanthology.org/2022.emnlp-main.1.pdf
Other Strengths And Weaknesses: The core idea of automatically generating a benchmarking corpus, with associated questions and answers, parametrized in terms of corpus size and question complexity, is a promising one.
Other Comments Or Suggestions: > friendship graph using the Erdős–Rényi model
Friendship (social) networks, real and fictional, diverge in many way from this model. In my personal opinion, the two main aspects worth capturing are transitivity ("I am more likely to be friend with the friend of my friend than with with a random person") and heavy-tailed degree distribution ("most people have few friends, few people have lots of friends"). If the authors are interested to learn more, they may start with https://en.wikipedia.org/wiki/Triadic_closure and https://en.wikipedia.org/wiki/Scale-free_network for these two aspects, respectively, and https://en.wikipedia.org/wiki/Social_network_analysis for a more general discussion. No actual changes are requested on this manner: these issues are very weakly coupled with the manuscript's concrete goals.
Questions For Authors: ### Question 1: Do you have a solution to propose for my concerns with Claim 2?
One possible avenue is to tone down the resistance to data contamination claims. Another avenue is to try to actually fine-tune models on multiple instances of PhantomWiki, and assess how this affects performances on fresh instances. In any case, please state what edits would be made to the manuscript.
### Question 2: Can you provide more direct evidences that PhantomWiki provides actionable assessments of language models?
One possible avenue is to relate the results on PhantomWiki to pre-existing benchmarks and/or human evaluations. Other avenues involve designing and running new experiments, but I understand that time is short. Finally, perhaps I've missed something obvious, and all you have to do is to explain it better to me.
In any case, please keep in mind the following question "**What would negative results look like?**". In the (potentially counterfactual) scenario where PhantomWiki turned out to not be a useful benchmark, what different observations would have been made? I acknowledge that the narrative consistency of results mentioned in Claim 1 is weak evidence in PhantomWiki's favour, but I'm looking for something more direct.
### Question 3: Could you provide examples of a small corpus?
Say, generate $n=4$ articles in an appendix, with some associated question/answer pairs? Figure 2 (2) is very minimal...
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: **1. Question 1 / Evaluating language models fine-tuned on PhantomWiki data**
> Another avenue is to try to actually fine-tune models on multiple instances of PhantomWiki, and assess how this affects performances on fresh instances.
We add a **new experiment** and find that **fine-tuning on PhantomWiki data helps** performance on held-out instances, but only **to a limited extent**: see https://imgur.com/a/QHftpCM (please see also our rebuttal to reviewer YaUR for full experiment details). Both prompting and fine-tuning approaches struggle as question difficulty increases. Thank you for suggesting this experiment idea! We are excited to see how PhantomWiki inspires research in LLM reasoning.
**2. Response to Claim 3 and Question 2**
> PhantomWiki itself is not subjected to any direct evaluation of fitness for a purpose, nor are its scores related to existing benchmarks for consistency and/or redundancy assessment, nor is the score variability between two instances of PhantomWiki assessed.
We appreciate your framing of “what would negative results look like?”, which we find especially helpful for clarifying our benchmark’s utility. Below, we outline how PhantomWiki provides actionable assessments and how we plan to revise the manuscript.
A poorly designed benchmark might fail to appropriately differentiate models and/or distinguish between reasoning paradigms. Our experiments suggest the opposite: **PhantomWiki differentiates performance** based on both model architecture and prompting techniques, and **elucidates failure modes** (e.g., hallucinated intermediate hops or context retrieval mismatches) that align with known model weaknesses.
Based on our evaluation results, PhantomWiki is _not_ a good benchmark for reading comprehension, as evidenced by the near perfect F1 scores of all LLMs/prompting techniques for questions that require 1 reasoning step. Indeed, PhantomWiki is meant to complement more complex reading comprehension benchmarks like DROP and SQuAD. Nonetheless, it is a good benchmark for evaluating **multi-step and multi-branch reasoning**. Quantitatively, we can see this from the rapid drop in performance in Figure 3 as the number of reasoning steps increases.
One area where LLMs struggle is with complex relations. For example, Llama-3.3-70B-Instruct struggles with “Who is the great-grandchild of X?”, not because it doesn’t know what great-grandchild means, but rather because of a high step count (child of child of child) and a high branching factor (having to find the children of each grandchild). One actionable insight is to improve language models’ ability to keep track of not only multiple steps (this is corroborated by the Self-Ask paper, which highlights a compositionality gap), but also multiple branches. PhantomWiki thus **sheds light on both of these challenges** in LLM reasoning. We will include the discussion of failure modes of different models in the revision.
In terms of benchmark stability, we run all the evaluations with the same hyperparameters on the first three seeds of generated PhantomWiki instances and **measure variability** through standard deviation across runs. This provides evidence that **PhantomWiki is robust** to sampling variance and not overly sensitive to individual instance characteristics.
> The use of a single template to generate all entries is very disputable (and part of my concerns in Claim 3), especially considering that this template basically amounts to concatenating sentences of the form "The X of Y is Z."
**In a new experiment, we use Llama-3.3-70B-Instruct to rephrase our templated articles** (see prompts: https://imgur.com/a/ffvugvg and example generations: https://imgur.com/a/wFvjAFP), which we then use for downstream evaluation: see https://imgur.com/a/DQuvQIA (please see also rebuttal w1Q5 for further details). We report similar performance trends as in Figure 3 from our manuscript, and in fact, using rephrased articles only makes PhantomWiki more challenging. Thus templated articles have the benefit of being cheaper, faster, and free of LLM hallucinations, while still providing valuable insight into LLM reasoning capabilities. Importantly, PhantomWiki is designed to be modular to incorporate LLM-generated articles, as future research in reducing LLM hallucination matures.
**3. Additional references on long-context and friendship graphs**
We greatly appreciate the reviewer’s suggestion of connecting to the long-context literature. We will revise the Related Works section to include them. We also plan to support more realistic friendship graphs, especially heavy-tailed degree distributions, in future versions of PhantomWiki.
**4. Example of a small universe**
Please see https://imgur.com/a/QLhC54R.
We hope the new experiments and discussion addressed your concerns and strengthened our results, and if so we would like to politely ask you to consider raising your score. Thank you again!
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarifications.
With the expectation that the camera ready will be adapted accordingly, including mentions in the introduction and/or abstract as to what the benchmark is good at evaluating (e.g., evaluate multi-step multi-branch reasoning) and what it isn't (e.g., reading comprehension), I hereby increase my Overall Recommendation from 2 to 4. | Summary: The paper presents PhantomWiki, a pipeline that generates synthetic large-scale document corpora and question-answer pairs. PhantomWiki works by generating articles containing facts about characters (e.g., “The job of x is y.”), and generating question-answer pairs from templates. PhantomWiki allows generating large corpora, exceeding the context length of current LMs. The paper experiments with in-context prompting, RAG, and agentic RAG models on PhantomWiki, and demonstrates that current models struggle with large corpora and complex questions generated using the PhantomWiki pipeline.
Claims And Evidence: While the paper has merits, and I believe the proposed method can be a promising alternative to the popular “needle-in-a-haystack” experiments, I believe that the claim that the current method is reliable in evaluation of RAG models is problematic.
My main concern regards the templated data generation approach. Specifically, the distribution of the generated documents (e.g., “The job of x is y.”) and questions (templates are presented in B.2), seems to be very different from realistic RAG settings. Second, as the data was generated via templates, I am concerned it will be easy to game with additional training (for example by using the same templates to generate training data).
Methods And Evaluation Criteria: I believe the paper could benefit from additional multi-hop RAG basslines (e.g., IR-CoT, Self-Ask).
Additionally, as the paper proposed a new evaluation method for RAG, it will be helpful to evaluate the gains from fine-tuning on the synthetic distribution (see Claims and Evidence).
Theoretical Claims: None.
Experimental Designs Or Analyses: I checked the soundness of the experiments in Sections 4-6.
Supplementary Material: I looked at the templates and prompts in the appendix.
Relation To Broader Scientific Literature: By generating a large corpora with matching question-answer pairs that is robust to data contamination teh paper proposes an interesting evaluation method for long-context reasoning, which has several benefits over previous approaches (e.g., “needle-in-a-haystack”). The paper also compares in-context prompting, RAG, and agentic-RAG models, showing all approaches struggle as the number of reasoning steps increases.
Essential References Not Discussed: I believe that the paper can benefit from a discussion regarding multi-hop RAG (e.g., IR-CoT), and synthetic data generation from KG (many methods use a KG, e.g., Wikidata, and then generate questions synthetically using the relations).
Other Strengths And Weaknesses: The paper is overall well-written and easy to follow.
Other Comments Or Suggestions: None
Questions For Authors: None
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your thoughtful comments and suggestions, which we believe improve our manuscript significantly. We are pleased that you find PhantomWiki a **promising alternative to the popular “needle-in-a-haystack” test**. We address your comments below and will add results for suggested experiments to our manuscript.
**1. Use of templated articles**
Thank you for pointing this out. **We have added an experiment where we use Llama-3.3-70B to rephrase our templated articles** (see prompts: https://imgur.com/a/ffvugvg and example generations: https://imgur.com/a/wFvjAFP) and evaluated on these rephrased PhantomWiki corpora: see https://imgur.com/a/DQuvQIA. We find the **drop in F1 scores versus question difficulty to be similar** to our original experiments (Table 2 of the manuscript). On the other hand, templated articles are cheap, fast, and free of LLM hallucinations, while enabling compelling evaluations for LLM reasoning capabilities. Importantly, PhantomWiki is designed to be modular to incorporate LLM-generated articles, as future research in reducing LLM hallucination advances.
**2. Evaluating LLMs finetuned on PhantomWiki data**
> Second, as the data was generated via templates, I am concerned it will be easy to game with additional training (for example by using the same templates to generate training data).
> Additionally, as the paper proposed a new evaluation method for RAG, it will be helpful to evaluate the gains from fine-tuning on the synthetic distribution (see Claims and Evidence).
Thank you for suggesting this experiment. As the relationships, names, and attributes in each PhantomWiki universe are generated randomly on-demand, we believe that naive training on PhantomWiki data can only yield limited improvements. To support this claim, we have a **new experiment** where **we generate 10 new PhantomWiki dataset instances** (question depth 20 and universe size 50) amounting to 5K training question-answer pairs. We then **perform full fine-tuning** of Qwen2.5-0.5B-Instruct and parameter-efficient fine-tuning of Qwen2.5-3B-Instruct with LoRA on all linear layers.
For each base model, we employ two popular training algorithms. The first is Group Relative Policy Optimization (GRPO) from [Shao et al.](https://arxiv.org/abs/2402.03300) with an F1-score reward between 0 and 1. We use the CoT prompt template from App. C.4, a batch size of 32, and 8 generations per prompt to sample. The second is supervised fine-tuning (SFT) with answer(s) as the ground-truth label. We use the zeroshot prompt in App. C.2 and a batch size of 4. For all training experiments, we train for 3 epochs (or until convergence) using the AdamW optimizer with initial learning rate set to $5\times 10^{-6}$ for full fine-tuning and $10^{-4}$ for LoRA fine-tuning.
We then evaluated the fine-tuned models using the PhantomWiki instances of size $n=50$ from Table 2: see https://imgur.com/a/QHftpCM. For Qwen2.5-0.5B, we find that GRPO and SFT both improve F1 compared to prompting-based methods, likely due to improved ability to output the proper answer format. For Qwen2.5-3B, we find that GRPO improves F1 slightly, whereas SFT worsens F1, likely due to overfitting on the training samples. These experiments show that **further advances beyond fine-tuning are needed to truly close the gap on PhantomWiki**; we hope that PhantomWiki will serve as a valuable tool for future research on LLM reasoning and retrieval.
**3. Additional multi-hop RAG baselines e.g. IRCoT and Self-Ask, discussion on synthetic data from KG**
Following your suggestion, **we have performed additional evaluation using IR-CoT and Self-Ask** with BM25 as the retriever and Llama-3.3-70B-Instruct as the generator model: see https://imgur.com/a/OMjtf3t. Specifically, we used the implementations from FlashRAG ([Jin et al. 2024](https://github.com/RUC-NLPIR/FlashRAG)) and re-wrote the few-shot examples where needed to match the formatting of PhantomWiki questions. We find that both IR-CoT and Self-Ask are competitive with ZeroShot-RAG and CoT-RAG. However, when decomposing model performance on question difficulty (like Figure 3), **both IR-CoT and Self-Ask struggle like other prompting methods**. In fact, this is a key contribution of PhantomWiki: quantitatively decomposing performance of prompting methods and LLMs on axes of reasoning and retrieval.
Notably, IR-CoT and Self-Ask alternate between reasoning and retrieval like agentic prompting (e.g., ReAct). A key difference: ReAct uses LLM-driven retrieval via tools, while multi-hop RAG methods delegate retrieval to an external retriever—decoupling reasoning and retrieval. We’ll highlight this nuance in the revised manuscript. We’ll also add a brief discussion about generating data from knowledge graphs.
Given these **new experiments** support the paper's message and **strengthen our empirical results**, we would like to politely ask you to consider increasing your score. Thank you again! | null | null | null | null | null | null |
Explainable Multi-modal Time Series Prediction with LLM-in-the-Loop | Reject | Summary: This paper proposed a multi-modal prediction framework that integrates a prototype-based time series encoder with three collaborating LLMs to deliver more accurate predictions and interpretable explanations. The closed-loop workflow – prediction, critique, and refinement – continuously boosts the framework’s performance and interpretability. Empirical evaluations demonstrate good performance.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Yes.
Experimental Designs Or Analyses: Yes.
Supplementary Material: Yes.
Relation To Broader Scientific Literature: LLM, multi-modal time series analysis, time series explanation.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: **Strength**: \
(1) Propose a prototype-based encoder that combines time series data with textual context, producing transparent, case-based rationales. \
(2) Detailed related works. \
(3) Comprehensive comparative experiments were conducted to meticulously evaluate and analyze the performance. \
(4) The writing is clear and the method pipeline is easy to follow.
**Weaknesses**: \
(1) In my opinion, this paper lacks sufficient innovation. \
(2) The related work section, especially for LLMs for time series analysis, merely lists various methods without comparing their strengths and weaknesses, failing to provide a clear motivation, or starting point for the proposed approach in this paper. \
(3) The experimental section lacks thoroughness and completeness. For instance, it fails to evaluate the model's performance on future event time prediction tasks, omits comparisons with advanced temporal point process models, and does not explore the impact of different base LLM models.
Other Comments Or Suggestions: See questions below.
Questions For Authors: (1) I find that the method proposed in this paper appears to be a mere amalgamation of existing tools, which leads me to challenge its novelty. I think the paper lacks sufficient innovation. But I am interested in seeing how other reviewers assess the novelty of this work. \
(2) How to divide text data into multiple meaningful segment? Please explain more about the text data and provide examples for clarity. \
(3) This paper primarily focuses on predicting discrete labels, which underutilizes the strengths and flexibility of LLM. Could this model be adapted to predict future event times? If so, what extensions would be necessary? \
(4) For sequence data, temporal point process model is a good choice to describe dynamics. I suggest the authors to compare the model performance with advanced TPP models. \
(5) Could the authors compare the performance of different base LLM models and provide a corresponding reproducibility analysis? \
(7) Could the authors provide detailed prompt designs for LLM models?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **Question 1 and Weakness 1,2: Innovation and scope:**
We appreciate the reviewer’s comments and the opportunity to clarify our contributions. Our work is motivated by the need for effective explanation, multi-modal time series understanding, and contextual reasoning—areas that are underexplored in current literature. Compared with existing LLM-based methods, our approach offers a unique advantage through: (1) explicit, case-based explanations grounded in multi-modal time series inputs, and (2) closed-loop interactions between time series models and LLMs that enable iterative understanding and refinement of real-world contexts. We will revise the related work section to more clearly articulate these distinctions.
**Question 2: Meaningful text segment**:
Thanks for the question! As discussed in Section 3.2.1 (Page 3), capturing meaningful text segments (i.e, text prototypes) relies on two components, the pretrained language model producing text embeddings for input texts, and the sequence encoder mapping text embeddings and performing prototype learning. 1) The first component decides the granularity of prototypes at the input level. For example, Bert represents each text input as multiple tokens (token-level embeddings), while S-bert represents it as multiple sentences (sentence-level embeddings). 2) The second component further encodes these text embeddings. We use the convolution-based encoder to get segment representations by applying sliding windows over the text embeddings. With multiple consecutive segment representations, we perform prototype learning with regularizations and projections to identify the most typical segments.
**Question 3 & 4 and Weakness 3:**
**Event time prediction and temporal point process:** Thanks for the question! We agree with the reviewer that this is an interesting extension. However, we would like to clarify that our setting is fundamentally different from the temporal point process (TPP) setting and its event time prediction task. TPP models are specifically designed for **irregular event sequences**, but our problem setting and datasets are for **regularly sampled multivariate time series** with categorical or continuous outcomes at each step, following the standard time series prediction paradigm. Our objective is not to predict when the next event will occur, but rather to predict what will happen at each regularly observed time step. Therefore, event-time prediction is not applicable in our setting, and TPP models are not directly relevant for comparison.
Adapting our method to this task would require major changes: the encoder must handle irregular timesteps, and the output layer must predict inter-event intervals or model intensity functions. While our prototype design and LLM agents can be modified, it is outside the scope of the current work. Nevertheless, we appreciate the reviewer’s insight and agree that TPP is a valuable framework for modeling temporal dynamics, and will discuss representative works and their differences in the updated version.
**Beyond discrete label prediction:** We provided regression results in Appendix F (pages 23-25). We also provide baselines comparisons and model analysis to demonstrate its efficacy for regression tasks. Please refer to the rebuttal for **reviewer g9bw** for detailed results and discussions.
**Question 5 and Weakness 3: Base LLMs**
Thank you for this question! To explore the effect of different base LLMs (Gemini-2.0 Flash and GPT-4o-mini), we provide the experiment results and analysis on Healthcare (Test-Positive) data. For fair comparisons, we used the same prompts and ran the same number of iterations and followed the same settings detailed in Appendix A.2 and A.4 (pages 13-14). We still use the same temperature setting (0.7 for content generation, 0.3 for prediction), as it yields the best performance empirically. The results are shown below.
|Base LLM|F1|AUC|
|-|-|-|
|GPT-4o-mini|0.932|0.981|
|Gemini-2.0-Flash|0.937|0.983|
|GPT-4o, default|0.987|0.996|
GPT-4o clearly outperforms both GPT-4o-mini and Gemini-2.0-Flash, due to its better reasoning and contextual understanding capabilities (larger model size & pre-training corpus). Both GPT-4o-mini and Gemini-2.0-Flash are still competitive compared with baselines listed in Table 1, Page 7. It reveals the impact of the LLM capability on the effectiveness of our framework, especially in tasks demanding real-world context understanding.
We also provide the plot of iterative analysis in [this link](https://anonymous.4open.science/r/rebuttal-D4F5/Base-LLMs-iteration.pdf), where we can also observe the performance improvement over iterations for different base LLMs.
**Question 7: Detailed prompt designs**
As indicated in Section 3.3.2 (Page 5), we provided the detailed prompt templates in Figures 13-17 in Appendix D (Pages 18-21). We also provided the code containing specific prompts in the supplementary materials. Please refer to them. | Summary: The paper introduces TimeXL, a multi-modal prediction framework designed to integrate both time series data and textual information, addressing a common limitation in existing time series models that often neglect auxiliary textual data available in real-world scenarios. A key contribution of the paper is a new encoding approach for textual data, leveraging prototypical explanations to extract meaningful representations. Building on this, the framework employs iteratively three agents that progressively refine the textual data, modifying it in a structured manner to improve the overall prediction quality. The effectiveness of TimeXL is demonstrated empirically in Table 1 and the appendix, where the authors present experimental results showcasing the superiority of their approach over existing methods.
Claims And Evidence: The claims made in the submission are generally well-supported by clear and convincing evidence. The authors provide strong quantitative results that demonstrate the superiority of their approach, and they supplement this with qualitative examples that illustrate how the data is transformed.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are well-suited to the problem at hand. The authors conduct a thorough evaluation by comparing their approach to 16 baselines across three distinct domains—health, finance, and weather. This diverse benchmarking provides a strong basis for assessing the generalizability and effectiveness of their method.
One limitation is the choice of evaluation metrics. While the focus on binary classification (e.g., rain/no-rain) is relevant, it would be beneficial to also report MSE or MAE, as these are commonly used metrics that provide insight into performance at each time step. Including such results would strengthen the evidence presented.
Theoretical Claims: There are no theoretical claims in the paper
Experimental Designs Or Analyses: Yes, I reviewed the experimental design and analyses presented in the submission. The comparisons against 16 baselines across three domains (health, finance, and weather) appear sound and provide a comprehensive evaluation of the proposed method. The quantitative results are compelling, and the qualitative examples help illustrate the data transformations. One potential improvement could be the inclusion of additional evaluation metrics, such as MSE or MAE, to provide more granular insights at each time step
Supplementary Material: Yes, I reviewed the supplementary material, specifically Appendix A and Appendix D
Relation To Broader Scientific Literature: The paper's key contributions relate to broader scientific literature in several important ways:
* **Text as additional data beyond time series** - Extends traditional numerical forecasting by elevating text to a primary data source rather than supplemental features
* **Agenic AI application** - Evolves from passive prediction systems to active forecasting agents that autonomously direct information gathering
* **Explanations for text encoding** - Advances beyond post-hoc explanations by integrating interpretability directly into the encoding process
* **Iterative text refinement** - Connects to active learning approaches but specifically for text improvement, introducing dynamic feedback loops absent in static-input forecasting systems.
Together, these contributions form a cohesive framework addressing limitations in forecasting system interpretability, adaptability, and information utilization.
Essential References Not Discussed: --
Other Strengths And Weaknesses: --
Other Comments Or Suggestions: --
Questions For Authors: --
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **The regression-based setting:**
We sincerely appreciate the reviewer for the comments. In Appendix F (Pages 23-25), we implemented a regression-based variant and provided a demonstration on the same finance dataset with numerical ground truths to show its capability for numerical value forecasting. To further address the reviewer's concern, we provide more details and additional experiments below.
As introduced in Appendix F, we adapt TimeXL for regression with two minor modifications. First, we add a regression branch in the encoder design, as shown in Figure 23. On the output of time series prototype layers, we reversely ensemble each time series segment representation as a weighted sum of time series prototypes, and add a regression head for prediction. Accordingly, we add another regression loss term to the learning objectives. Second, we adjust the prompt for prediction LLM by adding time series inputs and requesting numerical forecasts, as shown in Figure 24. As such, TimeXL is equipped with regression capability. Next, we evaluate our method on the same finance dataset with the same task as classification, despite that the main prediction target is the raw material stock price instead of trends. Here we follow the same settings detailed in section 4.1, and Appendix A.2-A.4. We update the results by comparing state-of-the-art baselines in the table below..
| **Model** | **RMSE** | **MAE** | **MAPE(%)** |
|-|-|-|-|
| **DLinear** | 7.871 | 6.400 | 4.727 |
| **Autoformer** | 7.215 | 5.680 | 4.263 |
| **Crossformer** | 7.205 | 5.313 | 3.808 |
| **TimesNet** | 6.978 | 4.928 | 3.512 |
| **iTransformer** | 5.877 | 4.023 | 2.863 |
| **TSMixer** | 7.447 | 5.509 | 3.911 |
| **FreTS** | 7.098 | 4.886 | 3.460 |
| **PatchTST** | 5.676 | 4.042 | 2.853 |
| **LLMTime** | 11.545 | 5.300 | 3.774 |
| **PromptCast** | 4.728 | 3.227 | 2.306 |
| **OFA** | 6.906 | 4.862 | 3.463 |
| **Time-LLM** | 6.396 | 4.534 | 3.238 |
| **TimeCMA** | 7.187 | 5.083 | 3.620 |
| **MM-iTransformer** | 5.454 | 3.789 | 2.687 |
| **MM-PatchTST** | 5.117 | 3.493 | 2.491 |
| **TimeCAP** | 4.456 | 3.088 | 2.196 |
| **TimeXL** | **4.161** | **2.844** | **2.035** |
The main observations are consistent with the classification setting: The multi-modal variants of state-of-the-art baselines (MM-iTransformer and MM-PatchTST) benefit from incorporating real-world contexts; Our method achieves the best results, highlighting the advantage of synergizing multi-modal time series encoder with language agents to enhance interpretability and thus predictive performance.
Moreover, we provide the component analysis based on the refined texts that achieve the best validation performance across iterations. It includes evaluation results of the multi-modal time series encoder, input ablations for LLM-based predictions, and the fusion of both components (TimeXL). The results are summarized in the table below, which also show patterns consistent with those observed in the classification setting. (1) The results clearly demonstrate that real-world financial texts provide complementary information to the LLM, leading to improved accuracy in numerical prediction. (2) The identified prototypes provide contextual guidance to prediction LLM, leading to clear performance gains. (3) By fusing the predictions from both encoder and prediction LLM, TimeXL further improves the prediction and outperforms all variants, underscoring the effectiveness of mutual enhancement between the two components.
In general, (1) highlights the importance of multi-modal inputs, while (2) and (3) highlight our proposed interaction between encoder and LLM for more accurate numerical predictions.
| Variants | RMSE | MAE | MAPE(%) |
|-|-|-|-|
| Multi-modal Encoder | 4.198 | 2.891 | 2.064 |
| Prediction LLM using Time Series | 4.728 | 3.227 | 2.306 |
| Prediction LLM using Time Series + Text | 4.600 | 3.121 | 2.226 |
| Prediction LLM using Time Series + Text + Prototype (ours) | 4.352 | 3.003 | 2.165 |
| **TimeXL** | **4.161** | **2.844** | **2.035** |
We also provide an iteration analysis to show the effectiveness of reflection and refinement LLMs, as shown in the table below. The prediction performance quickly improves and stabilizes over iterations, which underscores the alternation steps between predictions and reflective refinements.
| Iteration | RMSE | MAE | MAPE(%) |
|-|-|-|-|
|Original | 4.344 | 2.951 | 2.103 |
|1| 4.224 | 2.883 | 2.069 |
|2| 4.161 | 2.844 | 2.035 |
|3| 4.174 | 2.849 | 2.036 |
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I’ll keep the accept score as is | Summary: In this paper, the authors proposed a new Multi-model time series prediction model that uses prototype-based encoder with 3 LLMs to predict, reflect and refine the semantic information. Experiments on real-world datasets show the effectiveness of the proposed method.
Claims And Evidence: Experiments are not enough.
1. The ablation study needs improvement to show the essential of introducing 3 LLMs.
2. The majority of the comparison methods are designed for regression tasks, which may not be suitable for studied tasks.
Methods And Evaluation Criteria: Yes
Theoretical Claims: No theoretical claims
Experimental Designs Or Analyses: 1. The ablation study needs improvement to show the essential of introducing 3 LLMs.
2. The majority of the comparison methods are for regression tasks. It seems to be an unfair comparison, since we do not know whether the baseline methods are well-tuned for classification tasks.
Supplementary Material: No
Relation To Broader Scientific Literature: The novelty of paper seems to be using prototypes to enhance the multi-modal time series prediction and using LLM to refine the input text.
Essential References Not Discussed: Some recent refs regarding aligning time series understanding and reasoning [1][2][3] should be discussed.
[1] From News to Forecast: Integrating Event Analysis in LLM-Based Time Series Forecasting with Reflection. NeurIPS 2024
[2] ChatTime: A Unified Multimodal Time Series Foundation Model Bridging Numerical and Textual Data. AAAI 2025
[3] ChatTS: Aligning Time Series with LLMs via Synthetic Data for Enhanced Understanding and Reasoning.
The authors cited [1][2] and missed [3]. They did not explain the difference between the proposed methods with [1] and [2], neither compared with them.
Other Strengths And Weaknesses: Strengths:
1. The idea of using LLM to refine the input text to encoder is interesting.
2. Experiments show the strong performance of proposed methods.
Weaknesses:
1. It is difficult to tell which part contributes most to the final results, prototype or LLM loop. More ablation studies on different modules of the proposed method are needed.
2. The proposed structure is very complex and it is a bit difficult to follow the paper.
Other Comments Or Suggestions: No
Questions For Authors: 1. The majority of the comparison methods are for regression tasks. However, the authors only conducted classification tasks? Why not regression tasks?
2. What is the model choice for a time series encoder?
3. How did the author perform an ablation study in Table 2? From Figure 2, we can see that the text input can be refined by LLMs. Why is the text and prototype only used for ablation study of LLM not encoder? Encoder uses prototypes and text.
4. From Algorithm 1, it seems that iterative process only goes in training?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the insightful feedback and provide our response below.
**Q1 and Weakness 2: Regression tasks**
We appreciate the reviewer’s interest in understanding how our approach performs on regression tasks. Below are the results of our method and state-of-the-art baselines on the same finance dataset in our paper, but with numerical ground truths.
|Model|RMSE|MAE|MAPE(%)|
|-|-|-|-|
|DLinear |7.871|6.400|4.727|
|Autoformer|7.215|5.680|4.263|
|Crossformer|7.205|5.313|3.808|
|TimesNet|6.978|4.928|3.512|
|iTransformer|5.877|4.023|2.863|
|TSMixer|7.447|5.509|3.911|
|FreTS|7.098|4.886|3.460|
|PatchTST|5.676|4.042|2.853|
|LLMTime|11.545|5.300|3.774|
|PromptCast|4.728|3.227|2.306|
|OFA|6.906|4.862|3.463|
|Time-LLM|6.396|4.534|3.238|
|TimeCMA|7.187|5.083|3.620|
|MM-iTransformer|5.454| 3.789|2.687|
|MM-PatchTST|5.117|3.493|2.491|
|TimeCAP|4.456|3.088|2.196|
|TimeXL|**4.161**|**2.844**|**2.035**|
Please kindly refer to our response to **reviewer g9bw** for more experimental results (or [this link](https://anonymous.4open.science/r/rebuttal-D4F5/Regression_results.pdf)) and discussions. We would also like to emphasize that we focus on classification-based prediction as many real-world multi-modal applications naturally involve discrete decision-making. This formulation also better highlights the interpretability benefits of our case-based explanations and the LLM’s reasoning capabilities. As for the baselines, TimesNet, OFA, TimeCAP inherently support classification, and the commonly-used TSLib (Appendix A.2) also uses most methods for classification. We also acknowledged regression tasks and provided results in Appendix F (Pages 23-25), including initial results and method designs.
**Q2: Encoder choice**
As discussed in Section 3.2.1 (Page 3), the model choice affects the explanation granularity, and we used convolutional neural networks followed by prototype layers for both modalities to capture segment-level prototypes for prediction and explanation.
**Q4: Iterative process**
The iterative process relies on training supervision to improve the text quality of training, validation, and testing sets. As detailed in Section 3.3.2 (Pages 5-6), TimeXL iteratively generates reflective feedback (by reasoning on text, prediction, and ground truths) to refine training texts, where validation data evaluates feedback quality per iteration. Feedback with the best performance is then applied to refine testing texts, which mimics applying a trained model to testing data (Page 6, Lines 284-288).
**Q3 and Weakness 1 & 3:**
**Refined input from LLMs**: In table 2, the ablation study on testing data is based on the refined texts, where the refinement is guided by reflective feedback from the best iteration selected by validation data (Page 6, lines 284-288).
**Text and prototype only used for ablation study of LLM, not encoder**: We clarify that the prototypes are outputs of the encoder (Section 3.2.2, Prototype Projection, Page 4). As predictions and explanations cannot be derived without these prototypes, the ablation is not applicable. However, text and prototypes can both be input to the prediction LLM, and we perform such input ablation to show the performance gains from the contextual guidance provided by prototypes (Section 4.6, Page 8).
**Importance of LLM components**: We show the importance of reflection and refinement LLMs via iterative analysis in Figure 5, Section 4.5. The original performance means no usage of reflection and refinement LLMs. It is clear that the text quality improves over the iterative reflection and refinements (upper two subplots). TimeXL’s prediction also improves (lower two subplots), because of the improved texts. The importance of prediction LLM is shown in Table 2. The multi-modal encoder indicates no usage of prediction LLM. After fusing with prediction LLM (text+prototype), its performance further improves (TimeXL). Appendix B includes more ablations .
**Reference Discussion**
Due to space limits, we summarized the insights of these works in the related work due to their differences from ours. News to Forecast constructs a large news database for method development and performs text-to-text prediction by fine-tuning LLM with selected news. Our method uniquely provides *explicit explanations* (standard case-based framework) from *both modalities*, showing key time series and text segments that contribute to the prediction. Besides, we emphasize the mutually augmented prediction between the multi-modal time series model and LLM for better performance. The most recently arxived ChatTime and ChatTS differ in scope: ChatTime examines modal translation capability, targeting zero-shot forecasting on common benchmarks, and reasoning tasks. ChatTS targets time series understanding and reasoning tasks instead of prediction. However, we thank the reviewer for pointing them out, and will provide detailed discussions in the updated version.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal, which addressed many of my concerns. And I would like to raise my score. | null | null | null | null | null | null | null | null |
Fixing the Loose Brake: Exponential-Tailed Stopping Time in Best Arm Identification | Accept (poster) | Summary: This paper is related to the best arm identification (BAI) problem in multi-armed bandits, where the objective is to recommend the (unique) arm with highest expected reward at a fixed error rate $\delta$ ($\delta$-correctness property) after collecting as few observations as possible. The stopping time of a run of a best arm identification bandit algorithm is the number of observations collected up to the recommendation time. This paper focuses on the problem of guaranteeing that the distribution of stopping times is light-tailed and emphasizes on the fact that few papers from the literature try to tackle the problem of sometimes very long or infinitely long runs, even though the authors show that some of the most well-known algorithms from prior works exhibit such an undesirable behavior. The authors advocate for a stronger type of guarantee on the sample complexity than what is proposed in the literature so far, named exponential tail stopping time. The authors introduce a $\delta$-correct algorithm named FC-DSH which exhibits an exponential tail stopping time, and a meta-algorithm called BrakeBooster which takes as input any $\delta_0$-correct algorithm with weaker guarantees on the stopping time and returns a $\delta$-algorithm with exponential-tail stopping time.
## update after rebuttal
I keep the score to 3. The problem of non-terminating runs seems limited to older algorithms, so I am still on the fence regarding the significance of the problem. But the strengths of the paper still hold so I would suggest acceptance.
Claims And Evidence: Most of the theoretical and practical claims are convincing and clear. However:
- While I understand the point of disentangling correctness and sample complexity, I disagree with the following claim: Lines 153-155, Page 3 “In practice, one may desire to be loose on the correctness (large δ) yet want to ensure that the stopping time is small with very high confidence (small δ).” If this were true, then one would usually turn to fixed-budget or anytime settings (where an error bound is provided at each sampling time) rather than to a fixed-confidence setting.
- Lines 98-99, Page 2 “The computational efficiency of FC-DSH’s is not worse (orderwise) than other algorithms.” As there are no experiments comparing the performance of FC-DSH and other fixed-confidence BAI algorithms, and as there is an increasing sampling budget for phases, it is unclear to me if this sentence is true.
- I am still not convinced about the importance of the problem of heavy-tailed stopping time distributions in fixed-confidence BAI. As demonstrated by experiments (Figures 1 and 3-4), not stopping at all or long stopping times are extremely rare events, even in carefully crafted bandit instances such as the one used in Theorems 2.4-2.5 (e.g., $1/(8\sqrt{\pi})(\delta/3.3)^ 118 \approx 10^{-99}$ for $\delta=0.5$ from Theorem 2.4). Moreover, in practice, one actually performs somehow the approach of BrakeBooster: stop the bandit algorithm after some time ($10^ 6$ for instance) and restart the algorithm with a new random seed for synthetic data / with new observations for real-life data (discarding prior observations in the process). But I agree that this approach only allows for correctness to hold and does not provide very strong guarantees on the stopping time. But is it worth it to sacrifice global performance for an event of small probability?
- Table 1: LUCB has an upper bound in high probability on the sample complexity (their Corollary 7).
Methods And Evaluation Criteria: The authors focus on a single aspect of multi-armed bandits for fixed-confidence BAI, which is the tail of the distribution of stopping times, whereas most papers from the literature focus on the performance in high probability or in (asymptotic) expectation (as shown in Table 1). As such, especially as there is no experiments comparing to baselines (both in terms of quality of the recommendation, average stopping time or even a comparison of the distribution of stopping times of FC-DSH compared to the ones plotted for Successive Elimination in Figures 3-4, or better, more recent algorithms as those listed in Table 1), it makes the assessment of the theoretical improvement incurred by this paper difficult.
Theoretical Claims: I have checked the supplementary material for the correctness and exponential tail stopping time guarantees for FC-DSH and BrakeBooster (not the technical lemmas in Section C.2) and they seem all correct to me.
Experimental Designs Or Analyses: The experiments of counting the number of forcefully terminated runs in Successive Elimination look correct. However, no code is provided (supplementary zip file or anonymous code repository) to check the reproducibility of the results.
Supplementary Material: I have checked in detail the supplementary material for the correctness and exponential tail stopping time guarantees for FC-DSH and BrakeBooster (not the technical lemmas in Section C.2), along with the experimental Section D.
Relation To Broader Scientific Literature: The authors propose a new type of guarantee on the sample complexity of a bandit algorithm, which diverges from traditional approaches (high probability upper bound [1], upper bound in expectation [2]). The algorithmic contributions are based on the doubling trick (which “almost” doubles the sampling budget in a trial) which is very used in the bandit literature [3] and on a well-known fixed-budget algorithm named Sequential Halving [4-5]. The technical tools (concentration inequalities, correctness analysis by contradiction) are standard in the bandit literature (see prior citations).
[1] Kalyanakrishnan, S., Tewari, A., Auer, P., & Stone, P. (2012, June). PAC subset selection in stochastic multi-armed bandits. In ICML (Vol. 12, pp. 655-662).
[2] Garivier, A., & Kaufmann, E. (2016, June). Optimal best arm identification with fixed confidence. In Conference on Learning Theory (pp. 998-1027). PMLR.
[3] Besson, L., & Kaufmann, E. (2018). What doubling tricks can and can't do for multi-armed bandits. arXiv preprint arXiv:1803.06971.
[4] Karnin, Z., Koren, T., & Somekh, O. (2013, May). Almost optimal exploration in multi-armed bandits. In International conference on machine learning (pp. 1238-1246). PMLR.
[5] Zhao, Y., Stephens, C., Szepesvári, C., & Jun, K. S. (2023, July). Revisiting simple regret: Fast rates for returning a good arm. In International Conference on Machine Learning (pp. 42110-42158). PMLR.
Essential References Not Discussed: None.
Other Strengths And Weaknesses: Strengths
- The idea of a meta-algorithm converting any “base” fixed-confidence BAI algorithm into an algorithm with stronger guarantees on the stopping time is interesting and well-executed.
- The paper is well-written and attracts attention on a not-so-investigated problem in the BAI literature.
- Disentangling correctness and stopping time probabilities is a good idea.
Weaknesses
- The importance of the problem is not obvious to me.
Other Comments Or Suggestions: - Caption of Figure 1: “Historgram” should be “histogram”.
- Throughout Section A in Appendix: “⊥” instead of the symbol for probability.
- Experiments in the supplementary material (Section D): the exact number of forcefully stopped runs should be given for Figures 3-4, it is hard to estimate the proportion of these runs among the 1,000 trials from the histograms alone.
Questions For Authors: The main weakness of this paper and the reason why I rated it 3 is because I am not convinced by the importance of getting an exponential-tail stopping time, especially as logarithmic factors and large constants are present in the analysis of the algorithmic contributions (as mentioned in the discussion). To clarify this:
- Isn’t it better to get a polynomial bound on the stopping time or a non-asymptotic bound on the expected stopping time and stronger guarantees on the sample complexity in high probability than to get an exponential-tail stopping time and a less good “recommendation performance” constant (derived from fixed-budget algorithms, where it is unclear whether such constants would match (even with logarithmic factors) the lower bounds on the minimal sample complexity for $\delta$-correct algorithms [1-2])?
[1] Kaufmann, E., Cappé, O., & Garivier, A. (2016). On the complexity of best-arm identification in multi-armed bandit models. The Journal of Machine Learning Research, 17(1), 1-42.
[2] Degenne, R. (2023, July). On the existence of a complexity in fixed budget bandit identification. In The Thirty Sixth Annual Conference on Learning Theory (pp. 1131-1154). PMLR.
- Could you perform the same kind of experiments run on Successive Elimination on FC-DSH or on BrakeBooster with Successive Elimination?
- Can you compare computationally speaking the cost of FC-DSH compared to other algorithms from the literature?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for recognizing the significance of our problem, its novelty in bandit literature, and the strength of an exponentially decaying stopping time over high-probability bounds. The detailed description and anonymous link of the additional experiments we did are in **Additional empirical evidences** section from the rebuttal for reviewer **94Yb**. Please take a look. We address the comments below.
**I am still not convinced about the importance of the problem of heavy-tailed stopping time distributions in fixed-confidence BAI ... ?**
A light-tailed distribution offers several benefits:
1) Even-though the event of non stopping happens with a low probability, it leads to an unknown expected stopping time, rendering the user clueless. We think, a little bit of inflation in the expected stopping time is better than a totally unknown expected stopping time in an implementation perspective.
2) High probability stopping time can be considered as a single point guarantee. In contrast our results provide a broader guarantee, where we can predict what happens to the stopping time when the requirements are modified.
3) The algorithms that provide a light tailed guarantee can be easily adapted as anytime algorithms.
Furthermore, we think our contribution is theoretical, that is, we want to prove that it is possible to obtain a exponentially decaying distribution of stopping time theoretically, rather than proposing a practically efficient algorithm. Moreover, our work is a first step of showing that it is possible. There is no evidence we are aware of that indicates that achieving exponential stopping time necessarily will inherently sacrifice the performance.
**While I understand the point of disentangling correctness and sample complexity, I disagree with the following claim ... ?**
That is good a point. If we be less rigorous and consider the fixed budget settings have $\delta$-correctness ($\delta$ specified) and $0$-stopping time (stopping time = budget and $\delta = 0$ here) and anytime settings have $\delta$-correctness ($\delta$ non-specified) and $0$-stopping time (stopping time = whenever the user stops and $\delta = 0$ here), even then we do not have the full freedom to choose different $\delta$. Hence, even though this is a good strategy, it does not exactly serve our intentions.
**The computational efficiency of FC-DSH’s is not worse (orderwise) than other algorithms .... ?**
The computational complexity we have mentioned here is not sample complexity. It is just processor time needed for algorithm implementation
**Table 1: LUCB has an upper bound in high probability on the sample complexity (their Corollary 7)**
Thanks for pointing it out. We will correct it in the final version.
**Isn’t it better to get a polynomial bound on the stopping time or a non-asymptotic bound on the expected stopping time ..... ?**
This is an important topic for discussion. We do not view the exponential tail as an exclusive guarantee such that to achieve it, we always have to incur some inflation in the sample complexity. There could be some algorithms (TS-TCI could be a strong candidate) which can achieve the best of both worlds. Our paper prevails as the early attempt to introduce the importance of exponential tail and taking a first step to achieve it. This will lead to more imminent research to come up with algorithms that can achieve the best of both worlds.
Furthermore the experiments (Figure 1b and 2b) we have done indicates that LUCB1 which achieves polynomial tail exhibits a worse performance compared to our FC-DSH which achieves an exponential tail.
**Could you perform the same kind of experiments ... ?**
We did some additional experiments comparing FC-DSH to TS-TCI, LUCB1 and SE (Figure 1 and 2). Also analyzing the effect of Brakebooster on SE (Figure 3b). The detailed description and anonymous link are in Additional empirical evidences section from the rebuttal for reviewer 94Yb. Please take a look.
**Can you compare computationally speaking the cost of FC-DSH compared to other algorithms from the literature?**
We could do that. However we have observed that orderwise there is no difference in the computational cost of our algorithm and the other algorithms in the literature. We also speculate that, since the trials can be implemented in parallel ($L_{r,c}$ trials used for voting are independent, hence can be parallelized), computational cost can even be improved.
We’d be more than happy to address any other questions and concerns.
---
Rebuttal Comment 1.1:
Comment: Thanks for your rebuttal.
**I am still not convinced about the importance of the problem of heavy-tailed stopping time distributions in fixed-confidence BAI ... ?**
Thanks for your reply. I understand the point now. However, the impact of your work would have been perhaps more significant if it showed indeed that the constraint on the stopping time still allows to (nearly) match known lower bounds on sample complexity. The samples are independent across stages in the BrakeBooster algorithm, so it seems likely that any BAI algorithm wrapped with BrakeBooster would perform worse sample-complexity-wise than its counterpart without the meta-algorithm.
**While I understand the point of disentangling correctness and sample complexity, I disagree with the following claim ... ?**
OK, thanks for your answer, it is convincing.
**Isn’t it better to get a polynomial bound on the stopping time or a non-asymptotic bound on the expected stopping time ..... ?**
Thanks for your reply. I guess it is the same concern as the first question, and then showing that there is an algorithm which is known to be good sample-complexity-wise and also with a good bound on the stopping time would make this work perhaps more significant.
**Could you perform the same kind of experiments ... ?**
Thanks for running those experiments. The problem of non-terminated runs seems to be more prevalent in Successive Elimination than in more recent algorithms, which makes the problem of non-terminated runs perhaps less significant than I expected.
**Can you compare computationally speaking the cost of FC-DSH compared to other algorithms from the literature?**
I am not sure that you can consider the number of trials L and the sampling budget at each trial T at each stage as constants, especially if you set L1 as in Theorem 4.1 and in Corollary 4.3 (e.g., $L_1 \approx \frac{4\log(1+2/0.05)}{\log\frac{1}{4e \times 0.05}} \approx15$ which is at least a multiplicative factor of the min(sample complexity of the base algorithm,T)). But I understand that your algorithm is primarily a theoretical contribution.
I am still not entirely sold on the impact of this specific problem (see my replies above). However, the strengths of the paper listed above still hold, and, as such, I will keep the score as it is.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for the discussion. We clarify that we do not claim our algorithm is currently the best choice for practical use; rather, it represents the first attempt to address a previously overlooked problem in theoretical BAI research. While acknowledging the imperfections of our algorithm, we argue that these should not affect evaluating the importance of the problem itself. Our goal was not optimality in sample complexity but rather demonstrating that our algorithm's complexity remains close to the base algorithm, differing only by logarithmic factors.
We believe the direction of obtaining exponentially-decaying stopping time is worth exploring regardless of whether it ends up leading to practical algorithms or not from the beginning. | Summary: This paper studies the fixed-confidence best-arm identification problem for $1$-sub-Gaussian distributions. The authors remark that asymptotic guarantees on the expected sample complexity doesn’t prevent a large tail of the empirical stopping time. Worse, high probability guarantees of the sample complexity doesn’t prevent the algorithm from never stopping with a nonnegligible probability. The latter statement is highlighted both theoretically (Theorems 2.4 and 2.5) and empirically (Figure 1 and Appendix D). Therefore, the authors introduce the $(T_{\delta}, \kappa)$-exponential stopping tail property (Definition 2.8), which captures the fact that the tail of the stopping time is exponentially decreasing for large enough time. This condition is sufficient to prove both high probability bound and expected sample complexity bounds (Proposition 2.9). The authors introduce FC-DSH. This is a variant of the anytime algorithm DSH, which is an anytime version of the fixed-budget algorithm Sequential Halving. The algorithm proceeds in phases whose length doubles. Within each phase, SH runs for the current budget, i.e., uniform sampling on the set of active arms which is halved at the end of each of the $\log_2 K$ stages, and a gap-based stopping rule is evaluated at the end of the phase. The observations between phases $m$ and stages $l$ are not shared. FC-DSH is $\delta$-correct (Theorem 3.1) and satisfies an exponential stopping tail property (Theorem 3.2). The authors propose the BrakeBooster meta-algorithm, which runs a base algorithm with a 2D doubling trick, both on the budget and on the number of independent runs for each base algorithms. The observations between phases $(r,c)$ and runs $l$ are not shared. Given a $\delta_0$-correct base algorithm that have high probability upper bound on its sample complexity, BrakeBooster is $\delta$-correct (Theorem 4.1) and satisfies an exponential stopping tail property (Theorem 4.2).
**## update after rebuttal**
The discussions between the authors and the different reviewers provided a more detailed and nuanced perspective on the research question tackled in this paper. Including those insightful discussions will improve the paper in its revised version. Two primary practical concerns remains for me: (1) the degradation of the empirical performance when using BrakeBooster and (2) the probability of non-termination might be negligible in modern BAI algorithms. However, I recognize the theoretical contributions of the paper that study more sophisticated guarantees on the stopping time. Personally, I would be excited to see more works on this direction, e.g., matching lower/upper bounds on the higher moments (i.e., variance) of the stopping time. Therefore, I decided to raise my score to weak accept.
Claims And Evidence: **On Theorems 2.4 and 2.5.** As currently stated in the main, I would argue that Theorems 2.4 and 2.5 are misleading/false given what is proven in Appendix A. On the specific considered instance, the probability that the algorithms never stop is lower bounded by $\Omega(\delta^{118})$. Therefore, it is far from being an absolute constant bounded away from $0$ for all $\delta$: it goes fast to $0$ when $\delta \to 0$.
**On Theorem 2.5.** Algorithm 5 seems quite far from lil-KL-LUCB of Tanczos et al., (2017). Could the authors precisely describe what is meant by “adapted and simplified for sub-Gaussian distribution” ? In particular, why those differences are not introducing undesirable behavior compared to the original algorithm ?
**On Theorem 2.7 and Theorem 2.5.** Theorem 2.7 shows that LUCB1 has a polynomial tail guarantee. Given the proof of Theorem 2.5, I am not sure to understand why a similar result cannot be shown for LUCB1, even though it would seem to contradict Theorem 2.7. Could the authors discuss the differences between Algorithm 5 and LUCB1 ? They seem awfully close. Is it solely a difference of the bonuses ? If so, it doesn’t alter the argument in the proof of Theorem 2.5 since it only controls the bad event that the initial draw of the best arm is not a good one, then argue that it will never be sample again. The same phenomenon hold for LUCB1 as it pulls both the empirical best arm and a distinct arm with the largest UCB.
**Targeted base algorithms for BrakeBooster.** BrakeBooster is tailored to improve the guarantees of algorithms that are $\delta$-correct and achieve high probability sample complexity guarantees. However, the authors themselves argue that “the high probability sample complexity [are] weak and rather unnatural”. Therefore, it seems unclear what is the benefit of constructing a meta-algorithm to “boost” the guarantees of algorithms with weak guarantee that suffers from large tail of stopping time. It would have been great if BrakeBooster was adaptive to improved theoretical guarantees of base algorithms. For example, when given a base algorithm with asymptotic guarantees, it could (1) show exponential stopping tail or (2) obtain non-asymptotic upper bound on the expected sample complexity. In other words, designing a meta-algorithm to improve the “best” known algorithms seems more promising than a meta-algorithm that improves algorithms with “poor” guarantees or performance.
**Discarding samples in BrakeBooster.** Crucially, BrakeBooster doesn’t share the observations across the different runs of the base algorithms. While this independence is key to derive theoretical guarantees, it seems wasteful in terms of sample complexity. This phenomenon is most likely blown out of proportion due to the large number of independent runs before stopping. Therefore, it seems legitimate to conjecture that the “lighter” tail of the empirical stopping time comes at a cost of a significant increase of the average empirical stopping time. If this conjecture is true, it would be a significant limitation of the usefulness of BrakeBooster. An empirical study of BrakeBooster seems necessary to understand the impact of this contribution.
**On FC-DSH.** While shown to be $\delta$-correct and having exponential stopping tail, an empirical study of FC-DSH seems necessary to understand whether it is a practical algorithm that performs well compared to existing algorithms.
Methods And Evaluation Criteria: See “Experimental Designs Or Analyses” section for details on the empirical evaluation.
Theoretical Claims: I checked the correctness of the theoretical claims. To the best of my understanding, there is no major issue.
**On Lemma B.1.** There seems to be a minor error. The statement in Line 872 should read as “for all $T \ge 2T_{\delta}$”. In the current proof, the author requires that $T_m > T_{\delta}$ in Line 886 in order to use the assumption from the Lemma. However, this condition is not true, yet it could be shown if $T \ge 2T_{\delta}$”.
Experimental Designs Or Analyses: **Empirical results for FC-DSH.** At the moment, there is no empirical evaluation of the performance of FC-DSH. In particular, it would be interesting to understand what is the empirical impact for FC-DSH of keeping the observations between each phase $m$ or/and each stage $l$. Given the rich literature in terms of sampling rules for BAI, it would be also relevant to understand what is the impact of using uniform sampling rule within each phase/stage by comparing FC-DSH to more adaptive sampling rules. It would allow comparing the proven exponential tail bound of FC-DSH with the empirical tail behavior of other algorithms, hence allowing to conjecture which other sampling rules might enjoy this property.
**Empirical results for BrakeBooster.** BrakeBooster is specifically designed for algorithms such as SE, i.e. $\delta$-correct and high probability upper bound on the sample complexity. Figure 1 and Appendix D show the limitation of SE. Those limitations are supposed to be solved by using BrakeBooster on top of SE. Therefore, it seems rather natural to empirically confirm the usefulness of BrakeBooster when applied to SE, especially since the authors state Corollary 4.3 for SE explicitly. Without empirical evidence of the benefits of using BrakeBooster, it seems rather unclear that BrakeBooster is actually helpful in practice.
**Appendix D and Figure 1.** To the best of my reading, the value of the $\delta$ parameter used in the experiments is missing. What is this value ? It is difficult to understand how “bad” are the “stopping failure” without putting them into perspective with the confidence level that is being targeted.
**Suggestions on the current setup.**
- It would be interesting to compare the targeted confidence $\delta$ with the empirical proportions of runs for which there is a “stopping failure”. Inherently, it should be smaller, since those algorithms are $\delta$-correct. Is it of the same order of magnitude or several order of magnitudes lower ?
- It would be interesting to add a line in the plots to give a proxy for the lower bound, e.g., $H_1 \log(1/\delta)$.
Supplementary Material: I reviewed all the supplementary material in details.
Relation To Broader Scientific Literature: To the best of my understanding, the authors discuss relevant literature adequately.
Essential References Not Discussed: To the best of my knowledge, there is no essential reference that is missing.
Other Strengths And Weaknesses: **Theorem 4.1.** The proof of $\delta$-correctness appears convoluted and doesn’t provide lots of insights despite taking almost one page of the main content. It would be better to allocate space to understand the theoretical novelty in the analysis of FC-DSH or BrakeBooster.
**Seemingly loose upper bounds.** In the Appendices, the proofs seem to use loose upper bounding in order to swallow the second-order terms in the first-order term by worsening its dependency. This is obfuscated by the use of $O(\cdot)$ notation in the final results. It would be interesting to write a tighter analysis with a smaller first-order term, and only argue in the end that the second-order term can be “removed” with the $O(\cdot)$ notation.
Other Comments Or Suggestions: **Theorem 3.2.** In the main, it would be better to write the explicit statement proved in Appendix B.2. This gives a better understanding of the actual dependency in $K$, $H_2$ and multiplicative constants.
**Theorem 4.2.** In the main, it would be better to write the explicit statement proved in Appendix C.1. This gives a better understanding of the actual dependency in $\delta$, $\delta_0$ and $T^\star_{\delta_0}(\mathcal A)$. For example, the term $\log(1/\delta_0)$ should be left in the upper bound instead of being swallowed by the $O(\log (T))$ notation. When $\delta_0\to 0$, it allows seeing that the $\log(1/\delta_0)$ term “cancels out” the dependency in $\delta_0$ from $T^\star_{\delta_0}(\mathcal A)$, which is likely also in $\log(1/\delta_0)$.
- Appendix A uses the notation $\perp$ to denote the probability. To be consistent with the rest of the paper, it would be better to use $\mathbb P$.
- Lines 342-345. The event $E_1$ and $E_2$ are introduced, but there are not used.
- Lines 268-270. “The complexity of best-arm identification problems is often characterized by an instance-dependent quantity $H_2$”. I would argue that this sentence is slightly incorrect. $H_2$ is an instance-dependent quantity that appears in the analysis of fixed-budget BAI. However, for fixed-confidence BAI, the instance-dependent quantity $H_1$ is closer to the true characteristic time $T^\star$ for the asymptotic regime, i.e., $H_1 \le T^\star \le 2 H_1$. As highlighted by equation (2), the quantity $H_2$ seems to be sub-optimal by a multiplicator factor $\log_2 K$, which gets looser for larger instances.
Questions For Authors: 1. **Improved analysis of FC-DSH.** Could the authors discuss whether their analysis of FC-DSH can be adapted to account for the recent improved analysis of FB-DSH ? For example, Zhao et al. (2023) show an improved rate compared to the rate $H_2$ used in this paper. Moreover, Kone et al. (2024, Bandit pareto set identification: the fixed budget setting) show that it is possible to keep past observations instead of discarding them at the end of each phase. It would be especially interesting to allow for keeping the samples, since it is known to have a large impact on the empirical performance of DSH.
2. **Majority voting.** Is it clear that majority voting is the best aggregation strategy based on $L$ independent runs of a given algorithm ? Would it be possible to define an aggregation strategy that leverage additional information on the independent runs, such as the empirical stopping time, to reweight each vote ?
3. Could the authors highlight precisely what are the theoretical novelties in the analysis of FC-DSH or BrakeBooster ?
Several other questions have been asked in the previous sections.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for recognizing the value of our proposed exponential stopping tail property and finding it helpful. Please see empirical results in rebuttal to 94Yb. The maximum characters limit our rebuttal content, will add more in discussion.
**On Theorems 2.4 / 2.5**: We agree that the lower bound $\Omega(\delta^{118})$ is not an absolute constant. However, the exponent $\delta^{118}$ can be regarded as a constant in the sense that it is independent of $H_1$, $K$, and $T$. Since $\delta$ is a user-specified input, it is reasonable to expect this bound to reflect a small constant probability relative to $\delta$. We will clarify this in our final version.
**On Theorem 2.7 / 2.5**: Algorithm 5, lil-KLUCB (Tanczos et al., 2017), and LUCB1 share similar sampling and stopping rules but differ in confidence bound construction. Algorithm 5 uses a simplified bound, $\log(t^2/\delta)$, aligning with Algorithm 4. In contrast, lil-KLUCB employs $\log(\log_2(t)/\delta)$ with a refined $\delta$, while LUCB1 uses $\log(t^4/\delta)$, achieving a polynomial tail guarantee. We chose the simpler bound for analysis, but both lil-KLUCB and Algorithm 5 can fail to stop with a small, non-negligible probability, unlike LUCB1. We believe LUCB1’s polynomial tail lower bound is provable, yet our algorithms and non-stopping results reveal overlooked issues, showing that LUCB1’s guarantee is not the strongest possible.
**Base algorithms**: Our algorithm doesn’t need a sample complexity bound as input; it works with any $\delta$-correct algorithm and improves if the base algorithm does. Feel free to pair it with $\delta$-correct algorithms boasting asymptotic sample guarantees-likely have strong finite-time guarantees too, though they may be tough to analyze (eg, TS-TCI/EB-TCI). The meta-algorithm’s strength is enabling exponential stopping times by leveraging existing algorithms with weaker guarantees. These base algorithms balance sample complexity and adaptive sampling rounds differently: elimination algorithms need few adaptive decisions, suiting batch sampling or limited adaptivity, while fully adaptive settings favor TS-TCI. Thanks for suggesting adaptation to the best algorithm—it’s a promising direction to explore.
**Discarding samples**: We recognize the practical downside of discarding samples. However, our meta-algorithm’s key contribution is demonstrating, for the first time in the literature, a strategy that transforms a base algorithm with a weak guarantee into one with a stronger guarantee. While avoiding speculation, we think it’s feasible to refine the approach to eliminate or greatly reduce sample discarding. For insight, see Minsker (2023), “U-statistics of growing order and sub-Gaussian mean estimators with sharp constants,” where the author eliminated sample abandonment in the median-of-means algorithm, improving performance bounds using U-statistics.
**On Lemma B.1**: We believe the reviewer may have misread this section of our proof. From Lines 884–886, we apply Lemma B.1’s assumption that FC-DSH satisfies $\mathbb{P}(\tau \geq T_m) \leq \exp\left(-\frac{T_m}{cH_2\log_2(K)}\right)$. Then, from Lines 886–888, we rely on the property $T < T_m$, established in Line 880.
**App D / Fig 1**: We set $\delta$ to 0.05. In our experience, empirical failure rates are typically several times lower than the chosen $\delta$, even with stringent stopping conditions (e.g., from the track-and-stop paper). This pattern isn’t unique to our work but is common across fixed-confidence studies.
**Theorem 4.1**: Many algorithms lack an exponentially decaying stopping time tail due to hard elimination steps that may discard the optimal arm without recovery. FC-DSH Novelty: The innovation lies in its tail probability analysis, a complex task not required for FB-DSH’s fixed-budget guarantee. These proofs involve event divisions absent in prior work, unlike FB-DSH’s simpler analysis (Karnin et al. 2013, Zhao et al. 2023).
**Q1**:This falls outside our scope, but to our knowledge, the accelerated rate from Zhao et al. (2023) is unlikely to apply in the fixed-confidence setting. Fixed-confidence requires decision correctness, necessitating hypothesis testing. For your second point: Analyzing DSH without discarding samples is feasible—it swaps a $\log\log K$ factor in sample complexity for $\log K$. This involves a union bound over $K$ arms, as in the successive reject algorithm, which avoids sample discarding. We discarded samples to streamline the analysis.
**Q2**: If you’re referring to improving sample complexity, our aggregation approach is generally optimal, barring constant or logarithmic factors, as further improvement would contradict established lower bounds. Reducing the number of voters while maintaining the same effect is a potential avenue, though its feasibility remains uncertain to us. Still, your suggestion could trim logarithmic or constant factors, and we’re keen to explore it in future work.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their thorough and detailed answers, as well as the additional experiments. At the time being, I am inclined to keep my negative score.
For the sake of discussion, I detailed some follow-up comments.
**Lemma B.1**. I understood the current proof of Lemma B.1 as outlined in your answer. The authors might have misread my comment. By definition of the assumption within Lemma B.1, the inequality only holds for all phase $m$ such that $T_m \ge T_{\delta}$. To the best of my understanding, the authors do not show that $T_m \ge T_{\delta}$. Therefore, they cannot use their assumption without an additional argument. Taking $T \ge 2 T_{\delta}$ and using that $T_{m} > T/2$ would imply that $T_m \ge T_{\delta}$. This allows to use the assumption of Lemma B.1 and conclude the proof with a slightly modified statement (e.g., multiplicative factor two).
**Lower bounds in Theorems 2.4 and 2.5**. By taking $\delta = 0.05$ as done in your experiments, the constant would be of the order $\Omega (3. 10^{-154})$. Therefore, from an empirical perspective, it seems computationally challenging to test that “the probability of not stopping is positive”. Relative to $\delta$, the story is unchanged as it yields $\Omega (\delta^{117})$.
**Comments on additional experiments**. I completely agree with the three points raised in the Rebuttal Comment by Reviewer 94Yb.
- An empirical comparison with FC-DSH that doesn’t reuse the previous samples would be valuable to observe the impact of dropping observations. While the modification that reuses the sample perform well empirically, it lacks theoretical guarantees for now. Could the authors detail what are the technical challenges to study this improved algorithm ?
- Figure 3(b) on SE seems to corroborate the intuition that BrakeBooster might be wasteful in terms of samples. For an easy instance, “terminat[ing] within 350K rounds” and exhibiting a larger bulk of stopping times seems to be a mild evidence of empirical success. Since LUCB1 perform better than SE, it would be interesting to see empirically how BrakeBooster + LUCB1 perform. This would be insightful to see what is the cost of BrakeBooster for the sample complexity of a “relatively good” algorithm.
**Practical significance of heavy-tail distributions in modern BAI algorithms**.
I tend to agree with Reviewer YUrK. Empirically, it would be great to exhibit a recent or widely-used BAI algorithm having heavy-tailed stopping time. While being introduced in a seminal paper, SE is not considered as a competitive BAI algorithm. To the best of my knowledge, most recent papers do not even include it in their experimental results due to its known empirical/theoretical shortcomings.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for the discussions:
- **Lemma B.1.**: We acknowledge the issue pointed out and will correct it following your suggestion. Thank you for bringing this to our attention.
- **Lower bounds**: First of all, the message we wanted to convey with that theorem is that the probability of not stopping after $t$ does not decay as a function of time step $t$. That is, as $t \rightarrow \infty$, the probability of not stopping $\geq \text{Const} \neq 0$. In practice, $\delta$ is never too small, so we believe it is fair to consider $\delta$ as a constant. Moreover, the large exponent arises due to looseness in the analysis, reflecting a trade-off between achieving absolute tightness and avoiding overly complicated derivations. We believe our plot demonstrates clearly that the "no-stopping" case occurs with a noticeably higher probability in practice compared to the theoretical analysis. We conducted an additional experiment to show the percentage of failed (non-stopping) trials as a function of the confidence parameter. Please see version 2 in the same link at https://zenodo.org/records/15164857. Specifically, we considered a problem instance with 4 arms having mean rewards {1.0, 0.6, 0.6, 0.6}, and varied $\delta$ across the range $[10^{-5},\ldots,10^{-1}]$. We ran over 100K independent trials. As shown in Figure 4 (in the new anonymous link), the failure rate appears to scale approximately log-linearly with $\delta$. Notably, even when the confidence level is set as low as $\delta = 10^{-5}$, the SE algorithm still exhibits a non-stopping rate of 2.7\%. While our presented lower bound of $\Omega (\delta^{118})$ may appear loose, it serves its purpose of demonstrating that SE fails to stop at all with non-negligible probability.
Frankly, we are confused why the numerical evaluation of the equation appearing in theory is can be problematic -- this is extremely common in theory work and particularly not an issue in our context in our opinion. We repeatedly emphasize that the main contribution is theory, and ICML is a venue that values theory work, too.
- **Practical significance**:
We disagree with the reviewer that SE is not considered as a competitive BAI algorithm. The answer depends on whether or not fully-adaptive decision-making is possible.
For example, a very recent study by Jin et al. (NeurIPS 2024) compared algorithms including Successive Elimination and highlighted practical limitations of fully sequential algorithms like Track-and-Stop, despite their optimal theoretical properties:
"The well-known Track-and-Stop algorithm solves the BAI problem with asymptotically optimal sample complexity. However, it is a fully sequential algorithm, which is hard to be implemented in parallel. The learner in such an algorithm receives immediate feedback for each arm pull, and adjusts the strategy for the next arm selection based on the previous observations. Unfortunately, this sequential approach may not be feasible in many real-world applications. For instance, in medical trials, there is typically a waiting time before the efficacy of drugs becomes observable, making it impossible to conduct all tests sequentially...."
- Optimal Batched Best Arm Identification, Jin et al, NeurIPS 2024 | Summary: This paper considers the distribution of the stopping time in the fixed confidence BAI problem. It discovers that while most existing algorithms only have stopping time bounds in expectation or in high probability, which fail to achieve exponential decreasing rate (in time step $T$) for the misidentification probability.
To address this, the authors propose FC-DSH, an algorithm that guarantees an exponential-tailed stopping time. Additionally, a meta algorithm BrakeBooster is introduced, which can transform any fixed confidence BAI algorithm into one with an exponentially decaying stopping time.
Claims And Evidence: Yes, the authors provide theoretical proofs in section 2 to indicate the some current algorithms cannot stop with a constant probability. The proposed FC-DSH and the meta algorithm BrakeBooster are introduced in section 3 and 4 respectively, accompanied by theoretical guarantees.
Methods And Evaluation Criteria: The paper is mainly on the theoretical side of the existing BAI algorithms.
The proposed methods make sense from the theoretical standpoint. It makes use of and develops the doubling trick in the bandits literature to get the exponential stopping tail.
While the authors emphasize the theoretical contributions, the empirical performance of the proposed algorithms is not presented. Although the use of doubling trick can be beneficial in terms of theoretical analysis, the empirical performance of the algorithms with doubling trick usually have poor performance. It is expected that the authors can provide more discussions regarding this issue.
Theoretical Claims: I skimmed through the analysis and the proofs look reasonable to me.
Experimental Designs Or Analyses: This paper focuses on the theoretical side.
Only one experiment about successive elimination is provided. No empirical experiments are provided to illustrate the proposed algorithms.
Supplementary Material: I skimmed through the proofs of FC-DSH and BrakeBooster, which look reasonable to me.
Relation To Broader Scientific Literature: This paper lies in the field of Bandits Algorithms, in particular, it is related to Best Arm Identification with fixed confidence in Bandits. It identifies the flaws in previous algorithms and propose two algorithms to fix the problem.
Essential References Not Discussed: The references look good to me.
Other Strengths And Weaknesses: **Strengths**
This paper further develops the doubling trick to a “two-dimensional” case and proposes the meta algorithm BrakeBooster. The choices of the hyper parameters are also well explained.
**Weaknesses**:
The proposed FC-DSH shares a similar design as the algorithm in Zhao et al. (2023) and the only modification is only the stopping rule.
Other Comments Or Suggestions: As said in previous sections, it would be great if the authors can provide some empirical results on the performance, even if the results may not be as good as those algorithm without exponentially decaying error probability.
In particular, empirical comparisons between the proposed works and the existing works listed in Table 1 is suggested.
Questions For Authors: None.
Ethical Review Concerns: None.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for understanding our theoretical contribution and your interest in additional empirical studies. We address your comments below.
Regarding the reviewer's comment that "FC-DSH is a lot like the algorithm in Zhao et al. (2023), with just a change to the stopping rule," we would like to clarify that, while FC-DSH shares some design similarities, its novelty lies in the analysis. Since bounding the tail probability is, in general, a nontrivial task, we had to prove statements that were not necessary when developing a fixed budget guarantee for FB-DSH. For example, we had to bound the probability that DSH fails to satisfy the stopping condition even if the best arm is chosen as the estimated best arm (i.e., $\mathbb{P}(\exists i\neq 1: L_1^{(m)} \le U_i^{(m)}, J_m = 1)$). To bound this, we need to bound the probability of a suboptimal arm $i$ reaching the expected-to-reach stage $\ell_i^*$ (defined in Lemma 4), i.e., $\mathbb{P}(L_1^{(m)} \le U_i^{(m)}, \ell_i \ge \ell_i^*, J_m = 1)$ and $\mathbb{P}(\ell_i < \ell_i^*)$ presented in Lemma 4 and Lemma 5, respectively. Proofs for these require a careful division of the events that is not found in prior work, to our knowledge. In contrast, the guarantee of FB-DSH requires analyzing just the probability of the optimal arm failing to reach the final stage $\mathbb{P}(J_m \ne 1)$, which can be done by our Lemma 6 or by existing proofs from Karnin et al. (2013) or Zhao et al. (2023).
**Additional empirical studies**
Anonymous link for empirical results: https://zenodo.org/records/15117826
We sincerely thank the reviewer for your interest in additional empirical studies. In response, we have conducted and will include the following experiments in the final version of the paper: (1) a study demonstrating that our FC-DSH algorithm (Algorithm 1) consistently terminates, exhibits light-tailed stopping-time behavior, and performs comparably to two widely used fixed-confidence best-arm identification (FC-BAI) algorithms - LUCB1 (Kalyanakrishnan et al., 2012) and TS-TCI (Jourdan et al., 2022) - as presented in Table 1; and (2) an empirical validation of the proposed meta-algorithm, BrakeBooster, confirming its ability to mitigate the stopping-time issues encountered by the Successive Elimination (SE) algorithm.
We implement a similar experimental setup as in our paper, but introduce a variation with 4 arms having of mean rewards of {1.0, 0.6, 0.6, 0.6}. We set the confidence level to $\delta = 0.05$ across all experiments. We conduct 1,000 trials and record the stopping times for each. Trials that did not terminate within $1$ million steps were forcefully stopped. Although this setup features a larger reward gap - making it an easier problem instance - the Successive Elimination (SE) algorithm still fails to stop in a significant number of trials (92 out of 1000 trials), as shown in Figure 1a.
Figure 1b shows that our FC-DSH algorithm consistently stops, alongside two other fixed-confidence best-arm identification (FC-BAI) algorithms: LUCB1 and TS-TCI. As the reviewer noted, the use of the doubling trick may degrade practice performance. To mitigate this issue, we modify our FC-DSH to reuse all samples collected in previous phases and stages. The modified FC-DSH outperforms LUCB1 and performs comparably to TS-TCI as shown in Figure 1b and Figure 2 (Left). This modified version highlights the practical potential of FC-DSH, and we believe that developing a theoretical understanding of sample reuse would be an interesting direction for future work. To highlight the differences in tail behavior, Figure 2 (Right) presents the empirical CDF of the stopping times. As expected, LUCB1 exhibits a slightly heavier tail - consistent with its theoretical polynomial tail guarantee discussed in the paper.
Moreover, we apply BrakeBooster on top of the Successive Elimination (SE) algorithm and, as expected, observe that all trial runs successfully terminate within 350K rounds, as illustrated in Figure 3b - a clear contrast to the behavior of SE alone. While BrakeBooster is not yet optimized for efficiency, it serves as an important first step toward developing a general-purpose meta-algorithm. Its primary value lies in demonstrating the feasibility of such an approach, which has the potential to extend exponential tail stopping-time guarantees to a broad class of base algorithms.
We’d be more than happy to address any other questions and concerns.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for the detailed reply! I do not have any question for the theoretical part.
It is great to see the empirical results of the algorithm. I have a few additional points that I hope the authors can clarify:
1. In terms of Figures 1(b) and 2, the empirical performance of a variant of FC-DSH is good **by reusing the previous samples**, outperforming LUCB1. However, the algorithm reuses the previous samples, so it **does not** enjoy the theoretical guarantees, including the $\delta$-PAC property and exponentially decaying rate of the misidentification probability. As the authors indicated, this method requires further investigation. Since this paper targets at devising algorithms which enjoy finite stopping time guarantees, this variant of FC-DSH is irrelevant to the goal and the experiments do not support the theoretical findings. Therefore, it would be convincing if the authors can implement the proposed algorithm following its **exact theoretical design without reusing the previous samples** (so that it enjoys the proposed theoretical guarantees). I believe most papers in the bandits community implement their algorithms follow the exact design, including LUCB1.
2. Regarding Figure 2(b), while I acknowledge that the authors wish to show the CDF of (the variant of) FC-DSH is light-tailed and that of LUCB1 is polynomial-tailed, the comparison should be refined. Although the data is mean-centered, the **variance** can also influence the shape of the curve of the CDF.
3. For Figure 3, can the authors please provide the $\alpha$-quantile (for $\alpha=0.1\times k,k\in[10]$)? It seems SE stops quite early most of the time, and its histogram concentrates around 0. Based on Figure 3, it indicates the proposed Brakebooster does mitigate the stopping time issue observed in previous algorithms (at least SE) as the theorems indicate. But it is obtained at the cost of (much) higher sample complexity, even under an easy instance with a moderate $\delta=0.05$. In addition, the stopping time performance of SE can be improved by using a smaller $\delta$, e.g., $0.01$ or $0.005$, which can be more practical and easy to be implemented.
Given the empirical results, I am still concerned about the practical side of the proposed method and hope the authors can clarify.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for the discussion.
- **Q1**: Our choice of not reusing sample was purely for a cleaner presentation as our focus was theoretical. The change in the analysis is just to use a union bound over all the arms, which would introduce a factor of $K$ in place of a $\log_2(K)$ outside $\exp(-(\cdots))$, which replaced a $\log\log$ factor into a $\log$ factor. What matters is ensuring that the sample is above certain count, and that's it. Note that successive rejects (Audibert et al. 2010) reuses samples from its original form and enjoys guarantees in the same mechanism we described. For this reason, it is common to develop theory with no sample reuse and experiment with sample reuse; e.g., Jun and Nowak, 2016, Baharav and Tse, 2019; Zhao et al, 2023 etc.
- Jun and Nowak, "Anytime exploration for multiarmed bandits," ICML 2016.
- Baharav and Tse, "Ultra fast medoid identification," NeurIPS 2019.
- Zhao et al., "Revisiting simple regret," ICML 2023.
- **Q2**: We agree with the reviewer that variance can affect the shape of the CDF curve. To investigate this further, we conducted an additional experiment and included a new plot showing the tail probability $P(X > x)$. Please see version 2 in the same link at https://zenodo.org/records/15164857. We used the same problem instance with 4 arms having mean \{1.0, 0.6, 0.6, 0.6\} but increased the number of trials from 1K to 1 million. As shown in Figure 3a and Figure 3b, we are unable to clearly confirm whether LUCB1 exhibits a polynomial tail (if it does, it should ultimately show a linearly decaying trend). Note that the LUCB1's paper does not include any experimental studies to we don't know for sure how it behaves. There are two plausible interpretations. Firstly, LUCB1 may exhibit an exponential tail, which would mean that the current theoretical guarantee of LUCB1 is loose. It is an interesting research direction for tigher analysis. Secondly, LUCB1 might indeed have a polynomial tail but it may take a lot more number of simulations to verify. That said, our results show that LUCB1 is much worse than FC-DSH. Furthermore, as shown in Figure 3b, we are surprised to observe that TS-TCI exhibits interesting behavior. This result reinforces our belief that exponential tail guarantees are far from obvious, even for well-studied algorithms, and merit further attention. We welcome any additional suggestions for experimental validation to better understand and confirm the nature of the tail behavior in these algorithms. We will add these experiments to the final version.
- **Q3**: We apologize for the mistake in our experimental results. In fact we accidentally ran, brake booster + SE ($\mathcal{A} = \{1,0.9, 0.9, 0.9\}$ ) and SE ($\mathcal{A} = \{1,0.6, 0.6, 0.6\}$ ) on different settings. Here are the refined results: Experiment 1 (Figure 1) $\mathcal{A} = \{1,0.6, 0.6, 0.6\}$ and Experiment 2 (Figure 2) $\mathcal{A} = \{1,0.9, 0.9, 0.9\}$: Considering the space for the rebuttals, we have plotted the CDF instead of giving the $\boldsymbol{\alpha}$-quantiles here.
Finally, it seems the reviewer is mainly not satisfied with the practical perspective. We believe getting a rejection due to lack of practically-interesting algorithms while the contribution is theory would be harmful to the community. We believe a healthy research community will be formed when **raising issues from theoretical viewpoint** and **validating if it leads to practical algorithms** can be recognized as two separate contributions (and thus each can be a standalone paper). That said, we agree that it is important to keep an eye on practical aspects, and we will include experiments showing the limitations of the proposed algorithms in the final version for the benefit of the readers. | Summary: This paper address the best arm identification problem under fixed confidence, emphasizing the importance of exponential-tailed stopping time guarantees. Unlike existing algorithms prone to heavy tails or indefinite stopping, the authors propose two novel theoretical results: (1) an algorithm with exponential-tailed stopping time superior to prior methods, and (2) a meta-algorithm transforming any high-probability stopping algorithm into one with exponential-tailed guarantees, highlighting room for improvement in current methods.
Claims And Evidence: I do not identify the claims of the paper as problematic.
Methods And Evaluation Criteria: The evaluation criteria is the comparison of the theoretical upperbound. I did identify any big problem in the theoretical criteria. However, It would be better to explicitly write down the theoretical comparison; see in Q1 of "Questions For Authors"
Update a typo; sorry for any inconvenience. "I did not identify any big problem in the theoretical criteria"
Theoretical Claims: I did not check the correctness of any proofs
Experimental Designs Or Analyses: There is no experimental design for proposed Algorithm 1 and 2.
Supplementary Material: N/A
Relation To Broader Scientific Literature: The research problem is very interesting and has a broader imact in the literature. It has been ignored by the community that the family of successful elimination algorithms are not $\delta$-correct. This paper aims to fix this problem.
Essential References Not Discussed: Some non-aymtotic upper bounds in the family of track-and-stop algorihtms needs to be compared. For example, theorem 2 of "Fast Pure Exploration via Frank-Wolfe, NeurIPS 2021".
Other Strengths And Weaknesses: ## Strengths:
The research problem addressed in this paper is significant and has broader implications in the existing literature. Specifically, the observation that successful elimination algorithms are not guaranteed to be $\delta$-correct has largely been overlooked by the community. This paper addresses this gap by proposing a general algorithm to resolve this issue.
## Weaknesses:
1. The paper lacks experimental validation for the proposed Algorithms 1 and 2, even on synthetic datasets. Compared to existing methods such as the track-and-stop and top-two algorithms, the proposed algorihtms may be dramatically less efficient in simluations, because the proposed algorithms require significantly more arm pulls to maintain tail-bound guarantees. Providing empirical justification for the effectiveness of these algorithms through experiments would significantly strengthen the paper. Please clarify if this understanding is incorrect.
2. I am not satisfied with the information that Table 1 conveys. For example, the column indicating whether an algorithm has exponential-tailed behavior appears not meaningful if the algorithms are already guaranteed to be $\delta$-correct.
3. Similarly to 2, the track-and-stop and top two algorithms are asymptotically optimal, while the proposed FC-DSH and brakebookster are not. Hence, it is not a fair comparison to just give a tick mark in the column of “asymptotic expected sample complexity” without demonstrating the optimality.
4. The statement on line 131, "the value of $\liminf_{\delta \to 0} E[\tau]\/ \ln(1/\delta)$ will be independent of $B$ even if $B$ is very large," is not accurate. For any asymptotically optimal algorithm (such as track-and-stop), it must hold that: $\liminf_{\delta \to 0} E[\tau] \/ \ln(1/\delta)=\limsup_{\delta \to 0} E[\tau] \/ \ln(1/\delta) = \text{a instance-depedent constant}$
Other Comments Or Suggestions: See in "Other Strengths And Weaknesses" and "Questions For Authors"
Questions For Authors: 1. Could the authors explicitly state the high-probability sample complexity and asymptotic expected sample complexity for the Successive Elimination algorithm enhanced with Brakebooster? Specifically, it would be helpful to specify the "polylog" term mentioned in Proposition 2.9 and explicitly compare the high-probability sample complexities of algorithms with and without Brakebooster. Similarly, an explicit comparison between FC-DSH and DSH would be beneficial.
2. Regarding Algorithm 2, what are the considerations involved in selecting the parameter $T_1$? Could the authors justify why $T_1$ is introduced as a parameter rather than fixing it to $T_1 = 1$?
Overall, for the valueable research problem and interesting results introduced in this paper, I currently recommend "weak acceptance". I remain open to increasing or decreasing the rating in the discussion stage.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for recognizing the significance and broader impact of our work in identifying an intriguing problem. We address your comments below. Please see empirical results in rebuttal to 94Yb.
1. Our goal was to highlight a surprisingly overlooked issue in the bandit literature and provide theoretical evidence demonstrating that an exponentially decaying tail bound is indeed achievable. Before us, no one even realized it was possible. We do not claim empirical contributions but offer an initial theoretical exploration, hoping to stimulate further discussion in this direction.
2. When comparing our method to track-and-stop or top-two algorithms, we acknowledge that it is less efficient in terms of sample complexity. As demonstrated in the paper, using BrakeBooster results in additional sample complexity, up to logarithmic factors. However, our contribution lies not in minimizing sample complexity but in providing an additional safeguard. By sacrificing a logarithmic factor in sample complexity, our method ensures that the algorithm does not run indefinitely.
3. Thank you for the reference "Fast Pure Exploration via Frank-Wolfe." We will incorporate it into our paper.
4. On weakness 2, we clarify that being $\delta$-correct does not necessarily imply an exponential tail. For instance, Successive Elimination is $\delta$-correct yet lacks an exponential tail. Thus, it remains meaningful to explore this property even when an algorithm is $\delta$-correct.
5. On weakness 3, we agree that simply placing a checkmark in the “asymptotic expected sample complexity” column without proving optimality is not a fair comparison. We will revise this to ensure our contribution is not misleading.
6. On weakness 4, we agree that the statement could be misleading and is somewhat irrelevant. We will remove it from our revision. Thank you for pointing this out.
**Regarding your questions**
On question 1: We would like to clarify that we do not claim BrakeBooster provides a guarantee of asymptotic expected sample complexity. On your comment mentioning "enhanced with BrakeBooster", we did not intend to imply an improvement in this aspect. Instead, our goal is to highlight that BrakeBooster addresses the often-overlooked issue of stopping tail behavior while aiming to preserve sample complexity as much as possible. For high-probability sample complexity, the Successive Elimination algorithm without BrakeBooster yields a sample complexity of $\mathcal{O}\left(\tau_{\text{SE}}:=\sum_{i=2}^{K} \frac{\ln\left(\frac{K}{\delta\Delta_i}\right)}{\Delta_i^2}\right)$. With BrakeBooster, this shifts to $\mathcal{O}(\tau_{\text{SE}}\log^2(\tau_{\text{SE}}))$. More broadly, regarding the polylog term in Proposition 2.9, as long as an algorithm conforms to the specific form outlined in Definition 2.8 for its exponential tail, we can always address it with the following general approach. Suppose an algorithm satisfies, for all $T \geq T_\delta$,
\begin{align}
\mathbb{P}\left( \tau \geq T \right) \leq \exp\left(-\frac{T}{\kappa \cdot \log^b(T)}\right)
\end{align}
where $b$ is any positive integer. By setting the right-hand side to be less than $\delta$, we obtain
\begin{align}
\frac{T}{\log^b(T)} > \kappa \log(1/\delta).
\end{align}
Determining a sufficient condition to ensure this inequality holds can be intricate, but it is always feasible to solve for $T$ using the following. Our objective is to establish a high-probability bound through contraposition. To do so, we explore a necessary condition for $T$, such as $T \leq c \log^b(T)$,
$$ T \le c \log^b(T) $$
$$ \leftrightarrow T^{1/b} \le c^{1/b} \log(T) $$
$$ \leftrightarrow T^{1/b} \le b c^{1/b} \log(T^{1/b}) $$
$$ \leftrightarrow T^{1/b} \le b c^{1/b} \log\left(\frac{T^{1/b}}{2 b c^{1/b}} 2 b c^{1/b}\right) $$
$$ \rightarrow T^{1/b} \le b c^{1/b} \left(\frac{T^{1/b}}{2 b c^{1/b}} - 1 + \log(2 b c^{1/b})\right) \tag*{$\ln(T) \le T - 1$} $$
$$ \leftrightarrow T^{1/b} \le 2 b c^{1/b} \left(\log(2 b c^{1/b}) - 1\right) $$
$$ \leftrightarrow T \le c \left(2 b \left(\log(2 b c^{1/b}) - 1\right)\right)^b $$
Thus $T> c(2b(\log(2bc^{1/b})-1))^b$ is a sufficient condition to say $T> c\log^b(T)$ and we therefore solve the polylog factors.
In comparing FC-DSH and DSH, FC-DSH is a refined version of DSH, enhanced with a stopping condition. DSH, an anytime algorithm, can theoretically run indefinitely without such a limit.
On question 2: Conceptually, setting $T_1 = 1$ throughout does not impact the core results. Our theorems demonstrate that the findings hold for all $T_1 \geq 1$. Including $T_1$ as a parameter provides flexibility in certain cases, as a meaningful minimum number of samples is necessary to run the algorithm effectively. Typically, we require the sample size to exceed the number of arms. On the other hand, if we happen to know the base algorithm's high probability stopping time, we can just set it as $T_1$ directly, as an efficient start of BrakeBooster.
---
Rebuttal Comment 1.1:
Comment: Thank you very much for the detailed response. I still have one thought as follows:
I did not realize that your definition of $\delta$-correct is different from the definition of $\delta$-PAC in Garivier and Kaufmann (2016). Hence, can we claim that Successive Elimination (SE) is not $\delta$-PAC (as its stopping time may not be finite with probability 1), while SE with your BrakeBoost can be $\delta$-PAC? If this claim is true, I would suggest the authors include it explicitly in the Introduction section of the paper.
---
Reply to Comment 1.1.1:
Comment: Yes this claim is correct. Thank you very much for making this contribution clear. We will include it explicitly in the introduction. Is there anything we can do to help you consider raising the score? We will do our best to address it. | null | null | null | null | null | null |
Scaling Laws for Differentially Private Language Models | Accept (poster) | Summary: This paper formulates scaling laws that accurately reflect the complexities of training Differentially Private (DP) Large Language Models (LLMs).
Claims And Evidence: Yes, the claims made in the submission are supported by clear and convincing evidence.
Methods And Evaluation Criteria: It is not clear whether the proposed methods make sense for the problem or application at hand.
Theoretical Claims: This paper does not include any proof.
Experimental Designs Or Analyses: Yes, I checked the soundness/validity of any experimental designs or analyses.
Supplementary Material: Yes, I have quickly reviewed all of the supplementary material.
Relation To Broader Scientific Literature: I believe the key contribution of the paper lies in conducting numerous experiments.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths:
1. The writing of this paper is excellent, and it is easy for me to follow this paper.
2. The authors conducted thorough experiments to gain insights into DP LLM training.
Weaknesses:
1. The motivation of this paper is not clear. I believe that OpenAI, Anthropic, XAI, Google, and DeepSeek are unlikely to use differential privacy for pre-training, as it may negatively impact model performance. Therefore, I am not sure whether we need such a scaling law in reality.
2. More and more people use decoder-only models such as Qwen or Llama. I am unclear why the authors chose to use Masked Language Modeling (BERT) in this paper.
3. Could you show me other papers that use BertMega model? From Huggingface, I did not find the config of the BertMega model.
4. Scaling laws are used to predict the behavior of large models. Therefore, I think the authors need to train a 1B-2B to evaluate the correctness of their scaling laws.
5. Furthermore, could you show me the results of your pre-trained models over downstream tasks?
Other Comments Or Suggestions: 1. As for Figure 1, could you polish the y-axis? For example, Figure 1(b) sometimes uses scientific notation, and sometimes does not use it.
2. As for Figure 1(b), I guess that you try to vary the batch size, but the legend shows that you try to vary the privacy budget.
3. As for Figure 3(b), it is not clear to me that the compute budget increases when you fix the data budget and model size. My understanding is that FLOPs = 6ND, where N is the model size and D is the number of tokens.
4. As for Table 2, we usually use 35M to represent the model size instead of $3.5 \times 10^{7}$. We also use 512 for batch size and 1800 for iterations.
Questions For Authors: See strengths and weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | Rebuttal 1:
Rebuttal: We appreciate the feedback, and thank you for sharing ideas for specific ways to improve the paper. We are glad that you found the paper well-written and thorough, but we respectfully disagree with some of your comments, and respond to the main critiques below.
**[Motivation]** There are several complementary reasons why we believe DP scaling laws for pretraining are well-motivated and of interest to the privacy community.
* *Importance of pretraining privacy*: For current frontier models, and pretraining specifically, there is increasing need for data privacy, e.g., due to risks of memorization in large models. This makes DP pretraining research relevant as pretraining data is often scraped from the web (and can contain sensitive personal information [1]). The need for privacy will further increase if companies start training on sensitive user data as they run out of public datasets to train on.
* *Addressing utility challenges*: Utility degradation due to DP actually serves to motivate our work. With enough data and compute, and well-tuned mechanisms, the performance degradation due to DP can be mitigated to a large degree [2]. We believe scaling laws will help in this mitigation.
* *Broader Context*: Privacy-preserving machine learning falls squarely within the [ICML call for papers](https://icml.cc/Conferences/2025/CallForPapers) under the Trustworthy Machine Learning category, and the best paper from ICML 2024 highlighted DP pre-training as an important future direction [1]. Several large organizations have invested in this space already [2,3,4,5,6].
[1] Tramer et al., Position: Considerations for Differentially Private Learning with Large-Scale Public Pretraining
[2] De et al., Unlocking High-Accuracy Differentially Private Image Classification through Scale
[3] Xu et al., Federated Learning of Gboard Language Models with Differential Privacy
[4] Anil et al., Large-Scale Differentially Private BERT
[5] Pelikan et al., Federated Learning With Differential Privacy for End-to-End Speech Recognition
[6] https://www.microsoft.com/en-us/research/group/privacy-preserving-machine-learning-innovation/
**[Encoder-decoder Models vs. Decoder-only models]**
Our choice to study BERT models in this research was motivated by previous research on pre-training with differential privacy [3]. We agree that our focus on BERT models raises questions about generality to other model architectures. We acknowledged this limitation in Appendix A. We further provide evidence in Section 3.7 that despite the differences in training setups with prior (non-private) scaling laws work, we are able to reproduce the main finding of Hoffman et al., i.e., that data and model size should be increased in equal proportion with increasing compute. This provides some evidence that our results should translate to other settings. This observation is also consistent with [7] who found qualitatively similar scaling between encoder-decoder and decoder-only architectures. See also our response to Reviewer qUEy.
[7] Yao et al., Towards Neural Scaling Laws for Time Series Foundation Models
**[BertMega]** BertMega is the only non-standard model config we used, to also explore a larger BERT model than the ready-made configs listed at https://github.com/google-research/bert. We will note this explicitly in the revision.
**[1B-2B Models]** This is a good idea, and something we discuss in the limitations section (Appendix A). Scaling to larger models introduces significant engineering challenges, most notably handling model parallelism. For such models, weights—and thus their corresponding gradients—are typically sharded across devices. The gradient clipping step in DP-SGD requires storing and synchronizing per-example gradients across layers and devices for clipping, which necessitates substantial changes to existing implementations. While some work has explored this in the context of LLM fine-tuning [8], these challenges remain for training LLMs from scratch. We will be sure to highlight this as an additional challenge to scale DP training to the billion-parameter scale (in addition to DP scaling laws) in the revision.
[8]: He et al., Exploring the Limits of Differentially Private Deep Learning with Group-wise Clipping
**[Evals on downstream tasks]** Thank you for raising this point. We note that performance on downstream tasks are generally too noisy for scaling law studies themselves, as a result, we follow the previous work on LLM scaling laws and focus on the crossentropy loss. We agree that beyond the scaling law studies, it would be valuable to show some downstream evaluations on large (maybe 1B-2B models) DP pre-trained models. We will highlight this as an important direction for future work in revision.
**[Other suggestions]** We will incorporate your additional comments and suggestions to improve the presentation of our figures and tables, and clarify the relevant parts of the text in revision. Thanks for the suggestions!
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. However, I feel that it did not fully address my concerns. To clarify, here are my thoughts:
1. I fully understand that DP training is an important topic of ICML. However, could you please provide an example demonstrating whether any well-known models, such as GPT, Claude, Grok, DeepSeek, Qwen, or Llama, incorporate DP during their pre-training phase? I guess the main reason these companies do not use DP pre-training is that the model has bad performance over downstream tasks after DP pre-training. However, the authors do not show results over downstream tasks in the rebuttal.
2. I am interested in building scaling laws primarily to train a large-scale model that achieves superior performance. However, this paper does not explain how to apply the scaling law to train such models effectively. I believe that targeting models in the 1B-2B parameter range is a reasonable expectation.
3. Given that the NVIDIA A100 GPU comes with 40 GB of memory, it is feasible to fit a 1B or 2B model using data parallelism or Fully Sharded Data Parallelism (FSDP). I am not sure why they claim that "Scaling to larger models introduces significant engineering challenges, most notably handling model parallelism".
4. Ultimately, the majority of renowned models have adopted a decode-only architecture. Given the current trends, I believe it will be impractical to use an encoder-decoder model in 2025. [7] observe that similar scaling between encoder-decoder and decoder-only architectures over Time-Series. It is not clear whether they have similar results over DP pre-training. Both [1] and [2] from Google DeepMin and OpenAI focus on using a decoder-only transformer model. Therefore, I think using Encoder-decoder Models is not a good choice.
[7] Yao et al., Towards Neural Scaling Laws for Time Series Foundation Models
[1] Hoffmann, Jordan, et al. "Training compute-optimal large language models." arXiv preprint arXiv:2203.15556 (2022).
[2] Kaplan, Jared, et al. "Scaling laws for neural language models." arXiv preprint arXiv:2001.08361 (2020).
---
Reply to Comment 1.1.1:
Comment: Thank you for engaging with our response, clarifying your points, and agreeing to revisit your initial review based on this discussion. Please find our responses below.
1. We do not believe the well-known models you listed have incorporated DP in their pre-training phase. Despite this, we think our work is still well-motivated, for reasons separate from the ones provided in our initial response (which you may disagree with). We hope that you will agree with the following statements:
* There are situations where it is useful to train models on in-domain sensitive user data, where DP must be applied. [3, 5, 6] The tasks where in-domain data is useful may be more narrow than the downstream tasks used to evaluate frontier models, and cross entropy is actually a very natural evaluation metric for some tasks (e.g., next word prediction for mobile keyboards [3]).
* Due to the high compute requirements common in DP training, it is important to understand compute/privacy/utility trade-offs, and ensure the compute budget is allocated in an intelligent manner to optimize utility. [2, 4]
* No prior work has systematically studied (2), and our work provides a rigorous and thorough study of these questions, and useful guidance for practitioners.
If you think this provides a stronger motivation for our work, we will be happy to feature this more prominently in our introduction in revision.
2. Can you please clarify what you mean by “superior performance”, and what that is relative to? Our motivation is related to but distinct from the motivation for non-private scaling laws that you are referencing. As mentioned above, we are interested in building scaling laws to allocate a fixed amount of compute to deliver the best possible DP model. This is a bit different from the reviewers goal of training the largest possible model. In fact, our findings show that in some settings, aiming for the largest possible model is detrimental to utility.
3. We agree your expectation of training 1B-2B parameter models is reasonable and would strengthen the paper – we have taken your advice into consideration and will revisit our experiments on BertGiga (1.5 B parameters). However, as you must evaluate our submission in its current form, we hope you will take into consideration the following three points:
* BertMega is 77.8% the way to 1B in terms of parameter count, which is on the same order of magnitude for the purposes of this study.
* For this research, we had access to TPUv3 resources, which does feature 32 GB of RAM. However, for a 1.5B parameter model represented with 4-byte floats, the model state is 6 GB. Combined with the gradient (6 GB) and the optimizer state (12 GB), the activation memory, and other training overheads, we were not able to run DP-SGD with pure data parallelism. We will note that without per-example gradient clipping, we were able to run this experiment, suggesting some memory overhead of DP (or at least the implementation we used).
* As we showed experimentally, there are many settings where it is clearly suboptimal to train a 1B+ model with DP. For example, with a privacy budget of 4 a data budget (# contributing users) of 10M, the optimal model size is in the 10s of millions, even with an infinite amount of compute.
4. We agree that decoder-only models are preferable for autoregressive text generation and hence particularly suitable for chat bot applications. However, we do not agree with the statement that “it will be impractical to use encoder-decoder models in 2025”. The right model architecture depends on the task being solved; for example, encoder-only models like BERT are well-suited for tasks like natural language understanding, sentence classification, and question answering, while encoder-decoder architectures are well-suited for sequence-to-sequence tasks like machine translation and document summarization [8].
[8] Qiu et al., Pre-trained Models for Natural Language Processing: A Survey | Summary: The paper formulates the problem of identifying scaling laws for differentially private training of Bert models, as identifying the optimal training configurations (model size, batch size, noise-batch ratio, and iterations) given fixed data, compute, and privacy budget. Here clipping thresholds and stepsize are fixed as constants across all configurations. Their approach contains three steps.
1. For a fixed but reasonably large physical batch size, fit a function $L(M, T, \bar{\sigma})$ that estimates the training loss of an M-parameter model after T iterations with a noise-batch ratio of $\bar{\sigma}$, by repeating the training for different configurations and smoothening and interpolating the training results.
2. For other batch size $B$, assume that as long as $M, T, \bar{\sigma}$ are fixed, their training loss are equal.
3. For each fixed data, compute, and privacy budget, enumerate all possible training configurations (batch size, noise-batch ratio, and iterations), and identify the configuration that minimizes the estimated training loss $L(M, T, \bar{\sigma})$.
They then draw some insights from the behavior of the fitted scaling laws.
1. There is a small but consistent trend that with larger privacy budgets, one should train a larger model with a smaller batch size and for more iterations than one would train with a smaller privacy budget
2. Optimal model sizes under privacy are much smaller than predicted by non-private scaling laws.
3. Given a fixed privacy/data budget, there is an inflection point where increasing the compute budget provides little to no benefit.
4. The ratio of the number of training tokens to model size increases with computing budget, especially for smaller privacy budgets. This matches the flat token-to-model ratio as predicted by the prior work for non-private training (Hoffmann et al. (2022)).
Claims And Evidence: In general, the paper is well-written with clear claims and supporting details. However, there are a few places that are harder to interpret due to missing details.
1. In Figure 1, is the noise-batch-ratio fixed across all settings? If so, why is it reasonable to fix noise-batch-ratio?
2. Section 4.5, how is the noise-batch ratio computed here? Is it corresponding to the optimal configurations predicted by $L(M, T, \bar{\sigma})$ given fixed $M$ and $T$? If so, why is it reasonable to fix $M$ and $T$?
> We analyze how the noise-batch ratio behaves as a function of privacy budget (as measured by ε), compute budget (as measured by B), and data budget (as measured by N).
Methods And Evaluation Criteria: The method is interesting, and the only limitation that I see is the assumption that the training loss curve under different batch sizes is similar, as long as $M, T, \bar{\sigma}$ are fixed. The authors also discussed this in Appendix C.3. and show that this assumption may underestimate the benefit of smaller batch sizes when the noise-batch ratio is large.
Another minor point that lacks clarify, is why it is reasonable to fix the learning rate and clipping threshold across all settings, rather than, e.g., using a larger clipping threshold for larger models.
Theoretical Claims: NA
Experimental Designs Or Analyses: The experiment results are comprehensive and interesting. Other than a few minor clarity issues as discussed in Claims And Evidence, I only have one doubt regarding what the authors call "improvements in the noise-batch ratio" in Section 4.5 and "diminishing returns in terms of noise-batch ratio" in Section 4.1. I'm a bit confused about whether we are interested in the increase/decrease of noise-batch-ratio as worsening/improvement of training, rather than the direct eval loss. Maybe it is reasonable under fixed choices of other factors, e.g., training iterations, batch size, and model size, it would be good if the authors could clarify these points.
Supplementary Material: I checked Appendix C.3 and C.4 about ablation studies for the assumption that training curves under different batch sizes are similar, as long as $M, T, \bar{\sigma}$ are fixed.
Relation To Broader Scientific Literature: The paper offers a nice study of scaling laws for differentially private training and discusses many connections/differences compared to the literature on scaling laws for standard training.
Essential References Not Discussed: Related works are discussed thoroughly. However, some sentences missed citations when referring to prior works. E.g., Line 327 says "becomes nearly flat as predicted by the prior work" without any references.
Other Strengths And Weaknesses: NA
Other Comments Or Suggestions: NA
Questions For Authors: See Claims And Evidence and Experimental Designs Or Analyses.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for the careful review and feedback, and we are glad you liked the paper. Below we respond to the main questions / critiques:
**[Missing details]** We will be sure to update our discussion around Figure 1 and Section 4.5 to clarify what is shown.
* Regarding Figure 1, we do not use a constant noise-batch-ratio here; instead it is varied through the privacy budget, batch size, iterations, and data budget (as shown in the diagram in Figure 2).
* In Section 4.5, we fix M and T and only consider how the noise batch ratio (sigma / B) changes with respect to the Privacy, Data, and Compute budget.
**[Fixed batch size assumption]** Our modeling assumption that the training loss primarily depends on the batch size through the noise batch ratio (Figure 2) is a limitation we acknowledge in Appendix A. However, this modeling assumption was necessary to make this work practically feasible, as adding yet another independent variable to consider greatly increases the number of experiments and accelerator hours needed to conduct this research.
As the reviewer points out, our ablations in Appendix C.3 and C.4 do study the effect of this variable in isolation, although we believe it will be interesting to revisit this in future work to better understand the phenomenon we observed and under what conditions it manifests.
**[Fixed clipping + learning rate]** We lean on findings from prior work to establish a reasonable default DP training setup. In particular, [1] advocates for using a small clipping threshold where all (or nearly all) gradients are clipped. We used a clipping threshold of 1, and checked that this resulted in > 90% of gradients being clipped. We also used gradient normalization as [1] also suggests, since it decouples the clipping parameter from the learning rate. While more careful tuning of this parameter could help somewhat, our primary goal in this work was to focus on variables that relate directly to the compute budget.
Regarding learning rate, we actually used three learning rates, which were chosen based on comprehensive ablations discussed in Appendix C.7 and Figure 12.
[1] De et al., Unlocking High-Accuracy Differentially Private Image Classification through Scale | Summary: This paper explores the scaling laws applicable to the training of masked language models under differential privacy (DP) constraints. The authors establish that traditional scaling laws, which do not account for privacy considerations, are suboptimal when applied in DP settings. Key findings: optimal model size with DP is generally much smaller compared to non-private models.
Claims And Evidence: The paper’s claims are supported by clear and methodical evidence, with experiments spanning model sizes, noise levels, and compute budgets. Limitations are transparently addressed, and the findings provide actionable insights for DP training.
I think it should be clearly stated that these findings are specific to MLM earlier than sec2, and the authors should make educated guesses about the transferability of the findings.
Methods And Evaluation Criteria: Using BERT models (with varying sizes) and masked language modeling is appropriate, as BERT is a standard architecture for studying scaling laws. DP-Adam with per-example gradient clipping follows established DP-SGD practices. However, I’m concerned that results may not generalize to decoder-only models or tasks like autoregressive language modeling, which are common in modern LLMs. The fixed sequence length (512 tokens) and focus on pre-training (not fine-tuning) also limit direct applicability to real-world deployment scenarios.
Evaluation metrics like cross-entropy loss and compute savings are standard and meaningful, but the lack of diverse tasks/datasets and the reliance on only (ϵ,δ)-DP constrain generalizability.
Theoretical Claims: The claim that non-private scaling laws are suboptimal under DP is supported by empirical evidence shown in figures. Optimal model sizes are smaller under DP, it is empirically proven.
Experimental Designs Or Analyses: The experimental design is methodologically generally rigorous in all the experiments for the tested settings (BERT models, masked LM, ≤778M parameters). However, critical assumptions—fixed batch sizes, reliance on (ϵ,δ)-DP, and limited architectural scope—constrain broader validity.
Supplementary Material: Yes, annexes A, B, D.
Relation To Broader Scientific Literature: The contribution is quite useful as it starts providing a guidance on scaling laws in this field. However, the impact is limited to the chosen setup and there are doubts on generalizability.
Essential References Not Discussed: None
Other Strengths And Weaknesses: Strengths
* The paper is rich with practical insights, eg line 91 right, sec3.1, annex B2, annex D.
* The computational exploration is particularly broad and thorough, which gives the claims a solid grounding.
Weaknesses
* The caveat that the main findings are limited to BERT should be more prominent: neither title nor abstract address this.
* Citations in sec1 and sec 2 seem biased towards work by authors affiliated with Google without apparent reason. This should be corrected.
* Line 015 right: both citations are to blog posts; is there no better source?
Other Comments Or Suggestions: * Algo 1 line 64: should that not be $=\bar{g}+...$ instead of $=g+$?
* left col line 84 which typically added. line 86 is
* line 121 left: different than
* line 718 is linear
The following recent refrence is not essential but might be useful in sec1 or sec5 as it gives a reasoned overview: Miranda+ 2024 https://arxiv.org/abs/2408.05212
Questions For Authors: How do these findings transfer to causal language modeling?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for the careful review and feedback, and for recognizing the various strengths of this work. Below we respond some of the questions / criticisms:
**[Masked Language Modeling vs. Other Tasks]** The reviewer is correct to point out that our focus on BERT models raises questions about generality of the results with respect to other model architectures, like decoder-only models used in many modern GenAI applications. We acknowledged this limitation in Appendix A. We further provide evidence in Section 3.7 that despite the differences in training setups with prior (non-private) scaling laws work, we are able to reproduce the main finding of Hoffman et al., i.e., that data and model size should be increased in equal proportion with increasing compute. This provides evidence that our results should translate to other settings.
Our choice to study BERT models in this research was based on a few considerations:
* Established pre-determined model sizes and configurations (https://github.com/google-research/bert)
* Maturity of DP training support in distributed ML libraries (at the time this research started): We decided to use PaxML (https://github.com/google/paxml/) to conduct this research, since it supports large-scale training of transformer models coordinated across many accelerators, and supports per-example gradient clipping routines which are necessary to use DP-SGD that can run in distributed settings. This open source library does not feature an easily-forkable implementation of a standard decoder-only model and thus would require a large effort to re-engineer purely from the available primitives.
We believe that future research in this space should probably study (now) open-source decoder-only models like NanoGPT (https://github.com/karpathy/nanoGPT). We believe that the findings we showed are still useful and informative in their current form, however.
**[Pretraining vs. Finetuning]** The reviewer is correct to point out that with DP, finetuning has some advantages over pretraining, and would be interesting to study. We acknowledged this limitation in Appendix A, and agree with the reviewer that this would be an interesting direction for future work.
**[Evaluation task diversity]** This is a valid criticism, and something we will be sure to add to our Limitations section in revision, and highlight this as an important consideration for future work.
**[Other weaknesses]** We have updated our references in the uploaded revision to include a more expansive set of citations. If the reviewer is aware of any relevant citations that were missed, we are happy to consider and add these as well. We have fixed the other minor issues identified during review, including the references on line 15 and the notational errors. Thanks for the feedback!
---
Rebuttal Comment 1.1:
Comment: I have read all reviews and rebuttal exchanges, including the interesting exchange between fellow Reviewer U13Y and Authors.
I maintain my score, considering the trade-offs between the following arguments.
* For the benefit of the paper, authors should definitely clarify important assumptions and goals including limitations: eg summarize motivation for decoder-only vs encoder-decoder models; practical goal of scaling laws in determining optimal model size ("engineering" knob) given data and compute budgets (often extrinsic constraints, not under the model training engineering team's influence); loss as proxy for downstream task performance. Certainly, I consider that mentioning limitation to decoder-only as little as in the current paper version is insufficient, opaque, and detrimental to the paper's resonance.
* The paper is particularly well-written; it can be improved locally; that would profile it for an oral for example.
* Conditional on agreeing with the work's motivations (which motivates making these strongly explicit), experiments are conclusive. | null | null | null | null | null | null | null | null |
Visual Abstraction: A Plug-and-Play Approach for Text-Visual Retrieval | Accept (poster) | Summary: The paper introduces a test‐time, plug‐and‐play approach VISA for text-to-visual retrieval. VISA converts visual content into dense, natural language descriptions using off‐the-shelf large multimodal models (LMMs). It then refines these descriptions via a question-answering module that leverages chain-of-thought prompting to capture query-specific details. The approach addresses two main challenges: (1) filtering out low-level, redundant visual details and (2) addressing granularity mismatches between textual queries and visual content. Extensive experiments on both image (e.g., MS-COCO, Urban1k) and video (e.g., MSR-VTT, LSMDC) datasets demonstrate that VISA improves retrieval performance over several state-of-the-art methods.
Claims And Evidence: The paper claims that converting images and videos into rich textual representations can boost retrieval performance. This claim is supported by comprehensive experimental results, e.g., consistent improvements in recall metrics across diverse datasets and detailed ablation studies that highlight the contribution of each component (general description, QA refinement, chain-of-thought). Overall, the evidence is clear and convincing.
Methods And Evaluation Criteria: The proposed method is well-motivated and leverages the strengths of existing LMMs and LLMs without requiring additional training. The experimental design is thorough, employing both short-context and long-context retrieval tasks with appropriate benchmarks and evaluation metrics. This makes the method not only conceptually sound but also practical for real-world applications.
Theoretical Claims: There are no theoretical claims in this work, which is consistent with the paper’s focus on a practical, empirically driven solution.
Experimental Designs Or Analyses: The authors compare VISA against multiple baselines on a range of datasets and perform extensive ablation studies that demonstrate the robustness and effectiveness of the proposed components. The analysis including comparisons of inference time provides a well-rounded view of the method.
Supplementary Material: The supplementary material contains additional visualizations, prompt details, and extended ablation results.
Relation To Broader Scientific Literature: The paper discusses the current literature on vision-language models, such as CLIP, Frozen, and recent works on fine-grained retrieval enhancement. It builds on prior findings by addressing the inherent limitations of training-based approaches and provides a novel test-time alternative.
Essential References Not Discussed: While the reference list appears comprehensive, a discussion of some very recent works on multimodal retrieval methods that use LMMs might further strengthen the literature context. However, no critical prior work seems to be omitted.
Other Strengths And Weaknesses: *Strengths:*
– The paper presents an innovative plug-and-play framework that sidesteps the need for expensive retraining. It integrates with existing vision-language models without additional training and leverages pre-trained models at test time, significantly reducing computational costs.
– The paper features extensive experimental validation and rigorous ablation studies. The method is evaluated on multiple benchmark datasets, and the ablation studies clearly demonstrate the contribution of each module.
– The method achieves clear improvements over established baselines across diverse datasets. Empirical results show consistent gains in key metrics on both image and video retrieval tasks, highlighting its effectiveness in addressing semantic redundancy and granularity mismatches.
*Weaknesses:*
– The reliance on large off-the-shelf models at test time could raise practical deployment concerns regarding computational latency.
– The paper lacks experiments evaluating the method from an image-to-text perspective. Including standard I2T benchmarks would provide additional insights into the robustness and generality of the approach.
Other Comments Or Suggestions: NA
Questions For Authors: 1. How about your method compared with multimodal retrieval methods that use LMMs?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: > **Q1:Concern of computational latency**
We would like to clarify that our method is **compatible with smaller, more efficient retrievers**, and we have observed that models with significantly fewer parameters (e.g., 400M or 1.5B) still achieve **comparable performance** (see our response to Reviewer 1ri6, Q1).
In addition, the **general description module** can be executed entirely **offline**, prior to deployment. This design ensures that it incurs **no cost during online inference**. As noted in our response to Reviewer 1FJx (Q1), even when the **QA component is omitted**, VISA still delivers **competitive performance** relative to existing VLMs. In such case, only the **text retriever** operates in real-time, and its computational overhead can be adjusted by choosing a model of a suitable size.
Together, the use of lightweight retrievers and offline gallery processing make VISA practically deployable and efficient, even in resource-constrained settings.
> **Q2:Image-to-text (I2T) retrieval experiments**
We initially did not include text-to-image (T2I) experiments because our primary focus is on supporting diverse user retrieval demands with varying levels of query granularity—a scenario more naturally aligned with T2I settings.
To address this concern, we conducted image-to-text (I2T) retrieval experiments on Flickr30K, MSR-VTT, and Urban1K, using the same retrieval setup and backbones as in the T2I setting. Specifically, for each query image, we first retrieve the top-20 candidate text descriptions. For each candidate, we generate three questions and append the corresponding answers derived from the image. Finally, we re-rank the candidates using text-level retrieval.
| Method | Flikcr (R@1|R@5|R@10) |
| --------------------- | ---------------------------- |
| SigLIP | 94.4|99.7|99.8 |
| SigLIP + VISA | 95.0|99.8|100.0 |
| BLIP-2 | 97.6|100.0|100.0 |
| BLIP-2 + VISA | 97.9|100.0|100.0 |
| | **MSR-VTT (R@1|R@5|R@10)** |
| InternVideo2-G | 49.6|73.4|81.0 |
| InternVideo2-G + VISA | 54.4|78.0|84.8 |
| | **Urban1k (R@1|R@5|R@10**) |
| LoTLIP | 89.6|97.8|98.9 |
| LoTLIP + VISA | 93.7|99.0|99.5 |
These consistent improvements across datasets and backbones confirm that VISA not only enhances T2I retrieval, but also generalizes effectively to I2T retrieval.
> **Q3:Compared with multimodal retrieval methods that use LMMs**
As there are only a few works directly built on LMMs, we have compared our method with RAGVL in Table 1. On Flickr30k, our approach achieves higher performance with R@1 = 85.1 compared to RAGVL’s 84.4 and further improves to 86.1 when using EVA-CLIP. The plug-and-play design does not require end-to-end fine-tuning of large models. | Summary: This paper introduces Visual Abstraction (VISA), a novel test-time approach for enhancing text-to-visual retrieval by converting visual content into textual descriptions using large pre-trained models. VISA utilizes a question-answering mechanism to refine these descriptions to match the granularity of user queries accurately. The approach demonstrates superior performance over existing methods across several benchmark datasets for both image and video retrieval tasks.
Claims And Evidence: The paper's claims about the effectiveness in improving text-to-visual retrieval are well-supported by extensive experimental results. The authors present a comprehensive set of experiments and comparisons that demonstrate the approach's superiority over existing state-of-the-art methods across various datasets and query types.
Methods And Evaluation Criteria: The proposed methods are logically sound and well-aligned with the problems at hand. The evaluation is conducted using standard benchmark datasets like COCO, Flickr30K, and others, which are appropriate for the task. The use of large pre-trained models and a novel question-answering refinement process adds to the methodological robustness.
Theoretical Claims: NA
Experimental Designs Or Analyses: The experimental evaluations are thorough, with detailed ablation studies and comparisons to baseline and state-of-the-art models. The authors effectively demonstrate the benefits of their approach using both quantitative metrics and qualitative examples.
Supplementary Material: The supplementary material provides additional resources that support the paper's claims, the visualization results are interesting.
Relation To Broader Scientific Literature: The paper is related to vision-language pre-training and retrieval tasks.
Essential References Not Discussed: I do not identify any critical references or related works that are missing from the discussion.
Other Strengths And Weaknesses: This paper introduces a unique test-time enhancement for text-to-visual retrieval by converting visual content into textual descriptions. This method diverges from traditional training-time modifications, offering a fresh perspective on improving retrieval effectiveness. The plug-and-play nature of VISA allows it to be seamlessly integrated with existing visual-language models. This compatibility is a significant advantage for enhancing the recommended system without extensive redevelopment.
However, the paper has several limitations:
1. The paper primarily focuses on integrating text-based retrieval enhancements into existing cross-modal frameworks but does not explore the potential of integrating two multimodal models.
2. The differences between 'No' and 'Uncertain' responses in the QA process are not well-explained. It is unclear whether these responses are treated differently in the retrieval ranking process.
Other Comments Or Suggestions: NA
Questions For Authors: 1. This work integrates a text model into existing cross-modal frameworks. Could you explore the performance outcomes of integrating two multimodal models (VLMs) of similar parameter scales?
2. In the question-answering process, how does the system distinguish between definitive 'No' and 'Uncertain' responses? What implications do these responses have on the retrieval process?
3. How does VISA handle queries that could be interpreted in multiple valid ways visually? Are there specific mechanisms or algorithms within VISA that help resolve or manage such ambiguities? The paper does not fully address how VISA manages ambiguous queries where multiple visual interpretations are possible.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: > **Q1:Integrating VLMs**
Thank you for this insightful suggestion. To explore this idea, we integrate SigLIP (as the base model) with EVA-CLIP (18B parameters), which has a significantly higher parameter count compared to the text retrieval model gemma-9B. This hybrid setup yields notable performance gains over SigLIP alone:
| Method | COCO (R@1|R@5|R@10) | Flickr (R@1|R@5|R@10) |
| ------------------ | --------------------- | ----------------------- |
| SigLIP | 54.2|76.8|84.2 | 83.0|96.1|98.0 |
| SigLIP+EVA-CLIP | 56.0|79.1|85.9 | 84.1|96.7|98.3 |
| SigLIP+VISA (Ours) | 57.2|80.3|86.9 | 85.1|97.1|98.6 |
Notably, our proposed method achieves even greater improvements on both datasets. This suggests that generative visual abstraction offers more effective enhancement than simply fusing large VLMs. Importantly, VISA maintains its plug-and-play design, requiring no model retraining or architecture modification. Additionally, as detailed in our response to **Reviewer 1ri6 (Q1)**, our method is compatible with smaller, more efficient text retrievers (e.g., 400M and 1.5B parameters), which also achieves strong performance.
> **Q2:The explanation of 'No' and 'Uncertain' responses**
Our core motivation is to abstract visual content into textual descriptions, enabling more effective reasoning and relationship modeling in the text space. In the question-answering module, we distinguish between two types of negative responses: "No" and "Uncertain", each serving a different purpose:
- **"No"** is used when the question refers to a subject that is **not present in the image**, or when a specific condition, action, or attribute **does not apply**. For example, if the image shows a person in red shirt playing basketball and the question asks, "Is the person in red shirt playing football?", the correct response would be "No". This type of response provides **explicit negative evidence** which helps refine semantic alignment.
- **"Uncertain"** is used when the **question cannot be reliably answered** based on the visual content, e.g., asking about the shirt color of someone playing football when no such activity is depicted. In these cases, the response is considered **uninformative**, and the corresponding QA pair is **discarded** from downstream processing.
> **Q3:How VISA manages ambiguous queries where multiple visual interpretations are possible.**
VISA incorporates LLMs to manage query ambiguities by leveraging **context** to disambiguate meanings. For example, in a query like "How to do something in windows?" (as shown in Figure 5 of the main paper), the surrounding context helps the model infer that "windows" refers to the **operating system Windows** instead of **glass windows**. In the future, we plan to further enhance this capability with reflection mechanism to check the visual interpretations. | Summary: This paper proposes Visual Abstraction (VISA), a plug-and-play approach designed to enhance text-to-visual retrieval. Unlike traditional retrieval methods that operate in a cross-modal embedding space, VISA transforms visual content into textual descriptions using off-the-shelf large models. This transformation filters out redundant low-level visual details. Additionally, VISA incorporates a question-answering mechanism to refine descriptions based on user-specified granularity. Extensive experiments demonstrate that VISA significantly improves retrieval performance across text-to-image and text-to-video tasks, outperforming state-of-the-art models on both short- and long-context queries. The approach requires no additional training and can be seamlessly integrated into existing retrieval systems.
Claims And Evidence: Claim 1: VISA enhances retrieval by converting visual content into text, filtering out redundant details, and improving alignment with textual queries.
Evidence: Experiments show that VISA improves recall@1 across multiple datasets, demonstrating better retrieval performance.
Claim 2: The question-answering process helps refine descriptions, ensuring a more precise match to user queries.
Evidence: Ablation studies indicate that removing the QA process reduces retrieval accuracy, confirming its contribution to performance.
Methods And Evaluation Criteria: After carefully checking the manuscript, the proposed methods and evaluation criteria make sense for text-to-visual retrieval.
Theoretical Claims: There is no theoretical claim.
Experimental Designs Or Analyses: Experimental evaluations are comprehensive.
Supplementary Material: The supplementary material is well-organized.
Relation To Broader Scientific Literature: VISA relates to prior vision-language pretraining models such as CLIP, EVA-CLIP, and FLAME, but differs by transforming retrieval into a text-enhance problem
Essential References Not Discussed: The paper sufficiently covers relevant references.
Other Strengths And Weaknesses: strengths:
1. VISA introduces a plug-and-play test-time retrieval enhancement, avoiding the need for expensive retraining.
2. VISA is model-agnostic and can enhance retrieval performance without modifying existing models.
3. Experimental results show consistent recall@1,5,10 gains across multiple datasets.
weaknesses:
1. Although VISA is plug-and-play, text-based retrieval process introduces additional computational overhead. Does text retrieval necessarily require using LLMs with over 7B parameters? This parameter scale is significantly larger than most vision-language models (VLMs), which may raise concerns about efficiency and deployment feasibility.
2. What is the impact of abstraction length on retrieval performance? Does generating longer textual descriptions always improve accuracy, or is there an optimal level of abstraction that balances efficiency and effectiveness?
3. Can VISA be extended beyond text-to-visual retrieval? Could this approach be generalized to other cross-modal tasks while maintaining its plug-and-play nature?
4. The paper lacks image-to-text (I2T) retrieval experiments. While the work focuses on text-to-image retrieval from a user-driven perspective, cross-modal retrieval generally includes both T2I and I2T tasks. It would be valuable to see how VISA performs in the reverse retrieval direction.
Other Comments Or Suggestions: The paper lacks citations for InternVideo in Table 4. Please ensure that all referenced methods are properly cited to maintain clarity.
Questions For Authors: Please refer to weaknesses.
Ethical Review Concerns: No
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: > **Q1:Efficiency Concerns**
Thanks for the suggestion regarding the efficiency of using large text retrievers. Importantly, VISA does not require LLMs with over 7B parameters to work effectively. To demonstrate this, we conducted experiments using smaller text retrievers with **400M and 1.5B parameters**, and the results are summarized below:
| Text Retriever | Flickr30K **(R@1|R@5)** | MSR-VTT **(R@1|R@5)** | Urban1k **(R@1|R@5)** |
| -------------- | ------------------------ | ---------------------- | ---------------------- |
| None | 83.1|95.8 | 52.0|74.6 | 85.9|97.1 |
| stella-400M | 85.2|97.2 | 53.4|75.0 | 93.3|98.9 |
| stella-1.5B | 85.4|97.1 | 53.6|75.4 | 93.4 \| 98.9 |
| gemma2-9B | 86.1|97.3 | 54.4|75.3 | 94.6|99.4 |
As shown, **even relatively small models such as Stella-400M yield significant performance gains** over the baseline without a text retriever. This demonstrates that the performance gains from VISA are not solely due to model size, but rather arise from the integration of visual abstraction into the retrieval pipeline. These findings reinforce the generalizability and flexibility of VISA. In scenarios where inference efficiency is a priority, smaller and more efficient models can be adopted without substantial performance loss. For additional discussion on latency and FLOPs, please refer to our response to **Q1 of Reviewer 1FJx**.
> **Q2:Impact of abstraction length**
We explored how varying the length of the generated general descriptions affects retrieval performance. Specifically, we modified the prompt for the captioning model to: *“Please generate descriptions of the given image in approximately {num} words.”* The corresponding results are available at the anonymous URL: **[https://imgur.com/a/amTI9n5](https://imgur.com/a/amTI9n5)**.
Our results show that **longer visual captions generally improve retrieval performance**, as they encode richer semantic information such as object attributes, spatial relationships, and contextual details. However, we also observe a clear **performance saturation** beyond a certain caption length. That is, once the caption adequately captures the essential visual content, adding more tokens results in **diminishing returns**. Importantly, the optimal caption length depends on the **complexity of the query.** For short-context queries (e.g., Flickr30K), moderately long captions are sufficient to achieve strong performance. In contrast, long-context queries(e.g., Urban1K) benefit from longer, more detailed captions that better capture fine-grained visual elements.
> **Q3:Using VISA on other cross-modal tasks**
We believe that the **visual abstraction mechanism** is broadly applicable to other cross-modal tasks. For example:
- Video Moment Retrieval: By segmenting a video into temporal clips and generating visual abstractions for each segment, one can match a textual query to the semantic descriptions of these segments, thereby localizing relevant time intervals.
- Text-based Re-identification and Composed Image Retrieval (CIR): This can benefit from generating abstractions for both **gallery items** and **query items** (such as image queries in CIR) and comparing them in the language space.
We plan to include this discussion in the next version.
> **Q4:Image-to-text (I2T) retrieval**
We initially did not include text-to-image (T2I) experiments because our primary focus is on supporting diverse user retrieval demands with varying levels of query granularity—a scenario more naturally aligned with T2I settings.
To address this concern, we conducted image-to-text (I2T) retrieval experiments on Flickr30K, MSR-VTT, and Urban1K, using the same retrieval setup and backbones as in the T2I setting. Specifically, for each query image, we first retrieve the top-20 candidate text descriptions. For each candidate, we generate three questions and append the corresponding answers derived from the image. Finally, we re-rank the candidates using text-level retrieval.
| Method | Flikcr (R@1|R@5|R@10) |
| --------------------- | ---------------------------- |
| SigLIP | 94.4|99.7|99.8 |
| SigLIP + VISA | 95.0|99.8|100.0 |
| BLIP-2 | 97.6|100.0|100.0 |
| BLIP-2 + VISA | 97.9|100.0|100.0 |
| | **MSR-VTT (R@1|R@5|R@10)** |
| InternVideo2-G | 49.6|73.4|81.0 |
| InternVideo2-G + VISA | 54.4|78.0|84.8 |
| | **Urban1k (R@1|R@5|R@10**) |
| LoTLIP | 89.6|97.8|98.9 |
| LoTLIP + VISA | 93.7|99.0|99.5 |
These consistent improvements across datasets and backbones confirm that VISA not only enhances T2I retrieval, but also generalizes effectively to I2T retrieval. | Summary: The paper studies the problem of text-to-visual retrieval, which involves both text-to-image retrieval and text-to-video retrieval. The authors propose a framework to enhance the retrieval via converting the visual content to the text domain, and then do the retrieval. Experiments show improvement of the proposed method over several previous models.
Claims And Evidence: The authors claim they achieve state-of-the-art performance, while their numbers for text-to-image retrieval is lower than the actual state-of-the-art BLIP-2.
Methods And Evaluation Criteria: The proposed method uses LLM for better text-to-image retrieval. One biggest advantage of using CLIP/SigLIP for text-to-image retrieval is that it is efficient. But with LLM introduced and runned for multiple times as in the proposed method, the FLOPS may be significant increased so it is against the motivation of developing an efficient retrieval system.
Also, at the evaluation side, the authors are not comparing with state-of-the-art architectures, such as BLIP-2.
Theoretical Claims: N/A
Experimental Designs Or Analyses: The authors are not comparing with state-of-the-art architectures, such as BLIP-2, while they claim they achieve the state-of-the-art results. It is a significant problem of doing this. Actually their numbers for text-to-image retrieval is lower than BLIP-2.
Supplementary Material: No
Relation To Broader Scientific Literature: Text-to-visual retrieval is an important task, and effective innovations for the task will be of a wide interest. But my concern is the authors have not honestly compared with the well-known state-of-the-art models while making such a claim. Besides, they have not considered about the efficiency of retrieval, which is very important factor in the community.
Essential References Not Discussed: The authors have not properly compared their methods with state-of-the-art BLIP-2.
Other Strengths And Weaknesses: I think the major weakness of the paper is the evaluation and method design.
Other Comments Or Suggestions: N/A
Questions For Authors: 1. Is it possible to include the FLOPS comparison of the proposed method and the original model?
2. Is it possible to include BLIP-2 results for text-to-image retrieval and compare with it?
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | Rebuttal 1:
Rebuttal: > **Q1: Efficiency Concerns (FLOPs and latency)**
Thank you for the valuable suggestions. We include the FLOPs and latency comparisions on the Flickr dataset below. For clarity, we divide the retrieval process into two stages:
- Offline stage precomputes visual features (via VLM) and generates general descriptions (via VISA) for the gallery candidates.
- Online stage performs the text encoding (via VLM), QA-based description refinement and text-level re-ranking (both via VISA) during inference.
| Type | Module | GFLOPs | Latency (second/dataset) |
| -------- | ------------------------------------ | ---------- | ------------------------------ |
| offline | VLM (SigLIP/EVA-CLIP) | 335/4560 | 4/18 |
| offline | General Description (LLaVA-v1.6-34B) | 138000 | 437.5 |
| **Type** | **Module** | **GFLOPs** | **Latency (second/per query)** |
| online | VLM (SigLIP/EVA-CLIP) | 26/9 | 0.0002/0.0029 |
| online | Question Generator (Qwen2.5-32B) | 26870 | 0.02 |
| online | Answer Generator (Qwen2VL-7B) | 15450 | 1.00 |
| online | Text Retriever (gemma-9B) | 4160 | 0.13 |
In the offline stage, generating general descriptions using large multimodal models (LMMs) indeed incurs high FLOPs (138,000 vs. 335 compared with SigLIP, approximately 400×). However, this process could be executed ahead of time and does not influence the real-time inference latency. In the online stage, inference overhead is significantly reduced through techniques like KV cache and parallel execution, supported by efficient frameworks such as SGLang. In practice, this results in only ~1 second of additional latency on an NVIDIA 4090 server compared to the original SigLIP pipeline.
We acknowledge that VISA introduces a trade-off between efficiency and performance, and it may not be ideal for strict low-latency applications. However, we would like to emphasize two key points:
- Test-Time Augmentation Paradigm. To our knowledge, VISA might be the first work to systematically explore **test-time augmentation** for VLM-based retrieval. This paradigm aligns with trends in LLM research (e.g., test-time scaling), where huge additional compute yields significant performance gains. Specifically, VISA achieves average R@1 improvements of **+2.3% (video)** and **+6.7% (long-context image retrieval)**. We also include BLIP-2 comparisons below, showing VISA improves over BLIP-2 by **+0.6% (COCO)** and **+0.5% (Flickr30K)** in R@1.
- Flexible Trade-Offs. Even using only the General Description module (without QA) yields strong performance boosts of **+2.5%, +1.3%, and +8.3%** R@1 across three tasks (see Table 6(a)), offering a **lighter-weight deployment option** (0.13 second) when needed.
> **Q2: Comparison to BLIP-2**
Thanks for highlighting this important baseline. We initially did not compare against BLIP-2 because our evaluation follows the widely adopted zero-shot retrieval protocol in recent VLM research. In contrast, **BLIP-2 is finetuned on COCO and then evaluated on COCO and Flickr**, making it less directly comparable to the evaluation setup in Table 1.
Here we include BLIP-2 in our evaluation (referencing Table 5 from the [BLIP-2 paper](https://arxiv.org/pdf/2301.12597)). As shown below, VISA continues to yield improvements when applied on top of BLIP-2. This demonstrates the compatibility and effectiveness of our method even when applied to strong VLMs like BLIP-2.
| Method | COCO (R@1|R@5|R@10) | Flickr (R@1|R@5|R@10) |
| --------------------- | --------------------- | ----------------------- |
| BLIP | 65.1|86.3|91.8 | 86.7|97.3|98.7 |
| BLIP-2 | 68.3|87.7|92.6 | 89.7|98.1|98.9 |
| BLIP-2 + VISA(Ours) | 68.9|88.0|92.9 | 90.2|98.4|99.2 |
More broadly, while BLIP-2 is highly competitive for short-query retrieval tasks such as COCO and Flickr30K, our method demonstrates notable improvements on long-text queries and video retrieval tasks as well, across **10 widely-used datasets**. This highlights the generality of VISA as a plug-and-play enhancement applicable to a range of backbone models.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for the rebuttal. I have read the rebuttal by the authors and other reviewers’ comments. The biggest concerns as I proposed have still not been addressed, while shared by other reviewers (1ri6, nqY2).
In the rebuttal, the authors provided some comparison regarding the efficiency, but there are several points I want to bring to the AC and other reviewers’ attention:
- (i) The cost of the offline stage the method is over 400 times higher than the baseline, which introduces a huge burden if the number of retrieval samples becomes large, it is definitely something we cannot ignore.
- (ii) The authors make the points that the method ‘offering a lighter-weight deployment option (0.13 second) when needed’, but they hide it is actually 650 times the cost of the baseline. This again brings a significant more cost when there are lots of samples.
- (iii) The authors mention something irrelevant to make their points, which is misleading. For example, the authors mention about the efficient frameworks, but this is something that will result in the improvement over all architectures. The significantly high cost the proposed method introduced is essentially not changed.
The architecture the authors proposed is very complicated compared with the original CLIP/SigLIP architectures, but only gives marginal boost on state-of-the-art BLIP-2. The reason the authors provided in the rebuttal is not convincing to me - although BLIP-2 is fine-tuned on COCO, it is still zero-shot on Flickr, and is on the leaderborad of PaperWithCode. I suspect that the authors hide their comparison with BLIP-2 because the improvement is much less than the other backbones. It is not honest to make the state-of-the-art claim in the submission.
Additionally, can the authors guarantee that the LLMs they used is zero-shot on COCO and Flickr? This raises another significant concern that the entire method the author proposed is not making a fair comparison with the previous models.
Essentially, I don’t think the paper is making a proper contribution to the text-to-visual retrieval community.
Given the above reasons, I think this paper is not enough and not ready for ICML and I therefore vote for strong rejection of the paper. | null | null | null | null | null | null |
Robust Automatic Modulation Classification with Fuzzy Regularization | Accept (spotlight poster) | Summary: The paper introduces Fuzzy Regularization (FR) as a novel solution to mitigate prediction ambiguity in Automatic Modulation Classification (AMC). This ambiguity is caused by similar characteristics between modulation schemes, especially under noisy conditions. The FR approach is characterized by three key features: modeling prediction ambiguity, dynamic sample reweighting through adaptive loss scaling, and promoting margin maximization between similar modulation classes. The experimental results show that FR significantly improves the robustness and classification accuracy across various datasets and noise levels.
## update after rebuttal
I will keep my score.
Claims And Evidence: The claims about FR improving model performance and robustness are supported by extensive experimental evidence, including results across multiple datasets (RadioML 2016.10a, 2016.10b, and 2018.01A). The improvements in accuracy and robustness are clearly documented. However, the theoretical analysis and description of the FR mechanism could be clearer and more rigorously formalized, particularly concerning the adaptive gradient mechanism.
Methods And Evaluation Criteria: The proposed method, Fuzzy Regularization (FR), is appropriate for addressing the problem of prediction ambiguity in AMC tasks, particularly under noisy conditions. The use of standard benchmark datasets and performance metrics like F1-Score, ACC, and H-ACC is suitable for evaluating the method's effectiveness. The experimental design, which compares FR with existing methods, helps establish the validity of FR as an enhancement to AMC models.
Theoretical Claims: The theoretical claims around prediction ambiguity and the need for a regularization mechanism to address it are well-grounded. However, the mathematical formulation of the FR mechanism could benefit from more clarity. The paper lacks a formal definition of how the adaptive gradient mechanism works mathematically, which would strengthen the theoretical claims.
Experimental Designs Or Analyses: The experimental designs are sound, with clear benchmarks and comparisons with other state-of-the-art (SOTA) methods. The paper includes a comprehensive evaluation across different noise levels, which validates the robustness of the FR method. The use of multiple datasets and the careful selection of evaluation metrics (F1-Score, ACC, H-ACC) further strengthens the experimental analysis.
Supplementary Material: The supplementary material is reviewed, and it includes useful appendices that explain the datasets, the methods used, and the evaluation metrics. This information is helpful for understanding the experimental setup, though some sections could be organized more clearly.
Relation To Broader Scientific Literature: The paper positions itself within the broader context of automatic modulation classification, comparing FR to other regularization techniques and deep learning models. It highlights the gap in the literature regarding the explicit handling of prediction ambiguity and offers a new approach with FR. However, the paper could better compare FR with other regularization strategies specifically designed for handling ambiguity in classification tasks, such as label smoothing or other entropy-based methods.
Essential References Not Discussed: The paper cites key references, but it could benefit from more discussion on the connection to related regularization techniques, particularly those addressing prediction uncertainty or ambiguity, such as label smoothing, entropy regularization, and adversarial robustness techniques.
Other Strengths And Weaknesses: The paper is strong in its originality, proposing a novel regularization mechanism that specifically targets prediction ambiguity, a critical issue in AMC. The experimental results are robust and demonstrate the method's efficacy across various noise conditions. However, the explanation of the FR mechanism lacks some clarity, particularly in its formalization. The paper would benefit from a deeper theoretical exploration of the method.
Other Comments Or Suggestions: Some sections could benefit from clearer writing and better organization, especially the presentation of the mathematical formulas and experimental setup. It would also be helpful to provide a more detailed comparison with other ambiguity-handling techniques.
Questions For Authors: - Can you provide a more formal and detailed explanation of the adaptive gradient mechanism used in Fuzzy Regularization? This would help clarify how it dynamically adjusts during training.
- How does the FR mechanism compare to label smoothing or other entropy-based regularization methods in terms of performance and robustness? Would it be beneficial to combine these techniques?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for professional comments. We have tried our best to address your questions and revised our paper by following suggestions from all reviewers.
**Q1: Can you provide a more formal and detailed explanation of the adaptive gradient mechanism used in Fuzzy Regularization? This would help clarify how it dynamically adjusts during training.**
RE:Thank you for your question, it helps to articulate our work more clearly. We have explained the establishment of Eq. (6) in more detail and hope that this will help you to understand the automatic gradient mechanism in FR. First choose the first k predicted values of each sample to calculate the degree of fuzziness ($\mathrm{M}$), and the theoretical maximum value $\mathrm{M}\_{\mathrm{max}}=\left(\sum\_{i=1}^{k}\hat{y}\_{i}\right)^{2}-{k\mu}^2$, minimum value $\mathrm{M}\_{\mathrm{min}}=k\times\left(\frac{\sum\_{i=1}^{k}\hat{y}\_{i}}{k}\right)^{2}-{k\mu}^2$, and mean value $\mu=\frac{\sum\_{i=1}^{k}\hat{y}\_{i}}{k}$ for a single sample are known. Then $\mathrm{M}$ can be expressed as: $\mathrm{M}=\sum\_{i=1}^\mathrm{k}\left(\hat{y}\_{i}-\mu\right)^2=\sum\_{i=1}^\mathrm{k}\hat{y}\_{i}^2-{k\mu}^2$; we focus only on the distribution state, so by normalizing the formula we get $\mathrm{M\_{norm}=\frac{M-M\_{min}}{M\_{max-M\_{min}}}=\frac{\sum\_{i=1}^{k}\hat{y\_{i}}^{2}-k\times(\frac{\sum\_{i=1}^{k}\hat{y\_{i}}}{k})^{2}}{(\sum\_{i=1}^{k}\hat{y\_{i}})^{2}-k\times(\frac{\sum\_{i=1}^{k}\hat{y\_{i}}}{k})^{2}}}$.The automatic gradient mechanism is realized by the log(\*) function to correct the gradient of $M\_{norm}$. The absolute value of the gradient of the log(\*) function decays symmetrically when it deviates from the central axis, which can satisfy the second definition of the design of FR. The final loss of a single sample is: $\mathrm{Loss}=\log(\mathrm{M}\_{\mathrm{norm}})=\log\left(\frac{\sum\_{i=1}^k\hat{y}\_i^2-k\times\left(\frac{\sum\_{i=1}^k\hat{y}\_i}{k}\right)^2}{\left(\sum\_{i=1}^k\hat{y}\_i\right)^2-k\times\left(\frac{\sum\_{i=1}^k\hat{y}\_i}{k}\right)^2}\right)$.
**Q2:How does the FR mechanism compare to label smoothing or other entropy-based regularization methods in terms of performance and robustness? Would it be beneficial to combine these techniques?**
RE:We would like to thank the reviewers for their valuable comments. First we define Ent to denote the entropy-based method, LS to denote the label smoothing method, and FRLS to denote the joint training of the label smoothing and FR methods. NF_* indicates that the noise factor of the dataset is *.For the sake of intuition and brevity, the data recorded is the performance difference between the two methods, e.g. Ent_FR is the value obtained by subtracting the entropy-based model accuracy from the FR model accuracy in the original article.
Through Table I we find that the performance difference of entropy-based methods compared to WF methods is both good and bad. We analyze that the reason for the deterioration of performance may be due to the gradient update of the entropy function and other problems, which is one of the difficulties solved in this paper. Although the performance of the entropy-based method can be close to the FR method in some tasks, the performance is weaker than the FR method in general tasks. This also shows that the FR method is better than the entropy-based method.
Table 1
||NF_0|NF_0|NF_20%|NF_20%|NF_40%|NF_40%|NF_60%|NF_60%|
|-|-|-|-|-|-|-|-|-|
||Ent_FR|Ent_WF|Ent_FR|Ent_WF|Ent_FR| Ent_WF| Ent_FR| Ent_WF|
|DAE|-4.18%|-1.05%|-0.83%|+1.6%|-0.83%|+0.9%|-4.77%|-1.3%|
|FEA|-0.42%|+0.03%|-2.84%|+0.3%|-0.07%|+0.4%|-0.17%|+0.24%|
|MCL|-3.56%|+1.52%|-0.82%|+2%|-0.4%|+0.65%|-0.14%|+0.5%|
|Res|-1.51%|-0.18%|-1.05%| -0.1%|-2.34%|-0.6%|-4.13%|-0.1%|
|Thr|-6.97%|-4.71%|-5.84%|+2.3%|-8.59%|-2.1%|-9.03%|-1.1%|
With Table 2 we find that the FRLS models generally outperform the models trained with label smoothing loss, which further demonstrates the FR validity. Meanwhile, we find that the model trained by FRLS joint loss does outperform the model trained by FR supervision on some tasks. This may be due to the fact that LS optimizes the target labels, which make the values of non-target classes non-zero.This helps the model to learn more information from other classes. FR mainly focuses on the model's predictive distribution information. The two focus on different information, so the performance of the model can be further optimized when trained jointly. We will continue to study the joint training strategy in depth.
Table 2
||NF_0|NF_0|NF_20%|NF_20%|NF_40%|NF_40%|NF_60%|NF_60%|
|-|-|-|-|-|-|-|-|-|
||FRLS_FR|FRLS_LS|FRLS_FR|FRLS_LS|FRLS_FR|FRLS_LS|FRLS_FR|FRLS_LS|
|DAE|-0.1%|+1%|+1.9%|+1.3%|+0.9%|+1.7%|+1.9%|+0.7%|
|FEA|+0.9%|+1.9%|+0.8%|+0.6%|+0.6%|-0.5%|-0.8%|+1.2%|
|MCL|+1.4%|+0.7%|+1.7%|+2.1%|+1.8%|+0.4%|+1%|+0.2%|
|Res|-0.8%|+0.3%|+0.8%|+0.6%|+1.2%|+0.7%|-1.9%|+0.4%|
|Thr|+0.6%|+0.1%|+0.9%|+1.2%|+2.2%|+3.1%|+2%|+2%| | Summary: This paper proposes a regularization method aimed at enhancing classification performance in signal classification tasks, particularly for those with low signal-to-noise ratios. It achieves this by constraining the model's predictive ambiguity for samples during the task, thereby increasing the inter-class distance between different categories and reducing the intra-class distance within the same category, which in turn improves classification performance. The authors have validated the effectiveness of this regularization on both synthetic and benchmark datasets.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes. The method proposed in this paper is effective in enhancing the performance of classification tasks. The evaluation criteria selected in this paper can effectively distinguish the performance differences between the proposed method and the baseline.
Theoretical Claims: There are no theoretical claims in the paper.
Experimental Designs Or Analyses: I have reviewed the experimental section, including Experiments Settings (4.1),
Comparison with Other Methods(4.2), Generalizability of the Fuzzy Regularization (FR)(4.3), Robustness of the Fuzzy Regularization (FR)(4.4), Training Behaviour Analysis(4.5), Parameter Sensitivity Analysis(4.6).
Supplementary Material: I have reviewed all sections of the supporting materials, including Signal Visualization(A.1), Regular Gradient Problem(A.2), Datasets(A.3), Compared Methods (A.4), Evaluation Metrics(A.5).
Relation To Broader Scientific Literature: It contributes to the field of signal classification by introducing a regularization technique that enhances the reliability of classification tasks. This regularization technique is proposed to address the phenomenon of ambiguity, and its effectiveness has been validated through multi-dimensional experiments.
Essential References Not Discussed: This paper includes key relevant literature to help readers understand the research background and significance of the issue.
Other Strengths And Weaknesses: Strengths:
1. The paper elaborates on the research motivation in detail, with a well-structured and fluently written narrative.
2. The paper discusses the existing issues in automatic modulation classification tasks and provides corresponding solutions.
3. The experimental results consistently demonstrate that models incorporating FR regularization outperform the baseline, highlighting the effectiveness and versatility of this method in signal classification tasks.
Weaknesses:
1. The work appears to bear some resemblance to curriculum learning. Could the authors provide a detailed explanation of the similarities and differences between these two methods?
2. The backward derivation of this regular derivative is not seen in the text, but it is important for the subsequent optimization of the model, and it is hoped that the author can add this part of the content.
Other Comments Or Suggestions: In the appendix, the numbering for Datasets should be A.3 instead of A.4.
Questions For Authors: See weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for professional comments. We have tried our best to address your questions and revised our paper by following suggestions from all reviewers.
**W1: The work appears to bear some resemblance to curriculum learning. Could the authors provide a detailed explanation of the similarities and differences between these two methods?**
RE: This is very good question. There are two differences between FR and curriculum learning. Firstly, the training strategy is different. Curriculum learningis a two-stage learning strategy: learning the easy samples then the difficult samples [1]. FR did not have this strategy and focused on marginal samples(which can be regarded as the difficult samples in course learning) from the beginning. The second and biggest difference is that the both of them utilize different information. Since FR can measure the degree of predictive ambiguity of the samples, it utilizes more information about the predictive distribution of the samples than course learning.
[1] Liu Y, Wang J, Xiao L, et al. Foregroundness-aware task disentanglement and self-paced curriculum learning for domain adaptive object detection[J]. IEEE Transactions on Neural Networks and Learning Systems, 2023.
**W2: The backward derivation of this regular derivative is not seen in the text, but it is important for the subsequent optimization of the model, and it is hoped that the author can add this part of the content.**
RE: In order to facilitate the detailed explanation of the subsequent derivation process, we provide a more detailed additional description of Eqs. (7)(8), which is supplemented as follows:
$$\mathrm{F}(\hat{y}\_{j})=\frac{\sigma}{T(\hat{y}\_{j},\tau)\sqrt{2\pi}}exp\left[-\frac{\sigma^2}{2}log\left(T\left(\hat{y}\_{j},\tau\right)\right)^2\right]$$
$$\mathrm{s.t.}\quad\begin{cases}\sigma=-\frac{1}C\sum\_{j=1}^c\frac{\sum\_{i=1}^{k}\widehat{y}\_{ji}log(\widehat{y}\_{ji})}{log(\tau)}, \\\T(\widehat{y}\_{j},\tau)=\frac{\sum\_{i=1}^{k}\widehat{y}\_{ji}^{2}-k\left(\frac{\sum\_{i=1}^{k}\widehat{y}\_{ji}}{k}\right)^2}{\left(\sum\_{i=1}^{k}\widehat{y}\_{ji}\right)^2-k\left(\frac{\sum\_{i=1}^{k}\widehat{y}\_{ji}}{k}\right)^2}.&&&\end{cases}$$
where $\hat{y}\_{j}$ denotes the jth sample, $\hat{y}\_{ji}$ denotes the probability that the sample belongs to class i, C denotes the sample size of the batch, $\tau$ denotes the $\tau$ classification task and k denotes the selection of the first k values.
1. Calculating derivatives of intermediate variables
1.1 The derivative of $\sigma$ with respect to $\widehat{y}\_{ji}$
$\frac{\partial\sigma}{\partial\hat{y}\_{ji}}=-\frac{1}{\mathrm{Clog}\tau}\cdot\frac{\partial}{\partial\hat{y}\_{ji}}(\hat{y}\_{ji}\mathrm{log}\hat{y}\_{ji})=-\frac{\log\hat{y}\_{ji}+1}{C\log\tau}$.
1.2 The derivative of $T(\widehat{y}\_j,\tau)$ with respect to $\widehat{y}\_{ji}$
Let $S\_1=\sum\_{i=1}^k\hat{y}\_{ji}, S\_2=\sum\_{i=1}^k\hat{y}\_{ji}^2$.
Then $T=\frac{S\_2-\frac{S\_1^2}{k}}{S\_1^2-\frac{S\_1^2}{k}}=\frac{kS\_2-S\_1^2}{(k-1)S\_1^2}$, we let the numerator in $T$ be N and the denominator D.
So $\frac{\partial T}{\partial\hat{y}\_{ji}}=\frac{\partial N/D}{\partial\hat{y}\_{ji}}=\frac{\partial N}{\partial\hat{y}\_{ji}}\cdot\frac{1}{D}-\frac{N}{D^2}\cdot\frac{\partial D}{\partial\hat{y}\_{ji}}$.
1.2.1 The derivative of the molecule N
Due $\frac{\partial S\_2}{\partial\hat{y}\_{ji}}=2\hat{y}\_{ji}, \frac{\partial S\_1}{\partial\hat{y}\_{ji}}=1$. So $\frac{\partial N}{\partial\hat{y}\_{ji}}=k\cdot\frac{\partial S\_2}{\partial\hat{y}\_{ji}}-2S\_1\cdot\frac{\partial S\_1}{\partial\hat{y}\_{ji}}=2k\hat{y}\_{ji}-2S\_1$.
1.2.2 The derivative of the denominator D
$\frac{\partial D}{\partial\hat{y}\_{ji}}=(k-1)\cdot2S\_1\cdot\frac{\partial S\_1}{\partial\hat{y}\_{ji}}=2(k-1)S\_1$.
1.2.3 The final derivative of $T(\widehat{y}\_{j},\tau)$ with respect to $\widehat{y}\_{ji}$
$\frac{\partial T}{\partial\hat{y}\_{ji}}=\frac{2k\hat{y}\_{ji}-2S\_1}{(k-1)S\_1^2}-\frac{(kS\_2-S\_1^2)\cdot2(k-1)S\_1}{(k-1)^2S\_1^4}=\frac{2k(\hat{y}\_{ji}S\_1-S\_2)}{(k-1)S\_1^3}$.
2. Chain rule for derivation
Due $\frac{\partial\ln F}{\partial\hat{y}\_{ji}}=\frac{1}{\sigma}\frac{\partial\sigma}{\partial\hat{y}\_{ji}}-\frac{1}{T}\frac{\partial T}{\partial\hat{y}\_{ji}}-\sigma(\log T)^2\frac{\partial\sigma}{\partial\hat{y}\_{ji}}-\frac{\sigma^2\log T}{T}\frac{\partial T}{\partial\hat{y}\_{ji}}$.
So $\frac{\partial F}{\partial\hat{y}\_{ji}}=F\cdot\frac{\partial\ln F}{\partial\hat{y}\_{ji}}=F(\hat{y}\_j)\cdot\left[\frac{\partial\sigma}{\partial\hat{y}\_{ji}}\left(\frac{1}{\sigma}-\sigma(\log T)^2\right)+\frac{\partial T}{\partial\hat{y}\_{ji}}\left(-\frac{1}{T}-\frac{\sigma^2\log T}{T}\right)\right]$.
**Other Comments Or Suggestions**
We will correct the numbering problem in the appendix. | Summary: This paper proposes a method to improve the reliability of signal classification models by means of fuzzy regularization. Starting from the prediction fuzzy phenomenon, the authors first experimentally prove that the prediction fuzzy phenomenon is a common phenomenon in automatic modulation recognition, and then deeply discuss the impact of this phenomenon on the model performance and propose a corresponding solution, i.e., fuzzy regularization. Finally, the effectiveness and generalization of the method are verified through experiments.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: There are no theoretical claims.
Experimental Designs Or Analyses: I have meticulously reviewed the experimental section in the fourth chapter of the article, and the overall experiments are well-conceived. I have also noted that the authors have ensured consistency across various controllable parameters in the experiments, such as random seeds, which greatly contributes to the fairness of the experiments.
Supplementary Material: I have diligently examined the supplementary materials, which include the visualizations of signals, introductions to the models and datasets, as well as explanations regarding the gradient issues.
Relation To Broader Scientific Literature: The authors have made pivotal contributions to the field of signal classification. They have conducted an in-depth analysis of the ambiguity phenomena in signal classification tasks and proposed an effective solution to address these issues.
Essential References Not Discussed: No significant omissions of relevant literature were detected.
Other Strengths And Weaknesses: Strengths:
1.The overall content of this paper is complete and clear. The author elaborates on the process of identifying problems, exploring issues, solving problems, and verifying the validity of the methodology.
2. The design of the fuzzy regularity proposed in this paper is not complicated, but the overall design is more skillful and at the same time more comprehensively considered. For example, when designing the fuzzy regularity in Section 3.3, the authors consider eliminating the influence of the model predictive distribution on the predicted fuzzy values through normalization.
3.The validity and generalizability of the method were verified by experiments.
Weaknesses:
1.Does FR sharpen misclassified samples?
2. Has the author considered that FR regularization might lead to overconfidence issues?
3.The parameter k in the regularization proposed by the authors is quite significant, but how should I go about selecting this parameter k?
Other Comments Or Suggestions: C in equation (8) does not find a specific meaning in the context.
It is best to cite reference sources for the dataset.
Questions For Authors: Please See Weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for professional comments. We have tried our best to address your questions and revised our paper by following suggestions from all reviewers.
**W1: Does FR sharpen misclassified samples?**
RE: Thanks for your valuable comment. We can suppress this phenomenon by adjusting the hyperparameters $\gamma$. In the training process, the model is trained with the guidence of both the FR and the cross-entropy function. When the FR originally misclassifies the sample, its corresponding sample' prediction probability value becomes sharp. Although this leads to the decrease of the FR value, but at the same time, it will also lead to the increase of the value of the cross-entropy function. Hence, we can inhibit the occurrence of this phenomenon by selecting an appropriate $\gamma$ value. That is to say, the decrease of the FR loss is smaller than the increase of the cross-entropy loss when the wrong prediction becomes sharp.
For the appropriate value, we have pointed out in Section 4.6 of the paper that the proposed model usually performs better when the FR canonical value is two orders of magnitude different from the value of the cross-entropy function.
**W2: Has the author considered that FR regularization might lead to overconfidence issues?**
RE: We thank the reviewer for the insightful comments. The work [1,2] shows that overconfidence mainly targets the following two types of samples. The one is that the model's predicted probability value for the class is very high, but the classification result is wrong. And the other is that the model's predicted probability value for the class is very high, but the actual accuracy is much smaller than the predicted probability value.
Firstly, for the first class of samples, we have explained in W.1 that the generation of this class is suppressed by choosing an appropriate hyperparameter $\gamma$. For the second type of samples, we have introduced an adaptive gradient mechanism in the design of FR. This mechanism ensures that the gradient returned by the FR decreases as the value of the predicted probability increases. As a result, the model's predicted probability value for the sample does not always keep increasing. In summary, FR does not lead to overconfidence issues.
[1] A. Roitberg, et al. Is My Driver Observation Model Overconfident? Input-Guided Calibration Networks for Reliable and Interpretable Confidence Estimates, in IEEE Transactions on Intelligent Transportation Systems, vol. 23, no. 12, pp. 25271-25286
[2] D Wu, et al. The overconfident and ambiguity-averse newsvendor problem in perishable product decision[J]. Computers & Industrial Engineering, 2020, 148: 106689.
**W3: The parameter k in the regularization proposed by the authors is quite significant, but how should I go about selecting this parameter k?**
RE: For the choice of the value of $k$, it is suggested to set the maximal number of confusion categories. This is because if a deterministic class has $m$ similar classes, then the model's output value for these $m+1$ classes is closest. This is helpful for FR to accurately measure the degree of predictive ambiguity of a sample.
In this paper, $k$ is set the number of coding categories of the modulation with the most coding types in the dataset. For example, in a quadrature dataset, the categories are: 'QAM16', 'QAM64', 'QPSK' and 'WBFM'. Of these, QAM has two coding types, while the other modulations each have only one coding type. Therefore, the value of $k$ is initially chosen to be 2.
**Other Comments Or Suggestions**
C represents the number of samples included in a single round of training for the model.
We cite the source of the dataset in Text 4.1Experiments Settings and indicate the download link for the dataset in Appendix A.3. | Summary: This article primarily introduces a fuzzy regularization method applicable to the field of automatic modulation. The authors first measure the prediction ambiguity by an entropy function or a regular function, and then gradually introduced adaptive gradient and exponential normal distribution to further optimize the metrics to design FR. The authors' main contribution is that FR mitigates ambiguities and improves the classification accuracy in the signal classification task, and the method is more effective at low signal-to-noise ratios.
Claims And Evidence: Yes
Methods And Evaluation Criteria: The methods and evaluation criteria proposed in the paper are effective for addressing the intended problems.
Theoretical Claims: The paper does not involve theoretical claims.
Experimental Designs Or Analyses: I have checked all the experiments in the experimental section.
Supplementary Material: I have read all the content in the appendix.
Relation To Broader Scientific Literature: The core contribution of this paper is proposing a novel and effective regularization method for signal classification tasks. The authors elucidate the negative impact of the ambiguity phenomenon on the task and design a corresponding regularization constraint to mitigate this phenomenon, thereby enhancing the performance of the classification task.
Essential References Not Discussed: There are no additional relevant literatures that need to be supplemented.
Other Strengths And Weaknesses: Strengths:
1. This paper focuses on discussing an important issue.
2. The methodology presented in this paper is simple, novel, yet effective.
3. The authors have validated the effectiveness of the method through experiments from multiple perspectives.
Weaknesses:
1. Section 4.5 lacks experimental validation regarding the time taken for a single training round. Fast convergence in terms of training epochs does not necessarily equate to rapid convergence in actual time. It is hoped that the authors can provide more detailed experiments in this aspect.
2. As dataset sizes continue to expand, could this regularization potentially reduce the training efficiency of the model?
Other Comments Or Suggestions: See Weaknesses
Questions For Authors: 1. Section 4.5 lacks experimental validation regarding the time taken for a single training round. Fast convergence in terms of training epochs does not necessarily equate to rapid convergence in actual time. It is hoped that the authors can provide more detailed experiments in this aspect.
2. As dataset sizes continue to expand, could this regularization potentially reduce the training efficiency of the model?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for professional comments. We have tried our best to address your questions and revised our paper by following suggestions from all reviewers.
**W1: Section 4.5 lacks experimental validation regarding the time taken for a single training round. Fast convergence in terms of training epochs does not necessarily equate to rapid convergence in actual time. It is hoped that the authors can provide more detailed experiments in this aspect.**
RE: Thanks for your professional suggestion. The reviewer's concern: although FR converges faster in terms of training epochs; However, if FR brings too much time consumption in a single round, then FR cannot be considered as effective in speeding up the experimental convergence. According to this suggestion, we conduct addtional experiment and record the shortest training time, the longest training time, and the average training time per round for all rounds in the training process. During the experiments we ensured the consistency of the controllable parameters. The experimental results are as follows:
|||Noise2016a_20%|||Noise2016a_40%|||Noise2016a_60%||
|-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
||MIN|MAX|AVG|MIN|MAX|AVG| MIN|MAX|AVG|
|FR|0.75s|2.14s|0.90s|0.76s|1.60s|0.91s|0.75s|1.92s|0.91s|
|Without_FR|0.71s|1.91s|0.89s|0.77s|1.50s|0.92s|0.70s|1.67s|0.80s|
From the experimental result, it can be seen that there is no significant change in the shortest training time, longest training time, and average training time per round whether the FR is added or not. Combined with the fact that FR can speed up the convergence rounds during training as mentioned in the manuscript, it can be shown that FR can speed up the convergence in actual time.
**W2: As dataset sizes continue to expand, could this regularization potentially reduce the training efficiency of the model?**
RE: Thanks for the valuable question. The calculation of FR focuses on $\sigma$ and $T(\hat{y}_{i},\tau)$ and their values are obtained by matrix operations. Therefore, this process does not consume too much time. The experiment further proved that FR does not lead to a slowdown in training efficiency. We conducted experimental analyses on three different sized datasets, and recorded the shortest training time, the longest training time, and the average training time per round to the three datasets of different sizes. The specific experimental results are as follows:
|||2016a(611.23MB)|||2016b(3.26GB)|||2018(19.98GB)||
|-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
||MIN|MAX|AVG|MIN|MAX|AVG| MIN|MAX|AVG|
|FR|0.73s|1.57s|0.85s|3.24s|4.64s|3.59s|93.15s|94.04s|93.42s|
|Without_FR|0.65s|1.57s|0.77s|2.83s|3.98s|3.15s|92.37s|93.19s|92.55s|
The above experimental results show that the average rounds of training time increases by 0.1s and 0.4s on the smaller datasets Data2016a and 2016b, respectively. It is noted that the average time increase is not more than 1s on Data2018a with 20GB, and the increase of the training time for the 200 rounds is not more than three minutes.
While the performance is improved by three percentage points. This is acceptable in the field of signal recognition. | null | null | null | null | null | null |
Distilling the Knowledge in Data Pruning | Accept (poster) | Summary: Dataset pruning is the task of reducing the number of samples within a dataset without impairing accuracy. This article combines dataset pruning with methods inspired by the knowledge distillation (KD) literature by augmenting the training process with soft labels from a teacher network that was trained on the full dataset. As a result, the approach can be used to improve existing pruning methods (higher accuracy with the same number of samples / matched accuracy with fewer samples).
## Update After Rebuttal:
The authors were responsive and addressed my concerns, thus I'm in favor of acceptance.
Claims And Evidence: To my knowledge, the first paper that combines KD with dataset pruning. Most of the empirical results are done on small-scale datasets (CIFAR, SVHN) which is a bit of a drawback. At the same time, pruning experiments are expensive, and not all academic labs have the resources to run dozens / hundreds of ImageNet training runs, thus I don't think this should be held against the paper.
The empirical results are strong; KD leads to strongly improved accuracies and can be combined with existing pruning methods, which is nice. KD of course comes with the limitation that one needs to train a full model on the entire dataset first; this is definitely a limiting factor for some scenarios but nonetheless, if one would like to run lots of ablations / variants one could train the full model once, and then run the ablations on a smaller subset of the dataset, thus there are certain cases where KD can be helpful and still save compute.
Experimentally, **my biggest concern is that results are compared against the baseline of "100% data, no KD". If KD is proposed as part of the training procedure, shouldn't a stronger baseline be the case of "100% data, KD"?** If so, which methods / pruning metrics still maintain accuracy w.r.t. this stronger baseline?
Methods And Evaluation Criteria: The benchmark datasets and comparison metrics are standard in the literature (accuracy with varying pruning fractions). The core idea - combining KD with dataset pruning - is compellingly simple and straightforward.
Theoretical Claims: I did not carefully assess the theory since dataset pruning is very much an empirical field; while a theoretical motivation is great the ultimate test that decides whether a dataset pruning metric will be useful are experimental results.
Experimental Designs Or Analyses: I checked the soundness and validity of the experimental design. Overall it is sound; the two downsides to the employed approach are that (1.) a teacher [trained on the full dataset] is needed; (2.) the approach introduces an additional hyperparameter $\alpha$ controlling the loss weight (cf. Figure 6). However, none of those two downsides invalidate the approach.
Supplementary Material: I skimmed the empirical part of the supplementary material, including experimental details and statements related to reproducibility (e.g. where the scores are from).
Relation To Broader Scientific Literature: To my knowledge, this is the first paper that combines KD with dataset pruning. The focus is on pruning, while the approach is based on KD. I appreciate the transfer / bridging of those two related subfields.
Essential References Not Discussed: N/A.
Other Strengths And Weaknesses: The paper is well written overall, accessible, and clear. Figure 1 is a great overview; I found in particular subfigure 1c intriguing since it's a very systematic exploration, with a clear (and at first sight, counter-intuitive) result.
Other Comments Or Suggestions: - Would recommend a pass checking for \cite{}, \citet{}, \citep{} differences - e.g. when talking about a specific paper by Authors (YYYY) this should not be cited as (Authors, YYYY) in the text.
- Nit: Figure 5 has a different plotting style compared to other figures (e.g. grid). Furthermore, I would suggest to use a sequential color palette for Figure 5 due to the nature of the sequential data.
Questions For Authors: Major:
- [copied from above] Experimentally, my biggest concern is that results are compared against the baseline of "100% data, no KD". If KD is proposed as part of the training procedure, shouldn't a stronger baseline be the case of "100% data, KD"? If so, which methods / pruning metrics still maintain accuracy w.r.t. this stronger baseline?
Minor:
- Given that the authors mentioned that they used several metrics from https://github.com/rgeirhos/dataset-pruning-metrics, why not the SSL prototypes metric (which is one of the best-performing ones)?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely thank **Reviewer xePE** for the thoughtful reading, positive feedback and appreciation for the novelty and simplicity of our proposed approach. Below, we kindly address each of the reviewer's questions and concerns:
**1. Comparison with a Stronger Baseline (100% Data, KD)**
We thank the reviewer for this question, and kindly note that the baseline they refer to (100% data, KD) is included already in the paper. As part of our CIFAR-100 and ImageNet experiments (figures 3a, 4a, 4b), this baseline is represented as the joint plot point for all experiments which utilize $f=1$ with KD (and regardless of the pruning method). We emphasize that this plot point is joint for all KD experiments regardless of the pruning method, since when the entire dataset is utilized ($f=1$) no pruning is being conducted, and hence the selected pruning approach has no effect. Following the reviewer’s suggestion, we will include the exact accuracy numbers in the main paper.
In addition, as can be observed in the aforementioned figures, performance can be mostly preserved w.r.t to this strong baseline across several pruning factors when using several pruning methods with KD. For example, on CIFAR-100, performance is mostly preserved when combining most pruning methods with KD and retaining only 60% of the data.
Finally, we note that this stronger baseline is not shown for the SVHN and CIFAR-10 experiments (figures 3b, 3c), since, as can be observed in the aforementioned figures, model performance gets saturated very quickly on these datasets. Hence, performance on higher pruning factors was omitted for these datasets for visualization purposes.
**2. Comparison to the SSL Prototypes Method [1]**
While we haven’t compared our approach to this specific method, we have recently compared it with other (more recent) approaches, namely: **Moderate-DS [2]**, **D2 [3]**, and **DUAL [4]**. The comparison results can be viewed in this figure:
[experiments_recent_pruning_methods.png](https://ibb.co/xKB0F9hq)
As depicted in the figure, all pruning methods (including the recent ones) benefit from the incorporation of KD in the training process. In addition, it can be seen that on low pruning fractions, random pruning + KD maintains its edge over all other methods, except for D2 + KD which achieves a similar performance. However, contrary to D2 which requires careful tuning of several hyperparameters (k,$\gamma_{r}$,β) for each pruning fraction and for each dataset, our method (simple random pruning + KD) has no such requirements and is hence more practical and easier to adopt for real-world use cases.
**3. Citation Formatting Issues**
We thank the reviewer for bringing these issues to our attention. These issues will be corrected in the final version of the paper.
**4. Figure 5 Style**
Following the reviewer’s suggestion, we have revised Figure 5 by using a sequential color palette and adjusting the figure style to match the other figures in the paper. The revised figure can be observed here:
[updated_figure_5_accuracy_vs_teacher_data_fraction.png](https://ibb.co/cXCVqZnK)
We thank the reviewer for this comment.
$~$
**[1]** Beyond neural scaling laws: beating power law scaling via data pruning, NeurIPS, 2022.
**[2]** Moderate Coreset: A Universal Method of Data Selection for Real-world Data-efficient Deep Learning, ICLR, 2023.
**[3]** D2 Pruning: Message Passing for Balancing Diversity & Difficulty in Data Pruning, ICLR, 2024.
**[4]** Lightweight Dataset Pruning without Full Training via Example Difficulty and Prediction Uncertainty, ArXiv, 2025.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for taking the time to respond.
I appreciate the broadening to more comparison methods (2) and the smaller changes (3) and (4).
Regarding the the baseline (1), sorry if my initial review wasn't clear in this regard. I understand that the datapoint is shown in Figures 3a,4a,4b already; my suggestion/concern was that this datapoint should be the reference point against which performance is compared. Currently, the horizontal dashed line in those plots (which serves as the visual baseline for comparison) corresponds to 100% data, teacher accuracy (*without* KD) which I find less than ideal; I'm suggesting to change this dashed line to the more appropriate reference of 100% data *with* KD. Correspondingly, this also affects the description of results in the paper.
As a separate comment, others (in a discussion not visible to the authors) have pointed out that there is some prior work connecting pruning to distillation, such as:
Moser, Brian B., et al. "Distill the Best, Ignore the Rest: Improving Dataset Distillation with Loss-Value-Based Pruning." arXiv preprint arXiv:2411.12115 (2024).
Sundar, Anirudh S., et al. "Prune then distill: Dataset distillation with importance sampling." ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2023
I should have done a more thorough literature search for my initial review which was based on the assumption that combining pruning and distillation techniques is a novel combination; given that there is some prior work in this space I'm still in favor of acceptance (in light of interesting and strong experimental results) albeit a bit less enthusiastically than before. I would encourage the authors to broaden their discussion of related work (including e.g. works that combine dataset distillation with pruning methods).
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their thoughtful response and truly appreciate the time and effort invested in evaluating our submission.
**1.**
We now understand your request and thank you for the clarification. We will certainly include the horizontal line representing the case of 100% of the data with KD in both the figures and the corresponding text. We agree with the reviewer’s observation that this representation would serve as a more appropriate baseline. In our current figures, the dashed horizontal line was set to reference the teacher accuracy. However, we acknowledge that using the case of 100% data with KD as the baseline would be more informative and relevant, as suggested by the reviewer. We will make the necessary adjustments accordingly.
**2.**
Please note that the mentioned papers [1], [2] focus on dataset distillation, which, while related, fundamentally differs from data pruning or knowledge distillation. Dataset distillation aims to generate a compact set of synthetic samples (e.g., images) using optimization techniques such as trajectory matching or distribution matching. In contrast, our work takes a different approach by integrating knowledge distillation into the classification loss when training on a pruned dataset, utilizing the soft predictions provided by a teacher model.
While we have already discussed the connections and differences between dataset distillation and dataset pruning in the related work section, we appreciate the reviewer’s suggestion. We will enhance our discussion to include additional works that address both dataset distillation and pruning, providing a more comprehensive context for our approach.
In addition, we believe that our work’s novelty lies in utilizing knowledge distillation within data pruning while making several valuable and intriguing observations, and providing theoretical motivation. To the best of our knowledge, these insights have not been previously explored. Specifically, we demonstrate that in the presence of KD, model accuracy remains robust regardless of the data pruning method employed. This finding has practical implications, as it suggests that simple random pruning, when combined with KD, is a viable alternative to more sophisticated pruning methods. Interestingly, we also observe that increasing the teacher model size can lead to a decrease in accuracy when dealing with small pruning fractions.
We hope this response clarifies our approach and addresses the reviewer’s concerns. Once again, we thank the reviewer for the constructive feedback, which will undoubtedly help improve the quality of our work.
**[1]** Moser, Brian B., et al. “Distill the Best, Ignore the Rest: Improving Dataset Distillation with Loss-Value-Based Pruning.” arXiv preprint arXiv:2411.12115 (2024).
**[2]** Sundar, Anirudh S., et al. “Prune then distill: Dataset distillation with importance sampling.” ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2023 (edited) | Summary: The key contribution of this paper is to show that using soft predictions from teachers trained on complete data combined with prune data improves the accuracy of students trained on pruned data. The authors conduct a series of experiments with ResNet variations on small datasets, including CIFAR-10, CIFAR-100, and SVHN, to demonstrate this. The results clearly support the authors' claims.
The paper is easy to follow. The experiments support the claim that using knowledge distillation (KD) improves the performance of students trained on pruned datasets. The authors also provide theoretical motivation for why KD of teacher trained on complete data is helpful.
Claims And Evidence: The claim in the paper was supported with experiments, however I have one main concern as below.
Methods And Evaluation Criteria: While the paper presents some observations that I have not seen before, my main concern is the practicality of the setup. If the teacher has already been trained on complete data, what is the value of using pruned data to train the student, given that most of the knowledge can come from the teacher? It seems to me that this approach is only useful if the pruned data is not a subset of the complete dataset or if the authors can demonstrate the value of using pruned data in this setup. Could the authors provide either comparison as follows?
1. Conduct similar experiments to show improvements when the pruned data is not a subset of the complete dataset used to train the teacher.
2. Or provide results where the student is trained solely using the teacher’s soft predictions, without pruned data (I assume this corresponds to f = 0?), as a baseline. This would help demonstrate the value of using pruned data to train the student.
The temperature parameter is quite important when training with knowledge distillation (KD). Therefore, I suggest that the authors include ablation studies on temperatures in these experiments to ensure the best temperature is selected for fair comparisons.
Because of this, I tend to rate it as weak reject but happy to increase the rating if my concern is addressed.
Theoretical Claims: The theoretical proof sounds and seems supports the the claim, but I have not check the correctness of the proofs.
Experimental Designs Or Analyses: See above.
Supplementary Material: No reviewing SM
Relation To Broader Scientific Literature: The paper might have the small impacts
Essential References Not Discussed: n/a
Other Strengths And Weaknesses: n/a
Other Comments Or Suggestions: see above
Questions For Authors: see above
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for their valuable feedback and for highlighting the strengths of our work. We appreciate the constructive suggestions and are committed to addressing the concerns raised.
**Experimental Results with Disjoint Datasets**
Following the reviewer's suggestion, we conducted experiments to evaluate the case where the pruned data is not a subset of the complete dataset used to train the teacher.
Let $\mathcal{P}$ be a pruned dataset sampled from $\mathcal{D}$ to train the student model, and let $\mathcal{S}$ be the training data for the teacher. In the following experiments, $\mathcal{D}$ and $\mathcal{S}$ are disjoint i.e., $\mathcal{P} \cap \mathcal{S} = \emptyset$.
For the empirical study, we used 70% of the training data to train the teacher and the remaining 30% to train the student with different pruning ratios. Specifically, we compared the performance with and without knowledge distillation (KD) for CIFAR-100 and SVHN datasets.
The experimental results can be viewed in the following link:
[Experimental_Results_with_Disjoint_datasets.png](https://ibb.co/dwP4fDyf)
Notably, combining knowledge distillation (KD) with data pruning yields significant performance gains, even when the student is trained on a pruned dataset that differs from the teacher's training data. For instance, in CIFAR-100 with random pruning at $f=50$%, we observe a 14.5-point accuracy improvement when the teacher model was trained on a different subset.
We are happy to include these experimental results in the main paper. We believe that these findings further support our proposed approach, particularly in the context discussed at the end of Section 3.1, namely use cases where the full dataset is no longer accessible (e.g. due to privacy concerns).
**Ablation Studies on Temperature**
Following the reviewer's suggestion to include ablation studies on temperature, we kindly refer to Figure 10 in the appendix, where we present accuracy results for various temperature values across different pruning fractions and architectures.
We hope our response addresses your concerns, and we would greatly appreciate it if you could consider raising your rating of our work.
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for the rebuttal. It has addressed my concerns in the review, so I am increasing my rating to Weak Accept from my initial rating of Weak Reject.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their willingness to raise the score. | Summary: The paper explores the use of knowledge distillation (KD) for enhancing training on pruned datasets, demonstrating that simple random pruning with KD can achieve superior accuracy compared to recent data pruning methods. The work also reveals that, when using teachers with smaller capacities, the student can be more beneficial in low pruning fractions. The study provides theoretical motivation and empirical evidence, showing that KD helps mitigate the impact of label noise and improve accuracy.
Claims And Evidence: The main claim of this paper is that a simple random pruning method, when combined with knowledge distillation (KD), outperforms all standard data pruning algorithms that rely on hard labels, and this claim is supported by clear and convincing evidence.
Methods And Evaluation Criteria: The proposed method is technically sound, and the extensive experiments on multiple datasets including ImageNet demonstrate its superiority.
Theoretical Claims: The theoretical motivation in Section 3.3 explains the benefits of using self-distillation for enhancing training on pruned data. The authors show that self-distillation using a teacher trained on a larger dataset can reduce the bias of the student's estimation error using the context of regularized linear regression.
Experimental Designs Or Analyses: The experiment setup is extensive and well-constructed.
Supplementary Material: The reviewer has read all the contents in the supplementary material, while not carefully checked the correctness of the proof.
Relation To Broader Scientific Literature: The paper successfully extends the data pruning framework with KD, and its empirical strength promises to offer new perspectives and directions in data pruning research. Most importantly, a major strength lies in its ease of implementation.
Essential References Not Discussed: While the paper introduces a simple yet effective approach in data pruning literature, several of recent works are not covered, including:
- Moderate Coreset: A Universal Method of Data Selection for Real-world Data-efficient Deep Learning, ICLR, 2022
- Robust Data Pruning under Label Noise via Maximizing Re-labeling Accuracy, NeurIPS, 2023
- D^2 Pruning: Message Passing for Balancing Diversity & Difficulty in Data Pruning, ICLR, 2024
- Lightweight Dataset Pruning without Full Training via Example Difficulty and Prediction Uncertainty, ArXiv, 2025
Other Strengths And Weaknesses: Strength
- The proposed method is novel, simple, and effective
- Extensive experiments support the empirical strength of the proposed algorithm
- The paper is well-written and easy-to-follow
Weakness
- The reviewer's main concern is the missing of the several important works (in review and experiments as baselines)
Other Comments Or Suggestions: Can the idea of using knowledge distillation for data pruning also be applied to data pruning for vision-language models (VLM) [1] or large language models (LLM) [2]?
[1] Too Large; Data Reduction for Vision-Language Pre-Training, ICCV, 2023
[2] Perplexed by Perplexity: Perplexity-Based Data Pruning With Small Reference Models, ICLR, 2025
Questions For Authors: See other review sections above
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank **Reviewer RfVw** for their careful reading, thoughtful remarks and the positive feedback. In addition, we appreciate their acknowledgement of the novelty, simplicity and effectiveness of our proposed method.
Below, we kindly address each of the reviewer's questions and concerns:
**Comparison with More Recent Approaches**
We appreciate the reviewer's suggestion to compare our proposed method with more recent data pruning approaches. Following their suggestion and given the time constraints, we have recently conducted additional experiments to compare our method on CIFAR-100 with 3 out of the 4 methods proposed by the reviewer, namely:
- **Moderate-DS [1]**
- **D2 [2]**
- **DUAL [3]**
For a fair comparison of these methods with our method, we have utilized the official implementation of each one to generate the pruning scores using a common ResNet34 architecture. In addition, all of the necessary hyper-parameters were taken from the respective paper and/or supplementary material of each method.
The results of this experiment can be viewed in the following link:
[experiments_recent_pruning_methods.png](https://ibb.co/xKB0F9hq).
As can observed, all pruning methods (including these recent ones) benefit from the incorporation of KD in the training process. In addition, it can be seen that on low pruning fractions random pruning + KD maintains its edge over all other methods, except for D2 + KD which achieves a similar performance. However, contrary to D2 which requires careful tuning of several hyperparameters (k,$\gamma_{r}$,β) for each pruning fraction and each dataset, our method (simple random pruning + KD) has no such requirements and is hence more practical and easier to adopt for real-world use cases.
We note that the last method the reviewer has referred to **[4]** is focused on the problem of *data pruning with re-labeling*. This sub-task of data pruning attempts to re-label noisy samples in a given dataset, and then select the subset of the data with the most accurate re-labeling of erroneous labels. Hence, all experiments in that work are strictly conducted on noisy variants of traditional datasets (e.g., CIFAR-10N, CIFAR-100N), which makes comparisons of that method with traditional data pruning approaches non-trivial.
Finally, we note that we have incorporated all pruning methods suggested by the reviewer into the Related Works section.
**Discussion: Application of Our Method to LLMs and VLMs**
We thank the reviewer for this keen observation. For simplicity, all experiments in our paper were conducted on the image classification task in the vision domain. However, we agree that our proposed method (utilizing KD with data pruning) has great potential to be utilized in other domains, like NLP or multimodal domains. This is especially true nowadays since with the recent rise of AI-based methods over the last few years, and the high training costs that came along with them, developing more efficient training algorithms have become more important than ever. For example, we believe that it's worth exploring whether one can reduce the training costs of a given LLM (while maintaining a certain level of accuracy), by training it on a carefully pruned corpus of data with additional guidance from the logits of an informed teacher (another LLM trained on a larger corpus). We hope our work here, both empirical and theoretical, will incentivize other researchers to explore such intriguing directions in the future.
**[1]** Moderate Coreset: A Universal Method of Data Selection for Real-world Data-efficient Deep Learning, ICLR, 2023.
**[2]** D2 Pruning: Message Passing for Balancing Diversity & Difficulty in Data Pruning, ICLR, 2024.
**[3]** Lightweight Dataset Pruning without Full Training via Example Difficulty and Prediction Uncertainty, ArXiv, 2025.
**[4]** Robust Data Pruning under Label Noise via Maximizing Re-labeling Accuracy, NeurIPS, 2023. | Summary: This article explores the use of knowledge distillation (KD) to improve model performance when training on pruned datasets. The authors investigate how transferring soft predictions from a teacher model can eliminate the accuracy loss caused by aggressive data pruning.
Claims And Evidence: Yes, the article's claims are well-supported by theoretical analysis and extensive empirical results. Below, I assess key claims and their corresponding evidence:
1. Self-Distillation Reduces Bias in the Student Model
Claim: Training a student with a teacher model trained on a larger dataset reduces the bias in the student’s estimation error.
Evidence: The authors provide a mathematical derivation (Theorem 3.1) showing that self-distillation decreases bias.
Empirical results (Figure 5) demonstrate that increasing the teacher’s data fraction (𝑓𝑡) consistently improves student accuracy, supporting the claim.
2. Knowledge Distillation Improves Accuracy Across Different Pruning Methods and Levels
Claim: Incorporating KD consistently improves accuracy, even when training on heavily pruned datasets.
Evidence: Figures 3 & 4 show that models trained with KD outperform those without KD across all pruning methods and datasets.
Results indicate that models trained on only 10%-50% of the data with KD achieve accuracy comparable to full-data training.
In high-pruning scenarios, KD leads to 17-22% accuracy improvements over standard training (especially on CIFAR-100 and SVHN).
3. Random Pruning + KD Matches or Outperforms Sophisticated Pruning Methods
Claim: Using KD with simple random pruning can achieve accuracy comparable to or better than advanced score-based pruning methods.
Evidence: Figure 4 shows that, at high pruning levels, random pruning with KD outperforms sophisticated methods like GraNd, EL2N, and Forgetting. The authors argue that aggressive pruning can accidentally retain noisy samples, making structured pruning less effective than random selection when combined with KD.
4. The Optimal KD Weight (𝛼) depends on the Pruning Level
Claim: The effectiveness of KD depends on the balance between the pruning factor (𝑓) and KD weight (𝛼).
Evidence: Figure 6 shows that at lower pruning fractions (𝑓≤0.1), higher 𝛼 values yield better accuracy.
The authors explain that strong pruning increases label noise, and relying more on teacher predictions (higher 𝛼) helps reducing this.
As pruning becomes less aggressive, lower 𝛼 values work better since the dataset becomes cleaner.
Methods And Evaluation Criteria: The methods and evaluation criteria used in the paper are appropriate for the problem of training models on pruned datasets with knowledge distillation (KD).
Theoretical Claims: The article presents a theoretical foundation for self-distillation in pruned data settings, but some claims lack direct empirical validation and broader generalization. While Theorem 3.1 suggests that training a teacher on a larger dataset reduces estimation bias, its proof is moved to the supplementary.
Additionally, the assumption of Gaussian noise and independent samples may not hold in real-world data distributions. The claim that KD significantly enhances generalization in low-data regimes is well-supported by empirical results but lacks a novel theoretical derivation beyond prior work. Providing explicit bias-variance decomposition experiments and relaxing assumptions in the theoretical analysis would be needed to verify the paper’s contributions.
Experimental Designs Or Analyses: The experimental design is comprehensive, evaluating knowledge distillation (KD) under different pruning levels, datasets, and teacher-student configurations. The use of multiple pruning methods (e.g., forgetting, GraNd, EL2N, and random pruning) and four datasets (CIFAR-10, CIFAR-100, SVHN, and ImageNet) strengthens the validity of the results.
However, some concerns remain:
1. The study shows that random pruning can outperform more sophisticated score-based methods at high pruning levels, but it does not deeply analyze why this occurs. Additional analysis on the types of samples retained by each method is required.
2. The study tests multiple temperature values but it does not thoroughly analyze the trade-offs between different temperatures in varying pruning conditions.
3. The claim that high-capacity teachers harm students at low pruning factors is interesting but would benefit from an ablation study isolating architecture size from other factors (e.g., regularization or training dynamics).
4. Table 1 does not contain any recent SOTA pruning method. The recent-most method compared with is from 2019.
Supplementary Material: Yes.
Relation To Broader Scientific Literature: The paper builds on prior work in knowledge distillation (KD) and data pruning, particularly studies on model compression and sample selection.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: Strengths:
The paper presents a well-motivated exploration of knowledge distillation (KD) in pruned data settings, providing both theoretical and empirical support for its claims. The experimental results are thorough, spanning multiple datasets, pruning strategies, and teacher-student configurations.
Weaknesses:
1. Theoretical claims, while sound, rely on assumptions (e.g., Gaussian noise, linear regression framework) that may not generalize to more complex deep learning models.
2. I can infer that some empirical findings, particularly regarding teacher capacity, lack deeper analysis on why larger teachers degrade performance under extreme pruning.
3. The article could improve clarity by providing more intuition behind key mathematical results, and certain methodological details (e.g., hyperparameter tuning of KD temperature) remain underexplored.
Other Comments Or Suggestions: 1. The diagram in fig. 1 is over simplistic for an ICML submission.
2. The capacity gap problem is an interesting finding, but further analysis (e.g., on feature representations or optimization dynamics) could provide deeper insights into why larger teachers degrade performance in extreme pruning scenarios.
3. The choice of KD temperature significantly impacts results, yet details on tuning and sensitivity analysis are limited.
4. Suggest experimenting with some DNNs beyond the ResNet and Wide ResNet family.
Questions For Authors: 1. Given that pruning methods tend to retain hard (and potentially mislabeled) samples, have you analyzed how knowledge distillation interacts with label noise? Could a poorly trained teacher amplify errors instead of eliminating them?
2. Your experiments focus on image classification tasks. Do you expect similar improvements in other domains (e.g., NLP, structured data)? Have you tested the method on regression or reinforcement learning tasks?
3. Given the computational cost of training multiple models (teacher and student) on different dataset fractions, how does this approach compare in terms of efficiency versus direct training on full or pruned datasets? Have you considered any strategies to reduce the added computational burden in real-world applications?
4. Discuss about the limitations of this method and future works to be done.
Ethical Review Concerns: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank the reviewer for the comprehensive and constructive feedback and for highlighting the strengths of our paper. We appreciate the insightful questions and will make efforts to address the reviewer’s concerns.
**Incorporating Recent Pruning Methods**
Following the reviewer’s suggestion, we have included three recent data pruning methods:
* Moderate Coreset: A Universal Method of Data Selection for Real-world Data-efficient Deep Learning (ICLR, 2023)
* D² Pruning: Message Passing for Balancing Diversity & Difficulty in Data Pruning (ICLR, 2024)
* Lightweight Dataset Pruning without Full Training via Example Difficulty and Prediction Uncertainty (ArXiv, 2025)
Accuracy results are available at the following link:
[Recent_pruning_methods](https://ibb.co/xKB0F9hq)
As observed, incorporating KD during training with data pruning also improves accuracy in these three pruning approaches. Also, please note that GraNd and EL2N were published in 2021, while memorization was [first applied](https://arxiv.org/abs/2206.14486) to data pruning in 2022.
We are happy to include these three additional baselines in the main paper (please see our response to Reviewer RfVw).
**On the Use of Simplified Assumptions in Theoretical Analysis**
We acknowledge that the assumptions made in our theoretical section (Gaussian noise, independent samples) may appear limiting when considering complex real-world scenarios. However, our goal was to provide a clear and tractable analysis that offers foundational insights into the behavior of self-distillation in pruned data settings. Employing simplified assumptions, such as linear regression frameworks, is a common practice in theoretical investigations within machine learning, enabling the derivation of interpretable results. For instance:
* Understanding the Gains from Repeated Self-Distillation (NeurIPS 2024)
* A Theoretical Analysis of Fine-tuning with Linear Teachers (NeurIPS 2021)
* Towards Data-Algorithm Dependent Generalization: a Case Study on Overparameterized Linear Regression (NeurIPS 2023).
Following the reviewer's suggestion, we will include a discussion on these methodological limitations and potential extensions in a dedicated section on limitations and future work.
Also, please note that the proof for Theorem 3.1 was moved into the supplementary due to lack of space.
**On Why Larger Teachers Degrade Performance under Extreme Pruning and Impact of KD Temperature**
We thank the reviewer for the opportunity to clarify this observation. Below, we provide our intuition based on our current understanding.
We hypothesize that learning complex decision boundaries becomes increasingly challenging when the number of samples is significantly reduced. Consequently, when training a student model on a pruned dataset with a high pruning ratio, smaller teachers tend to be more beneficial. Larger teachers, while capable of handling hard samples, can cause the student to focus on these difficult instances, which are inherently hard to resolve given the limited data. In contrast, smaller teachers are more tolerant to errors from hard samples and guide the student toward more manageable patterns.
While increasing the temperature can soften the teacher's output, the softmax temperature is a global transformation that affects all logits simultaneously, and thus may not sufficiently reduce the impact of complex decision boundaries imposed by larger teachers.
We illustrate this in the following figure:
[Teacher_capactity_accumulated_predictions.png](https://ibb.co/WNGK191R)
**Details on Tuning KD Temperature and Sensitivity Analysis**
Following the reviewer's suggestion to analyze the KD temperature, we kindly refer to Figure 10 in the appendix, where we present accuracy results for various temperature values across different pruning fractions and architectures.
**Additional Questions**
1. Pruning methods can retain hard or noisy samples, leading to poor student performance (see Section 3.2). However, training a teacher on a (noisy) pruned dataset is not recommended, as it can result in a poorly trained teacher that degrades student’s performance.
2. We expect our approach to generalize to other domains. Data pruning with KD is promising for NLP tasks, and while not tested on regression, we believe it could be suitable. As for RL, further investigation is needed.
3. Data pruning reduces computational burden, e.g., in HPO or Active Learning, training many models on a small portion of the data with soft predictions from a full-data-trained teacher maintains high accuracy while lowering computational costs.
4. As suggested, we will include a section on the method's limitations and future work.
Please let us know if you'd like us to elaborate on any point, as this grants 5,000 more characters.
We hope our response clarifies your concerns, and we would appreciate it if you could consider increasing your score. | null | null | null | null | null | null |
Pareto Merging: Multi-Objective Optimization for Preference-Aware Model Merging | Accept (poster) | Summary: The authors use multi-objective optimization to create a framework for users to give preferences to which tasks are important to them when merging models. Given a merged model, they build another PEFT architecture that takes preference weights as input This new PEFT is trained either data-free or using unlabeled data. Experiments are done on toy examples and merging 8 image data sets.
Claims And Evidence: The results do show improvements based on Pareto Merging, but I think some of what is demonstrated by Table 2 is problematic. It sticks out that Task Arithmetic and Task Artihmetic+PM(equal) perform almost identical. The weights in Task Arithmetic and other merging algorithms can also be selected to prioritize a single task in similar way that the preference vector is set for PM.
There is also missing insight. Why does AdaMerging+PM(equal) significantly outperform AdaMerging? The framework adds an additional PEFT module on top, so there is more complexity in the model. This also requires additional training and it is not clear how much training is needed. For an 8-dimensional preference vector that is sampled once per batch, it would seem that would require many epochs to sample the space of preference vectors. I didn't find any information about how many epochs are necessary in the Appendix.
An additional problem is that Table 6 in the Appendix using ViT-L/14 does not confirm the same significant outperformance claimed in Table 2. The main difference I see is that the individual models with ViT-L/14 significantly outperform the ViT-B/32 models (on average). And with better models, all merging models perform much better and the benefit of PM is not nearly as apparent. With a smaller benefit, the additional training requires more discussion.
Methods And Evaluation Criteria: Most evaluations make sense, but additional evaluations does Task Arithmetic and other merging methods by using the weights to assign priority would help see if the preference gives a benefit.
Theoretical Claims: NA
Experimental Designs Or Analyses: No issue
Supplementary Material: I reviewed the training details and additional results.
Relation To Broader Scientific Literature: The contributions attempt to improve the state of the art in the literature.
Essential References Not Discussed: No
Other Strengths And Weaknesses: The main strength is that users can give preference to different models with very little overhead in terms of memory. But again, a weakness is that this could involve a lot of computational resources in terms of training. Furthermore, when considering priority, it is not clear when one would have priorities in image classification. If we have a priority in image classification, we would best served to use the individual model. The authors need to discuss when this is applicable in model merging for images. I can see where this would be useful for LLMs with model alignment but that is left for future work in the Conclusion. The approach with sampling preference vectors requires more discussion and analysis. What is the insight for optimizing the average loss with respect to preference vectors? See above for additional weaknesses in Claims and Evidence.
Other Comments Or Suggestions: See above
Questions For Authors: See above
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for your valuable suggestions. Below, we provide a detailed response to each point.
---
**R1: Direct incorporation of preference to Task Arithmetic and AdaMerging**
In Section 3, we frame model merging as a MOO problem and propose both straightforward methods (Section 3.1) and our Pareto Merging (PM) approach (Sections 3.2-3.3). While straightforward methods incorporate preferences, they have significant limitations. Our PM method is more parameter-efficient and computationally effective, achieving better performance trade-offs.
Following your suggestion, we tested Task Arithmetic + Preference and AdaMerging + Preference. Results (available at https://anonymous.4open.science/r/ICML-0464/) show:
* Task Arithmetic + Preference has 113.4M parameter overhead and some extreme solutions obtained are typically undesirable for users. Our Task Arithmetic + PM approach with the default configuration (low-rank tensor applied only to attention layers) achieves reasonable trade-offs with 0.6M parameter overhead. Extending low-rank tensors to both attention and MLP layers shows better performance with only 2.1M parameters. Users can adjust settings to balance solution diversity.
* AdaMerging + PM (0.6M parameter overhead) achieves better trade-offs than AdaMerging + Preference (113.4M parameter overhead). Moreover, our approach requires only a single run (0.28 GPU hours), whereas AdaMerging + Preference demands 11 runs (1.65 GPU hours).
We will include the above disscussion in the final version.
**R2: Task arithmetic and task arithmetic+PM(equal) perform almost identical, while AdaMerging+PM(equal) significantly outperform AdaMerging**
Task arithmetic is data-free merging, so task arithmetic+PM(equal) has no additional information, resulting in similar performance. AdaMerging+PM leverages unlabeled data, allowing PM to better resolve task conflicts. This pattern of improvement with data incorporation appears consistently across various compression techniques, such as merging, pruning, and quantization.
**R3: How much training is needed**
We apologize for the omission of detail. We use 2,000 gradient steps. As shown in Table 1, the additional training time is small.
**R3: ViT-L/14 results**
The improvement needs to be considered with respect to traditional MTL, which serves as a performance upper bound. As can be seen from Table 6, with ViT-L/14, AdaMerging already achieves 90.8% (vs. 80.1% with ViT-B/32), naturally limiting improvement margins. Still, AdaMerging+PM (priority) achieves a significant 1.3% improvement over AdaMerging. This smaller margin reflects approaching optimal performance rather than method limitations.
**R4: Computational overhead** As shown in Table 1, our computational overhead during training is small.
for data-free merging (with task arithmetic), the overhead is 0.04 GPU hours; whereas for merging based on unlabeled data (with AdaMerging), the overhead is 0.13 GPU hours.
**R5: Priorities in Image Classification and Extension to LLM**
There seems to be a misunderstanding. Setting priorities does not mean that only one task is important while all other tasks are unimportant. Rather, it means that the model should excel at a specific task while still achieving acceptable performance on others. Maintaining separate models for different preferences is inefficient in terms of parameters, as it requires storing and managing multiple full models.
Our Pareto Merging enables a single model to excel at prioritized tasks while maintaining reasonable performance across all tasks, with minimal parameter overhead. Additionally, our method supports flexible adjustment of prioritized tasks. We recognize the exciting potential of this approach for LLMs and are actively working on extending our algorithm to these models. However, due to time constraints, we can only include this extension in the final version of the paper.
**R5: Preference Vector Sampling** Thanks for your question. We optimize the expected loss across the entire preference space to account for all possible user preferences. Different preferences weight the parameters in the low-rank tensor differently, enabling the model to capture diverse characteristics. While our method already demonstrates strong performance, we acknowledge that developing more effective preference vector sampling techniques is an interesting future work that could further enhance efficiency.
---
We hope the above responses address your concerns. If you have any additional questions or suggestions, we are more than happy to discuss them further.
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal. I agree with some points such as R2 and R6. But a main sticking point is R5 which cannot be remedied without experiments. I do not see why this was done with the task of image classification in mind rather than for LLMs. I will maintain my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your reply. We are happy to know that some of your concerns are addressed. In response to your question on image classification, we have added a new experiment detailed below.
First, recall that model merging is particularly useful in scenarios with limited computational resources, where the traditional approach of storing/running multiple task-specific models is impractical. In this new experiment, we consider three models that are independently fine-tuned on the GTSRB, RESISC45, and SVHN datasets, respectively.
Suppose that a particular user has preference (or "priority" in our context) [GTSRB: 0.6, RESISC45: 0.2, SVHN: 0.2] (i.e., the user puts 60\% importance on the accuracy on GTSRB, and 20\% importance on each of RESISC45 and SVHN). Importantly, *priority does not mean ignoring lower-weighted tasks*. This setting is often encountered in the real world as users typically aim to obtain a **multi-task** model that prioritizes over certain tasks *while still achieving reasonably good performance on the other tasks*. For example, in resource-constrained environments such as edge devices and autonomous driving systems, different preferences emerge in varying contexts (e.g., different conditions, regions, users).
The traditional approach does not use merging. One simply loads the three models and then use one corresponding to each image. However, this requires a large GPU memory (or frequent model loading/unloading which is also computationally expensive) and multiple independent forward passes.
The following table compares the test accuracy of 4 methods:
||GTSRB(Weight 0.6)|RESISC45(Weight 0.2)|SVHN(Weight 0.2)|Weighted Average|
|---|---|---|---|---|
|Method 1: Single Model (GTSRB)|96.0|22.5|24.2|66.9|
|Method 2: AdaMerging|91.8|96.3|93.8|93.1|
|Method 3: AdaMerging+Preference|93.2|95.8|93.4|93.8|
|Method 4: AdaMerging+PM|93.8|96.3|94.0|94.3|
* Method 1, "use the individual model" as you suggested: In our example scenario, as the GTSRB task has the highest priority, we use the single-task model finetuned on GTSRB. As can be seen, while its performance on GTSRB is very good, it performs poorly on the remaining two tasks. This results in a poor weighted average of 66.9\%.
* Method 2, AdaMerging: A representative model merging method. While it achieves good performance across all tasks, it does not consider the user preference, resulting in a suboptimal weighted average.
* Method 3, AdaMerging + Preference: This is the naive extension introduced in Section 3.1 to incorporate user preference. Compared to AdaMerging, it improves the accuracy on GTSRB by 1.4\%, demonstrating that the proposed multi-objective optimization formulation is useful. However, as mentioned in Section 3.1 and R1, when there are $n$ user preference, this approach (i) requires $n$ optimization runs; and (ii) storage of all the three original models (226.8M parameter overhead) .
* Method 4, the proposed PM approach: It shows significant improvement on the priority task of GTSRB (2\% over AdaMerging) while still maintaining good performance on the other tasks. Importantly, it efficiently handles multiple user preferences in a single optimization run with minimal parameter overhead (0.6M), addressing the key limitations of Method 3.
Furthermore, as you mentioned, our approach can be easily extended to LLM alignment. The experiment
is currently in progress. However, as finetuning on different alignment objectives to obtain models for merging is time-consuming, we will only be able to show the results in the final version.
Overall, we believe the proposed preference-aware merging is important for both image and language models.
We hope this clarification addresses your concerns. We deeply appreciate your thoughtful feedback and the time you have devoted to reviewing our paper. We will incorporate all your valuable suggestions in the final version.
We would be very grateful if you would consider revising your assessment based on these clarifications. Thank you very much for your consideration. | Summary: This paper proposes a novel preference-aware multi-objective model merging method called Pareto Merging (PM) to generate a Pareto set of merged models (the number of models might be infinite) by a single merging process. The main contributions include 1) the preference-aware multi-objective model merging formulation and 2) the LoRA-based personalized model generation method with minimal parameter overhead. Experimental results show the proposed Pareto merging method can achieve state-of-the-art performance on different model merging problems with ViT-B/32 models.
**##After Rebuttal Comment##**
Thank you for your detailed response and new results. Since all my concerns have been adequately addressed, I raise my rating to 4.
Claims And Evidence: Most claims in this work are well supported by clear and convincing evidence.
However, I have some concerns about the claim that "to the best of our knowledge, we are the first to utilize gradient-based MOO for model merging." To my understanding, there is a closely related work [1] that also investigates gradient-based MOO for model merging. A detailed discussion and comparison with [1] is required. In addition, this claim should be modified if needed.
[1] Towards Efficient Pareto Set Approximation via Mixture of Experts Based Model Fusion, arXiv:2406.09770.
Methods And Evaluation Criteria: I believe the proposed method and the evaluation criteria both make sense for the multi-objective model merging problem. I have the following concerns about the proposed method.
1. To my understanding, the proposed Pareto Merging method has a similar model structure to the previous work on LoRA-based Pareto manifold learning [2, 3] (see Figure 2). The key difference is that PM uses a model merging approach to find the preference-independent based model, while previous work learns the base model. The pros/cons between these two approaches are not very clear to me. A detailed discussion could be very helpful to highlight the unique advantage of the proposed Pareto Merging approach.
2. The motivation of this work is to use the Pareto Merging method to find different models for different users rather than a single preference-conditioned model to adjust the trade-off among different objectives in real time. To my understanding, the proposed preference-independent based model + preference-dependent low-rank tensor structure could be promising for real-time trade-off adjustment. However, if the goal is to find different models for different users, why not just use model merging to find the whole model for each user once the user's preference is known? In what situation will we need to find a set of models for different users in parallel?
[2] Efficient Pareto Manifold Learning with Low-Rank Structure, ICML 2024.
[3] Pareto Low-Rank Adapters: Efficient Multi-Task Learning with Preferences, ICLR 2025.
Theoretical Claims: There is no theoretical claim or proof in this work.
Experimental Designs Or Analyses: I've checked the experimental designs and analyses, and believe most of them are reasonable. I have the following concerns about the experiments.
1. As mentioned in the above sections, a comparison with the closely related work [1] could be helpful.
2. This work uses the smooth Tchebycheff scalarization for multi-objective optimization. It is interesting to know whether this method can truly outperform the simple linear scalarization and the original Tchebycheff scalarization in the model merging task.
Supplementary Material: Yes, I've checked the whole supplementary material which contains detailed experimental settings and more analyses.
Relation To Broader Scientific Literature: The proposed Pareto merging method is a natural extension of the previous methods on learning the entire Pareto set by a single model. I believe this extension is meaningful and valuable since the model merging method could be very important for real-world applications, especially those with large foundation models.
Essential References Not Discussed: As mentioned in the previous section, a closely related work on multi-objective model merging [1] is not discussed in this work.
Other Strengths And Weaknesses: Strengths:
1. This work is generally well-written and easy to follow.
2. Multi-objective model merging is important for many real-world applications, espeically those with large foundation model. This work is a timely contribution on an important research direction.
3. The proposed Pareto merging method achieves state-of-the-art performance on different experiments.
Other Comments Or Suggestions: N/A
Questions For Authors: Please address my concerns raised in the above comments.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your valuable suggestions. Below, we provide a detailed response to each point.
---
**Response to Claims And Evidence**
**R1: Comparison with [1]** Note that [1] is indeed a concurrent work of ours but with different goal. [1] aims to improve the efficiency of general Pareto set learning algorithm. Instead of training from scratch, [1] employs task arithmetic to merge specific modules while keeping others (e.g., MLP layers or both the MLP and attention layers) separate. They then train a MOE router using labeled data to weight these unmerged components. Note that when both MLP and attention layers are not merged, [1] needs to keep the $K$ models after training.
On the other hand, our method focuses on improving model merging and formulates it as a MOO problem. This extension has not been studied before (including [1]). Unlike [1], our method merges all modules while incurring a small low-rank tensor (with only 0.5\% parameters of the pretrained model).
Note that formulating model merging as a MOO problem requires first identifying the MOO objectives for model merging, which is not trivial. For instance, in data-free merging, we observe that Task Arithmetic can be viewed as optimizing the distances between models in the weight space. This then allows us to define the MOO objectives based on these distances. While traditional model merging considers finding a single solution based on a specific preference, our MOO formulation transforms the problem to the finding of a continuous space of solutions, which is novel.
To further compare with MoE-based fusion in [1], below we compare its performance on merging eight ViT-B/32 models using the same setup as in [1]. As reported in [1], with the use of labeled data, MoE-based fusion (with a final model size of 567M) achieves an average accuracy of 77.2\% when only the MLP is unmerged, and 83.5\% when all modules are unmerged (with final model size 1.02B). On the other hand, our method (with only 114M parameters and NOT requiring labeled data) achieves a higher accuracy at 85.2\%. We will add the above discussion to the final version.
**Response to Methods And Evaluation Criteria**
**R2: Comparison with LoRA-based Pareto Manifold leanring [2, 3]** Thanks for your comment but that is not the key difference. As mentioned earlier, our primary objective is to improve model merging, rather than using model merging for efficient Pareto set learning. Our first key contribution is identifying the limitations of current model merging techniques and reformulating it as a MOO problem to effectively incorporate user preferences. Furthermore, we propose an efficient preference-aware tensor structure, which is different from the weighted sum of LoRAs used in [2, 3].
To further illustrate our advantages, we adapt [2,3] for use in our model merging setting. Using the setup in Section 4.2.2 and Table 2, the results on merging eight ViT-B/32 models are:
|Method|Structure|Test Accuracy|Parameter Overhead|
|---|---|---|---|
| AdaMerging+PM (equal)|ours|84.9|0.61M|
| AdaMerging+PM (equal)|structure in [2,3]|84.5|4.71M|
| AdaMerging+PM (priority)|ours|85.5|0.61M|
| AdaMerging+PM (priority)|structure in [2,3]|85.2|4.71M|
As can be seen, the proposed tensor structure has better performance while significantly reducing the number of parameters. We will add the above discussion to the final version.
**R3: Real-time trade-off adjustment & Merge for each preference independently** The proposed method can be used in both scenarios: (1) As you mentioned, it is promising for real-time trade-off adjustment. (2) When different users have different preferences, the straightforward preference incorporation method, as discussed in Section 3.1, has two limitations: (i) It requires storing all $K$ original models to address $n$ different user preferences. In contrast, our approach only requires a single model and a small low-rank tensor. (ii) For methods like AdaMerging, they require $n$ separate runs for $n$ preferences. For instance, handling 100 preferences with straightforward method would require $0.15 \times 100 = 15$ GPU hours, whereas our method requires only $0.28$ GPU hours.
**Response to Experimental Designs Or Analyses**
For comparison with [1], please refer to **R1**.
**R4: Comparison with other scalarization methods** Thanks for your question. Following your suggestion, we add experiment with different scalarization methods on merging eight ViT-B/32 models (as in Section 4.2.2 and Table 2). The average accuracies obtained are: (i) linear scalarization: 84.3\%; (ii) Tchebycheff scalarization: 82.8\%; and (iii) smooth Tchebycheff scalarization: 85.2\%. These indicate that smooth Tchebycheff scalarization outperforms the other methods for our model merging task. We will add the discussion to the final version.
---
We hope the above responses address your concerns. If you have any additional suggestions, we are more than happy to discuss them further.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response and new results. Since all my concerns have been adequately addressed, I raise my rating to 4.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate your thoughtful feedback and recognition of our work. We will incorporate all your suggestions into the final version. Thank you for your time and effort! | Summary: This paper introduces a new method for merging models with trade-offs by finding the Pareto front. They included both data-free version and using unlabeled data version of the method.
Claims And Evidence: Most of the claims are clear. I've detailed unclear points in the following comments.
Methods And Evaluation Criteria: Yes, they are standard baselines in the model merging literature, and the paper compared with various suitable baseline methods.
Theoretical Claims: - There is no theoretical claims.
Experimental Designs Or Analyses: Yes, they are standard baselines in the model merging literature, and the paper compared with various suitable baseline methods.
Supplementary Material: No, I did not review the supplementary materials.
Relation To Broader Scientific Literature: This paper is relevant to the model merging community and for preference-aware model merging. Prior work in the field include MAP (model merging with amortized pareto front), and methods in MTL, such as ParetoMTL.
Essential References Not Discussed: I did not identify any essential references not discussed.
Other Strengths And Weaknesses: Strengths:
- A new algorithm for identifying pareto fronts for model merging.
- Compared with various existing baselines.
- Included both data-free and unlabeled data versions of the method.
Weakness:
- A few minor errors as I pointed out in questions.
Other Comments Or Suggestions: NA
Questions For Authors: - In Table 1, how is the # parameters defined? I'm not sure why rewarded soup and MAP would double the parameter counts. Could you please clarify?
- In Table 1, the authors claim that the number of models MAP can generate is 100, because the initial population is 100. I don't think this is true because MAP uses NSGA III and the number of points on the final Pareto front can far exceed the initial population size.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your valuable suggestions. Below, we provide a detailed response to each point.
---
**Response to Questions For Authors**
**R1: Parameter count in Table 1**
The "parameter count" refers to the number of parameters that need to be stored after merging.
Note that for each merging method, there are two options on what to store. For instance, with two models in the soup and 100 user preferences, one can either store the 100 merged models corresponding to these 100 preferences, or store only the two original models and merge them on-the-fly based on the user preference. Obviously, the latter is more memory-efficient, and is we adopt when calculating the parameter count. Hence, for MAP and the Rewarded Soups, the parameter counts are doubled in the two-model case.
**R2: Number of solutions in Table 1**
Thank you for pointing this out. In the paper, we mentioned 100 as the original NSGA-III algorithm enforces a fixed population size by removing dominated or overcrowded solutions during evolution. Upon double checking the MAP implementation, we observed that unlike the original NSGA-III, it retains all the non-dominated solutions, resulting in around 500 solutions at the end. We will correct this in the final version. However, please note that since we used the official implementation from the authors' github in the experiments,
this does not affect any experimental results or discussions in the paper.
---
We hope the above responses address your concerns. If you have any additional questions or suggestions, we are more than happy to discuss them further. | Summary: The paper introduces a novel method named "Pareto Merging," which is designed for the efficient merging of multiple pre-trained machine learning models into a single model, taking into account the preferences of different users. The approach learns a set of models, each optimized for different trade-offs or preferences among the objectives, thereby offering customized solutions for varied user preferences.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: N/A
Experimental Designs Or Analyses: No issues.
Supplementary Material: Yes, mainly Sections A and B.
Relation To Broader Scientific Literature: For model merging, this work considers preference-aware merging. The learned model with low-rank embedding can generate a Pareto set of merged models, with each representing a Pareto-optimal solution for a preference.
Essential References Not Discussed: - The reference for MGDA is not accurate. Check the earlier work below.
[0] "Steepest descent methods for multicriteria optimization". Mathematical methods of operations research 2000.
- It would be better if the authors could discuss the relation with some other gradient-based preference-based multi-objective optimization works, as listed below. For example, how does the preference in this paper differ from the existing works? What are the pros and cons of using low-rank embedding for the preference?
[1] "A multi-objective/multi-task learning framework induced by Pareto stationarity" ICML, 2022.
[2] "FERERO: A flexible framework for preference-guided multi-objective learning" NeurIPS 2024.
[3] "PMGDA: A preference-based multiple gradient descent algorithm" arXiv:2402.09492, 2024.
- A detailed comparison should be made to some existing works on Pareto set learning or preference-conditioned models, some examples are listed below. Based on the discussion in Section 2.2, it seems that the only difference compared to the prior works for preference-conditioned models is that the authors "explore model merging for large models".
[4] "Smooth Tchebycheff scalarization for multi-objective optimization" ICML 2024.
[5] "Pareto set learning for expensive multi-objective optimization" NeurIPS 2022.
[6] "Panacea: Pareto Alignment via Preference Adaptation for LLMs" NeurIPS 2024.
Other Strengths And Weaknesses: **Strengths**
1. The paper is well-written, and the method is clearly described.
2. The proposed method is validated through some experiments.
**Weaknesses**
1. Does Table 1 provide the training or inference time? The authors should discuss the computational complexity in both training and inference and how it compares to some baselines.
2. I do not see a clear novelty compared to the reference [6] in **Essential References Not Discussed** section. This paper also seems to use a low-rank embedding for the preference. Same as [6], this paper can also generate infinite number of models.
Other Comments Or Suggestions: No.
Questions For Authors: 1. Based on the discussion in Section 2.2, it seems that the only difference compared to the prior works for preference-conditioned models is that the authors "explore model merging for large models" while the prior works focus on smaller models. However, the reference [6] in **Essential References Not Discussed** section is very similar to the approach proposed in this paper and also applies to large models.
How is this work different from [6]?
Ethical Review Concerns: No.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your valuable suggestions. Below, we provide a detailed response to each point.
---
**Response to Essential References Not Discussed**
**R1: MGDA Reference** Thanks. We'll include it in our final version.
**R2: Relation with preference-based MOO works [1,2,3]** They differ fundamentally from ours. They aim to generate a single solution per run based on a given preference. To handle different preferences, they require multiple runs and obtain multiple models. On the other hand, our method produces a continuous Pareto set of solutions. Once trained, a MOO solution can be generated from the given preference without optimization. Currently, we use smooth Tchebycheff scalarization for its simplicity and effectiveness. But algorithms in [1,2,3] can be integrated into our framework by modifying Equation (10). Specifically, instead of directly using $\gamma_k$ to weight $S_k$, we can use algorithms in [1, 2, 3] to compute a weighting for $S_k$.
**R3: Using low-rank embedding for the preference** There might be some misunderstanding. We are not using low-rank embedding for preference. Instead, we employ a low-rank tensor to integrate the preference into a single model, which is significantly more parameter- and compute-efficient compared to generating a separate model for each preference (using methods such as [1, 2, 3]) or use a full-rank tensor structure.
**R4: Difference with Pareto set learning works [4, 5]**
We would like to emphasize that our contributions are:
1. Problem formulation: We address the challenges of model merging by reformulating it as a MOO problem. This requires identifying the MOO objectives for model merging, and is not trivial. In data-free merging, we observe that Task Arithmetic can be viewed as optimizing the distances between models in the weight space. This then allows us to define the MOO objectives based on these distances.
Moreover, while traditional model merging considers finding a single solution for a specific preference, our formulation transforms the problem to the finding of a continuous space of solutions, which is novel.
2. Scalable Pareto Set Learning: While works [4, 5] use hypernetworks (typically 100× larger than base networks), our low-rank tensor approach dramatically reduces the computational costs, making Pareto exploration feasible for large-scale models.
**R5: Comparison with LLM alignment work [6]**
Note that [6] is indeed a concurrent work with 3 key differences:
1. Problem Formulation:
[6] is used for LLM alignment, while this paper is the first to apply MOO to model merging (both data-free and data-based). As mentioned in **R4**, it is non-trivial. As can be seen from empirical results, this leads to the ability
to generate different models for different preferences
and improved performance compared to the baseline algorithms.
2. Efficient Low-Rank Tensor Structure:
[6] uses a SVD-LoRA-based approach, while we propose a low-rank tensor structure. In the following, we adapt the approach in [6] for use in our model merging setting. We set the rank in their approach to 16, so that its number of parameters is comparable with ours.
Using the setup in Section 4.2.2 and Table 2, the results on merging eight ViT-B/32 models are:
|Method| Model|Test accuracy|
|---|---|---|
| AdaMerging+PM (equal)|ours | 84.9|
| AdaMerging+PM (equal) | structure in [6]|84.1|
| AdaMerging+PM (priority)| ours | 85.5|
| AdaMerging+PM (priority) | structure in [6]|84.4|
As can be seen, our low-rank tensor structure outperforms [6], particularly in the priority preference setting. This is because
while [6] assigns a fixed singular matrix row to each objective, our tensor $G$ adaptively learns the relationships between objectives and so our structure is more flexible.
3. Optimization: [6] focuses on LLM alignment using labeled data, whereas we focus on model merging with either no data or unlabeled data. This distinction presents overfitting challenges, which we address using an efficient structure and tensor regularization.
We will include the above discussions in the final version.
**Response to Weakness 1**
**R6: Training and inference times** Table 1 is on training time. As can be seen, our method has small computational overhead compared to the baselines. Specifically, during training, for a layer, AdaMerging optimization has a per-iteration complexity of $O(Kcd)$, while ours is $O(Kcd + (c+d)r + Kr^2)$ (where $K$ is the number of models, $c \times d$ the shape of the layer parameter, and $r$ is the rank.). As $r$ is much smaller than $c$ and $d$, the computational overhead is small.
For inference, once the preference is fixed, the low-rank tensor can be merged into the base model, resulting in no inference overhead.
**Response to Weakness 2 and Questions For Authors**
Please refer to **R5**
---
We hope the above responses address your concerns. If you have any additional questions or suggestions, we are more than happy to discuss them further. | null | null | null | null | null | null |
Provably Cost-Sensitive Adversarial Defense via Randomized Smoothing | Accept (poster) | Summary: This paper introduces an adversarial training algorithm aimed at enhancing cost-sensitive robustness. The writing is clear, and the methodology appears sound within the context of the paper. However, I have concerns regarding the motivation, and the evaluation lacks several critical experiments.
## Update after Rebuttal
The two-round rebuttal did not address my concerns, so I maintain my recommendation: 1. Reject.
There have been prior works on certified adversarial defense via random smoothing, as well as on cost-sensitive learning. My understanding of this work, after reading the rebuttal, is that it applies certified adversarial defense via random smoothing to cost-sensitive learning. This leads to my first and main concern: why this particular combination? Why not apply adversarial training or other defenses to cost-sensitive learning? Which method is more effective in this context?
The authors explain why randomized smoothing cannot be used for adversarial training, but that was not my question. What I asked is for a comparison between "adversarial training" and "certified adversarial defense via random smoothing" when applied to cost-sensitive learning. This key question remains unaddressed.
The reason I ask for a comparison (including results using standard classification metrics) is because the authors claim this work as an adversarial defense. A defense method must be evaluated under realistic threat models, considering the knowledge available to both attackers and defenders. However, the authors also acknowledge that certified defenses struggle against unseen attacks. On the other hand, although they claim that adversarial training generalizes poorly to unseen attacks, latest adversarial training was specifically designed to address this challenge and has evolved with many strategies to improve robustness (as shown in recent works listed in RobustBench). To support the claim that this method is practical and effective, the authors must provide comprehensive evaluations.
Even if we narrow the scope to the context of this study (applying certified adversarial defense via random smoothing to cost-sensitive learning), I still do not see references to or use of the latest methods in either area, even after the rebuttal. It is unclear whether the authors mean that there is no recent work on certified adversarial defense via random smoothing or cost-sensitive learning individually, or that there is no recent work combining the two.
My suggestion to the authors: If this work aims to bridge the gap between certified adversarial defense and cost-sensitive learning, then the narrative should not center on defense. Instead, the focus should be on understanding and addressing the gap between cost-sensitive and cost-insensitive learning when applying certified defenses. It is also important to discuss how different cost-sensitive learning algorithms and certified defense techniques may affect the effectiveness of bridging this gap.
Claims And Evidence: * I do not agree with the motivation that defending against cost-sensitive adversaries is fundamentally different from regular adversarial training. The paper lacks both theoretical justification and empirical demonstrations to support this distinction.
* The paper does not report clean accuracy, despite the fact that maintaining clean accuracy in adversarial training is a well-known challenge. The authors claim in 033 that their method mitigates this issue, yet no supporting evidence is provided.
* It is essential to report the robust accuracy of both cost-sensitive and non-sensitive adversarial examples separately. The drop in robust accuracy for non-sensitive adversarial examples, similar to clean accuracy, should be carefully examined.
* The paper provides limited background on cost-sensitive robustness in introduction (only 050), which is crucial for distinguishing it from standard adversarial robustness. Readers must refer to Section 4 for further details.
Methods And Evaluation Criteria: * The proposed methodology appears to be derived from Zhang et al. (2023), although their work was not specifically designed for cost-sensitive robustness. Could the authors clarify the differences in the algorithm, aside from the application/scenario context?
Theoretical Claims: The authors provide sufficient mathematical proofs in the appendix.
Experimental Designs Or Analyses: * The details of the attacks used in the experiments are not disclosed,
* The paper does not compare its approach with the most relevant work, such as Zhang et al. (2023), nor does it include comparisons with other regular adversarial training methods. Specifically, the overall accuracy presented in Table 1 and Table 11 is noticeably lower than that of state-of-the-art (SOTA) adversarial training methods (refer to RobustBenck).
* Given the lack of disclosure regarding attack details, it remains unclear whether the defense has been evaluated against unseen or adaptive attacks, which are crucial for assessing defenses.
Supplementary Material: Same with the appendix of the main PDF.
Relation To Broader Scientific Literature: No comment.
Essential References Not Discussed: The references are outdated, with no publications from 2023-2024 cited. Furthermore, the authors should include comparisons with regular adversarial training methods featured in RobustBench (https://github.com/RobustBench/robustbench).
Other Strengths And Weaknesses: No comment.
Other Comments Or Suggestions: No comment.
Questions For Authors: My current assessment falls between scores 1 and 2 due to several concerns. However, if these concerns are adequately addressed, I would be open to revising my score and potentially raising it above 2.
1. The proposed method should be compared with regular adversarial training methods, as these are designed to generalize across all adversarial scenarios. It would represent a good contribution if the proposed method demonstrates superior performance, even if only in the context of cost-sensitive robustness.
2. Clean accuracy, as well as robust accuracy for both cost-sensitive and non-sensitive adversarial examples, must be reported separately for a more comprehensive evaluation.
3. The novelty of the proposed method in comparison to Zhang et al. (2023) should be clearly articulated, beyond just the application domain.
4. The details of the attacks used in the experiments must be disclosed to ensure transparency and reproducibility.
5. A literature review that includes relevant works from 2024 is expected. If there are no significant new contributions, this should be explicitly stated for the readers.
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | Rebuttal 1:
Rebuttal: **1. It is essential to report the robust accuracy of both cost-sensitive and non-sensitive adversarial examples separately.**
|Method|$Rob_{cs}$|$Rob_{normal}$|
| -------- | -------- | --------
|Gaussian |22.9|49.8|
|SmoothAdv|26.3 |52.5|
|SmoothMix |16.8|52.7|
|MACER|27.4|54.3|
|Gaussian-CS|50.9|43.7|
|SmoothAdv-CS|53.6 |48.8|
|SmoothMix-CS|26.4|50.7|
|Margin-CS|54.8|48.3|
Thanks for the suggestion. We report the cost-sensitive robustness for both sensitive and non-sensitive examples under the CIFAR-10 S-Seed setting. As expected, there exists a trade-off in certified robustness performance between these two types of samples, which aligns with our training objective: a smaller margin threshold $\gamma_1$ is used for normal examples, while a larger margin threshold $\gamma_2$ is applied to sensitive examples.
**2. The details of the attacks used in the experiments are not disclosed**
As defined, randomized smoothing provides certified robustness against all attacks within the certified radius. For this reason, prior works on randomized smoothing—and most certification-based methods in general—do not explicitly specify a threat model or evaluate against empirical (adaptive) attacks. Following this convention, neither do we.
**3. The paper does not compare its approach with the most relevant work, such as Zhang et al. (2023), nor does it include comparisons with other regular adversarial training methods.**
We report the performance comparison between our method and DiffSmooth, as proposed by Zhang et al. (2023). Our Margin-CS results are presented in parentheses for direct comparison with DiffSmooth. As shown in the table, our method consistently outperforms DiffSmooth across all cost-matrix settings by a significant margin. Furthermore, our method achieves an inference time of 2.28 seconds per image, whereas DiffSmooth requires 89 seconds per image (~40 times slower), highlighting the efficiency and practicality of our approach for real-world deployment.
| Method | $Acc$| $Rob_{cs}$|$Rob_{cost}$|
|--|--|---|----|
| S-seed| 67.12 (**67.5**) | 28.04 (**46.8**) | 4.357 (**3.04**) |
| M-seed| 67.12 (**67.5**) | 36.45 (**54.8**) | 4.49 (**3.07**) |
| S-Pair| 67.12 (**67.5**) | 61.68 (**92.4**) | 0.43 (**0.05**) |
| M-Pair| 67.12 (**67.5**) | 33.64 (**80.4**) | 1.636 (**0.35**) |
**4. The novelty of the proposed method in comparison to Zhang et al. (2023)**
Zhang et al. 2023 enhances randomized smoothing for *general robustness* by using a diffusion model as a denoiser before passing the noisy input to the target model. In contrast, our work improves randomized smoothing for *cost-sensitive robustness* (which existing approaches cannot achieve) through a novel training paradigm. As these approaches are algorithmically distinct and directionally orthogonal, we do not see a particularly strong connection between them—beyond both falling under the broader category of randomized smoothing.
**5. Specifically, the overall accuracy presented in Table 1 and Table 11 is noticeably lower than that of state-of-the-art (SOTA) adversarial training methods (refer to RobustBench).**
As briefly mentioned in the above **response 2**, there is an inevitable gap between empirical robustness (i.e., potentially effective against *specific* attacks, such as adversarial training) and certified robustness (i.e., provably effective against *all* possible attacks within a bounded perturbation, e.g., randomized smoothing). Simply speaking, the noise scale incorporated in randomized smoothing framework is much larger than that in adversarial training, since empirical attacks focus on imperceptible perturbations, whereas rigorous certification requires a reliable computation of the certified radius. This difference is then reflected in the model's clean accuracy. While a deeper investigation into the gap between these two fields would be an interesting direction, it falls outside the scope of this submission.
**6. The proposed method should be compared with regular adversarial training methods, as these are designed to generalize across all adversarial scenarios.**
As mentioned above, the training noise used in randomized smoothing baselines is significantly larger than that used in adversarial training methods. To illustrate this discrepancy, we evaluated the certified robustness of the top two methods on RobustBench—MeanSparse and adversarial training enhanced by diffusion models—and found that their certified performance is close to zero. This highlights the fundamental mismatch between the goals of empirical adversarial training and certification-based approaches: the former aims to improve empirical robustness under small, often imperceptible perturbations, while the latter focuses on providing formal robustness guarantees under much larger perturbation regimes.
---
Rebuttal Comment 1.1:
Comment: 1. My argument regarding the difference between adversarial training and the proposed method is related to their application, specifically in handling clean and malicious inputs during inference. The key question that needs to be addressed is: "Can adversarial training on basic Cost-Sensitive Learning defend against cost-sensitive adversarial examples? If so, why is the proposed defense necessary?" This is my primary concern regarding the motivation of this work.
Why can't adversarial training (e.g., [A]) be applied to basic Cost-Sensitive Learning? For instance, the simplest approach would be to introduce adversarial examples into the training dataset of basic Cost-Sensitive Learning. Please note that the authors do not need to integrate adversarial training with the proposed certified defense, but rather consider adversarial training for non-robust cost-sensitive learning. Additionally, adversarial training does not necessarily require imperceptible perturbations, so the claim in the rebuttal is incorrect.
2. I do not agree with the claim that a defense can protect against all attacks without evidence, especially since more advanced attacks have been proposed since 2023. Can this defense, trained with $L_2$-norm constraints, effectively defend against $L_\infty$-norm attacks or sparse attacks [B, C] with an unlimited $\epsilon$ (this is defending against unseen attacks)? Furthermore, even for the only attack considered in this paper, it is unclear which specific attack is being used. Equation 1 merely presents a general attack objective definition.
3. The rebuttal still does not provide clean accuracy trade-offs before and after applying the defense, despite the authors claiming in the main paper that their method mitigates this issue and arguing that adversarial training has such limitations (this is right, though for the latest adversarial training methods, this is not a significant limitation).
4. The rebuttal also fails to justify why recent work has not been discussed. I am concerned that this research may have been completed as early as the beginning of 2024, which would make it inappropriate for direct publication in a conference at the end of 2025 without incorporating up-to-date studies.
[A] Zhang, H., Yu, Y., Jiao, J., Xing, E., El Ghaoui, L., & Jordan, M. (2019, May). Theoretically principled trade-off between robustness and accuracy. In ICML (pp. 7472-7482).
[B] Su, J., Vargas, D. V., & Sakurai, K. (2019). One pixel attack for fooling deep neural networks. IEEE Transactions on Evolutionary Computation, 23(5), 828-841.
[C] Vo, V. Q., Abbasnejad, E., & Ranasinghe, D. C. (2024). BRUSLEATTACK: A QUERY-EFFICIENT SCORE-BASED BLACK-BOX SPARSE ADVERSARIAL ATTACK. In ICLR.
---
Reply to Comment 1.1.1:
Comment: **For Comment 1**
It is important to clarify that *certified defenses* and *empirical defenses* are two **distinct** frameworks for robustness, and **none** of the existing certified defenses considered evaluating against empirical adversarial attacks, as **certified defense provides probabilistic robustness guarantees against worst-case perturbations** [1, 2]—a guarantee that empirical methods can never offer. While SmoothAdv (Salman et al., 2019), cited in our submission, explores incorporating empirical adversarial attacks to potentially enhance certified robustness, evaluating attacks (whether with or without guarantees) falls outside the scope of this submission and, more broadly, the certified robustness literature.
In addition, each input sample in certified defense is associated with a **certified radius**, which formally characterizes each sample's ability to resist perturbations. This per-sample certification is a unique property of randomized-smoothing-based approaches and is **not available in empirical adversarial training**.
The *evaluation metric and pipeline* used in certified defense are also **fundamentally different** from those in empirical defense, as we have clarified in Section 6 of our paper. While it may be a meaningful direction to explore how to bridge certified and empirical cost-sensitive learning—e.g., by aligning or unifying certified adversarial defense and empirical adversarial defense within a single framework—**this is beyond the scope of our current submission and the broader body of certified robustness work**.
[1] Cohen J, Rosenfeld E, Kolter Z. Certified adversarial robustness via randomized smoothing. In *International Conference on Machine Learning*, 2019: 1310–1320.
[2] Wong E, Schmidt F, Metzen J H, et al. Scaling provable adversarial defenses. *Advances in Neural Information Processing Systems*, 2018, 31.
**For Comment 2**
By definition, randomized smoothing-based defenses can provide **certified robustness** against all possible $\ell_2$ attacks **within a certain radius** (as already claimed in the rebuttal and the original submission), but they do **not guarantee robustness** against other types of norm-bounded attacks (e.g.,$\ell_\infty$, $\ell_1$) in their standard form (while extensions are possible such as [3],[4]), nor against unseen or unconventional adversarial strategies. Prior research [5] has also shown that adversarial training against one specific norm (such as $ \ell_\infty$) tends to **generalize poorly** to attacks in other norms (such as $ \ell_2$ or $\ell_1$). A comprehensive investigation of this phenomenon is beyond the scope of this submission.
[3]: Vorácek V, Hein M. Improving l1-certified robustness via randomized smoothing by leveraging box constraints[C]//International Conference on Machine Learning. PMLR, 2023: 35198-35222.
[4]: Florian Tramer and Dan Boneh. *Adversarial Training and Robustness for Multiple Perturbations*. Advances in Neural Information Processing Systems, 2019.
[5] Yang G, Duan T, Hu J E, et al. Randomized smoothing of all shapes and sizes[C]//International conference on machine learning. PMLR, 2020: 10693-10705.
**For Comment 3**
The term *model accuracy* in randomized smoothing literature specifically refers to **certified clean accuracy** (of the smoothed classifier), which is fundamentally different from the *clean accuracy* typically reported in empirical adversarial training frameworks. Furthermore, all evaluation metrics used in our work are **certified robustness metrics**, computed using **Monte Carlo sampling** as part of the randomized smoothing framework. These certified metrics provide formal probabilistic guarantees under $\ell_2$-bounded adversarial perturbations, and should not be directly compared to empirical robustness metrics, which rely on adversarial attacks and do not offer worst-case guarantees.
**For Comment 4**
We have included the most recent works on **randomized smoothing** and **cost-sensitive learning** in our paper, supported by multiple rounds of updated literature searches. As our focus is on **certified** defenses rather than **empirical** ones, we have limited the discussion of empirical attacks and defenses, which lie outside the primary scope of our work. We would be happy to incorporate additional literature related to empirical defenses in the revision if the reviewer deems it necessary. | Summary: This paper considers certified robustness when the cost between the correct label and the incorrect one is non-uniform. Specificially, author proposed a certification method via randomized smoothing and the corresponding provable training algoriothm in this context.
Claims And Evidence: The claims in this paper are convincing and supported by clear evidence.
Methods And Evaluation Criteria: The methods and evaluation criteria are mostly convincing. One concern is that the authors should report the performance variance by running the algorithm multiple times given a relatively narrow gap between the proposed method and the baselines, like some cases in Table 3.
Theoretical Claims: The theoretical claims are convincing, while the proof technique is based on Cohen et al 2019.
Experimental Designs Or Analyses: The experiments are consistent with the theoretical setup and are convincing. However, I suggest the author conduct an ablation study on hyper-parameters $\lambda_1$, $\lambda_2$, $\gamma_1$, and $\gamma_2$, showing how sensitive the performance is to these hyper-parameters and how these hyper-parameters affect the overall performance.
Supplementary Material: I read both the proofs and the additional experiments but at a faster pace than the main text.
Relation To Broader Scientific Literature: The provable robustness is a crucial problem in situations where the tolerance of mistakes is very low. The motivation and the applicable situations of the method make sense. I believe this paper can contribute and raise some interest in the adversarial machine learning community.
Essential References Not Discussed: N.A.
Other Strengths And Weaknesses: The strengths and weaknesses are demonstrated in the sections above. Overall, I think it is a good paper. I welcome the authors to address my concerns in the rebuttal and will re-evaluate the manuscript during the discussion period.
Other Comments Or Suggestions: N.A.
Questions For Authors: The questions are summarized as follows:
1. The authors should report the performance variance by running the algorithm multiple times given a relatively narrow gap between the proposed method and the baselines, like some cases in Table 3.
2. I suggest the author conduct an ablation study on hyper-parameters $\lambda_1$, $\lambda_2$, $\gamma_1$, and $\gamma_2$, showing how sensitive the performance is to these hyper-parameters and how these hyper-parameters affect the overall performance.
Ethical Review Concerns: N.A.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **1. The authors should report the performance variance**
We conduct three independent runs of the Imagenette S-Seed setting and report the corresponding standard deviations. The experimental procedure comprises two distinct stages: the training phase and the certification (evaluation) phase. The majority of variance arises during training, while variance during certification is negligible due to extensive Monte Carlo sampling and majority voting, which together ensure stable and consistent results. As shown, running all the baselines and our method multiple times does not affect the claims nor findings. We will include performance variance details to further strengthen the manuscript.
| Methods | $Acc$ (± std) | $Rob_{cs}$ (± std) | $Rob_{cost}$ (± std) |
|---------------|---------------|---------------|----------------|
| Gaussian | 80.3 (0.22) | 64.6 (0.32) | 3.67 (0.006) |
| SmoothAdv | 80.2 (0.17) | 64.3 (0.25) | 3.91 (0.007) |
| SmoothMix | 80.6 (0.19) | 59.5 (0.50) | 2.93 (0.032) |
| MACER | 78.2 (0.10) | 63.8 (0.16) | 2.46 (0.004) |
| Gaussian-CS | 74.6 (0.09) | 73.3 (0.24) | 1.67 (0.009) |
| SmoothAdv-CS | 77.6 (0.14) | 66.6 (0.31) | 3.82 (0.016) |
| SmoothMix-CS | 76.1 (0.09) | 68.9 (0.48) | 2.24 (0.042) |
| Margin-CS | 79.6 (0.01) | 81.1 (0.15) | 1.35 (0.008) |
**2. Ablation study on hyper-parameters**
There are two stages in the hyperparameter tuning process. In the first stage, $\lambda_1$ and $\lambda_2$ control the overall trade-off between certified accuracy and cost-sensitive robustness. As expected, increasing $\lambda_1$ and $\lambda_2$ leads to a decrease in overall accuracy but an improvement in cost-sensitive robustness. We set $\lambda_1 = \lambda_2 = 3$ to achieve a favorable balance between the two objectives.
In the second stage, we fix $\lambda_1$ and $\lambda_2$ and tune $\gamma_1$ and $\gamma_2$. Here, $\gamma_1$ determines the margin threshold for selecting normal samples, while $\gamma_2$ controls the threshold for sensitive samples. We observe that increasing $\gamma_2$ enhances cost-sensitive performance, whereas increasing $\gamma_1$ improves overall accuracy, but will decrease cost-sensitive performance. A grid search reveals that the combination $(\gamma_1, \gamma_2) = (4, 16)$ yields satisfactory results.
We will include these details in the revised manuscript's Appendix.
| $\lambda_1$ |$\lambda_2$|$Acc$| $Rob_{cs}$|$Rob_{cost}$|
|-|-|-|-|-|
| 1 | 1 | 0.690 | 0.22| 5.150 |
| 2 | 2 | 0.682| 0.435 | 3.619 |
| 3 | 3 | 0.667 | 0.510|3.407 |
| 4| 4 | 0.631| 0.732 | 1.597|
| 5 | 5 | 0.603 | 0.762 | 1.367|
| 6 | 6 | 0.578 | 0.811| 1.046|
| γ1 |$Acc$(γ2=8) | $Rob_{cs}$(γ2=8) | $Rob_{cost}$ (γ2=8) | $Acc$ (γ2=10) | $Rob_{cs}$ (γ2=10) |$Rob_{cost}$ (γ2=10) | $Acc$(γ2=12) | $Rob_{cs}$(γ2=12)|$Rob_{cost}$(γ2=12) | Acc (γ2=16) |$Rob_{cs}$(γ2=16) | $Rob_{cost}$(γ2=16)|
|-----|-- |-- |-- |-- |-- |--|--|--|--|--|-|--|
| 2| 65.4| 63.3| 2.617| 63.4 | 68.7| 2.488| 63.7| 69.1| 2.484|63.0|70.5 | 2.505|
| 4| 68.2| 49.7| 3.558| 67.9| 52.6| 3.477| 67.7| 54.3| 3.444| 67.5 | 54.8| 3.040|
| 6| 67.3| 39.6| 3.878| 66.0| 49.3| 3.546|65.5|54.4| 3.362 | 64.9 | 55.2 | 3.231 |
| 8 | 66.0 | 33.8 |4.390 | 65.0 | 43.2 | 4.137| 64.1 | 47.4 | 3.921 | 64.5 | 46.3 |3.768 | | Summary: The paper introduces a novel framework for adversarial robustness that incorporates cost-sensitive learning using randomized smoothing. Unlike existing defenses that assume uniform misclassification costs, this method optimizes robustness with a cost matrix that accounts for real-world risk variations (e.g., misclassifying malignant tumors as benign is costlier than the reverse). The main contributions include:
- Cost-Sensitive Certified Radius: A new metric extending the standard certified radius to account for cost-sensitive adversarial robustness.
- Certification Algorithm: A Monte Carlo-based method to estimate cost-sensitive robustness, ensuring statistically rigorous bounds.
- Robust Training Method: A margin-based loss function that maximizes cost-sensitive certified robustness while maintaining accuracy.
- Experimental Validation: The approach outperforms baselines such as standard randomized smoothing, SmoothAdv, and MACER on CIFAR-10, Imagenette, ImageNet, and the medical HAM10k dataset.
The results indicate that the method significantly enhances robustness, particularly in cost-sensitive scenarios, making it relevant for safety-critical applications.
Claims And Evidence: The paper's main claims are:
1. Cost-sensitive certified radius provides better robustness guarantees (Theorem 4.2). Supported by theoretical proof showing it generalizes the standard certified radius.
2. Monte Carlo-based certification is statistically valid (Theorem 4.4). Proof provided using a union bound argument.
3. Proposed training method (Margin-CS) improves cost-sensitive robustness without degrading accuracy. Extensive experimental results demonstrate a ~20% improvement over baseline methods.
4. Scalability to high-dimensional models (e.g., ImageNet). Empirical evidence shows that the approach works on large datasets where previous methods struggle.
All claims are well-supported, with both theoretical and empirical validation.
Methods And Evaluation Criteria: - The benchmark datasets (CIFAR-10, Imagenette, ImageNet, HAM10k) are appropriate, covering both general vision tasks and real-world cost-sensitive applications (e.g., medical imaging).
- The metrics used (Certified Robust Cost, Certified Cost-Sensitive Robustness, Certified Accuracy) effectively measure both overall robustness and cost-sensitive robustness.
- The comparison to baselines is thorough, including standard randomized smoothing and adversarial training methods.
Theoretical Claims: I checked the following theoretical claims:
- Theorem 4.2 (Cost-sensitive certified radius is always greater than or equal to the standard certified radius). Proof follows from standard randomized smoothing principles and the monotonicity of $\phi^{-1}$. Mathematically sound and intuitive.
- Theorem 4.4 (Certified robustness estimate is statistically valid). Proof relies on union bounds and confidence interval estimation. The logic appears correct, but empirical verification (e.g., checking Monte Carlo estimates converge) would strengthen confidence.
Experimental Designs Or Analyses: - Certifications for cost-sensitive robustness are evaluated on diverse datasets and settings.
- Robustness is tested across multiple perturbation budgets ($\epsilon$-values).
- Results on HAM10k confirm the method's relevance to medical AI.
Weakness
- No ablation study on the effectiveness of different training components (e.g., margin-based loss vs. alternative approaches).
- Robustness to $l2$ perturbations (e.g., adversarial patches, feature space attacks) is not explored.
Supplementary Material: The appendix provides detailed proofs, additional experiments, and dataset descriptions. Additional figures (certified radius distributions, clean and robust error heatmaps) provide deeper insights into model behavior.
Reviewed Sections:
- Proof of Theorem 4.2
- Proof of Theorem 4.4
- Additional experiments on ImageNet and HAM10k
- Heatmaps for clean vs. robust errors
Relation To Broader Scientific Literature: The paper is highly relevant to ongoing research in:
- Certified Adversarial Robustness: Extends works like Cohen et al. (2019) on randomized smoothing.
- Cost-Sensitive Learning: Builds on prior methods like Zhang & Evans (2019) but improves scalability and certification guarantees.
- Medical AI and Risk-Aware ML: Addresses real-world concerns in safety-critical AI (e.g., healthcare, autonomous systems).
The work bridges the gap between adversarial robustness and cost-sensitive classification, making it an important contribution.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: Strengths
- Strong Theoretical Guarantees: Provides mathematical proofs for all major claims.
- Scalable to Large Models: Works on ImageNet and HAM10k, where previous cost-sensitive methods struggled.
- Real-World Applicability: Demonstrates importance in medical AI.
- General Framework: Extends randomized smoothing to cost-sensitive settings in a principled manner.
Weaknesses
- No Adaptive Attack Evaluation: The method is only tested under $l2$-norm perturbations.
- Lack of Hyperparameter Sensitivity Analysis: Does not explore robustness to different $\sigma$, $\delta$ values.
- Limited Discussion on Real-World Deployment: Practical constraints (e.g., computational cost, real-time processing) are not discussed.
Other Comments Or Suggestions: - Sensitivity Analysis: Adding a hyperparameter sensitivity analysis ($\sigma$, $\delta$, $\gamma$) would improve the practical usability of the method.
- Adaptive Adversarial Attacks: Evaluating the approach against adaptive attacks (e.g., AutoAttack, patch-based attacks) would strengthen the robustness claims.
- Ablation Studies: Analyzing the contributions of individual training components (margin-based loss vs. standard cost-sensitive learning) would clarify which aspects drive performance gains.
Questions For Authors: 1. How does the method perform under adaptive attacks? Randomized smoothing is not foolproof against strong adaptive attacks (e.g., query-based attacks). Have you tested against adaptive adversarial strategies, and if so, how does the method hold up?
What is the computational overhead of certification?
2. Monte Carlo-based certification can be computationally expensive. How does the certification time scale with input dimensions and the number of classes? Can the method be extended to other robustness frameworks?
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: **1. Theorem 4.4 (Certified robustness estimate is statistically valid). The logic appears correct, but empirical verification (e.g., checking Monte Carlo estimates converge) would strengthen confidence.**
We conduct the certification process using varying numbers of Monte Carlo samples, where N denotes the number of samples and r represents the certified radius returned by Algorithm 1. The results for the S-Seed setting are shown in the table below. We observe that as N increases, the certified radius gradually converges. Specifically, once N exceeds 50,000, the certified radius stabilizes and exhibits minimal fluctuation.
| N | 100 | 500 | 1000 | 10000 | 50000 | 60000 | 70000 | 80000 | 90000 | 100000 |
|--------|-------|-------|-------|-------|--------|--------|--------|--------|--------|---------|
| r 0.4378| 0.6869| 0.7663| 0.9550| 1.0612 | 1.052 | 1.059 | 1.065 | 1.069 | 1.072 |
**2. No Adaptive Attack Evaluation: The method is only tested under L2 norm perturbations.**
As defined, randomized smoothing provides certified robustness against all attacks—including adaptive ones—within the certified radius. For this reason, prior works on randomized smoothing (and most certification-based methods in general) do not evaluate against adaptive attacks. While we agree that exploring the gap between certified robustness and empirical robustness (i.e., performance under actual attacks) is an interesting direction, it falls outside the scope of this submission.
Although randomized smoothing is formally defined for L2 norm perturbations, prior work has shown that it can be effectively extended to other norms [1,2]. As this generalization is well-established and independent of our main contributions, we do not focus on it here, but we are happy to provide relevant results upon request.
[1] Yang, Greg, et al., "Randomized Smoothing of All Shapes and Sizes", ICML 2020
[2] Vorácek, V., & Hein, M., "Improving L1-Certified Robustness via Randomized Smoothing by Leveraging
Box Constraints", ICML 2023
**3. Limited Discussion on Real-World Deployment: Practical constraints (e.g., computational cost, real-time processing) are not discussed.**
As suggested, we report the training and inference time evaluated on a single A100 GPU (40 GB Memory). During training, we employ a margin-based loss to optimize the cost-sensitive certified radius, which necessitates a substantial amount of Gaussian sampling (e.g. 16) for accurate radius estimation, as well as sufficient epochs to ensure convergence. This results in a slight increase in the training time. During inference, they all share the same certification procedure, so their certification time is nearly identical.
| Method| Gaussian | SmoothAdv | SmoothMix | MACER | Gaussian-CS | SmoothAdv-CS | SmoothMix-CS | Margin-CS |
|----------------|----------|-----------|-----------|-------|-------------|---------------|--------------|-----------|
| Training Time |0.2h |11.53h| 2.68h|13.5h|0.2h|11.53h| 2.69h|13.4h |
| Inference Time | 2.38s |2.27s| 2.28s | 2.25s | 2.39s | 2.27s | 2.28s | 2.28s|
**4. What is the computational overhead of certification? How does Monte Carlo certification time scale with input dimensions and the number of classes? Can the method be extended to other robustness frameworks?**
The overall certification cost is $O(N\times T_{fwd})+O(C)$, where $N$ is the Gaussian sampling number and $T_{fwd}$ is the cost per forward pass (depends on model size and input dim), $C$ is the number of classes.
$O(C)$ is due to post-processing steps after the Monte Carlo sampling. This includes operations like finding the top classes, computing confidence bounds, and estimating per-class certified radii. While this cost is small compared to the sampling phase, it scales linearly with the number of classes
$C$. Our certification algorithm is compatible with any robustness framework. It is independent of the training procedure and can also be integrated into diffusion-based certification methods (as shown in the reponse to **Reviewer KFXn**), The values in parentheses correspond to our method.
| Setting| Overall Acc| $Rob_{cs}$|$Rob_{cost}$|
|--|--|---|----|
| S-seed| 67.12 (**67.5**) | 28.04 (**46.8**) | 4.357 (**3.04**) |
| M-seed| 67.12 (**67.5**) | 36.45 (**54.8**) | 4.49 (**3.07**) |
| S-Pair| 67.12 (**67.5**) | 61.68 (**92.4**) | 0.43 (**0.05**) |
| M-Pair| 67.12 (**67.5**) | 33.64 (**80.4**) | 1.636 (**0.35**) |
**5. Ablation Study & Sensitivity Analysis: Adding a hyperparameter sensitivity analysis would improve the practical usability of the method.**
Please refer to our **Response 2 to Reviewer AS48** for the hyperparameter sensitivity analysis. The experimental results show that the parameter $\lambda$, $\gamma$ controls the trade-off between overall accuracy and cost-sensitive performance. We will include more details on the ablation study and sensitivity analysis in the Appendix of the revised manuscript. | null | null | null | null | null | null | null | null |
DINO-WM: World Models on Pre-trained Visual Features enable Zero-shot Planning | Accept (poster) | Summary: The paper introduces DINO‐WM, building task‐agnostic world models for control and planning. Instead of operating directly in pixel space, the method leverages pre‐trained patch features from DINOv2 to encode observations into a rich, spatially-aware latent representation. The world model is trained offline using trajectories and employs a ViT-based transition model to predict future latent states. At test time, planning is carried out using model predictive control (MPC) with the cross-entropy method (CEM), allowing for zero-shot goal-reaching without additional reward signals or task-specific tuning.
Claims And Evidence: Yes
Methods And Evaluation Criteria: This proposed principle holds when facing multiple experimental settings. However, I am a little worried about the real-world planning and control performance since the robot is always moving when the proposed model is planning the actions. If the planning is too slow (53s), the control system could collapse.
Theoretical Claims: Theoretical claims are good.
Experimental Designs Or Analyses: The experimental setups are a bit easy. More experiments on tasks with complex textured backgrounds need to be evaluated.
Supplementary Material: Yes
Relation To Broader Scientific Literature: The model demonstrates robust generalization, successfully handling novel environment configurations and object variations. This suggests that the learned latent representations capture essential underlying dynamics beyond the specific scenarios seen during training.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: ## Strengths :
The approach allows for planning at test time without task-specific fine-tuning or the need for expert demonstrations. The experiments show that DINO-WM can generate effective control policies purely through offline training, which is a significant step toward more general-purpose models.
The authors validate their method on six simulation environments (e.g., maze navigation, push manipulation, robotic arm control, and deformable object manipulation). The extensive comparisons with several state-of-the-art baselines (IRIS, DreamerV3, TD-MPC2, among others) highlight the method’s superior performance in both success rates and reconstruction quality.
The model demonstrates robust generalization, successfully handling novel environment configurations and object variations.
## Weakness
DINO-WM requires a comprehensive offline dataset with sufficient state-action coverage. In real-world settings, gathering such data might be challenging, potentially limiting the method’s applicability outside of controlled environments.
All experiments are conducted in simulated tasks. Although the results are promising, further validation on real-world robotic platforms would be needed to assess practical deployment, as mentioned before.
The current planning framework operates solely in the action space. While effective for the tasks presented, incorporating hierarchical planning or multi-level control strategies could enhance performance on more complex or fine-grained tasks.
Although the paper includes ablation studies, additional analysis on hyperparameter sensitivity and computational trade-offs would provide deeper insights into the method’s robustness and scalability.
(Appendix A.4.2 is total a waste. Failed to understand why the authors want to add this paragraph.)
Other Comments Or Suggestions: No
Questions For Authors: More foundation models could be evaluated, like OpenVLA using Dino+Siglip. It would add some value if authors could investigate such setups.
Ethics Expertise Needed: ['Discrimination / Bias / Fairness Concerns']
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their insightful review, especially in highlighting that DINO-WM demonstrates “robust generalization” and “is a significant step toward more general-purpose models.” We now address the issues raised by the reviewer.
>**“I am a little worried about the real-world planning and control performance since the robot is always moving when the proposed model is planning the actions”**
Our planning framework follows the principles of Model Predictive Control (MPC)—a well-established approach in real-world robotics. The planner optimizes future actions but executes only a subset before re-planning. The robot’s continuous movement is not an issue, as each planning step integrates updated proprioceptive and visual observations, ensuring robust closed-loop control.
For planning efficiency, we improved our inference code for DINO-WM since submission and now it takes 15.89s for CEM compared to 53s reported in the manuscript. We provide further ways to speed up planning and we refer to our response to **Reviewer 57WD** due to space limits.
>**“More experiments on tasks with complex textured backgrounds need to be evaluated.”**
We evaluated ClutteredPushT, which adds a complex textured background to PushT. Please see our response to **Reviewer 57WD** for open-loop rollouts and final performance due to space limits.
>**“DINO-WM requires a comprehensive offline dataset with sufficient state-action coverage” “validation on real-world robotic platforms”**
We agree with the reviewer that DINO-WM requires a diverse dataset for effective planning. While it avoids online interactions, reward signals, and expert demonstrations, it still follows the general trade-off that broader coverage improves generalization. However, this limitation can be mitigated by continuously updating the world model with new experiences, enabling progressive improvement without requiring extensive real-world interactions upfront.
For real-world validation, independent researchers have already successfully deployed DINO-WM on real robots. We are consulting with the AC on how to share this while maintaining anonymity.
>**“incorporating hierarchical planning or multi-level control strategies could enhance performance on more complex or fine-grained tasks.”**
We totally agree that hierarchical planning can unlock further capabilities of DINO-WM. Our current work serves as a proof of concept that even single-level planning can be effective with pre-trained WMs. This establishes a strong foundation on which more advanced planning strategies can be built. We see this as an exciting and promising direction for future work.
>**“hyperparameter sensitivity and computational trade-offs” “method’s robustness and scalability”**
We conducted a hyperparameter sweep for CEM on PushT. We refer to our response to **Reviewer 57WD** for the computation time and performance tradeoff due to space limits.
For WM training, we ablated image sizes on PointMaze (see response to **Reviewer 4KWu**). The training hyperparameters for DINO-WM is quite robust, as the same hyperparameters work across all six reported environments.
>**“Appendix A.4.2 is total a waste.”**
Appendix A.4.2 demonstrates the necessity of the frame-level attention mask in DINO-WM’s predictor. The experiments (L749-753) supports our claim that the causal mask effectively ensures attention to past frames only, enabling the model to capture essential temporal dynamics such as velocity and acceleration. Would the reviewer prefer that we remove this section?
>**“More foundation models could be evaluated”**
We agree that evaluating more foundation models can provide further insights on the pros and cons of DINO-WM compared to existing models. To this end, we evaluated Genie, a foundation model for generating interactive environments. We train the Genie model on our PushT dataset (open-loop rollouts in [Figure 5](https://tinyurl.com/3659ydkn)). It is evident that Genie performs worse in prediction quality and future state estimation compared to DINO-WM, even with ground truth action conditioning.
| Model | LPIPS |
|--------------|--------|
| Genie | 0.043 |
| Ours | 0.007 |
For OpenVLA with SigLIP, we note that it is a language-conditioned feed-forward policy trained with expert trajectories, fundamentally differing from DINO-WM, which learns environment dynamics from any interaction dataset and performs goal-conditioned planning. Since our setting doesn’t assume expert data or language conditioning, OpenVLA is unsuitable for fine-tuning. Additionally, its released model assumes a fixed action space, making it infeasible to finetune the model on our datasets.
We also evaluate other foundational vision models such as pre-trained MAE as the encoder for DINO-WM. Please refer to our response to **Reviewer 4KWu** for the experiment details and final performance of this experiment due to the 5000 char response limit.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal. I may not express it clearly, but my original intention is not to compare with openvla. Instead, I hope the authors can explore the foundation feature combination more, such as DINO+SigLip.
I could raise my score, but I really would love to see results based on fusing multiple foundation features.
---
Reply to Comment 1.1.1:
Comment: Thank you for clarifying your question. We conducted additional experiments comparing models with single features (DINOv2, SigLip) and combined features (DINOv2 + SigLIP) following your suggestions. Results on the PushT environment are shown in the table below:
| Model | Feature Dim | Predictor Size | CEM | MPC | SSIM | LPIPS |
|------------------|-------------|----------------|------|------|-------|-------|
| DINOv2 (Ours) | 384 | 20,195,320 | 0.86 | 0.90 | 0.985 | 0.007 |
| SigLIP | 768 | 39,237,352 | 0.56 | 0.78 | 0.985 | 0.009 |
| DINOv2 + SigLIP | 1152 | 58,352,104 | 0.60 | 0.84 | 0.980 | 0.017 |
We hypothesize that incorporating SigLIP features alongside DINOv2 did not improve performance because our tasks do not benefit from the language grounding that SigLIP offers. In fact, the combined DINOv2 + SigLIP features slightly underperform the DINO-only baseline for the final task planning success rate (MPC). This may be due to the significantly larger embedding space, which requires a larger predictor and can be harder to train effectively with a fixed dataset size.
We hope this addresses your question, and we are happy to discuss any further questions you may have. | Summary: This work aims to train a world model using large vision pre-trained features on offline trajectories in a task-agnostic fashion. They use the resulting trained world model to plan out Push-T, Maze, Reach, Rope, and Granalur control tasks in a zero-shot manner. Specifically, the authors train a world model using DINOv2 pre-trained features in the latent space. With extensive experimentation, the authors clearly show the benefit of using such a world model based on leveraging pre-trained large vision model features as opposed to the norm in the field -- to train the encoder from scratch.
Claims And Evidence: I find all the claims made in the paper to be clear and supported by concrete experimental evidence.
Methods And Evaluation Criteria: Yes, the authors evaluate both plannings using the learned world model on (a) tasks on whose data they were trained on, (b) on novel unseen environment configurations -- testing the generalization capabilities as well as reporting visual similarity metrics such as LPIPS and SSIM when trained on a separate (and optional) decoder.
Theoretical Claims: No theory is involved in this paper.
Experimental Designs Or Analyses: Yes, and I don't have any particular concern with any experimental designs/analyses.
Supplementary Material: Yes, I skimmed over the supplementary material for details regarding the decoder architecture and hyperparameters. I also look at the detailed environment descriptions (specifically looking at the dataset sizes).
Relation To Broader Scientific Literature: I believe this work expands the current MBRL setting by incorporating the learned visual features from large vision models. Specifically training a world model on a task-agnostic dataset and being able to zero-shot plan on control tasks is a fundamental ability that a world model should possess. Although this paper does not evaluate on more complicated robotic tasks -- I believe this is a step in the right direction.
Essential References Not Discussed: None.
Other Strengths And Weaknesses: **Strengths**: I find the idea of using a pre-trained representation such as DINOv2's patch embeddings to be a really neat and simple idea to enable zero-shot planning. Leveraging the strengths of the large pre-trained visual models is an appealing research direction in contrast to encoders in MBRL/world model literature that have been trained from scratch.
**Weaknesses**: The authors can consider showing DINO-WM results on more robotic tasks (something like MetaWorld suite). I understanding that CEM planning over say picking or placing an object might be slightly challenging -- but it would be nice to see how DINO-WM does on those tasks. Even if it ends up failing, a discussion section describing the failures and potential reasons of that failure would be really helpful for the broader scientific community.
Other Comments Or Suggestions: 1. I find the choice of TDMPC2 as a baseline to be slightly strange. I believe TDMPC2 is learning a trivial (zero) representation because the only source for the TDMPC2-like world model to learn something meaningful is the reward. In the absence of reward, the "consistency" loss should immediately learn all zeros. Only reconstruction-based works like IRIS and DreamerV3 make sense as a baseline for this work. However, I haven't penalized the authors for this at all -- this was just a curious thought when I encountered the baseline.
Questions For Authors: 1. What is the architecture that is used for the (optional) decoder to generate the results in Figure 4? Is the decode restricted to be the same across all the baselines (IRIS and DreamerV3) -- in which case I'd like the authors to report the number of parameters each decoder has. It would be ideal to showcase the results with the same decoder structure or with similar parameters to ensure that it is indeed the representation that is being evaluated.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their insightful and constructive feedback, and for acknowledging that DINO-WM “expands the current MBRL setting by incorporating the learned visual features from large vision models”, and the idea of using pretrained features to be “neat and simple”. We now address the questions raised in the review below.
**DINO-WM results on more robotic tasks**
In addition to the robotic deformable manipulation environments and the robotic arm reaching task presented in the manuscript, we further provide results of DINO-WM on LIBERO [1], a tabletop environment manipulating diverse objects, with third-person image observations and a 7DoF action space. We note that CEM in this raw action space would be inefficient starting from completely random action samples (which would result in the robot mostly jittering in place). Therefore, instead of sampling randomly, we sample from a pre-trained diffusion policy [2]. DINO-WM is capable of long-horizon predictions and assists in selecting the best action trajectories. We provide visualizations for open-loop rollouts of DINO-WM in [Figure 1](https://tinyurl.com/5eckpz75) and videos for task planning [here](https://tinyurl.com/yptacs64). Without planning with DINO-WM, the diffusion policy model achieves a 35% success rate (across 20 trajectories). With planning using DINO-WM, the task success rate is improved to 55%. This demonstrates the effectiveness of DINO-WM, even with a high-dimensional continuous action space and a long task horizon.
**TD-MPC2 as a baseline**
We completely agree with the reviewer’s insight on why TD-MPC2 underperforms compared to other reconstruction-based baselines, as we have also discussed in the original manuscript (L290-293). Our motivation for including TD-MPC2 as a baseline is that it represents state-of-the-art work in world modeling while also incorporating planning to sample actions during training. We believe that comparing against TD-MPC2 provides valuable insight into the extent to which the performance of world models in this line of work depends on informative reward functions.
**Architecture and size for decoders**
We thank the reviewer for raising this constructive question. For the (optional) DINO-WM decoder, the architecture is based on VQ-VAE and consists of two stacked Decoder modules. Each Decoder is a transposed CNN with residual blocks. For DreamerV3 and IRIS, we use the decoders provided within the respective algorithms, both of which are CNN-based decoders.
We have included the number of parameters for the decoder in each baseline in the Table below. For DreamerV3 and IRIS, we follow the default parameters from the original implementations. In response to the reviewer’s suggestion, we have also matched the decoder sizes for DreamerV3 and IRIS (denoted as DreamerV3 Large and IRIS Large). The decoded open-loop rollout images are shown in [Figure 4](https://tinyurl.com/mrku8uza). We observe that DreamerV3 Large shows improved prediction quality compared to the original DreamerV3. However, IRIS Large demonstrates slightly worse performance than IRIS, which could be due to the fact that IRIS requires identical parameters for both its encoder and decoder, making the model more difficult to optimize.
| Model | Decoder Size |
|--------------------|--------------|
| Ours | 10,140,163 |
| IRIS | 1,821,827 |
| DreamerV3 | 6,985,667 |
| IRIS Large | 11,126,979 |
| DreamerV3 Large | 11,256,577 |
[1] LIBERO: Benchmarking Knowledge Transfer for Lifelong Robot Learning
[2] Diffusion Policy: Visuomotor Policy Learning via Action Diffusion | Summary: The paper introduces DINO-WM, a world model that operates within the DINOv2 representation space without the need for reconstruction. The model is trained using teacher forcing with a frame-level causal mask. Due to its task-agnostic nature, DINO-WM can be used for zero-shot model predictive control without demonstration collection, reward modeling, or learning inverse dynamics models.
Claims And Evidence: While leveraging DINOv2 representations as world states achieves superior results compared to image reconstruction, its working mechanism is unclear. The authors claim that reconstruction-based methods contain insufficient task information, but they do not provide evidence that the frozen DINOv2 representations are better in task representation.
Methods And Evaluation Criteria: The proposed methods are aligned with the motivation and evaluated using appropriate criteria.
Theoretical Claims: I have checked the theoretical claims in this paper.
Experimental Designs Or Analyses: The paper provides extensive experiments with sufficient details.
Supplementary Material: I have reviewed all appendices as well as the supplementary code implementation.
Relation To Broader Scientific Literature: The paper introduces DINOv2 as the representation space of the world model, thereby alleviating the need for image reconstruction both during training and testing. While this concept of latent world model is not new, the proposed world model demonstrates superior performance and could serve as a solid baseline for future research.
Essential References Not Discussed: As far as I know, all closely related works are cited appropriately.
Other Strengths And Weaknesses: W1) **Latent world model is not new.** Many related works have adopted the latent world model design, and switching to DINOv2 representation space is somewhat limited in technical innovation. However, the extensive experiments still make this work an insightful contribution to the community.
W2) **The reasons why DINOv2 performs better are not clearly analyzed.** As I mentioned earlier, the authors fail to provide evidence that the frozen DINOv2 representations are better in task representation. In addition, the authors only compare backbones with global representations in Tables 2-4, which are inherently unsuitable for capturing spatial relationships. It would be more convincing if other semantic-rich alternatives, such as MAE [1], are included in the study.
[1] Masked Autoencoders Are Scalable Vision Learners. Kaiming He, et al.
Other Comments Or Suggestions: I believe this paper presents a solid practice in world model design and model predictive control. The generality of the proposed method could inspire future research a lot.
Questions For Authors: Q1) **DINOv2 input resolution.** If I remember it correctly, the pretrained DINOv2 backbone accepts images of 224x224 pixels as input. Therefore, I am curious why the authors resize the input images to 196x196 pixels (Line 927). Does it still work well without finetuning?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their constructive feedback, and for acknowledging that DINO-WM “demonstrates superior performance and could serve as a solid baseline for future research” and “could inspire future research a lot.” We address the issues raised in the review below.
>**“why DINOv2 performs better are not clearly analyzed”“they do not provide evidence that the frozen DINOv2 representations are better in task representation”**
While our manuscript presents results on feature reconstruction and task planning, demonstrating that DINOv2 patch features enable more accurate world modeling, we now provide further analysis of the features themselves. A well-established method for evaluating feature quality in downstream control tasks is linear probing, which assesses how well the features encode task-relevant state information. We conducted linear probe experiments on PointMaze, PushT, and Wall, comparing DINO-S Patch, DINO-S CLS, DINO-B Patch, IR3M, and Pre-trained MAE (DINO-S and DINO-B denotes DINO model with ViT Small and Base architecture, respectively). The validation loss for these linear probes is reported in [Table 2](https://tinyurl.com/mpurtfs), where DINO-S Patch and DINO-B Patch achieve the lowest validation loss, indicating their superior task representation capabilities.
To further analyze the DINOv2 features, we visualize the principal components after performing PCA on the patch embeddings from DINO-S on our deformable environments which has the most complex state space, following a procedure similar to that in the original DINOv2 paper ([Figure 2](https://tinyurl.com/r39zsb4r)). The visualizations show that DINOv2 features effectively identify objects of interest, distinguish the agent, and separate the foreground from the background, reinforcing that frozen DINOv2 representations encode task-relevant information effectively.
>**“the authors only compare backbones with global representations in Tables 2-4, which are inherently unsuitable for capturing spatial relationships”“It would be more convincing if other semantic-rich alternatives, such as MAE [1], are included in the study”**
We totally agree that having more baselines with semantic-rich baselines would provide further insights. We first note that for the IRIS baseline which we compare with in Table 1 and Table 3 of the manuscript, it encodes an image with 16 tokens instead of a global representation, which is also capable of capturing spatial relationships explicitly.
We ran a new experiment on PushT using a pre-trained MAE suggested by the reviewer. The performance is presented in the Table below. We observe that the WM trained with MAE features has lower final MPC performance than the original DINO-WM. We hypothesize this is because MAE prioritizes reconstruction over task relevance, as indicated by linear probe results. Additionally, even the smallest MAE encoder, which we used in the experiments, is still significantly larger than the DINO-S model, with feature dimensionality twice than that of DINO-S. This highlights that a larger and more expensive feature extractor does not necessarily translate to better task performance, and the higher computational cost of MAE makes it a less efficient choice.
| Encoder Model | Encoder Param Count | Feature Size | MPC |
|--------------|---------------------|--------------|------|
| DINOv2 | 22,056,576 | 384 | 0.90 |
| MAE | 85,798,656 | 768 | 0.86 |
>**“Q1 DINOv2 input resolution.”**
Although DINOv2 is pre-trained on 224×224 images, its ViT-based architecture processes images as fixed-size patches, allowing it to handle arbitrary image sizes as long as the height and width are divisible by the patch size (e.g., 14 for DINO-S). In fact, the original DINOv2 paper [1] explicitly demonstrates its ability to work with non-224×224 images, including rectangular and high-resolution images (Section 7.5).
In DINO-WM, our choice of 196×196 input resolution is purely an engineering decision. Our decoder assumes a 16x spatial upscaling, and we aimed to match the environment’s 224×224 observation shape after decoding. Since DINOv2’s fixed patch size is 14, an input of 196×196 results in a 14×14 patch grid, which decodes to 224×224 after the 16× upscaling. However, using 224×224 directly with an additional interpolation layer at the decoder output would be equally valid.
To verify this, we conducted an ablation on PointMaze with input sizes ranging from 28×28 to 280×280. The results in [Table 3](https://tinyurl.com/ycyua34p) show that image size 224×224 performs on par with 196×196. This shows that our choice does not limit DINOv2’s capabilities. Moreover, the ability of DINO-WM to process images of various sizes enhances its flexibility in modeling environments of varying complexity, allowing for adaptive trade-offs between granularity and computational efficiency.
[1] DINOv2: Learning Robust Visual Features without Supervision
---
Rebuttal Comment 1.1:
Comment: Thank the authors for their comprehensive response. The extensive experiments have addressed my concerns. I will increase my rating to 4 and hope the authors will include the results in the revision.
---
Reply to Comment 1.1.1:
Comment: Thank you for your thoughtful review and for increasing your score. We're glad the additional experiments addressed your concerns, and we will include the results in the revised manuscript. | Summary: This paper proposes DINO-WM, a task-agnostic world model that predicts future visual features using DINOv2 embeddings instead of reconstructing raw observations. Trained on offline trajectories with a Vision Transformer, it enables zero-shot test-time optimization via model predictive control. DINO-WM outperforms prior methods in goal-reaching success (45% improvement) and world modeling quality (56% improvement) across diverse tasks like maze navigation and robotic manipulation. By leveraging high-level feature prediction, it achieves flexible planning without task-specific retraining or auxiliary supervision.
Claims And Evidence: DINO-WM claims to produce high-quality world modeling, supported by a 56% improvement in LPIPS over prior methods. It claims to achieve high success in reaching arbitrary goals, with a 45% improvement over previous approaches. Additionally, it claims to generalize across task variations, such as different maze layouts and object shapes, outperforming prior work in diverse environments.
Methods And Evaluation Criteria: Yes
Theoretical Claims: No theoretical claims.
Experimental Designs Or Analyses: Yes. The experiments are well-structured to evaluate DINO-WM’s ability to learn from offline datasets, optimize behavior at test time, and generalize across tasks. Comparisons against state-of-the-art baselines show that DINO-WM significantly improves world modeling quality. Results confirm that using DINOv2 patch embeddings enhances planning performance.
Supplementary Material: Yes.
Relation To Broader Scientific Literature: Prior world modeling work learns the dynamics model together with the embedding function. This work leverages frozen DINO features and learns the dynamics model in isolation, showing that a good embedding space makes world modeling + MPC a winning recipe.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Most of the experiments focus on variations of reaching tasks (e.g., non-prehensile manipulation), making it unclear whether the same performance will hold for contact-rich manipulation. Specifically, it is uncertain whether frozen DINOv2 feature patches provide sufficient resolution for contact-rich tasks (even just for pick-and-place tasks). Additionally, for dexterous manipulation with high-DOF action spaces, it is unclear whether planning will be fast enough to find a solution, given that planar pushing already takes ~50 seconds planning time. Lastly, in cluttered environments with complex backgrounds, the model's performance remains uncertain.
Other Comments Or Suggestions: No.
Questions For Authors: 1. Can you show examples of pick-and-place tasks, preferably with a non-trivial SE(3) action space beyond simple table-top pick-and-place?
2. Can you demonstrate your existing experiments in environments with cluttered backgrounds?
3. Does increasing the latent-space planning horizon degrade performance?
4. How does changing the patch size impact performance?
5. Figure 4 is very compelling, but was your decoder trained on the same data as DreamerV3 and IRIS?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their constructive feedback, for identifying that DINO-WM “significantly improves world modeling quality” and shows “a good embedding space makes world modeling + MPC a winning recipe”. We address the issues raised in the review below.
>**“it is uncertain whether frozen DINOv2 feature patches provide sufficient resolution for contact-rich tasks”**
We show that frozen DINOv2 patch features capture state accurately, even in contact-rich tasks like PushT and deformable manipulation. Figure 4 in the manuscript illustrates how they precisely represent particle positions—challenging for global features like ImageNet-pretrained ResNet or R3M.
We further trained a DINO-WM on LIBERO [1], a tabletop environment manipulating diverse objects, with third-person image observations and a 7DoF action space. [Figure 1](https://tinyurl.com/5eckpz75) compares open-loop rollouts of WMs trained with DINO patch features vs. DINO CLS features. Reconstruction scores for the predicted frames can be seen in [Table 7](https://tinyurl.com/y4thskrv). This shows that DINO patch features can accurately represent the object within the gripper, whereas global representations struggle with dynamic elements—both the object in the gripper and the gripper itself. This further reinforces the suitability of patch features for contact-rich interactions.
In our response to **Reviewer 4KWu**, we provide linear probe results on the environment state using DINO patch features, along with PCA visualizations, demonstrating its ability to capture task-relevant information.
>**“it is unclear whether planning will be fast enough”**
We address planning efficiency in 3 ways.
1. We improved our inference code for DINO-WM since submission. For the same hyperparameters, planning now takes 15.89s, compared to 53s reported in the manuscript.
2. While we report results for CEM with 100 samples per iteration, this can be adjusted based on task complexity. [Table 1](https://tinyurl.com/2mn6aben) presents the tradeoff between sample size, planning time, and performance on PushT. This flexibility allows us to balance computational efficiency with performance depending on task requirements.
3. Training and planning with a DINO-WM using a larger frameskip can speed up planning. This effectively increases the planning horizon as each prediction covers a longer time span. We train a DINO-WM with frameskip 25 on PushT and report the performance in [Table 6](https://tinyurl.com/525kjsza). While modeling long-term dependencies is more challenging as frameskip increases, it presents an opportunity to improve efficiency, making hierarchical planning promising directions for future research.
>**“In cluttered environments with complex backgrounds, the model's performance remains uncertain”**
We ran experiments on ClutteredPushT, where the background of the PushT environment is a cluttered real-world tabletop. Open-loop rollouts of a trained DINO-WM is in [Figure 3](https://tinyurl.com/49j2c5av), and we report the task performance in [Table 5](https://tinyurl.com/4wvxtw84) compared to the original PushT.
In ClutteredPushT, DINO-WM can still identify the effect of agent actions and predict the motion of relevant objects. The final planning performance is only marginally degraded. This shows DINO-WM’s capability of modeling environments with complex backgrounds.
Additionally, we have trained an unconditional world model on the CLEVRER [2] dataset where multiple objects may collide. Videos of open-loop rollouts are provided [here](https://tinyurl.com/cx59vz8p).
**Q1. Environment with SE(3) action space.**
We further provide results of DINO-WM on LIBERO which has a 7DoF action space. Due to space limit, we refer to our response to **Reviewer dmt6** for planning videos and performance.
**Q2.** Addressed in the ClutteredPushT experiment.
**Q3. Does increasing the latent-space planning horizon degrade performance?**
Not necessarily. Longer planning horizon enables discovering states that are further away, with the tradeoff that the WM’s long-term prediction would be less accurate. We balance this via planning with receding horizons as in our deformable environments.
**Q4.** We conduct ablations with different image sizes (yielding different patch sizes after DINO encoder) on PointMaze. We report the planning success rate (SR) for CEM, MPC, and the predicted frame’s image scores in [Table 3](https://tinyurl.com/ycyua34p).
With MPC, all models eventually obtain a decent SR, but models with larger patch size are able to achieve better SR with CEM. This shows that bigger patch sizes can contain more precise state information, making the world model more accurate for zero-shot open-loop planning.
**Q5.** Yes, our decoder is trained on the same data with all baselines including DreamerV3 and IRIS.
[1] LIBERO: Benchmarking Knowledge Transfer for Lifelong Robot Learning
[2] CLEVRER: CoLlision Events for Video REpresentation and Reasoning | null | null | null | null | null | null |
CHATS: Combining Human-Aligned Optimization and Test-Time Sampling for Text-to-Image Generation | Accept (poster) | Summary: The paper introduces CHATS, a text-to-image generation framework that integrates human preference alignment with test-time sampling. It employs two distinct models to capture preferred and dispreferred distributions, trained with a new objective based on Direct Preference Optimization (DPO), and uses a proxy-prompt sampling strategy. The main experimental results claim CHATS outperforms traditional methods like Diffusion-DPO across benchmarks。
## Update after rebuttal
Thanks to the authors for their efforts in their rebuttal. I decided to keep my initial positive rating.
Claims And Evidence: The claims of improved performance and data efficiency are supported by experiments on SD1.5, SDXL, and an in-house model, with results in Tables 1-3 showing CHATS outperforming baselines.
Methods And Evaluation Criteria: The proposed CHATS method, combining human preference alignment and test-time sampling, makes sense for improving text-to-image generation quality and alignment.
Theoretical Claims: I reviewed the correctness of the training objective derivation for CHATS in Section A.2 (Appendix A). The logic is sound.
Experimental Designs Or Analyses: I checked the experimental design in Section 5, focusing on Tables 1-3. The comparison with Standard and Diffusion-DPO baselines across SD1.5, SDXL, and In-house T2I models is valid, and the metrics (HPS v2, ImageReward, PickScore) are well-established.
Supplementary Material: Yes. I reviewed Appendix A (Mathematical Derivations), specifically A.1 and A.2, which detail the global optimum and CHATS training objective.
Relation To Broader Scientific Literature: CHATS extends DPO (Rafailov et al., 2023) and CFG (Ho & Salimans, 2022) by integrating preference alignment and sampling.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Weakness:
* The training objective (Section 4.1, Eq. 12) uses Jensen’s inequality to approximate an intractable expectation, but the paper doesn’t quantify the impact of this simplification. Could this lead to suboptimal convergence or bias in the preferred/dispreferred split?
* The small dataset size (7,459 pairs) is a strength for efficiency but a weakness for validation. The paper doesn’t test CHATS on larger, messier datasets.
* The default $\alpha$=0.5 and $s$=5 work well, but the ablation (Table 4, Fig. 3) suggests that sensitivity isn’t deeply explored.
* Using two models increases the computing cost compared to single-model methods like Diffusion-DPO. The paper doesn’t report runtime or memory metrics.
Other Comments Or Suggestions: N/A.
Questions For Authors: Please refer to the weakness.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable and insightful feedback!
### **1. Clarification on approximating an intractable expectation using Jensen’s inequality**
We acknowledge the reviewer's concern regarding the use of Jensen’s inequality in our derivation of Eq. 12. Specifically, we approximate the intractable expectation as follows:
$$
\mathcal{L}(z\_0^+, z\_0^-) = -\log \sigma\left( \mathbb{E}[X(z\_{0:T})] \right) \leq -\mathbb{E} \left[\log \sigma\left( X(z\_{0:T}) \right)\right],
$$
please refer to Eq.39 and Eq.40 in appendix for more details.
Although applying Jensen’s inequality in this manner provides an upper bound rather than the exact loss, our extensive empirical evaluations across multiple benchmarks (e.g., Tables 1–3) indicate that the resulting training dynamics lead to consistent improvements in generation quality and robust convergence. In practice, the bias introduced by this approximation is effectively absorbed during optimization, and the desired preferred/dispreferred split is maintained.
While a tighter approximation might further reduce any potential bias, the experimental results confirm that our current approximation does not lead to suboptimal convergence or a misrepresentation of the two distributions. Quantifying the exact impact of this inequality remains challenging due to the inherent complexity of the diffusion process. However, the empirical performance gains observed across diverse datasets and model architectures strongly suggest that the approximation error is minimal.
### **2. Validation on different dataset**
Although the 7,459-pair dataset (OIP) demonstrates our method's high data efficiency, we have also evaluated CHATS on a larger and noisier dataset. Our experiments on the PaP v2 dataset, which comprises 851,293 preference pairs, are reported in Table 5 and confirm that CHATS consistently outperforms baseline methods on both datasets. Furthermore, our results indicate that training with the higher-quality OIP dataset yields better performance, as discussed in "A small high-quality preference dataset is enough" (Line 379, right column). Detailed information about the datasets can be found in the appendix, Line 681-687.
### **3. More analysis on sensitivity of $\alpha$ and $s$**
We conduct additional experiments to further examine the impact of hyperparameters. The results on SDXL are summarized below.
For the guidance scale $s$, we obtained the following:
|$s$| HPS v2 on Photo ($\uparrow$) |
|-|-|
|2|28.83|
|3| 29.36|
|4| 29.81|
| 5 (default) | 29.62 |
| 6 | 29.61|
| 7 | 29.39|
| 8 | 28.72|
These results indicate that while our default value $s=5$ works well, a value of $s=4$ yields a slightly higher score. It is important to note that $s$ is a user-specified hyperparameter that governs the trade-off between concentrating generation on high semantic-density areas and maintaining output diversity. As such, it is not unique to CHATS and was not extensively tuned in our framework.
Regarding $\alpha$, our experiments yield the following:
| $\alpha$ | HPS v2 on Photo ($\uparrow$) |
|-|-|
| 0.5 (default) | 29.62 |
| 0.0 | 29.36 |
| -0.1 | 29.34 |
| -0.3 | 29.25 |
These results confirm that the best performance is achieved around $\alpha = 0.5$, which is consistent with our analysis presented in Figure 5 and Line 765-769 of the appendix.
In summary, our ablation studies demonstrate that CHATS is robust to moderate variations in these hyperparameters. The default settings of $s=5$ and $\alpha=0.5$ represent a high-quality generation, with minor adjustments yielding comparable performance.
### **4. Computational cost**
We would like to point out that Table 6 already reports computational cost metrics in terms of images generated per second, comparing CHATS with single-model methods like Diffusion-DPO. In addition, we have performed supplementary experiments to address the increased cost introduced by the dual-model architecture. As explained in Line 436-439 (right column), by simultaneously distilling both the guidance scale (i.e., $s$ in Eq. 17) and the two models into a single model, the extra inference cost can be **completely** eliminated while still achieving high-quality generation. For example, the following table shows that our distillation variant (CHATS-distill) not only reduces memory usage and increases throughput relative to CHATS but also maintains the improved HPS v2 score:
|Method|Memory($\downarrow$) |Throughput ($\uparrow$)|HPS v2 on Photo($\uparrow$)|
|-|-|-|-|
|Standard|1$\times$|1$\times$|26.88
|CHATS|2$\times$|0.97$\times$|29.62
|CHATS-distill|1$\times$|2$\times$| 29.53
Thus, while the dual-model approach in CHATS does introduce additional cost in its raw form, our distillation strategy fully recovers efficiency without compromising the quality of the generated images. | Summary: This paper presents CHATS, a framework for text-to-image generation (T2I) that enhances both text-image alignment and generation quality. Unlike traditional approaches that separately apply human preference alignment and classifier-free guidance, CHATS integrates both components to optimize text-to-image diffusion models. The proposed method models both preferred and dispreferred distributions and employs a proxy-prompt-based sampling strategy to leverage useful information from both. CHATS demonstrates data efficiency, achieving good performance with minimal fine-tuning data. Experimental results show that CHATS outperforms existing preference alignment techniques on some evaluation metrics.
Claims And Evidence: NA
Methods And Evaluation Criteria: yes
Theoretical Claims: NA
Experimental Designs Or Analyses: - The performance improvement, especially over SDXL + Diffusion-DPO, on Tab 1 - Tab 3 is marginal. The effectiveness of the proposed method may not be significantly verified.
Supplementary Material: all
Relation To Broader Scientific Literature: NA
Essential References Not Discussed: [d] Null-text Inversion for Editing Real Images using Guided Diffusion Models. CVPR’23
Other Strengths And Weaknesses: Strengths:
+ A new framework combining RL and guidance. The paper introduces an approach that jointly optimizes human preference alignment and test-time sampling, addressing some limitations in existing text-to-image models.
+ Data-Efficient Fine-Tuning. CHATS achieves good performance with a small, high-quality fine-tuning dataset, making it more practical and resource-efficient for real-world applications.
Weaknesses:
- Lack of novelty. Human alignment methods for diffusion models [a, b, c] and learning negative/disprefer concepts for text-time sampling [d] have been already proposed. This method seems to simply combine exisiting technoligies.
- The performance improvement, especially over SDXL + Diffusion-DPO, on Tab 1 - Tab 3 is marginal. The effectiveness of the proposed method may not be significantly verified.
- The deployment cost may be double compared with a single diffusion model, due to the introduction of the minus model. Besides, the efficiency can also be influenced as shown in Tab 6.
[a] Diffusion Model Alignment Using Direct Preference Optimization. CVPR’24
[b] Training Diffusion Models with Reinforcement Learning. ICLR’22
[c] DPOK: Reinforcement Learning for Fine-tuning Text-to-Image Diffusion Models. NeurIPS’23
[d] Null-text Inversion for Editing Real Images using Guided Diffusion Models. CVPR’23
Other Comments Or Suggestions: NA
Questions For Authors: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for your comments.
### **1. Novelty**
We respectfully disagree with the assertion that CHATS merely *combines existing technologies*. In our work, we explicitly differentiate our approach from prior DPO methods for diffusion models in related work [1–3] in Line 135-146 (left column). In fact, the 3 papers you cited are **evidences that support** our novelty rather than undermining it. The key insight of our CHATS is to integrate human preference optimization finetuning with the sampling process, leveraging their inherent synergy to refine image generation. However, none of DPO methods you mentioned explore it.
Moreover, Null-text Inversion [4] concentrates on image editing by optimizing the null-text embeddings **using gradient updates** to achieve a superior DDIM inversion. In contrast, the proxy-prompt-based sampling strategy employed by CHATS is primarily designed to enhance sampling efficiency by reducing the number of forward passes from 3 in Eq.16 to 2 in Eq.17, while all prompt features remain **frozen and untrained**. Additionally, [4] neither utilizes human preference data nor trains separate models to explicitly capture the distributions of preferred and dispreferred images, and the dual architectures used by CHATS naturally separate these distributions into two distinct parts. As a result, our method is **not simply a combination of existing techniques but rather a novel integration** that leverages human preference data to improve the generative process. To the best of our knowledge, such an approach has **not** been previously explored in the context of text-to-image generation.
### **2. Clarification on performance improvement**
We acknowledge that the numerical improvements, especially when comparing SDXL + Diffusion-DPO with CHATS, may appear modest on individual benchmarks. However, we emphasize that the improvements are **consistent** across multiple benchmark evaluations and are observed in both diffusion models (SDXL) and flow matching models (In-house T2I). Our extensive evaluations across aesthetic scores, GenEval, and DPG-Bench consistently demonstrate that CHATS improves aesthetic alignment and generation quality. The consistent gains across diverse datasets and model architectures validate the effectiveness of our approach. Moreover, these improvements are achieved with only a small high-quality fine-tuning dataset, highlighting the data efficiency of CHATS.
### **3. Clarification of deployment cost**
While CHATS introduces a dual-model architecture that, if used naively, doubles the mode size and slightly decreases the throughputs compared to a single diffusion model (as shown in Table 6), this cost can be **completely eliminated** through distillation. As noted in Line 436–439 (right column), by simultaneously distilling the guidance scale (i.e., $s$ in Eq. 17) and the two models into a single model, we achieve both high efficiency and high-quality generation. For example, our distillation variant, “CHATS-distill,” attains a memory footprint and throughput comparable to or even exceeding that of the standard model, while retaining the improved HPS v2 score, as demonstrated in the following table:
| Method | Memory ($\downarrow$) | Throughput ($\uparrow$) | HPS v2 on Photo ($\uparrow$) |
|----------------|--------------------------|---------------------------|-------------------------------|
| Standard | 1× | 1× | 26.88 |
| CHATS | 2× | 0.97× | 29.62 |
| CHATS-distill | 1× | 2× | 29.53 |
Thus, while the raw dual-model approach incurs additional computational cost, our results show that a distilled version can match or surpass the efficiency of a single model without compromising the quality gains provided by CHATS.
**References**
[1] Diffusion Model Alignment Using Direct Preference Optimization, CVPR’24.
[2] Training Diffusion Models with Reinforcement Learning, ICLR’24.
[3] DPOK: Reinforcement Learning for Fine-tuning Text-to-Image Diffusion Models, NeurIPS’23.
[4] Null-text Inversion for Editing Real Images using Guided Diffusion Models, CVPR’23. | Summary: This paper aims to improve the performance of text-to-image diffusion models by using a human preference dataset. To make better use of DPO and CFG, they propose a training objective that trains a preferred model and a dispreferred model. During the sampling step, they introduce a new guidance method that incorporates both models. Experimental results show performance improvements on several benchmark datasets.
## Update after rebuttal
Thank you for the authors' response and the additional responses to further comments. After considering the authors' rebuttal and the reviews from other reviewers, I have revised my rating to a borderline accept. This work is meaningful in that it explores DPO applied to diffusion models sampled with CFG. However, this approach requires twice the memory unless a distillation method is applied. The authors have properly presented and analyzed the strengths of the proposed method, but the need for further hyperparameter tuning and the need for a distillation method are still apparent. Therefore, while I lean towards a positive evaluation, my support is not particularly strong.
Claims And Evidence: The primary claim of the paper is that training and sampling with separate models for preferred and dispreferred outputs improves text-to-image diffusion models aligned with human preferences. The proposed method modifies the DPO objective to accommodate this dual-model setup and introduces a corresponding guidance-based sampling strategy. While the experimental results partially support this claim, I find that the advantages of this approach over existing methods are not clear.
* The modification of the DPO objective to fit the dual-model framework requires further justification. The traditional DPO is designed from reinforcement learning setups where a single policy model represents the reward function. Splitting it into two models raises the question of whether the theoretical foundations of DPO still hold in this new formulation.
* It is also unclear whether the (dis)preferred model only receives training signals from its corresponding (dis)preferred dataset and, if so, whether this setup can truly be considered a dual-objective framework.
* A key concern is to understand the convergence properties of the diffusion model under the proposed objective, i.e., where does the optimal policy of the new formulation converge?
* In the sampling step, the derivation of Eq. (15) requires further clarification. For example, in Eq. (16), what does it mean for $\epsilon_{\theta^-}(z_t, ,c)$ to receive a signal in the positive direction? A more intuitive or theoretical explanation of the behavior of this term would strengthen the argument.
* The paper introduces proxy prompts but does not provide clear evidence of their effectiveness. Specifically, it is unclear why linear interpolation remains effective in this setting. The lack of a strong theoretical justification for this design choice weakens the argument for its necessity.
* Training and sampling with two models raises concerns about memory efficiency.
Methods And Evaluation Criteria: Similar to my concerns above, it is uncertain whether the proposed dual-model training scheme preserves the theoretical foundations of DPO and whether it leads to a well-defined optimal policy.
Theoretical Claims: I have reviewed the derivations of RLHF and the training objective of CHATS. However, I am particularly interested in whether the convergence point of the loss function from Eq. (9) is consistent with that of existing RLHF or DPO methods. A clearer discussion of how the loss formulation ensures consistency with established preference learning frameworks would strengthen the paper's claims.
Experimental Designs Or Analyses: I have reviewed Experiments section of the paper and have the following concerns:
* In Table 4, the ablation study shows that a single model trained on the full dataset already outperforms the baseline. This raises the question of whether the proposed two-model approach provides a significant advantage over a simpler alternative. A clearer rationale is needed to demonstrate the need to train separate preferred and dispreferred models.
* In Figure 3, the paper provides a sensitivity analysis for $\alpha$, but while the main text discusses cases where $\alpha$ is negative, the experiments do not include results for negative $\alpha$. Including this analysis would strengthen the empirical evaluation by providing a more complete picture of the behavior of the method.
Supplementary Material: I did a rough review for the appendix, focusing on the mathematical derivation section.
Relation To Broader Scientific Literature: The paper proposes a new preference optimization approach for diffusion models, building on methods widely used in large language models (LLMs) and other domains. By extending preference optimization to the diffusion model framework, the work introduces a potentially impactful direction for aligning generative models with human preferences.
If the proposed method demonstrates significant advantages over existing approaches, it could have broader applicability beyond diffusion models, potentially influencing preference optimization strategies in LLMs and other generative models. A stronger discussion on the generalizability of this approach to different architectures would further highlight its relevance to the broader scientific community.
Essential References Not Discussed: As far as I know, the essential papers seem to be well-referenced.
Other Strengths And Weaknesses: I have discussed in above sections.
Other Comments Or Suggestions: N/A
Questions For Authors: I would like the response to primarily address the issues and questions raised in the Claims and Evidence section.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your helpful feedback and questions! Due to space limitations, we provide responses to your main comments. Further questions can be discussed in subsequent responses.
### **1. Theoretical foundations & convergence properties of CHATS**
Given that DPO is invariant to affine transformations of the reward, for reward $R'(z_{0:T}, c) = a \cdot R(z_{0:T}, c) + b$, the optimal policy becomes (cf. Eq.29):
$$
p^*(z_{0:T}\mid c) = \frac{p_{\mathrm{ref}}(z_{0:T}\mid c) e^{a \cdot R(z_{0:T}, c) + b}}{Z'(c)}.
$$
with $\beta$ omitted for simplicity.
CHATS decomposes reward of traditional DPO (Eq.30) into two parts:
$$
R^+(\theta^+) = \log\frac{p_{\theta^+}(z_{0:T}^+\mid c)}{p_{\mathrm{ref}}(z_{0:T}^+\mid c)}, \quad
R^-(\theta^-) = \log\frac{p_{\theta^-}(z_{0:T}^-\mid c)}{p_{\mathrm{ref}}(z_{0:T}^-\mid c)},
$$
with $\beta$ and $\log Z(c)$ omitted since they are constants for optimization. Defining the **effective reward** as: $R_{\mathrm{CHATS}} = R^+(\theta^+) + R^-(\theta^-)$, the **optimal distribution** becomes:
$$
p^*(z_{0:T}\mid c) = \frac{p_{\mathrm{ref}}(z_{0:T}\mid c) e^{R_{\mathrm{CHATS}}}}{Z^{\text{CHATS}}(c)}.
$$
Under the assumption of $L$-smoothness and using standard gradient descent, we obtain the inequality:
$$
\mathcal{L}_{k+1} \le \mathcal{L}_k - \frac{\eta}{2} \|\nabla \mathcal{L}(\theta^+_k, \theta^-_k)\|^2,
$$
which ensures the CHATS loss (Eq. 9) decreases and converges. Since their combination recovers the same optimal joint distribution as traditional DPO methods [1], CHATS preserves theoretical foundations of DPO.
### **2. Dual training signals**
In our training procedure, we do not split the data into separate (dis)preferred subsets. Instead, when minimizing losses such as those in Eq. 13 and 14, we select a ranked preference pair $(z_0^+, z_0^-)$ from the entire dataset. This single pair is then used to simultaneously update both the preferred and dispreferred models within a unified dual-objective framework. This joint training approach ensures that both models receive complementary signals derived from the ranked pair.
### **3. Clarification on Eq. 15**
As indicated in Line 220–226 (right column), when $0 < \alpha < \frac{1+s}{s}$, the dispreferred distribution $p_{\theta^-}$ is partially incorporated so that it contributes useful patterns while remaining less influential than $p_{\theta^+}$. In other words, the dispreferred model's noise prediction is toned down, partially suppressing undesirable patterns while preserving beneficial information. Conversely, when $\alpha < 0$, the terms $p_{\theta^-}(z_t | c)^{\alpha s}$ and $p_{\theta^-}(z_t)^{-(1+\alpha)s}$ actively push samples away from undesired modes, effectively suppressing the entire output of the dispreferred model, similar to using a null prompt $\varnothing$.
### **4. Justification on proxy prompt**
Generative models represent prompts in a continuous embedding space where linear operations reflect meaningful semantic changes. For example, word embedding arithmetic [2] (e.g., “queen” ≈ “king” – “man” + “woman”) shows that semantic attributes can be linearly combined. Thus, interpolating between a prompt $c$ and the null prompt $\varnothing$ (i.e., forming $\hat{c} = -\alpha c + (1+\alpha) \varnothing$) not only reliably captures the semantic content but also reduces the number of forward passes—from 3 in Eq.16 to 2 in Eq.17, thereby cutting inference cost by about 1/3.
### **5. Memory efficiency**
While the dual architectures in CHATS introduce additional cost (see Table 6), as noted in Line 436–439 (right column), this extra inference cost can be **completely eliminated** via distillation. As shown in the table below, by simultaneously distilling the guidance scale (i.e., $s$ in Eq. 17) and the two models into a single one, CHATS achieves both high efficiency and high-quality generation (Model: SDXL).
|Method|Memory($\downarrow$) |Throughput ($\uparrow$)|HPS v2 on Photo($\uparrow$)|
|-|-|-|-|
|Standard|1$\times$|1$\times$|26.88
|CHATS|2$\times$|0.97$\times$|29.62
|CHATS-distill|1$\times$|2$\times$| 29.53
### **6. Justification on Table 4**
Even though a single model trained on the full dataset outperforms the baseline, our CHATS method consistently delivers further improvements as shown in Table 4. Similar trends are observed with SDXL:
|Config|HPS v2 on Photo ($\uparrow$)|
|-|-|
| single model (full data) + $s$=5 | 28.20
| two models + $s$=5,$\alpha$=0.5 | 29.62
### **7. More analysis on $\alpha$**
We show more ablation on $\alpha$ on table below (SDXL, $s$=5):
|$\alpha$|HPS v2 on Photo($\uparrow$)|
|-|-|
|0.5 (default)|29.62 |
|0.0| 29.36|
|-0.1| 29.34|
| -0.3| 29.25|
We observe the best choice of $\alpha$ occurs around 0.5, consistent with our analysis in Fig.5 and Line 765-769 in appendix.
**References**
[1] Diffusion Model Alignment Using Direct Preference Optimization, CVPR'24
[2] Distributed Representations of Words and Phrases and their Compositionality, NeurIPS'13
---
Rebuttal Comment 1.1:
Comment: Thank you for the authors’ response. Some of my concerns have been addressed. In particular, I had overlooked and misunderstood the dual training objective, but the theoretical analysis and explanations provided by the authors help clarify this point. Since this is my primary concern, it brings me to at least a borderline recommendation for this paper. However, I still have some remaining concerns that make me hesitant to move toward acceptance just yet.
First, while it makes intuitive sense what guidance the authors want each of their terms in Eq. (15) to give as a sampling method, it is theoretically unclear what distribution $\tilde{p}_\theta$ should ultimately follow. For example, classifier-free guidance can be interpreted as, from classifier guidance using Bayes’ rule, sharpening a classifier and then replacing it with a combination of unconditional and conditional denoised networks. I wonder if a similar theoretical explanation could be applied here as well.
Additionally, I am not sure that the current explanation of proxy prompts convincingly supports the proposed method. Modern diffusion models use much more complex text encoders than [2], and the claim that the text embeddings follow linear properties based on the analysis of [2] is not particularly convincing. At the very least, there should be experimental evidence to support this property. For example, let $c_x$ denote the text embedding for prompt $x$. Then, as the authors mentioned, I would be interested to see whether $\hat{c} := c_{king} - c_{man} + c_{woman}$ aligns with $c_{queen}$, or whether sampling from $\hat{c}$ using the diffusion model generates images that semantically represent 'queen'.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your thoughtful and insightful comments. We respond to your remaining concerns on:
### **1.What distribution should $\tilde p_\theta$ ultimately follow**
Below is a compressed derivation of Eq.15 starting from Bayes’ rule and extending the standard classifier‐free guidance (CFG) derivation to our dual‐model setting.
Start from Bayes’ rule for a classifier:
$$
p(c \mid z_t) = \frac{p(z_t \mid c)p(c)}{p(z_t)},
$$
since $p(c)$ can be regarded as a constant during optimization, CFG defines the guided distribution by raising $p(c \mid z_t)$ with a guidance scale $s$:
$$
\tilde{p}(z_t \mid c) \propto p(z_t|c)\bigl[p(c \mid z_t)\bigr]^s.
$$
Substitute the expression for $p(c \mid z_t)$ and omit $p(c)$ yields:
$$
\tilde{p}(z_t \mid c) \propto p(z_t \mid c)^{1+s}p(z_t)^{-s}.
$$
In CHATS, two models are used:
- The **preferred model** $p_{\theta^+}(z_t \mid c)$,
- The **dispreferred model** $p_{\theta^-}(z_t \mid c)$ (with its unconditional form $p_{\theta^-}(z_t)$).
For each model, we can write a classifier-like term via Bayes’ rule. For the preferred model:
$$
p_{\theta^+}(c \mid z_t) = \frac{p_{\theta^+}(z_t \mid c)p(c)}{p_{\theta^+}(z_t)},
$$
and for the dispreferred model:
$$
p_{\theta^-}(c \mid z_t) = \frac{p_{\theta^-}(z_t \mid c)p(c)}{p_{\theta^-}(z_t)}.
$$
Assuming $p_{\theta^+}(z_t) \approx p_{\theta^-}(z_t)$, we combine the two signals by defining a composite log-odds score:
$$
\Delta(z_t,c)=\log\frac{p_{\theta^+}(z_t \mid c)}{p_{\theta^-}(z_t)}+\alpha\log\frac{p_{\theta^-}(z_t \mid c)}{p_{\theta^-}(z_t)}.
$$
The first term tends to generate features favored by the preferred model while suppressing the background features typically produced by the dispreferred model in its unconditional output (similar to CFG), and the second term further accounts for the shift in the output of the dispreferred model when conditioned on $c$, with its impact regulated by a scalar $\alpha$. In this form, the useful information in $p_{\theta^-} (z_t \mid c)$ is effectively utilized as well.
Following CFG, we define the CHATS guided distribution as:
$$
\tilde{p}\_\theta(z\_t \mid c) \propto p\_{\theta^+}(z\_t \mid c)\exp\Bigl(s\cdot\Delta(z_t,c)\Bigr).
$$
Substitute $\Delta(z_t,c)$:
$$
\tilde{p}\_\theta(z\_t \mid c) \propto p\_{\theta^+}(z\_t \mid c) \cdot \exp\left(s\left[\log\frac{p_{\theta^+}(z\_t \mid c)}{p\_{\theta^-}(z\_t)}+\alpha\cdot\log\frac{p\_{\theta^-}(z_t \mid c)}{p\_{\theta^-}(z\_t)}\right]\right).
$$
Using $\exp(s\log A)=A^s$, we have
$$
\tilde{p}\_\theta(z\_t \mid c) \propto p\_{\theta^+}(z\_t \mid c)
\left(\frac{p\_{\theta^+}(z\_t \mid c)}{p\_{\theta^-}(z\_t)}\right)^s
\left(\frac{p\_{\theta^-}(z\_t \mid c)}{p\_{\theta^-}(z\_t)}\right)^{\alpha s}.
$$
Grouping terms, we obtain
$$
\tilde{p}\_\theta(z\_t \mid c) \propto p\_{\theta^+}(z\_t \mid c)^{1+s}p\_{\theta^-}(z\_t \mid c)^{\alpha s}p\_{\theta^-}(z\_t)^{-(1+\alpha)s},
$$
which is the same with Eq.15. The final guided distribution is not merely a sharpened version of $p_{\theta^+}(z_t \mid c)$. It also leverages the dispreferred model. The term$\left(\frac{p\_{\theta^-}(z\_t \mid c)}{p\_{\theta^-}(z\_t)}\right)^{\alpha s}$ adjusts the output based on how conditioning on $c$ changes the dispreferred model’s behavior. This derivation, starting from $p(c\mid z_t)$ for both models, provides a theoretical foundation for the CHATS sampling distribution analogous to that of CFG.
### **2. More evidence on proxy prompt**
We appreciate the reviewer's concern regarding the linearity assumption in the text embedding space, especially given that modern diffusion models use more complex text encoders than those analyzed in [2]. To address this, we perform an experiment to verify whether an **random additive fusion** of two text embeddings can indeed capture meaningful semantic information, which is more similar to proxy prompt than "queen-king" case.
**Setup:**
We randomly masked certain components of the original prompt by replacing them with a **[mask]** token and generated images under the following four conditions (Model: SDXL):
**1:** Using only the unmasked portion of the prompt.
**2:** Using only the masked components (i.e., the content replaced by [mask]).
**3:** Using the original, unaltered prompt.
**4:** Converting both the unmasked and masked components into text embeddings and merging them via element‐wise addition. The resulting fused embedding is then used for image generation.
The qualitative results in [this PDF](https://0x0.st/82YA.pdf) show that using the fused text embedding (Condition 4) captures the intended semantics and produces images of equal or higher quality than those from the original prompt (Condition 3). This preliminary evidence indicates that an additive fusion in the text embedding space effectively integrates semantic features, thereby supporting our proxy prompt approach in Eq.17 even with modern, complex text encoders. | null | null | null | null | null | null | null | null |
Adjustment for Confounding using Pre-Trained Representations | Accept (poster) | Summary: This paper explores how non-tabular data, such as images and text, can be incorporated into average treatment effect (ATE) estimation to account for confounding factors. The authors propose using latent features from pre-trained neural networks for adjustment. They formalize conditions under which these features enable valid ATE estimation, particularly in the double machine learning framework. The paper highlights challenges related to high-dimensional representations and non-identifiability but argues that neural networks can overcome these issues by adapting to intrinsic sparsity and dimensional structures, enabling fast convergence rates in treatment effect estimation.
Claims And Evidence: I found this paper to be a primarily theoretical paper with minimal experimental support. For example, the last few arguments in the abstract are "Common structural assumptions for obtaining fast convergence rates with additive or sparse linear models are shown to be unrealistic for latent features. We argue, however, that neural networks are largely insensitive to these issues. In particular, we show that neural networks can achieve fast convergence rates by adapting to intrinsic notions of sparsity and dimension of the learning problem." but I couldn't find experimental support for these claims.
Moreover, just like the paper, I find the formulation of the problem unpractical. Indeed, imaging data could be regarded as confounders but practically modeling this does not really gain you any improved understanding. All examples given in the paper have a simplistic form; i.e., the severity of disease, the size of the stone, or the extent of a fracture is a univariate confounder. I'm all for modeling these clear and interpretable confounding effects, but using images as their surrogate seems to be a detour and over-complication. In the end, we don't know whether image latent representations (extracted by a pre-trained network) actually contain those information or not.
Methods And Evaluation Criteria: Methods:
The formulation and theoretical derivation look correct to my eye, but I have to admit that I'm not a statistician so might not be able to fully appreciate the methodological and theoretical sophistication. I made my overall assessment assuming great merit in these formulations.
Evaluation:
The shortcoming of the study is in the evaluation. The core analysis is two plots (Fig. 4&5) showing the estimated ATE by the proposed estimator and some baseline estimators. These two experiments are toy examples in nature and have extremely simple setup. Even confined to this experimental settings, the study could explore way more different simulation scenarios, e.g., by varying simulation parameters.
Theoretical Claims: As mentioned above, The formulation and theoretical derivation look correct to my eye but I don't have the expertise to fully assess its correctness.
Experimental Designs Or Analyses: * I feel the experimental setups are over simplified. Having a simple 0.7/0.3 ratio for simulating the basic label confounding and the 5-dimensional autoencoder latent representation for complex confounding have very limited generalizability to real life problems given the confounding effects in imaging applications are high-dimensional in nature.
* Compared to other estimators, the proposed one does not seem to be better in Fig. 4&5.
Supplementary Material: I've reviewed the results in the supplement (Fig 7&8) but not the theoretical proof.
Relation To Broader Scientific Literature: Yes, the paper presents the scenario that they want to dive into, which is a specific case in the broader confounder analysis literature.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: NA
Other Comments Or Suggestions: NA
Questions For Authors: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **Additional experiments** can be found at https://anonymous.4open.science/r/icml2025-6599/add_exp_r3.pdf
- - - -
### **Theory & Practical Relevance**
> I found this paper to be a primarily theoretical paper with minimal experimental support.
This is correct. Our paper is a theoretical contribution to the fast-growing literature that aims to incorporate non-tabular data and pre-trained models in causal inference procedures. While many papers do this empirically (as described in Sec. 2), we establish a novel set of theoretical conditions allowing for valid statistical inference of ATE estimation in this context. Our experiments mainly serve to illustrate these concepts.
Nonetheless, we followed the reviewer’s suggestion and added several additional experiments (see answer below).
> [NNs] can achieve fast convergence rates by adapting to intrinsic notions of sparsity and dimension of the learning problem." [...] I couldn't find experimental support for these claims.
We provide empirical support for these claims in our experiments on complex confounding shown in Fig. 5 & 8. The confounding via latent features precisely mimics the idea of low intrinsic dimension and HCM structure of the target function. In contrast to the other depicted ATE estimators, NN-based nuisance estimation denoted by “DML (NN)” shows that NNs can adapt to the intrinsic notions of sparsity and dimension of the learning problem, thereby yielding unbiased ATE estimation. We thank the reviewer for the remark and will emphasize this in the revised version of the paper.
> I find the formulation of the problem unpractical.[...] practically modeling [imaging data] does not really gain you any improved understanding [...] images as their surrogate seems to be a detour and over-complication.
We politely disagree with this viewpoint. Incorporating non-tabular data in ATE estimation to adjust for confounding does strongly impact scientific understanding and resembles what is done in practical application, as also described in our motivating examples in Sec. 1. In scenarios, where the confounder is available only embedded in non-tabular data (e.g. kidney stone or tumor size can only be measured via medical imaging), incorporating such modality in the ATE estimation is crucial to obtain unbiased estimates and draw valid scientific conclusions, as we show both theoretically and empirically in our paper.
- - - -
### **Empirical Evaluation & Additional Experiments**
> Experimental setups are over simplified. Having [...] a [5-dim. AE latent rep.] for complex confounding have very limited generalizability to real life problems given the confounding effects in imaging applications are high-dimensional in nature.
We thank the reviewer for raising this point. However, all of the previously mentioned confounding “in nature”, e.g. kidney stone or tumor size, is in fact low dimensional yet embedded in the medical image in high dimensions, making it necessary to extract this information. Our experiments using both text and X-ray data precisely show that pre-trained models can be used to extract the latent confounding (low-dim.) information from the high-dim. non-tabular data modalities and achieve valid inference.
> The study could explore way more different simulation scenarios, e.g., by varying simulation parameters.
Based on the reviewer's suggestion, we conducted several additional experiments, in which we varied simulation parameters including the treatment assignment probabilities and the size of the latent dim. of the confounder. The new results (Fig. 9-12) are in line with our previous results and reinforce our main theoretical findings. In particular, results are insensitive to changes in assignment prob. and latent dim. We thank the reviewer for this remark and think these results further strengthen our paper. Following the suggestion of reviewer vZqT we also conducted several other experiments, e.g. comparing DML with and without pre-training in Fig. 14.
> Compared to other estimators, the proposed one does not seem to be better in Fig. 4&5.
This might be a misunderstanding. The estimators suggested by us (DML Linear and the DML NN) are the only estimators in Figs. 4 & 5 that yield unbiased ATE estimation with good coverage (overlap of confidence intervals with the true ATE, i.e., the red line) while the other estimators do not. Note that the unbiased “Oracle” estimator in Fig. 4 is an infeasible model in practice and serves as a gold standard comparison.
- - - -
We thank the reviewer for the suggestions related to additional experiments. We hope the additional experiments in our response address the points raised in the review. Should the reviewer find the response satisfactory, we would appreciate reconsidering the initial score. Otherwise, we remain fully committed to addressing any remaining concerns during the second author response phase.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal. The explanation helps my understanding. I'm on the fence. I still cannot wrap my head around why we need to incorporate images in ATE estimation. I'm all for correcting for kidney stone size or tumor size, but why can't we just have a model estimating those measures and use those tabular measures instead? Put in other words, are we sure that those latent variables contain the confounding information of interest? I know this is a theoretical piece, but I just want to make sure we are solving a real problem.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for allowing us to elaborate on these aspects in further detail.
> Why [do] we need to incorporate images in ATE estimation. I'm all for correcting for kidney stone size or tumor size, but why can't we just have a model estimating those measures and use those tabular measures instead?
- In fact, what the reviewer is suggesting can be regarded as a special case of what we do. We use a model to estimate a tabular representation of the confounding information in the image. As a special case, this could also correspond to predicting confounders directly.
- If all relevant confounders were known, but the information in the data is only contained in the image, it would indeed be possible to design specific models to account for them, as the reviewer correctly points out. For this, we suggest a pre-trained model.
- In observational studies, relevant confounders are not always known a priori to the analyst. In this case, the additional advantage of using pre-trained representations is that they can account for many other potential confounders beyond pre-selected or manually predicted confounders (the Densenet-121 model used in our application, for example, was trained to detect 18 different anomalies on the chest X-ray scans).
> Put in other words, are we sure that those latent variables contain the confounding information of interest?
- Of course, we cannot be sure in general. The situation is no different from other ATE estimation situations where the “no unmeasured confounding” assumption is unavoidable. This is more precisely characterized in Def 3.1 (i) in our context.
- However, this assumption becomes more reasonable in our context, given that pre-trained representations encompass a variety of potential confounding information about the image and potentially much more than what manually predicted confounders would contain (as mentioned above).
- On the other hand, if the tabular data has no record of the relevant confounding information (e.g., the tumor size), the only option is to estimate it from the non-tabular (image) data source.
> I know this is a theoretical piece, but I just want to make sure we are solving a real problem.
- We understand this concern, but we would like to point out that we are by no means the first to investigate the problem of incorporating non-tabular data in ATE estimation. This approach has been explored by different studies in many real-world examples (e.g. Veitch et al., 2019, 2020; Jerzak et al., 2022 a,b, 2023; Klaassen et al., 2024; Dhawan et al., 2024). Our core contribution is embedding these practical approaches in a broader theoretical framework, highlighting potential pitfalls, and providing theoretical guarantees for valid statistical inference.
- In the upcoming years, several new open EHR databases (e.g., MIMIC-IV or European Health Data Space) will make a plethora of data (including non-tabular data) publicly available, making scenarios of estimating ATE from real-world observational data settings while requiring adjustment for non-tabular data even more relevant.
We once again thank the reviewer for taking the time to engage with our responses and hope we have addressed all remaining concerns.
----
**References**
- Dhawan et al. (2024), End-to-end causal effect estimation from unstructured natural language data
- Klaassen et al. (2024), DoubleMLDeep: Estimation of causal effects with multimodal data
- Jerzak et al. (2023), Integrating earth observation data into causal inference: challenges and opportunities
- Jerzak et al. (2022), Estimating causal effects under image confounding bias with an application to poverty in africa
- Jerzak et al. (2022), Image-based treatment effect heterogeneity
- Veitch et al. (2020), Adapting text embeddings for causal inference
- Veitch et al. (2019), Using embeddings to correct for unobserved confounding in networks | Summary: The paper revisits the problem of estimating Average Treatment Effects (ATE) in observational studies under an assumption of ignorability where confounding factors are available as images or text (non-tabular data). The authors develop a theoretical argument that describes under what conditions the use of pre-trained neural networks to extract relevant features (representations) from non-tabular data gives valid adjustment within the Double Machine Learning (DML) framework. One key theoretical contribution is to show that the intrinsic dimension of the latent representation Z is invariant under linear transformations which combined with properties of the target function f(z) leads to fast convergence of the ATE.
Claims And Evidence: All claims are very well supported both theoretically and empirically.
Methods And Evaluation Criteria: Yes, all good.
Theoretical Claims: Did not check proofs in detail but follows established results in the literature.
Experimental Designs Or Analyses: Yes, experimental design is sound.
Supplementary Material: Yes, skimmed over proofs.
Relation To Broader Scientific Literature: Yes, related work is discussed appropriately.
Essential References Not Discussed: No significant reference appears to be missing.
Other Strengths And Weaknesses: The paper is very well executed: it introduces existing convergence guarantees for function approximation, highlights the challenges well, and provides a compelling argument for the validity for using pre-trained features for adjustment.
The HCM definition is a little bit difficult to visualize. Could more details or examples be given to better convey the intuition.
It seems to me that a lot of the heavy lifting is done in the function approximation theorems of Secs. 4 and 5.1 and 5.2. Once those are established ATE estimation follows more or less straightforwardly. Why frame the results so closely to ATE estimation? Is there not an opportunity here to provide guarantees for any functional of pre-trained representations?
With my causality hat on, I would say that the biggest challenge for the correctness of the procedure is to guarantee Def. 3.1. It is highly non-trivial that the representation will accurately preserve the information content in the original data. (I guess this is a limitation of all DML work so might not need addressing here.)
Other Comments Or Suggestions: See above.
Questions For Authors: Sec. 5.2, in which f is parameterized by a feed forward NN, is not illustrated in the experiments (as far as I know). The authors instead use linear functions and random forest regressors. Is there a reason for this disconnect between theory and experiments? If no non-linearities can be incorporated into the target function parameterization then that is a potential limitation of the theory.
Classification problems typically require a non-linear transformation or a link function, do classification functions satisfy the HCM condition?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: ### **Theoretical Results**
> Why frame the results so closely to ATE estimation? Is there not an opportunity here to provide guarantees for any functional of pre-trained representations?
Indeed, the derived convergence rates could be used for convergence guarantees of any functional of the pre-trained representations and thereby establish asymptotic results about other causal estimands other than the ATE. Given the popularity of the ATE (both in theory and practice), we chose to focus on the ATE estimand in the context of DML. However, as correctly pointed out by the reviewer, our convergence rates can be used more generally, e.g., to provide asymptotic inference results for estimands such as the average treatment effect of the treated (ATT) and the conditional ATE (CATE). For the latter, however, stronger assumptions would be required, e.g., $P$-OMS instead of $P$-valid pre-trained representations.
Based on the reviewer’s excellent remark, we will add a discussion to Sec. 5 and elaborate on this aspect.
> I would say that the biggest challenge for the correctness of the procedure is to guarantee Def. 3.1. It is highly non-trivial that the representation will accurately preserve the information content in the original data. (I guess this is a limitation of all DML [...])
We thank the reviewer for raising this relevant point. Indeed, validating the assumption of Def. 3.1 in practice is not trivial — as with any other causal assumption. However, this limitation is not specific to DML per se, but is instead inherent to any method aiming to estimate the ATE based on pre-trained representation for adjustment, given that Def. 3.1 (i) is necessary for the identification of the ATE in this context. We will make this clear in a revised version of the paper and thank the reviewer for bringing this up.
> Sec. 5.2, in which f is parameterized by a feed forward NN, is not illustrated in the experiments (as far as I know). The authors instead use linear functions and random forest regressors. Is there a reason for this disconnect between theory and experiments?
This seems to be a misunderstanding. In all experiments, we use DML with neural networks as regressors. The “Label confounding” setup in Figs. 4 & 7 is simple enough that a single layer (“linear”) suffices; in the “Complex confounding” setup in Figs. 5 & 8, we use a 100-layer neural network. The RF regressor is included to highlight that methods not invariant to ILT transformations often fail when used on pre-trained representations.
Hence, our experiments are well in line with our theory. We will state this more clearly in the revised version.
----
### **HCM**
> Classification problems typically require a non-linear transformation or a link function, do classification functions satisfy the HCM condition?
Yes, this is correct. More precisely, in the classification context, the conditional probability would satisfy the HCM condition. A non-linearity or link is not a problem in this context, since it is just a simple function in the final layer of the HCM. We thank the reviewer for this question and will clarify this in the revised version of the paper.
> The HCM definition is a little bit difficult to visualize. Could more details or examples be given to better convey the intuition.
We thank the reviewer for this remark. To better illustrate the concept of the HCM in our paper, we will add a visualization of the HCM similar to the one from the original publication (Kohler and Langer, Annals of Statistics, 2021) and add further explanation to it.
- - - -
We appreciate the reviewer's thoughtful remarks and important points related to the assumptions in our paper. We hope the additional clarifications in our response address the points raised in the review. Should the reviewer find the response satisfactory, we would appreciate reconsidering the initial score. Otherwise, we remain fully committed to addressing any remaining concerns during the second author response phase. | Summary: This paper investigates the application of Double Machine Learning (DML) for estimating the Average Treatment Effect (ATE) in non-tabular data contexts, such as text and images. The authors highlight the limitations of traditional causal inference methods in handling non-tabular data and propose leveraging pre-trained representations from deep neural networks (DNNs) to adjust for confounding variables.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: Yes, I check the main proofs.
Experimental Designs Or Analyses: The experimental setup is well-structured, with a clear dataset and a relevant benchmark.
Supplementary Material: I briefly reviewed the additional experimental results and mathematical derivations in the supplementary materials.
Relation To Broader Scientific Literature: The work relates to prior research on DML for causal inference, particularly in tabular data settings (Chernozhukov et al., 2017; 2018).
It extends research on representation learning for causal inference (Veitch et al., 2019; 2020), but could better position itself in this literature. Connections to theoretical work on ILTs and non-identifiability (Dai et al., 2022) are relevant but lack empirical grounding.
Essential References Not Discussed: No, the related works are thoroughly summarized and appropriately referenced.
Other Strengths And Weaknesses: **Strengths**:
-Novel application of DML to non-tabular data.
-Theoretical contributions on ILTs and HCM are interesting and relevant.
**Weaknesses**:
-The manuscript lacks empirical validation of the impact of ILTs on ATE estimation.
-While the benefits of HCM are theoretically motivated, they are not directly tested or empirically verified.
-Critical ablation studies, such as comparing DML with raw data versus pre-trained features, are missing.
-Although ILTs are extensively discussed, their connection to the experimental results is not clearly established or well-articulated.
Other Comments Or Suggestions: -Consider adding empirical tests on ILTs to evaluate their impact on DML estimation.
-Clarify how HCM enhances DML performance beyond theoretical claims, providing empirical evidence if possible.
Questions For Authors: 1. How does ILT invariance specifically impact DML estimation?
2. Why is there no experiment comparing DML with and without pre-trained representations?
3. Are there scenarios where HCM does not hold?bIf real-world data does not follow an HCM structure, how does this affect ATE estimation?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **Additional experiments** can be found at https://anonymous.4open.science/r/icml2025-6599/add_exp_r1.pdf
- - - -
### **Impact of ILTs on ATE**
> How does ILT invariance specifically impact DML estimation?
The ILT invariance of pre-trained representation does not only have theoretical but also crucial practical consequences in DML estimation. More specifically, Random Forest (RF) and the Lasso are commonly used for nuisance function estimation in DML applications with tabular data. However, as we show both theoretically and empirically, additivity and sparsity cannot reasonably be assumed given the ILT invariance. Further extending our previous results on the IMDb and X-Ray datasets, we show in Fig.13 that both the ILT non-invariant nuisance estimators RF (building on additivity) and Lasso (building on sparsity) yield biased ATE estimation when using pre-trained representations, while DML with ILT invariant “Linear” (NN with linear and classification model head) estimators yields unbiased results in both empirical studies.
We thank the reviewer for this question and hope our explanation clarifies it. We have discussed this aspect in the paper on pages 7-8, but are happy to further highlight this point in a revised version of the manuscript.
- - - -
### **With/out Pre-Training**
> Why is there no experiment comparing DML with and without pre-trained representations?
We thank the reviewer for bringing up this important aspect. To validate the benefits of pre-training in our context, we have now conducted experiments where we fit DML with pre-trained representations and compare it with DML without pre-training. As models have to be trained from scratch, we expect pre-training to have less bias than training from scratch. We demonstrate this on the pneumonia (X-ray) data set, where we used 500 and all 3769 X-rays for ATE estimation and fitted CNNs as nuisance function estimates (both for the propensity score and outcome regression). The results are depicted in Fig.14 and demonstrate that DML using pre-trained representations yields unbiased estimates with good coverage while DML with from-scratch training of CNNs does not. Note that these results become even more pronounced in case of smaller sample sizes (<200) –- a setup that is frequently encountered in clinical practice.
We thank the reviewer again for this very helpful comment. We think that the new results further strengthen our paper.
- - - -
### **HCM**
> Are there scenarios where HCM does not hold? If real-world data does not follow an HCM structure, how does this affect ATE estimation?
We thank the reviewer for raising this question. Indeed, the HCM structure is a structural assumption that does not necessarily need to hold in all real-world data applications. However, the success of DNNs in many real-world applications and the fact that DNNs precisely mimic such HCM structures suggest that it may be a reasonable assumption in many settings. That being said, our asymptotic normality results in Thm. 5.7 do not necessarily depend on the HCM assumption. In fact, we used it to relax the smoothness assumptions that otherwise would be required. Given a sufficient amount of smoothness of the target function and low ID of the representations, Thm. 5.7 can achieve the same root-n consistency and asymptotic normality. However, there are cases where ATE estimation yields biased results if the HCM structure does not hold. To illustrate this, we conducted an additional experiment, where confounding is based on the product of pre-trained representations (hence HCM no longer holds, which is crucially different from our previous complex conf. experiments). The results in Fig. 15 demonstrate that none of the estimators, not even the DML with NN, can yield unbiased estimates in this non-HCM setup.
We appreciate the reviewer’s thoughtful comment and hope our response and additional experiments clarified the question.
- - - -
### **Other Comments**
> [The paper] could better position itself in the literature
We thank the reviewer for the comment. We will revise our related literature section accordingly.
- - - -
We sincerely appreciate the reviewer's constructive feedback and hope that the additional experimental findings and clarifying explanations address the points raised in the review. Should the reviewer find the response satisfactory, we would appreciate reconsidering the initial score. Otherwise, we remain fully committed to addressing any remaining concerns during the second author response phase.
---
Rebuttal Comment 1.1:
Comment: Thanks for the author's reply, I'll raise the score from 2 to 3.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank the reviewer for their careful assessment, the revised score, and the constructive feedback that helped improve our paper. | null | null | null | null | null | null | null | null |
MultiPDENet: PDE-embedded Learning with Multi-time-stepping for Accelerated Flow Simulation | Accept (poster) | Summary: The paper employs a multi-time stepping procedure in conjunction with a numerical integrator, physical operators, and neural networks to process downsampled data in both space and time. The proposed method applies time stepping on a finer temporal grid while utilizing the RK4 scheme for PDE integration. Supervision is provided through samples on coarser spatial-temporal grids, and the results demonstrate that this approach can yield accurate long-term predictions even when training data is limited.
Claims And Evidence: The approach demonstrates efficiency in low training data regimes. Both quantitative and qualitative results underscore the effectiveness of the learning scheme, even when trained on a limited dataset (3–5 trajectories of 200–2000 timesteps), compared to the selected baseline methods.
Methods And Evaluation Criteria: The method leverages physical priors, a numerical integration scheme, and learnable components to facilitate the efficient training of a PDE solver. The evaluation setting challenges current data-driven and physics-informed baselines by operating in a low training data regime. Given the high computational cost required to obtain reliable simulations, this approach is particularly noteworthy.
Theoretical Claims: Not applicable.
Experimental Designs Or Analyses: The experiments are designed to showcase the learning efficiency of the method. While they sufficiently demonstrate the method's efficiency relative to the baselines in a low training data regime, it would be beneficial to include tests with larger datasets to evaluate whether the observed improvements remain significant across varying data quantities.
Supplementary Material: Yes. Especially the implementation details.
Relation To Broader Scientific Literature: The key contributions of the paper lie in the continuous effort of the community on crossing different modeling principles and ideas from physics, numerical scheme and deep learning.
Essential References Not Discussed: To the best of my knowledge the related work is sufficient for the context of the key contributions.
Other Strengths And Weaknesses: Strengths:
- The method's architecture is described in sufficient detail.
Weaknesses:
- The paper appears to test relatively few recent purely data-driven models as benchmarks. Could the authors provide an explanation or clarify the rationale for this choice? The claim on the amount of data should exclude all purely data-driven methods.
- Given the extent of physical prior information incorporated, it would be valuable to see the performance of the physics-informed counterparts of various architectures (e.g., FNO-PhyFNO, DeepONet-PhyDeepONet) across different cases. At a minimum, the authors should justify why certain baselines are included for some cases but not for others (e.g., PhyFNO is only evaluated for KdV).
Other Comments Or Suggestions: Typos:
- L241 (right colomn): "Sovling PDE".
Questions For Authors: It appears that the "Correction Block" is an FNO without additional training objectives specifically aimed at achieving the correction. Could the authors clarify this design choice or provide qualitative results that illustrate how the state $\hat{u}_m^k$ is effectively corrected?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We appreciate your constructive comments. To enhance clarity, we have thoroughly proofread the manuscript and corrected all identified typographical errors. We believe these revisions will significantly improve the presentation. The updated version will be uploaded as soon as file submissions are enabled.
### **Weaknesses**
>**W1. The benchmarks include few recent pure data-driven models.**
**Reply:** Great remark! Our proposed MultiPDENet integrates PDE priors with multiscale time-stepping scheme, enabling efficient spatiotemporal simulations (e.g., turbulence) on coarse grids with limited data. While recent pure data-driven models (e.g., modern-UNet [1]) perform well in data-rich scenarios (e.g., requiring **128 trajectories**), their performance deteriorates significantly in small data settings.
While we have selectd widely recognized data-driven baselines (e.g., FNO, modern-UNet, DeepONet), we have also considered hybrid physics-learning models (e.g., LI, TSM, PhyFNO, PeRCNN) for comparison. Following your suggestion, we added the recent model ONO [2] as an additional baseline. Its inferior performance under data scarcity (**see Table A**) further highlights the robustness of our framework.
**Table A:** Comparison of MultiPDENet and baselines for NSE.
|**Model**|**RMSE**(↓)|**MAE**(↓)|**MNAD**(↓)|**HCT**(s↑)|**Infer. cost**(s↓)|
|-|-|-|-|-|-|
|UNet|0.8224|0.5209|0.0627|3.9627|7|
|FNO|1.0100|0.7319|0.0887|2.5749|5|
|LI|NaN|NaN|NaN|3.5000|9|
|TSM|NaN|NaN|NaN|3.7531|9|
|DeepONet|2.1849|1.0227|0.1074|0.1126|**1**|
|ONO|0.6613|0.4441|0.0535|4.2356|5|
|MultiPDENet|**0.1379**|**0.0648**|**0.0077** |**8.3566**|26|
>**W2. The selection of baselines for varying experimental cases.**
**Reply:** Insightful comment! Our baseline selection carefully balances pure data-driven models (e.g., FNO, UNet, DeepONet) and physics-inspired counterparts (e.g., PhyFNO, LI, TSM, PeRCNN) to ensure fairness and relevance.
To ensure consistent comparison, we added PhyFNO as a baseline for the Burgers and GS equations, with results shown in **Table B** below. So far, we have maintained a unified setting of baseline comparsion (FNO, UNet, DeepONet, PhyFNO, PeRCNN) for KdV, Burgers, and GS equations. In the case of NSE, we slightly altered the baseline models by considering two other well-recognized models specifically tailored for NSE (aka, LI and TSM). In all cases, we have demonstrated that MultiPDENet surpasses the baselines.
We believe the current baseline setup ensures a **fair** and **consistent** comparison. Thanks for your great suggestion!
**Table B:** Comparison of MultiPDENet and PhyFNO for Burgers and GS.
|**Case**|**Model**|**RMSE**(↓)|**MAE**(↓)|**MNAD**(↓)|**HCT**(s↑)|
|-|-|-|-|-|-|
|Burgers|PhyFNO|0.0832|0.0749|0.0599|0.5546|
|Burgers|MultiPDENet|**0.0057**|**0.0037**|**0.0031**|**1.4000**|
|GS|PhyFNO|0.5721|0.3579|0.3520|510|
|GS|MultiPDENet|**0.0573**|**0.0294**|**0.0298**|**1400**|
### **Suggestions**
>**S1. Typos of "Sovling PDE".**
**Reply:** We have thoroughly proofread the paper and corrected all typos in the revised version.
### **Questions**
>**Q1. Design motivation for the Correction Block and the effectiveness of state correction.**
**Reply:** Great remark! The Correction Block is designed to mitigate the information loss introduced by resolution reduction before computing the equivalent derivatives, enabling the model to adapt to the coarse grid. Instead of explicitly recovering the high-resolution solution, the Correction Block implicitly corrects the coarse-grid state by adjusting the scaling of the equivalent derivative term. During training, this block focuses on minimizing the overall PDE residual rather than directly reconstructing fine-grid details. The state $\hat{\bar{\mathbf{u}}}{_m^k}$, obtained via the Correction Block, represents a neural-corrected version of the coarse solution. However, this correction should not be interpreted as an explicit recovery of the fine-grid solution. Rather, it ensures that the equivalent derivative term computed via the Symmetric Filter is optimally adjusted to minimize the overall PDE residual. Our training objective is solely aimed at making the results more closely approximate the ground truth solution, as demonstrated in the ablation study (Model-E in **Table 3 on Page 8**). Therefore, there are no additional training objectives specifically designed for correction beyond this implicit adjustment mechanism.
***Refs:***
[1] Gupta et al. Towards Multi-spatiotemporal-scale Generalized PDE Modeling. TMLR, 2023.
[2] Xiao, et al. Improved Operator Learning by Orthogonal Attention. ICML, 2024.
***Remark:*** Once again, we sincerely appreciate your constructive comments. Looking forward to your feedback! | Summary: The paper introduces MultiPDENet, a PDE-embedded neural network with multiscale time stepping to accelerate flow simulations by integrating numerical methods with machine learning. It employs finite difference-based convolutional filters to approximate spatial derivatives on coarse grids, while a Physics Block with a 4th-order Runge-Kutta integrator preserves PDE structures for accurate predictions. To mitigate temporal error accumulation, a multiscale time integration approach is introduced, where a neural network corrects errors at a coarse time scale. Experiments on various PDEs, including the Navier-Stokes equations, demonstrate state-of-the-art accuracy and long-term stability with improved efficiency over traditional numerical methods and neural network baselines.
Claims And Evidence: Yes, at least for the 2D NSE, it is pretty impressive to me.
Methods And Evaluation Criteria: Yes, for fluid problem, the energy spectrum evaluation is good.
Theoretical Claims: There is no such proofs for theoretical claims.
Experimental Designs Or Analyses: Yes, for all the dynamical systems.
Supplementary Material: Yes, all of them.
Relation To Broader Scientific Literature: I think the key contribution of the paper is demonstrating the black+white box method can work for NSE.
Essential References Not Discussed: There are missing some citations such as ICLR 2022 for black box methods.
Other Strengths And Weaknesses: The paper needs to add a specific algorithm for training and a specific algorithm for testing.
Other Comments Or Suggestions: No, I do not have any.
Questions For Authors: For NSE, is the white box performed on the coarse mesh?
Do we really need to satisfy the CFL condition?
For NSE, is the black box performed on the coarse mesh also?
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: Thanks for your constructive comments and suggestions! We have carefully addressed them, and the following responses have been incorporated into the revised paper.
### **Questions**
>**Q1. Essential References Not Discussed.**
**Re:** We appreciate your comment. We have noted that our initial submission omitted several important black-box methods in ICLR 2022, such as MP-PDE [1], GMR-Transformer-GMUS [2], and SiT [3]. We will include these references in the related work section in the revised manuscript.
>**Q2. Is the white-box modules and black-blox moduls applied on the coarse mesh for NSE?**
**Re:** Great remark! To accelerate the prediction of spatiotemporal dynamics on coarse grids, we integrated the numerical scheme with neural networks through a multiscale time-stepping strategy. Consequently, the entire network, encompassing both white-box and black-box modules, is designed to operate on a unfied coarse mesh.
>**Q3. Does the model need to satisfy the CFL condition?**
**Re:** Insightful question! According to the CFL condition, the maximum allowable time step is given by: $\delta t _{\max}=\mathrm{CFL} _{\max} \cdot \frac{\delta x}{\left| u \right| _{\max}}$, where the standard choice for numerical simulations is $\mathrm{CFL} _{\max}=0.5$. Based on this, we obtained $\delta t _{\max} = 0.007$ for the coarse grid with $\delta x = 2\pi/64$, which indeed means that our model satifies the CFL condition in our original test setting.
However, to further investigate the extent to which MultiPDENet can go beyond the CFL condition, we conducted experiments with increased time intervals, namely, $\delta t_{\max} = 0.028$ (four times the required smallest time step). The resulting model is called MultiPDENet-L. With this setup, we achieved an extra 4$\times$ speedup (e.g., inference time of 6 sec vs. the original 26 sec) while maintaining a similar model accuracy, aligning with the speed of other baseline models as shown in **Table A**. Note that MultiPDENet-L was trained based on a rollout strategy over 8 macro-steps.
In summary, MultiPDENet does **not** need to adhere to the CFL condition, demonstrating its capability to operate effectively beyond conventional stability constraints.
**Table A:** Comparison of MultiPDENet and baselines for NSE
|Model|RMSE(↓)|MAE(↓)|MNAD(↓)|HCT(s↑)|Infer. cost(s↓)|
|-|-|-|-|-|-|
|UNet|0.82|0.52|0.06|3.96|7|
|FNO|1.01|0.73|0.09|2.57|5|
|LI|NaN|NaN|NaN|3.50|9|
|TSM|NaN|NaN|NaN|3.75|9|
|DeepONet|2.18|1.02|0.11|0.11|**1**|
|MultiPDENet|**0.13**|**0.06**|**0.01**|**8.36**|26|
|MultiPDENet-L |0.37 |0.18 | 0.02 | 7.42 |6|
***Refs:***
[1] Brandstetter et al. Message passing neural PDE solvers. ICLR, 2022.
[2] Han et al. Predicting Physics in Mesh-reduced Space with Temporal Attention. ICLR, 2022.
[3] Shao et al. SiT: Simulation transformer for particle-based physics simulation. ICLR, 2022.
***Remark:*** Once again, we sincerely appreciate your constructive comments. Please feel free to let us know if you have any further questions. Looking forward to your feedback!
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors’ effort for addressing my comments. However, I still have some questions. Firstly, I kindly ask an algorithm of training and another algorithm of testing (evaluation) for the fluid problems. Secondly, I want a table to list the solvers used for all cases in the paper, and also show if the solvers are open-sourced, I believe this will benefit the community. Thirdly, for Poisson block used in the NS, does the Poisson block iteratively solve the pressure equation? Or it is a direct solving. Fourthly, since A100 is used, how much GPU memory is needed to train and evaluate the model for the NS case, I want an exact number. I am looking forward to the authors reply.
---
Reply to Comment 1.1.1:
Comment: We appreciate your additional comments and trust that the following responses will address the concerns raised. These responses will be incorporated into our revised paper.
### **Questions**
>**Q1. The training and testing algorithm for the fluid problems.**
**Re:** Great remark! In fluid dynamics problems, the autoregressive roll-out training method is most commonly adopted for long-term prediction tasks [1]. A key advantage of this approach lies in its flexible adjustment of the roll-out window size tailored to specific scenarios. For example, in NSE problems, a 32-step roll-out window is often empirically chosen as a balance between computational efficiency and numerical stability [2, 3]. When model stability is ensured, training strategies with single-step predictions (i.e., larger time steps) can be employed to reduce computational overhead [4]. In this work, we introduce a micro-step correction mechanism to stabilize the network’s performance over macro-step predictions, enabling reliable single-step roll-out training while maintaining accuracy.
For testing, the standard algorithm involves comparing predicted trajectories against ground truth using statistical metrics (e.g., RMSE, HCT, MNAD) and physics-consistency metrics (e.g., Energy spectrum). The former metrics quantify numerical accuracy, and the latter evaluate adherence to fundamental fluid mechanics principles. This combination of evaluations ensures that predictions are not only numerically precise but also physically interpretable and robust.
>**Q2. Solver used in the paper.**
**Re:** We appreciate your comment! All numerical solvers employed in this study are summarized in **Table A**, and their implementations will be open-sourced alongside the reproduction toolkit. This code release aims to facilitate both the replication of our experiments and extended research endeavors by the community.
**Table A:** Numerical Solver for generating datasets
|Cases|Numerical method|Spatial grid|Temporal steps|Open-sourced|
|-|-|-|-|-|
|KdV|Spectral|256|10000|yes|
|Burgers|FD|100$^2$|2000|yes|
|GS|FD|128$^2$|4000|yes|
|NSE|FV|2048$^2$|153600|yes|
>**Q3. Does the Poisson block iteratively solve the pressure equation?**
**Re:** Excellent comment! The Poisson block, which is critical for solving the pressure term, relies on the Poisson solver as its core component. This solver leverages a numerical methodology grounded in frequency-domain. The basic idea is to convert the original problem into the frequency domain, where the spatial differential operation becomes a multiplication operation via Fourier transform. The Poisson equation is then solved in the frequency domain, and the inverse Fourier transform is then used to obtain the solution in the original spatial domain. From the Navier-Stokes equations, we derive the relation $\Delta p = 2 \left(u_x v_y - u_y v_x\right)$ (the subscripts indicate the spatial derivatives along $x$ or $y$ directions). By directly feeding the right-hand side of this expression into the Poisson solver, we can compute the pressure term without requiring an iterative process.
>**Q4. GPU usage for training and testing on Navier-Stokes Cases.**
**Re:** Thanks for this comment! When training on the NSE, we use a batch size of 90, which results in a GPU memory usage of 74.79 GiB. During inference, the memory consumption is significantly lower, at 12.38 GiB. Notably, our model can also be trained and deployed on a single RTX 4090 GPU by adjusting the batch size to 20, which requires only 19.50 GiB of GPU memory.
***Refs:***
[1] Rao, et al. Encoding physics to learn reaction–diffusion processes. NMI, 2023.
[2] Kochkov, et al. Machine learning–accelerated computational fluid dynamics. PNAS, 2021.
[3] Sun, et al. A Neural PDE Solver with Temporal Stencil Modeling. ICLR, 2023.
[4] K. Gupta, et al. Towards Multi-spatiotemporal-scale Generalized PDE Modeling. TMLR, 2023.
***Remark:*** Thank you very much for your valuable feedback. Please let us know if you have other questions! | Summary: The authors proposed a framework for data-driven fluid flow simulation on uniform grid. Trying to soft-embed partial differential equations in data-driven flow simulations, the authors design a convolutional filter based on the constraints of the central difference discretization of the first and second order derivatives. To be able to achieve accurate large time step rollout predictions in the learned model, the authors generate each time step prediction by first iterate through a series of pseudo time steps through a "physics block", and then apply a corrector at the end to generate the corrected predictions at the next time step. Test results show that the proposed method outperforms existing benchmarks.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes. The method itself leads to a series of limitations though, see "other strength and weakness" for details.
Theoretical Claims: N/A
Experimental Designs Or Analyses: All experimental setups have been reviewed. Please see "questions for authors" for additional concerns and questions I have.
Supplementary Material: All supplementary materials have been reviewed.
Relation To Broader Scientific Literature: The paper is related to the data-driven modeling of (fluid) flow. The authors have provided enough reviews of the related literature in the introduction.
Essential References Not Discussed: No
Other Strengths And Weaknesses: 1. It should be noted that the proposed method not only has to be applied to structured grid (as is with most networks that utilize convolutions), but also has to be applied on uniform grid due to the limitation of the filters used. This is a significant limitation of this work which is likely not possible to resolve.
2. The authors did not discuss the extension to 3D cases.
3. The proposed framework will likely to suffer in cases with irregular geometries. The boundaries will be concerning when objects of non-cubic shapes are involved, since the authors only provide strategies to handle periodic boundaries of simple shape.
Other Comments Or Suggestions: N/A
Questions For Authors: 1. Table S7 & S8, am I understanding it correctly that the model is only faster than Direct Numerical Simulation (which is well-known to be computationally expensive) by 5-7 times?
2. If that is the case then I am deeply concerned about the inference speed of the proposed framework, since typical architectures in the domain are usually reported to be orders of magnitude faster than unsteady RANS or LES (both are cheaper than DNS due to less requirement on grid density). Please report the inference speed of the proposed network versus other benchmark cases.
3. Continued from 2, please report the performance of different models when their inference speed is about the same, by adjusting the size of the models. It should be noted that such comparison should be performed by shrinking the size of the proposed network rather than increasing the size of the benchmark networks, as the benchmark networks should stay at around the recommended sizes in their respective papers.
If the authors can prove with sufficient evidence that the proposed framework still outperforms benchmark cases when the networks are adjusted to run at the same inference time per step, then I am happy to raise the score to 3.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks for your constructive comments! We have addressed them thoroughly and added new figures/tables (see the **rebuttal.pdf** via https://anonymous.4open.science/r/Rebuttal-5D8B/rebuttal.pdf). These results will be added to the revised paper.
### **Weaknesses**
>**W1. Uniform grids.**
**Re:** Thanks for your **thoughtful comments** on irregular geometries and complex boundary conditions (BCs). Our study focuses on fluid simulations on regular grids with periodic BCs, a common constraint shared with methods like FNO, LI, and TSM. This indicates that the limitation is not specific to MultiPDENet but represents a general challenge in the field. Similar to Geo-FNO [1], a geometry-aware netwok (e.g., a geometry encoder-decoder) could be established on top of MultiPDENet (latent structured geometry learner) to handle general geometries. Thank you for pointing out the important direction for our future work!
To demonstrate our model's generalization to complex BCs, we conducted experiments on Burgers equation with Dirichlet and Neumann BCs, while keeping other settings consistent with the original data generation setup. We tested our previously trained model for inference on 10 trajectories with complex BCs through BC encoding (**Table R1** in **rebuttal.pdf**). **Figure R1** in **rebuttal.pdf** shows predicted snapshots, confirming the model's generalizability over complex BCs.
>**W2. Lacking 3D cases.**
**Re:** Great remark! We followed your comment and tested our model on the 3D Gray-Scott (GS) equation. We generated 5 datasets (1 for training and 4 for testing).
Snapshots of trajectory evolution from 0 to 600 s for MultiPDENet and baselines are shown in **Figure R2(a)** in **rebuttal.pdf**. **Figure R2(b)** in **rebuttal.pdf** shows MultiPDENet maintains a Pearson correlation coefficient $>$0.8 throughout the evolution. The error distribution in **Figure R2\(c)** in **rebuttal.pdf** highlights our model’s superior performance. **Table R2** in **rebuttal.pdf** summarizes our model’s performance suggesting strong potential to solve 3D problems.
### **Questions**
>**Q1. Model inference speed.**
**Re:** Thanks for raising this important point! It's crucial to **clarify** that the reported orders of magnitude speedups for popular models like FNO, LI and DeepONet (10$^3\times$, 80$\times$, 24$\times$) are often **not** evaluated against numerical methods with comparable accuracy. McGreivy et al. [2] revealed this by reimplementing these models and comparing them under consistent precision, resulting in only 7$\times$ speedup for FNO, while LI and DeepONet exhibited slower performance. Their evaluation effectively highlighted the prior discrepancies in speedup reporting.
Consistent with [2], we demonstrate a 5–7$\times$ speedup under the condition of sonsistent accuracy, althouh implementation in JAX may further yield extra speedup gains (e.g., $>5\times$) [3]. We must admit that the rollout strategy used in our model inherently limits its speed, while reducing the network size provides only marginal gains. Nonetheless, MultiPDENet generalizes well across diverse initial conditions, varying $Re$, external forces, and larger domains.
>**Q2. Does MultiPDENet still outperform other models at the same inference speed?**
**Re:** Great question! As mentioned above, solely reducing network parameters does not substantially accelerate the inference due to the employed rollout strategy. However, leveraging a larger time step $\delta t$ proves effective. MultiPDENet's multiscale time-stepping design allows it to circumvent the CFL condition, ensuring accuracy and stability even with increased $\delta t$. Specifically, with $4\delta t$ (MultiPDENet-L), we achieved an extra 4$\times$ speedup (e.g., inference time of 6s) while maintaining a similar model accuracy, aligning with the speed of other models as shown in **Table A**. Note that MultiPDENet-L was trained based on a rollout strategy over 8 macro-steps.
These results demonstrate our model’s multiscale time stepping scheme consistently outperforms all baselines across all metrics, even at similar inference speeds. Hope this clarifies your concern.
**Table A:** Comparison of MultiPDENet and baselines for NSE
|Model|RMSE(↓)|MNAD(↓)|HCT(s↑)|Infer. cost(s↓)|
|-|-|-|-|-|
|UNet|0.82|0.06|3.96|7|
|FNO|1.01|0.09|2.57|5|
|LI|NaN|NaN|3.50|9|
|TSM|NaN|NaN|3.75|9|
|DeepONet|2.18|1.02|0.11|**1**|
|MultiPDENet|**0.13**|**0.01**|**8.36**|26|
|MultiPDENet-L|0.37|0.02|7.43|6|
***Refs:***
[1] Li et al. Fourier neural operator with learned deformations for pdes on general geometries. JLMR, 2023
[2] McGreivy et al. Weak baselines and reporting biases lead to overoptimism in machine learning for fluid-related partial differential equations. Nature Machine Intelligence, 2024.
[3] Takamoto et al. PDEbench: An extensive benchmark for scientific machine learning. NeurIPS, 2022.
**Remark:** Please let us know if you have other questions. Looking forward to your feedback!
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. I think my concerns are largely addressed. I am raising the score to 3.
---
Reply to Comment 1.1.1:
Comment: Thank you for your positive feedback and for increasing the score. We will include the additional experiments and text in the revised paper. | null | null | null | null | null | null | null | null |
Deep Sturm–Liouville: From Sample-Based to 1D Regularization with Learnable Orthogonal Basis Functions | Accept (poster) | Summary: This paper proposes a novel method (Deep Sturm-Liouville, DSL) that combines the Sturm-Liouville theorem with neural networks. By solving one-dimensional Sturm-Liouville problems, DSL generates orthogonal basis functions from the resulting eigenfunctions to approximate target functions and employs implicit gradients for network training. The method integrates both intrinsic implicit regularization and manually added explicit regularization. Experiments on the Adult, Dry Bean, Bank Marketing, MNIST, and CIFAR10 datasets demonstrate that DSL achieves performance comparable to traditional neural networks while exhibiting superior sample efficiency.
## update after rebuttal
I will maintain my score after reading the response from authors
Claims And Evidence: In the abstract and introduction, the authors hypothesize that the generalization challenges of neural networks may stem from "0D regularization" (i.e., sample-point-based regularization) and claim that DSL can "overcome" this limitation. However:
**Theoretical gaps**: No rigorous proof is provided to show that DSL inherently achieves better regularization or generalization than conventional methods. The paper also fails to explain how 0D regularization negatively impacts neural network generalization.
**Experimental limitations**: The results only demonstrate comparable performance to existing methods (standard neural networks and neural ODEs), with no evidence that DSL actually addresses generalization difficulties.
Methods And Evaluation Criteria: The methodological framework of DSL is aligned with the core objectives of this research.
Theoretical Claims: The theoretical proofs are rigorous.
Experimental Designs Or Analyses: While the current results still meaningfully validate the feasibility and potential of DSL, the experimental section has certain limitations:
**Architectural ambiguity**: The neural network architectures (e.g., layer configurations, parameter counts) used for baseline comparisons are not clearly specified.
**Incomplete hyperparameter analysis**: While the impact of the number of basis functions is discussed, critical hyperparameters (e.g., the regularization coefficient α in Eq. 9, architectures of the field-line generation network) are not systematically analyzed.
**Computational fairness**: DSL requires additional time to compute basis functions and other intrinsic information. To ensure a fair comparison, the authors should include results for traditional neural networks trained under equivalent computational budgets.
Supplementary Material: Part E of the supplementary material has been reviewed.
Relation To Broader Scientific Literature: Traditional regularization methods (e.g., weight decay, dropout) impose constraints in parameter space or via stochastic perturbations, while sample-level regularization (e.g., adversarial training, Mixup) enforces robustness to local input perturbations. These are inherently "0D" approaches, operating on discrete samples or parameters. In contrast, DSL introduces 1D regularization along field lines, extending constraints to continuous paths and implicitly enforcing function smoothness through orthogonal basis functions derived from Sturm-Liouville problems.
Essential References Not Discussed: No essential references were omitted.
Other Strengths And Weaknesses: Have mentioned above.
Other Comments Or Suggestions: Typos:
Page 2: "Liouville" is misspelled as "Liouiville."
Page 7, Line 333: "Dirichlet" is misspelled as "Dirichelt."
Figure references:
Figure 5 (Page 8) is not discussed in the main text. The analysis of sample efficiency in Section 5 should reference Figure 5 instead of Figure 7 in the appendix.
Page 7, Line 360: The reference to "Figure 5.2.1" is unclear.
Questions For Authors: 1. How exactly does the proposed method address the generalization challenges attributed to 0D regularization?
2. What is the mechanism behind the implicit regularization in DSL, and how does it theoretically or empirically outperform traditional neural network regularization?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for carefully reading our paper, providing thoughtful feedback and recognizing the novelty of our approach.
I — In response to the reviewer's main concerns:
1 — 1D Regularization vs. 0D Regularization: By 0D regularization, we refer to regularization applied at discrete data points. For example, a term like $\sum_i \lvert \nabla_x F(x_i) \rvert$ computes the gradient of the predictor $F(x)$ only at the sampled points $x_i$. Ideally, one would compute a continuous regularization such as $\int_{\Omega} \lvert \nabla_x F(x) \rvert dx$, but this is generally intractable. Our method bridges this gap by computing regularization along one-dimensional trajectories defined by a vector field $a(x)$ for each data point $x$. This 1D regularization provides a more informative approximation of the continuous case by integrating along these paths. Within this framework, we introduce two new types of regularization: spectral regularization and implicit regularization. These are complementary to traditional techniques (such as L1 or L2 regularization), which we do not aim to replace. In future work, our framework could also incorporate other forms of regularization or even modify the loss function itself.
2 — Implicit Regularization: The key idea behind implicit regularization in our framework is to retain only the first $n^\text{th}$ basis functions. By controlling the number of retained basis functions, we also control their level of oscillation—a direct consequence of the Sturm–Liouville theorem (see lines 173–180). This is analogous to the Fourier basis, where higher-order components exhibit more oscillation than lower-order ones. However, Sturm–Liouville provides a more general formulation. This property allows us to construct smoother function approximators, which have a known theoretical connection to generalization. Moreover, sample efficiency is inherently tied to generalization performance (see [1, 2] ). Our experiments on sample efficiency were specifically designed to assess generalization behavior. By limiting the number of basis functions, we directly control the expressiveness and smoothness of the learned function, thus influencing the regularization effect.
II — Regarding the reviewer's concerns on "Experimental Design and Analyses":
1 — Architectural Ambiguity for the Baseline: The reviewer is right, and we will include a dedicated section to clarify this point in the appendix. For MNIST and CIFAR-10, we report the results from Massaroli et al. (2020), and we will explicitly reference this in the appendix. For the tabular datasets, we used an architecture similar to the one used for the function \( a(x) \). The source code is accessible via an anonymous GitHub https://rb.gy/n5iz02. Upon acceptance, the source code will be made publicly available to support the reproducibility of our work.
2 — Incomplete Hyperparameter Analysis: We conducted an ablation study in Figure 4(b) of the paper to analyze the impact of the spectral regularization coefficient on training accuracy. However, the coefficient $\alpha$ was missing from the figure and no reference was made to Equation (9). We will update the figure and its legend to include both the coefficient and the appropriate reference, as their absence is indeed misleading.
3 — Computational Fairness: The reviewer is right to highlight that computational budget is a critical factor for fair comparisons. For MNIST and CIFAR-10, we chose to report the results from Massaroli et al. (2020), a recognized reference. For the tabular datasets, we used similar architectures for both the baseline and the DSL model to ensure a fair comparison in terms of parameter count and model complexity.
Lastly, thank you for pointing out several typos in the manuscript—we will correct them accordingly.
[1] Zhang, Chiyuan, et al. "Understanding deep learning requires rethinking generalization." arXiv preprint arXiv:1611.03530 (2016).
[2] Arpit, Devansh, et al. "A closer look at memorization in deep networks." International conference on machine learning. PMLR, 2017. | Summary: In the present contribution authors describe a novel approximation scheme suitable for general mappings $\mathbb{R}^{m}\rightarrow \mathbb{R}^{n}$ where both $m$ and $n$ may be large. The scheme is suggested to be used as an alternative to neural networks.
A simplified description of the proposed mapping $y = f(x),$ where $x\in\mathbb{R}^{m},\,y\in\mathbb{R}^{n}$ is as follows
1. Solve the neural ordinary differential equation (ODE) starting from $x$ both for positive and for negative time, until the trajectory hits the boundary of the hypercube. From that record the whole trajectory $\gamma(t)$ and boundary points $t_{-}$, $t_{+}$, where the trajectory hit the hypercube.
2. Use $\gamma(t)$ to define parameters of one-dimensional Sturm–Liouville problem on the interval $\left[t_{-},t_{+}\right]$
3. Solve this problem for first $k$ eigenvectors $v_{j}(x),\,j=1,\dots,k$
4. Compute $y_{i} = \sum_{j}w_{ij} v_{j}(x),\,i=1,\dots,n$
In this scheme the following parameters are potentially learnable:
1. Vector field of neural ODE
2. Parameters of functions that compute coefficients of Sturm-Liouville problem from $\gamma(t)$
3. Number of basis functions used from Sturm-Liouville problem
4. Weights of linear transformation $w_{ij}$
Authors experimentally evaluated the proposed scheme and show that it achieves results competitive with other more standard approaches.
In addition to experimental evaluation authors prove two theoretical results:
1. The obtained eigenvectors are orthogonal on $\Omega\subset \mathbb{R}^{m}$, i.e., the input space where $x$ is defined and not only on the curve $\gamma(t)$.
2. That proposed scheme is related to the Dirichlet rank-1 parabolic eigenvalue problem.
## update after rebuttal
Summarised in https://openreview.net/forum?id=CzSNEvCckO¬eId=6lXg5LQbdT
Claims And Evidence: Authors made several mild claims:
1. That the proposed approximation scheme leads to comparable performance to classical approaches.
2. Theoretical claims on orthogonality of eigenvalues.
3. Theoretical claims on the relation to parabolic eigenvalue problem.
I believe that experimental claims are supported by results on CIFAR and MNIST.
The first theoretical claim seems to be supported too. At least I can not point to any problems with the proof.
The second theoretical claim is not supported. I will provide more details in the appropriate section of the review.
Methods And Evaluation Criteria: In general, I find evaluation criteria to be appropriate. However, I believe it would be beneficial for the readers to have access to more information on training time, memory load and other metrics along this line. The method proposed by the authors is quite complicated with many non-standard components that require custom derivation of derivative rules, this makes it hard to estimate computational requirements for using this technique.
Theoretical Claims: I reviewed proofs for both theoretical claims.
The first one of the orthogonality of the eigenvalues "in the bulk" seems fine (Appendix C).
The proof for the second theoretical statement is not correct. In Appendix D on lines 728-732 authors assume the coordinate transformation they define is valid. Unfortunately, the assumption of the theorem does not exclude the situation when this transformation is not possible to define.
In the theorem authors assume $a_i(x)>0$. Next, they define scalar coordinate $t(x)$ by identity $\nabla_{x} t(x) = a(x)$. To be able to do that, one need $a(x)$ to be conservative field. If this is not assumed one can easily build pathological examples. Consider$$a(x) = \frac{1}{2}\begin{pmatrix}(x_1 - x_2)^2\\\\(x_1 + x_2)^2\\\\1\end{pmatrix}.$$Curl of this field reads$$\nabla_{x}\times a(x) = \begin{pmatrix}0\\\\0\\\\ 2 x_1\end{pmatrix}.$$Since $a_i(x) >0$ holds our choice agrees with conditions states in the theorem. However, it is not possible that $a(x) = \nabla_x t(x)$ for some scalar function $t(x)$ since curl of $a(x)$ is not zero.
The rest of the coordinates defined by the author also have this property but for vectors $a_{k}(x)$ that are orthogonal to $a(x)$. This means these fields should also be potential, and authors need to carefully explain how they are going to build these additional orthogonal fields. The reference to the Gram-Schmidt process is not sufficient.
Experimental Designs Or Analyses: I find the design and analysis of the experiment reasonable, besides the fact that no data on memory and computation load is available. Besides that in my view authors did not perform basic ablation study that seems appropriate given the complexity of the method they propose. I describe the suggested ablation below.
The scheme proposed by authors is to extract trajectory from neural ODE and later use this trajectory to construct a basis from Sturm-Liouville problem. The most obvious ablation to this scheme is to completely remove the step with ODE.
This can be done with simple modifications to Sturm-Liouville problem$$-\frac{d}{d\tau}\left(p_{\theta}(x, \tau)\frac{d}{d\tau} u_i(\tau)\right) + q_{\theta}(x, \tau)u_i(\tau) = \lambda_i \omega(x, \tau) u_{i}(\tau),$$where coefficients of Sturm-Liouville problem are neural networks.
Since authors implemented an efficient differentiation through Sturm-Liouville solver, this kind of replacement is a simple modification of their code.
I suggest authors perform this simplifications and report how this will affect accuracy and data efficiency.
Supplementary Material: I reviewed all materials in the abstract including the proofs. Several questions on the material from the appendix are present in other questions of the review.
Relation To Broader Scientific Literature: If one considers MLP, each layer can be regarded as a progressive building of basis. The role of the last layer is to use this learned basis to solve the problem of interest with linear or simple nonlinear (link) function (see, e.g., Bishop CM, Nasrabadi NM. Pattern recognition and machine learning).
Present contributions suggest an alternative way to construct this data-dependent basis based on the initial-value problem and boundary-value problem.
The first related approach from the literature in neural ordinary differential equations https://arxiv.org/abs/1806.07366. Neural ODE are most widely used in generative modelling https://arxiv.org/abs/1810.01367, https://arxiv.org/abs/2210.02747. However, special classes of these models can also be applied directly to classification https://openreview.net/forum?id=SAv3nhzNWhw, anomaly detection https://arxiv.org/abs/2302.07253, reduced-order modelling https://www.nature.com/articles/s41598-023-36799-6, segmentation https://arxiv.org/abs/2502.06034, etc.
Since the technique proposed in the present contribution is related to partial-differential equations. This provides a second link to existing literature.
Methods based on PDEs were popular in the image processing problems, e.g., Chan TF, Shen J, Vese L. Variational PDE models in image processing, but they fall out of favour with the adoption of methods based on neural networks. PDE-based approaches also recently made they way into the deep learning, e.g., in https://arxiv.org/abs/2403.15726 authors used reaction-diffusion model, in https://arxiv.org/abs/2502.06034 a wave equation, and in http://proceedings.mlr.press/v107/sun20a/sun20a.pdf a general neural PDE is considered. Still, PDE-based approaches are rare and the method proposed by authors of the present contribution is an interesting exception.
Essential References Not Discussed: I believe authors sufficiently discussed related literature. It seems Neural ODE is the most relevant related technique. Authors discuss Neural ODEs and perform numerical experiments explicitly comparing their method with Neural ODEs.
Other Strengths And Weaknesses: **Strengths:**
1. Authors rigorously explain their approach including many details how derivatives are computed for the non-standard components.
2. To the best of my knowledge the approach is highly original with no directly related techniques in a published literature
**Weaknesses:**
1. The scheme is very complicated and it is not clear which parts are necessary since ablation is not available
2. The only advantage seems to be data efficiency that can be likely achieved with standard methods coupled with additional regularisation
3. The code is not available
4. Neural networks are still used to parametrise neural ODE and Sturm-Liouville problem, and in combination with minor to no improvement it rises the questions about the significance of the proposed approach
Other Comments Or Suggestions: I have several minor questions that I list here:
1. In several places authors mention that their approach is a "1D" regularisation and other known approaches are from "0D" regularisation, e.g., Lines 57-60. This terminology is not clear to me. Typically, regularisation implies some additional constraints, e.g., that the $l_2$ norm of weights is small, or that activations are normalised to $1$. Why does the approach by authors present a form of regularisation? If one uses some method to map between parts of input space (e.g., https://arxiv.org/abs/2303.16852), forming a trajectory between points, is it a form of regularisation too?
2. Lines 85-88, right column. "Conversely, DSL’s vector field operates directly within the input space ($\Omega\rightarrow\Omega$), ensuring that field lines traverse the entire domain." Why does the field traverse the entire domain? Since all components authors used are learnable, the trajectories of neural ODE can be arbitrary, not necessarily "dense" in $\Omega$.
3. Lines 303-306, right column. In the formulation of Theorem 5.1, indices $i$ and $j$ should be distinct which is not explicitly specified.
Questions For Authors: Here a briefly summarise my main concerns:
1. Proof of Theorem 5.3 is not correct (see above).
2. The code is not available.
3. No information about computation and memory requirements.
4. The scheme is too complicated, and no ablation is available.
So, my main suggestions is to perform ablation study (see suggestions above) and correct the proof of Theorem 5.3.
Ethical Review Concerns: na
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: The authors would like to thank you for your detailed review, for carefully examining the demonstrations and for recognizing the novelty of our approach.
I — Demonstration 2
First, we would like to address the mistake you identified in Demonstration 2, specifically in lines 728–732. The reviewer is absolutely correct, and we sincerely thank you for pointing this out. This demonstration is a key element of our paper, and we greatly appreciate your attention to its details. Fortunately, the mistake can be corrected without affecting the main results. Below, we explain how to fix it. As a reminder, the objective of this part of the demonstration is to construct a change of variables such that the first row of the Jacobian matrix of the mapping function $(t, y = F(x))$ is the vector $a(x)$ which is required to be orthogonal to the remaining rows of the Jacobian.
(i) — As the reviewer correctly pointed out, $a(x)$ must be the gradient of a scalar function. We will explicitly add this as a condition in the theorem. This is directly related to lines 251–252 of the paper, where we assume the absence of singular points.
(ii) — The reviewer also rightly pointed out an issue in the construction of the orthogonal basis: the Gram-Schmidt process alone is not enough as the resulting vectors also need to be gradients of scalar functions. To address this, we will leverage tools from differential geometry by interpreting the change of variables as defining a one-dimensional bundle generated by the vector field $a(x)$. We will add the following explanation to the paper:
``Let \(E\) be a one-dimensional real vector bundle over a manifold \(M\). It is known (see Milnor \& Stasheff's Characteristic Classes) that \(E\) is trivial if and only if it admits a global nowhere‐vanishing section—equivalently if its first Stiefel–Whitney class \(w_1(E)\) vanishes. In this case, a nowhere‐vanishing section \(a\) provides a canonical trivialization $E \cong M \times \mathbb{R}$. Furthermore, if we choose local coordinates adapted to this trivialization and if there exists a smooth function $V : \mathbb{R}^n \rightarrow \mathbb{R}$ such that $\nabla V(x)=a(x)$, the Jacobian of the corresponding local map \(T_x\) can be arranged so that its first row is precisely \(a(x)\). This reflects the fact that the coordinate system is chosen to align the fiber direction (generated by \(a(x)\)) with the first coordinate axis. With an additional Riemannian metric on \(M\), the orthogonal complement Bundles lemma (see John M. Lee, Introduction to Smooth Manifolds) then provides an orthogonal decomposition $ T_xM = \operatorname{span}\{a(x)\} \oplus \operatorname{span}\{a(x)\}^\perp, $ completing the geometric picture. Consequently, the first row of the Jacobian is exactly $a(x)$, aligning the coordinate system with the direction defined by $a(x)$. Combined with the orthonormality of the remaining coordinates, this ensures that the first row of the Jacobian matrix is orthogonal to all other rows.''
II — Ablation study
To evaluate the role of the vector field in our approach, the reviewer suggested an ablation study that removes its influence. We agree this is a valuable experiment and would like to describe it in more detail. In this ablation, we apply the proposed Sturm–Liouville formulation over the time interval $[0, 1]$ and evaluate the basis functions at $t = 0.5$. The input to each MLP is formed by concatenating the data point $x$ with the time variable $t$.
For the sample efficiency results and the accuracy performance on tabular data, please find below the updated plot and table that includes the ablation model https://rb.gy/xjx9tk. Our results show that DSL achieves higher sample efficiency and comparable accuracy compared to the ablation model.
III — Weakness and other questions
1 — Code Availability: The source code is accessible via an anonymous GitHub https://rb.gy/n5iz02. Upon acceptance, the source code will be made publicly available.
2 — Use of Neural Networks in DSL: We acknowledge the dependency of the DSL on neural networks. As a direction for future work, we are exploring ways to design a version of the DSL that does not rely on neural networks. DSL represents a foundational block that we aim to develop further.
3 — 0D vs 1D Regularization: Since this question was also raised by another reviewer, we addressed it in the rebuttal of Reviewer cdFW (1).
4 — DSL Coverage of $\Omega$: The reviewer is right to point out that, for a general vector field, the trajectories might not cover the entire domain $\Omega$. To ensure full coverage, the ODE must satisfy certain constraints. These are discussed in the paper between line 215 (second column) and line 253 (first column), where we explain the uniqueness of $\gamma(t_-)$ and $\gamma(t_+)$. However, we realize this section may not clearly state that these constraints also guarantee coverage of $\Omega$. We will add an explicit sentence to highlight this point.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for a detailed reply and additional ablation study. In the rebuttal authors addressed my major concerns so I changed my recommendation accordingly. | Summary: This paper introduces Deep Sturm-Liouville (DSL), a novel function approximator that integrates the Sturm-Liouville Theorem (SLT) into deep learning to achieve continuous 1D regularization along field lines in the input space. Demonstrates competitive performance and improved sample efficiency on diverse datasets including MNIST and CIFAR-10, showing DSL's effectiveness in practical machine learning tasks.
Claims And Evidence: After careful review, I did not find any claims that were obviously problematic or lacked sufficient support. The evidence provided seems comprehensive and convincing. The theoretical analysis is rigorous, and the experimental verification validates the authors' theoretical analysis.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are appropriate.
Theoretical Claims: I did not check all the details of the proof line by line, but on the whole, the theoretical analysis part of the paper has clear logic and reasonable proof structure.
Experimental Designs Or Analyses: The experimental designs and analyses in the paper are generally sound and valid.
Supplementary Material: I reviewed the proofs and experiments in the supplementary material.
Relation To Broader Scientific Literature: The paper's integration of Sturm-Liouville Theory with deep learning provides a novel mathematical foundation for developing more expressive and generalizable function approximators. This interdisciplinary approach bridges gaps between applied mathematics and machine learning, offering new insights into how to design models that can better capture the underlying structure of data.
Essential References Not Discussed: No literature missing.
Other Strengths And Weaknesses: Strengths:
The paper presents a novel integration of Sturm-Liouville theory with deep learning, creating the Deep Sturm-Liouville (DSL) framework. The introduction of 1D regularization along field lines in the input space addresses a fundamental limitation of traditional sample-based (0D) regularization methods, providing a new dimension for controlling model complexity and improving generalization. The experiment verifies the correctness of the proposed method. The overall structure of the paper is clear and easy for readers to understand.
Weaknesses:
1、 The computational overhead of solving Sturm-Liouville problems might restrict scalability to very large-scale problems or real-time applications, especially in scenarios with abundant training data.
2、 The experimental is limited. The author only verified it on small datasets. There is a lack of verification on the large-scale datasets.
Other Comments Or Suggestions: No
Questions For Authors: 1、 The computational overhead of solving Sturm-Liouville problems might restrict scalability to very large-scale problems or real-time applications, especially in scenarios with abundant training data.
2、 The experimental is limited. The author only verified it on small datasets. There is a lack of verification on the large-scale datasets.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer upBa for taking the time to review our paper in detail and for recognizing the novelty of our approach. The novelty of our approach was highlighted also by reviewer 2moq, who described our article as "highly original with no directly related techniques in a published literature". We appreciate your concern regarding the scalability of our method to large-scale datasets or real-time applications. We fully agree that this limitation may hinder the immediate widespread deployment of our framework in such contexts.
However, as the reviewer noted, the core contribution of our work lies in introducing a new and principled approximator framework for regularization, grounded on theoretical results. As such, we believe it should be viewed as a foundational step—laying the groundwork for future improvements and extensions.
Improving scalability is a clear direction for future work. Potential avenues include designing more efficient approximations of the Sturm–Liouville basis, new methods to compute gradients through the eigenvalue process that avoid relying on the implicit differentiation theorem, as well as exploring alternative ODE solvers that are better suited to Sturm–Liouville-type equations. Additionally, we plan to explore new forms of regularization beyond the spectral and implicit ones introduced here. We see this work as the first step in a broader research agenda that combines structure, theory, and learning in a unified framework. We will update the paper to make this limitation clearer—especially in the limitations section—and outline these directions as promising areas for future work. | null | null | null | null | null | null | null | null |
SketchDNN: Joint Continuous-Discrete Diffusion for CAD Sketch Generation | Accept (poster) | Summary: The paper proposes a diffusion based CAD sketch generation framework using a mixture of continuous and discrete diffusion. Technically, it introduces Gaussian-softmax diffusion, which is able to model categorical distributions with diffusion models. The paper provides a detailed theoretical derivation for their proposed Gaussian softmax diffusion and shows its effectiveness in generating CAD sketches. Experimentally, the paper shows state-of-the-art CAD generation results compared to baselines.
## Post Rebuttal
I appreciate the authors' rebuttal addressing many of my concerns. I'd like to keep my rating as it is. The main concern I have is still the usability of the method given that it doesn't support any kinds of controllable generation. Thus, a discussion on this front would be great to be included in the final revised edition.
Claims And Evidence: The paper proposes a novel diffusion process with Gaussian-softmax diffusion. Using it, the paper is able to generate state-of-the-art CAD sketches that contain both continuous and discrete parameters. While this claim is supported by the paper's superior CAD sketch generation results compared to baselines quantitatively, it would be great to also see qualitative examples compared with the baselines.
Further, the paper also claims that its Gaussian-softmax diffusion works better than categorical diffusion. An ablation study comparing the two would be good to have.
Methods And Evaluation Criteria: The paper's usage of Gaussian-softmax diffusion is convincing for the task of CAD sketch generation. The paper also uses an existing dataset with standard generation metrics. I would appreciate a more qualitative comparison with existing methods.
While it's standard to limit the number of primitives to below 16, I'm curious to see the scalability of this method when the number of primitives increases.
Furthermore, I'm a little puzzled by the independence assumption in the reverse process of the diffusion. The decomposition in Eq.13-14 assumes the independence of each primitive during the forward and reverse processes. Then does that mean the denoising process of each primitive is independent from each other? Then, it would become a primitive generation rather than a sketch generation.
Theoretical Claims: The paper provides an extensive set of derivations for their Gaussian-softmax diffusion process. The math looks convincing, but I didn't look into the details too carefully.
Experimental Designs Or Analyses: The paper uses a standard dataset following the existing data pre-processing scheme. The paper only presents unconditional generation results, and lack qualitative comparisons with existing baselines. Further, it's unclear how can we use the model for downstream tasks, since it lacks any type of control
Supplementary Material: I took a brief look at the supplement, which contains an extensive set of mathematical derivations of their Gaussian-softmax diffusion process. However, I did not check the correctness of the derivation in close detail.
Relation To Broader Scientific Literature: The paper provides a novel diffusion formulation using Gaussian Softmax distributions. It's a good extension to the current diffusion model literation in terms of modeling categorical distributions.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: THe paper is techically sound with validations backing up the theoretical innovations. I don't have any additional weaknesses besides things I listed above.
Other Comments Or Suggestions: Ln.113 departure -> depart.
Questions For Authors: I have listed my questions in the previous sections. Specifically, I would like to see
1. More qualitative results compared with baselines.
2. Clarification on the independence assumption among primitives during the denoising process.
3. Discussion on the scalability of the method when increasing the number of primitives modeled.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: 1. While this claim is supported by the paper's superior CAD sketch generation results compared to baselines quantitatively, it would be great to also see qualitative examples compared with the baselines.
1. We have a qualitative analysis written and ready for the final paper. However, we only compare ours against Vitruvion, as other prior arts like SketchGen don’t have published code. We chose not to do a qualitative analysis against Sketchgraphs since Vitruvion is a follow up work to it.
2. The paper also claims that its Gaussian-softmax diffusion works better than categorical diffusion. An ablation study comparing the two would be good to have.
1. We do an ablation study see SketchDNN (Cat.) in Table 1 and Table 2\. Which is our SketchDNN model but just using methodology set by Hoogeboom et al. in “Argmax Flows and Multinomial Diffusion: Learning Categorical Distributions” which doesn’t allow for superposition.
3. While it's standard to limit the number of primitives to below 16, I'm curious to see the scalability of this method when the number of primitives increases.
1. Since the backbone of our model is essentially just the DiT model by Peebles & Xie, it has the same scaling laws as that of DiT.
4. Furthermore, I'm a little puzzled by the independence assumption in the reverse process of the diffusion. The decomposition in Eq.13-14 assumes the independence of each primitive during the forward and reverse processes. Then does that mean the denoising process of each primitive is independent from each other? Then, it would become a primitive generation rather than a sketch generation.
1. The primitives can only be independently noised and denoised when given the ground truth or model prediction of the noiseless CAD sketch. The noiseless CAD sketch is just like the noise prediction in image diffusion models, which also denoise each pixel independently.
5. Further, it's unclear how we can use the model for downstream tasks, since it lacks any type of control.
1. You are correct, we do not show conditional generation. Perhaps we used the wrong terminology, this will be fixed in the refined paper. What we meant was that the primitives generated by our model can be outfitted with constraints using an auto constrain model and then be parsed into Onshape. | Summary: Authors proposed a diffusion-based generative model for 2D CAD engineering drawings of parametric primitives. Main contributions include the use of Gaussian-softmax to unify discrete and continuous parameters, which works well for a modified diffusion model without positional encoding. And also a superposition data representation that joins the parameters from different types of parameters all in one. Results are reported for SketchGraphs where the generation quality are better compared to baselines.
Claims And Evidence: Line 108: “we propose the first diffusion-based generative model for parametric CAD sketches” --- This is inaccurate, there is already existing works that apply diffusion model to generate CAD sketches. E.g “Diffusion-CAD: Controllable Diffusion Model for Generating Computer-Aided Design Models”, they generate both the CAD sketches and also the extrusion parameters that turn it into a CAD model. Similarly, there is also "VQ-CAD: Computer-Aided Design model generation with vector quantized diffusion".
Methods And Evaluation Criteria: Paper has no novel and uniqueness scores, which is common in CAD generation. Also no closest-retrieval results to demonstrate model is not overfitting to training set (something like fig 6 in Neural Wavelet-domain Diffusion for 3D Shape Generation will be nice to have).
Theoretical Claims: Derivation for Gaussian-softmax seems correct.
Experimental Designs Or Analyses: Baseline resutls from Vitruvion (table 1) is not the same as those reported in the original paper. I believe setting is the same, 16-max primitives. But fig 5 in Vitruvion reports better performance of 6.35 (per primitives) and 66.6 (per sketch).
Analysis for the benfit of superposition (line 339-344) also lacks proper support. SketchDNN (Cat) used discrete diffusion and in theory should be much worse than continous diffusion. I am not surprised about the results. However this is not a valid proof that superposition mechanism is a key factor for performance. For that, I am expecting to see a comparision where different primitives are seperated into different tokens, with only parameters related to that primitive type included.
Supplementary Material: Yes I browse through the derviation, seems correct but I am not 100% certain.
Relation To Broader Scientific Literature: Automatically generating CAD engineering drawings is an interesting topic. Although this paper has somewhat limited impact in the CAD community as it removes the constraints for simplicity.
Essential References Not Discussed: Related work section is too short and does not cover previous works. Missing citations for sketch-and-extrude CAD generation:
1) Diffusion-CAD: Controllable Diffusion Model for Generating Computer-Aided Design Models
2) VQ-CAD: Computer-Aided Design model generation with vector quantized diffusion
3) SkexGen: Autoregressive Generation of CAD Construction Sequences with Disentangled Codebooks
Also missing citations of discrete and continuous diffusion for CAD-related or vector-like data:
4) HouseDiffusion: Vector Floorplan Generation via a Diffusion Model with Discrete and Continuous Denoising
5) CoLay: Controllable Layout Generation through Multi-conditional Latent Diffusion
6) PLay: Parametrically Conditioned Layout Generation using Latent Diffusion
Finally removing the positional encoding in CAD generation has already be done in BrepGen and shown to address the permutation invariant nature of CAD models (section 6.3). This should also be properly acknowledged.
7) BrepGen: A B-rep Generative Diffusion Model with Structured Latent Geometry
Other Strengths And Weaknesses: Data representation is similar to DeepCAD and Vitruvion without too many modifications, but the Gaussian-Softmax that joins discrete and continuous diffusion is interesting and clearly demonstrates its advantage compared to discrete diffusion.
Overall, paper is well written and easy to follow. Results are demonstrated on the large-scale SketchGraphs dataset. Related work section definitely needs more work to be done. There are also some over-claims or unsupported analysis in the paper (see my comments above).
In terms of results, I hope authors can clarify the number reported for vitruvion baseline and add the novel/unique metrics which show that model is not overfitting.
The lack of constraints in the output is also a major disadvantage of this aproach. Usually autoregressive model or pointer network are more suitable for generating the constraints. It is much more difficult for diffusion models to represent inter-relations with fixed-length parameters.
Other Comments Or Suggestions: Minor type Line 380 state-of-the-art (SOA) -> SOTA
Questions For Authors: 1) What is the inference speed compared to autoregressive baselines?
2) Since the parameters are superpositioned, then the correct primitive type can be determined by looking at which parameters is valid after denoising, is discrete primitive type class even required?
3) How does this approach compared to other methods that jointly denoise continous and discrete values? There are many baseline diffusion methods that authors did not compared against. A very related one is House Diffusion in which discrete parameter was represented as binary hash code.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: 1. Line 108: “we propose the first diffusion-based generative model for parametric CAD sketches”.
1. Sorry, you are right, what we meant to write was “the first ‘sketch space’/’data space’ diffusion-based generative model for parametric CAD sketches.
2. No closest-retrieval results to demonstrate the model is not overfitting to the training set.
1. None of our baselines performed a closest-retrieval analysis to demonstrate the model is overfitting, so we saw no need to do it as well. Furthermore, we instead provided precision and recall scores since they are widely used. If need be, we can present the training and validation loss to show our model is unlikely to have overfit.
3. Vitruvion (table 1\) is not the same as those reported in the original paper.
1. Our deduplication procedure is slightly different from Vitruvion’s, where we quantize the continuous parameters to 8-bits rather than 6-bits, this was to avoid placing somewhat similar sketches into the same bin. Secondly, since our model is permutation invariant, we sorted the rows in the sketch matrix to a canonical ordering. This was done to avoid sketches with identical geometry but differing orderings from being labeled as nonidentical. As a result, our dataset differed from that used in the original Vitruvion paper, so retraining Vitruvion with our dataset yielded the scores we present in the paper. We will put this in refined paper.
4. Constraints are removed for simplicity, making the paper have a limited impact.
1. This is not as large of an issue as there are auto constraining models, for instance in Vitruvion, primitives are generated first, then constraints are filled in between them. Even though our method generates primitives, it can be coupled with “off-the-shelf” auto constraining models to create full fledged sketches. We will discuss this in the refined paper.
5. Data representation is similar to DeepCAD and Vitruvion without too many modifications.
1. The similarities between our data representation and DeepCAD/Vitruvion lies in the fact that we choose similar attributes to define the parameters of primitives. Even then, our work encodes “Arc” primitives in a different manner to DeepCAD/Vitruvion. Furthermore, Vitruvion tokenizes primitive parameters into a series of value, id, and position tokens which we don’t do. As for DeepCAD, we don’t quantize continuous parameters and we also allow constructible primitives unlike DeepCAD.
6. What is the inference speed compared to autoregressive baselines?
1. The inference speed is much slower, since we chose T \= 2000\. Our model takes \~30s to generate a sketch whereas Vitruvion takes \~3-5s. However, we believe that future work can reduce this discrepancy by perhaps borrowing from preexisting methods used in standard gaussian diffusion.
7. Since the parameters are superpositioned, then the correct primitive type can be determined by looking at which parameters is valid after denoising, is discrete primitive type class even required?
1. The primitive type is needed to determine what parameters are valid at the end of the generation, since all parameter values between \-1 and 1 are valid. Furthermore, the primitive type is needed in the reverse process/inference to downweight irrelevant parameters (Section 5.2, line 319). Without explicitly using the primitive type, because of superposition, all continuous parameters may be valid and it wouldn’t be straightforward to select one primitive type over another.
8. How does this approach compared to other methods that jointly denoise continous and discrete values (specifically House diffusion)
1. In House diffusion, discrete variables are mapped to continuous space, where the gaussian diffusion process occurs. Only gaussian diffusion is being performed, no diffusion is occurring in discrete space. For t \< 20, the model outputs a binary representation that gets mapped back to a continuous value where gaussian denoising occurs. This is not related to performing diffusion in discrete space for discrete variables and in continuous space for continuous variables concurrently.
9. Brepgen reference for permutation invariance missing
1. Thank you for letting us know, we have rectified this in our refined paper.
10. References missing
1. We have expanded on the related works section in our refined paper.
*Diffusion-CAD paper was published on Jan 29, 2025 which is a day before this paper was submitted.*
11. “For that, I am expecting to see a comparison where different primitives are separated into different tokens, with only parameters related to that primitive type included.”
1. I’m confused as to what you’re asking here. What difference does it make whether irrelevant parameters are zeroed out or not included? Passing either into a linear layer, will only add up the contributions of the nonzero entries. Which is exactly the case for our categorical diffusion ablation study. | Summary: This paper introduces a diffusion-based generative model for CAD sketch primitive generation. The technical innovation is a Gaussian-Softmax based diffusion paradigm. The method addresses key challenges in the heterogeneity of primitives (each primitive type is defined by its distinct parameterization) and permutation invariance of primitives. The proposed method improves the state-of-the-art results on the SketchGraphs dataset. This work marks the first application of diffusion models to parametric CAD sketch generation.
Claims And Evidence: Yes
Methods And Evaluation Criteria: yes
Theoretical Claims: I checked the Gaussian-Softmax Diffusion Derivation in the supplementary. There is no issue as far as I know.
Experimental Designs Or Analyses: yes
Supplementary Material: I review the entire supplementary
Relation To Broader Scientific Literature: The key contribution of the proposed method is its potential to address the challenges posed by heterogeneous and unordered primitives in CAD sketches.
Essential References Not Discussed: "Brepgen: A b-rep generative diffusion model with structured latent geometry" is also a CAD(Brep) generation paper using a diffusion-based generative model.
Other Strengths And Weaknesses: Strengths:
1. Writing is clear and easy to follow.
2. Experimental analysis is well-presented.
Weakness:
1. No qualitative comparison with other methods.
2. No failure case analysis.
3. In the ablation study, SketchDNN (Cat.) is trained using categorical diffusion. Does this involve quantizing continuous parameters to make them discrete? This detail is unclear.
4. The number of primitives varies for each sketch. The paper does not explain how the number of primitives is determined during testing or how the data is prepared during training.
Other Comments Or Suggestions: 1. More qualitative results would be better.
Questions For Authors: See Strengths and Weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: 1. No Qualitative Analysis
1. We have now written a qualitative analysis ready for the final paper. However, we only compare ours against Vitruvion, as other prior arts like SketchGen don’t have published code. We chose not to do a qualitative analysis against Sketchgraphs since Vitruvion is a follow up work to it.
2. No Failure Case Analysis
1. We now have a failure case analysis written and ready for the final paper. The failure cases we discuss are: 1\) The generated primitives have no discernible pattern or form, 2\) gaps exist between primitive terminations or in other words the endpoints of primitives that should be coincident are not, 3\) Extraneous primitives that don’t contribute to the overall sketch design or are not intertwined with the rest of the sketch
3. In the ablation study, SketchDNN (Cat.) is trained using categorical diffusion. Does this involve quantizing continuous parameters to make them discrete? This detail is unclear.
1. No, continuous parameters are not quantized. The only change is that the forward and reverse process of discrete variables (primitive types, constructible tag) follows the methodology set by Hoogeboom et al. in “Argmax Flows and Multinomial Diffusion: Learning Categorical Distributions” which doesn’t allow for superposition.
4. The number of primitives varies for each sketch. The paper does not explain how the number of primitives is determined during testing or how the data is prepared during training.
1. We set the maximum number of primitives to be 16, due to time and resource constraints (Section 5.1, line 305). If a sketch contains less than 16 primitives, then it is padded with null primitives which are represented by the “none” node type (Section 2, line 104). Each sketch is given by a matrix, where each row represents a primitive (Section 2, line 110).
5. Brepgen reference missing
1. Thank you for letting us know, we will include this in our final paper, as we reworked our related work section to be more comprehensive. | null | null | null | null | null | null | null | null |
EditLord: Learning Code Transformation Rules for Code Editing | Accept (poster) | Summary: This paper seeks to decompose the traditional end-to-end LLM-assisted code editing task into discrete and step-wise processes. To this purpose, this paper adopted LLM to summarize meta editing rules from 3 editing tasks: optimization, decompilation and security hardening, and augmented LLM performance in a retrieval style. Their proposed approach achieved SOTA performance in the 3 aforementioned tasks.
Claims And Evidence: The claim of “The post-edit code must be semantically equivalent to the pre-edit code, i.e., having the same functionality, and possessing the desired new properties” raises concerns about the proportion of such edits in real-world scenarios.
Methods And Evaluation Criteria: Code editing tasks are usually treated as translation tasks, and the idea of transforming such an abstract process into a chain-of-edit process is interesting and promising. Meanwhile, the writing style of this paper is easy to follow. However, I have some concerns about this work.
The first concern is the limited scope of the implementation, as it focuses only on three tasks: optimization, decompilation, and security hardening. My intuition is that these tasks are not highly frequent in real-world editing scenarios, which may limit the broader impact of this approach. I would suggest conducting an empirical study to quantify the actual proportion of such edits in real-world settings, providing a clearer picture of the practical significance and applicability of this work.
The next concern is that the lack of quality assurance of the rule set or edit. Despite the manual inspection, the quality of these rules has not been formally verified. For example, it remains unclear whether the retrieved rules can always be correctly applied to the samples, consistently leading to accurate edits.
To summarize, I encourage the authors to extend the editing scenario to project-wise editing, which holds greater practical significance in software development. Additionally, incorporating methods such as symbolic reasoning to formally verify the rules and assess their applicability would enhance the reliability and robustness of the approach.
Theoretical Claims: NA, there is no theorical claim in the submission.
Experimental Designs Or Analyses: Yes, all experimental designs.
Supplementary Material: NA
Relation To Broader Scientific Literature: NA
Essential References Not Discussed: [1] Liu, Chenyan, et al. "CoEdPilot: Recommending Code Edits with Learned Prior Edit Relevance, Project-wise Awareness, and Interactive Nature." Proceedings of the 33rd ACM SIGSOFT International Symposium on Software Testing and Analysis. 2024.
[2] Priyanshu Gupta, Avishree Khare, Yasharth Bajpai, Saikat Chakraborty, Sumit Gulwani, Aditya Kanade, Arjun Radhakrishna, Gustavo Soares, and Ashish Tiwari. 2023. Grace: Language Models Meet Code Edits. FSE
[3] CODIT: Code Editing With Tree-Based Neural Models. IEEE Transactions on Software Engineering 48, 4 (2022), 1385–1399.
Other Strengths And Weaknesses: NA
Other Comments Or Suggestions: NA
Questions For Authors: In practical software engineering, people apply code edits in a repository. In addition, the code edits are triggered by an *issue* (see github issue). How this research can push forward this direction.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We really appreciate your time and effort in leaving constructive comments.
**Q1: The three tasks it focuses on may limit real-world applicability because they are not highly frequent in real-world editing scenarios.**
The three editing tasks we considered are extensively studied in the literature [1-9], even evaluated on the same datasets as we do. They are broadly applicable already. For example, Scalene, the popular Python profiler, has already integrated GPT to suggest code optimizations for short code snippet [11]; LLMs have been integrated into existing decompilers as plugins [8,9]; Large companies are actively hosting competitions on LLMs to produce secure code [10]
[1] Tan, Hanzhuo, et al. "LLM4Decompile: Decompiling Binary Code with Large Language Models." Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing. 2024.
[2] Hu, Peiwei, Ruigang Liang, and Kai Chen. "Degpt: Optimizing decompiler output with llm." Proceedings 2024 Network and Distributed System Security Symposium. Vol. 267622140. 2024.
[3] He, Jingxuan, and Martin Vechev. "Large language models for code: Security hardening and adversarial testing." Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security. 2023.
[4] Peng, Jinjun, et al. "CWEval: Outcome-driven Evaluation on Functionality and Security of LLM Code Generation." arXiv preprint arXiv:2501.08200 (2025).
[5] Shypula, Alexander, et al. "Learning Performance-Improving Code Edits." ICLR. 2024.
[6] Huang, Dong, et al. "Effilearner: Enhancing efficiency of generated code via self-optimization." Advances in Neural Information Processing Systems 37 (2024): 84482-84522.
[7] Peng, Yun, et al. "PerfCodeGen: Improving Performance of LLM Generated Code with Execution Feedback." arXiv preprint arXiv:2412.03578 (2024).
[8] aiDAPal: IDA Pro plugin that uses a locally running LLM that has been fine-tuned for Hex-Rays pseudocode to assist with code analysis, https://github.com/atredispartners/aidapal
[9] GhidraAssist: An LLM extension for Ghidra to enable AI assistance in RE, https://github.com/jtang613/GhidrAssist
[10] Amazon Nova AI Challenge accelerating the field of generative AI (LLM for secure code generation), https://www.amazon.science/amazon-nova-ai-challenge-accelerating-the-field-of-generative-ai
[11] Scalene: a high-performance, high-precision CPU, GPU, and memory profiler for Python with AI-powered optimization proposals, https://github.com/plasma-umass/scalene
**Q2: Despite the manual inspection, the quality of these rules has not been formally verified.**
This is a valid concern. While there is no formal guarantee on the generated editing rules’ correctness, our evaluation shows our rule set consistently improves editing performance (Sec. 3.2). We included additional results to show that our meta-rule set learning algorithm can bring improved robustness against the perturbations of the meta-rule set (see below). We do agree that further interacting with formal verifiers to guarantee the correctness of editing rules can be an exciting future work.
||Correct|Compile|Readability|||
|:-|-|-|-|:-|:-|
||||char|token|emb|
|EditLord|93.1|46.6|44.0|47.6|41.4|
|w/ shuffle|94.0|47.3|43.1|45.2|39.7|
**Q3: In practical software engineering, people apply code edits in a repository. In addition, the code edits are triggered by an issue (see GitHub issue). How can this research push forward in this direction?**
EditLord focuses on file-level local edits rather than repo-level editing. File-level edits are already valuable in many scenarios during software development. For example, the developers frequently refactor their code at the local context, e.g., by editing the function they have just written to make it more efficient. While we do believe repository-level editing is indeed an exciting work, we do not attempt to overclaim that EditLord is readily useful for repository-level edits. Instead, we would like to emphasize that the tasks we considered in this paper are already nontrivial and useful, as we have also argued in our response to Q1.
**Q4: Essential references not discussed.**
Thanks for pointing out these related works. We found the main difference between EditLord and many existing editing works (e.g., CoEdPilot, Grace, and CODIT) is that these works mainly focus on repairing functionality, while our tasks focus on preserving original functionality while introducing changes in other dimensions, e.g., performance and readability. That said, these are all very related and should be discussed. We have added discussions in the related works of these papers in our draft.
---
Rebuttal Comment 1.1:
Comment: I thank the authors' response, which generally address my concern. In this case, I would happy to support its acceptance. In the revision, the authors could consider how to extend their file-level editing solutions to a repository-level solution.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank the reviewer for carefully reviewing our response. We really appreciate your recognition of our effort to address the concerns raised, and we are grateful for your support for the paper's acceptance. We will include a study on EditLord for repo-level code editing tasks, e.g., SWE-Bench, by replacing the default code agent for these tasks with EditLord. | Summary: The paper introduces a method for editing code in a decompositional way, where it extracts editing steps, obtains functional specifications, and performs rule-based code editing by prompting LMs.
Claims And Evidence: The paper claims that their method improves code efficiency by leveraging the decompositional nature of code editing tasks. However, after reading the full paper, I don’t fully understand why the extracted editing rules are effective in improving code efficiency or what the rules actually look like. Additionally, their figures need significant improvement, as they don’t help in understanding their methodology—such as the input and output of each subtask, or how the rules and editing process work—but instead confuse me.
I also suggest the authors clearly indicate which dataset benchmarks they use in each table of results. From their experiments section, I gathered that they evaluated on one dataset benchmark, the HQ dataset. However, they also mention the HumanEval dataset elsewhere in the paper, which causes confusion.
Overall, the paper’s presentation requires major revisions and improvements to help readers understand how their method works, and more dataset benchmarks should be included for better evaluation.
Methods And Evaluation Criteria: See "Claims And Evidence"
Theoretical Claims: Theoretical claims are not required for this work.
Experimental Designs Or Analyses: The number of compared baselines are enough, but more datasets benchmarks should be included for a more comprehensive comparison.
Supplementary Material: I downloaded their supplementary material, though it's an empty repo. However, the authors clarified that they will make the code public.
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: Not to my knowledge.
Other Strengths And Weaknesses: See "Claims And Evidence"
Other Comments Or Suggestions: See "Claims And Evidence"
Questions For Authors: See "Claims And Evidence"
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you so much for your taking the time and effort to read our paper and leave constructive comments.
**Q1. Why extracted editing rules are effective in improving code efficiency or what the rules actually look like.**
These extracted editing rules serve as editing guidelines that help the model reason about the useful steps and actions to take to generate the edited code. Please refer to Table 8 in Appendix B for some examples of the rules. We have also added the detailed samples to the [anonymous repo](https://anonymous.4open.science/r/EditLord-9C9B/example.pdf).
**Q2. The figures are confusing and do not clearly illustrate the methodology, e.g., the input/output of each subtasks, rule usage, and the editing process.**
We apologize for the potentially unclear figure presentation. To clarify, we take the performance optimization task as an example in Fig 1. The input is a slow (pre-edit) code, while the expected output is semantically equivalent but faster (post-edit) code. Now consider inferencing by an EditLord finetuned model. Instead of directly generating the faster code (post-edit code), the model will first generate functional specifications and editing rules, and then these generated is in the context, and the model keeps on generating the post-edit (faster) code.
For the decompilation task, the input is an unreadable, non-idiomatic ghidra-decompiled (pre-edit) code, while the expected output is a more readable, idiomatic (post-edit) code (see Fig 3).
For the security hardening task, the input is vulnerable code, while the expected output is secure code.
In the upper part of Fig 2, we illustrate how we prepare the functional specifications and editing rules. We will take the pre-edit and post-edit code pair from the training data as input and use LLMs to annotate the functional specification and the editing rules for each pair.
Since editing rules are often shared among different samples, our Alg. 1 describes how an LM serves as a rule learner by iteratively refining the raw meta-rule set and producing a more concise meta-rule set by using operations ADD, MERGE, and PRUNE. This step is done *independently* for each task.
We also state the input and output for each editing mode more clearly in the [updated figure](https://anonymous.4open.science/r/EditLord-9C9B/workflow.pdf) for our improved workflow figure.
**Q3. The dataset used in their experiment is unclear and causes confusion (e.g., HQ, HumanEval).**
We evaluate different editing tasks with different datasets, as different objectives require different evaluations. Specifically, for performance optimization tasks, we train on the HQ dataset and evaluate on the test split from PIE. For the decompilation task, we train on AnghaBench and test on HumanEval-Decompile. For the security hardening task, we train on SVEN, and test on CWEval. We will update our Section 3.1 to describe the setting more clearly.
**Q4. More benchmarks should be included for better evaluation.**
Thanks for pointing this out. Per our response to reviewer raGi, we evaluated EditLord’s finetuned DeepSeek-Coder 1.3B in CodeEditorBench, the benchmark we did not consider in the submission. We follow their metrics by focusing on 1) accuracy: the percentage of problems with correct edits; 2) OptScoreTime: the execution time improvement; and 3) OptScore, the improvement computed by the averaged time and memory together.
Surprisingly, without further finetuning EditLord on this benchmark, but simply running inference using a finetuned EditLord model on a completely different training set, it substantially outperforms the finetuned baseline by 22.5%, 1.8%, and 1.1%, respectively.
||Accuracy|OptScoreTime|OptScore|
|:-|:-|:-|:-|
|Finetuned|0.9%|0.03%|0.09%|
|EditLord|23.4%|1.83%|1.19%|
**Q5. Repo seems empty.**
Thanks for your interest in our artifact. We would hope to clarify that the repository is not empty. The code and prompts were uploaded along with the paper submission. It is likely that the README is blank, which makes the repo appear to be incomplete. We have already uploaded other important files and updated the README. We hope this addresses your concern. | Summary: EditLord is a system designed to improve performance on code editing tasks (e.g. performance/readability/security). The proposed pipeline involves a few steps, starting from a training dataset (editing task-specific) with pre/post edit programs. In the first step, a LLM is used to produce a set of editing ‘meta-rules’. In the second step, a LLM is used to describe the transformation that takes places for each training task. Experiments show, across different tasks, that finetuning a LLM with these extra annotations leads to better performance versus finetuning a LLM on just the tasks alone.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes they seem to.
Theoretical Claims: N/A
Experimental Designs Or Analyses: The experiments seem sound in design.
Supplementary Material: I briefly looked through the code included in the supplemental.
Relation To Broader Scientific Literature: This system works on the problem of code editing/improvement; offering a general method/strategy for improving LLM finetuning for this task.
Essential References Not Discussed: References largely seem appropriate.
Although not 'essential', I would encourage the authors to add a discussion of works that try to discover similar artifacts as their 'meta rule-sets'. E.g. for program synthesis (not editing) library learning works (DreamCoder, LILO), or for more general LLM tasks skill discovery (TroVE, GENOME). This connection seems useful, and relevant, though I think even including it in the appendix would be fine.
Other Strengths And Weaknesses: This is a strong paper. It offers an interesting solution for a hard, well-studied problem, demonstrating consistent experimental improvement over a range of tasks.
The idea of trying to extract out meta-rules from a corpora of editing tasks is noteworthy and to my knowledge has not be tried before; beyond benefits during finetuning, this also allows for human-in-the-loop improvements through editing / curation of these rule sets (Section 3.6).
The experimental framework seems robust and quite comprehensive; from my read, the paper does a good job of supporting their claims against prior finetuning alternatives.
The most significant weakness of the paper is that its unclear how robust this meta-rule creation process actually is; while it certainly proves consistently useful, the machinery to produce these rules is relatively simple from a certain perspective (which in some ways is a positive of the system). There aren't really any guarantees that these meta-rules will be good / interpretable / general (unless the human-in-the-loop machinery is employed), but this can be left as a problem for future work / investigations.
Other Comments Or Suggestions: One thought that I had while reading the paper: the proposed process basically creates better annotations for LLM finetuning. When trained on the entire dataset, these annotations lead to improved performance, but how does this performance improvement change as a function of the amount of finetuning data used? I would almost expect this method to do 'better' (than the default finetuning strategy) when the amount of finetuning data is small (would be a useful appendix/supplemental experiment).
Questions For Authors: Though not critical, it would have been interesting to see how larger LLMs (not finetuned) would perform on these tasks, especially when the rules/specifications are appended to the prompt or not.
How stable is convergence of meta-rule set? I.e. if you re-order the rules from 2.2 before passing them into Alg 1, do the same types of meta-rules get discovered, or does this have high variance? I think more analysis on both the stability meta-rule set generation process, and how well it matches 'semantics' (human prior, by some measure) would bring the paper from good to great.
Clarification question: What is DIS in algorithm 1? Is this some text-embedding distance?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We are grateful for your effort in leaving us such encouraging comments.
**Q1. It’s unclear how robust the meta rule creation process is. And there aren’t really any guarantees on the quality without human interaction.**
Great point. We observed that if we simply bootstrap per-sample rules, the rules often end up being noisy, repetitive, and not generic. This leads the resulting meta-rule set to be too large to fit in the model context, and the resulting per-sample manifested rules become extremely susceptible to the order of training samples. This motivated us to *mitigate* this issue by developing the meta-rule learning algorithm to keep the meta-rule set concise and generalizable. As you have also noted, our evaluation in Sec 3.6 shows that exposing the meta-rule set during inference allows us to invite human intervention to further improve the quality of the edited code.
While there is no formal guarantee on the generated editing rules’ correctness, our evaluation shows our rule set consistently improves editing performance (Sec. 3.2). In our response to your Q4, our additional results show that our meta-rule set learning algorithm is relatively robust against the perturbations of the training samples. We do agree that bringing provable guarantees to the editing rules, e.g., by interacting with formal verifiers brought by the compiler techniques, is an exciting future work.
**Q2. It’s unclear how annotation-driven performance improvements scale with varying amounts of finetuning data.**
This is a great suggestion. We have added an experiment on the decompilation task when using only 50% of the finetuning data. The results show that EditLord, with only 50% of the training set, still outperforms the finetuned baseline with 100% training samples by 5.9%. Its readability results remain 13.5% and 16.4% higher than the finetuned baseline on char- and token-level readability. This result demonstrates that the sample efficiency brought by EditLord, requiring less than 50% of training samples while achieving comparable performance.
|Data Usage|DeepSeek-Coder-1.3B|Correct|Compile|Readability|||
|:-|:-|-|-|-|:-|:-|
|||||char|token|emb|
|50%|EditLord|41.2|93.1|42.6|46.3|41.4|
|100%|Finetuned|38.9|77.1|36.6|40.8|37.5|
|100%|EditLord|46.6|93.1|44.0|47.6|41.4|
**Q3. What is the performance of larger, non-finetuned LLMs w/ or w/o rules/specifications appended?**
We added an experiment exploring the GPT-4o-mini’s (2024-07-18 version) performance when incorporating our meta-rule set and functional specifications. As shown below, simply including them for non-finetuned GPT-4o-mini will improve its correctness and readability by 11.9% and 8.6%, respectively.
||Correct|Compile|Readability|||
|:-|-|-|-|:-|:-|
||||char|token|emb|
|Prompt|44.3|61.8|33.1|37.0|37.9|
|w/ EditLord|49.6|57.3|36.2|40.1|41.0|
**Q4. How stable is the convergence of the meta-rule set? Will shuffling introduce high variance?**
We have added an experiment that randomly shuffles the initial dataset before passing it into Alg. 1 and obtaining a pair of meta-rule sets. We then measure the similarity between these two sets. Specifically, for each rule in one set, we compute its average semantic similarity to the top 5 most similar rules in another set and average them again to obtain the overall similarity. The semantic similarity between two rules is computed by the cosine distances of the rule embeddings computed by CodeSage, following the similar setting as in the readability metrics in Sec. 3.1. The resulting semantic similarity between the shuffled rule sets is 0.87, close to 1.
We also show the end-to-end results on the robustness of EditLord introduced by the resulting meta-rule set. The perturbed rule sets lead to less than 2.4% performance changes.
||Correct|Compile|Readability|||
|:-|-|-|-|:-|:-|
||||char|token|emb|
|EditLord|93.1|46.6|44.0|47.6|41.4|
|w/ shuffle|94.0|47.3|43.1|45.2|39.7|
**Q5. What is DIS in Alg. 1?**
DIS is the internal embedding distance calculated by the LLM. We intended to use DIS to describe that we ask the LLMs to discover the rules that share similar semantics to decide whether to directly ADD the current rule to the rule set or MERGE it with an existing rule in the rule set. We acknowledge that this can lead to confusion, and we have updated the algorithm at [link]((https://anonymous.4open.science/r/EditLord-9C9B/algo.pdf)).
**Q6: Essential References Not Discussed.**
Thanks for pointing out these related benchmarks. We have added the discussion to our draft. We view code generation tasks as complementary to ours as their input is not the code, but usually natural language specification, similar to our functional specification.
Thanks for pointing out skill discovery works, e.g., TroVE and GENOME! We were inspired by TroVE in the very early stage of our project but ended up narrowing down our focus to primarily code-editing works. We should definitely discuss both, and we have added them to the draft. | Summary: Traditionally, for code editing, language models (LMs) are often used to directly generate the output code (or diff) given the input code in a single turn. There have also been approaches that prompt LMs in a CoT-style manner to generate some reasoning before outputting the edited code. Similarly, existing approaches for supervising LMs for code editing focus on directly generating the edited code (optionally augmented with some reasoning).
This paper offers a fresh perspective on supervising LMs for code editing tasks. Instead of supervising an LM to directly transform the input code to the output code, the proposed approach first produces an understanding (functional specification) of the input code, a set of rules to transform the input code, and finally, the output code conditioned on the functional specification and the set of rules.
The paper presents experiments considering 3 different code editing tasks and 3 recent language models (DeepseekCoder-1.3B, DeepSeekCoder-6.7B, and GPT4o-mini). Authors compare their fine-tuning approach with naive fine-tuning; zero-shot prompting; and chain-of-through prompting, for each task and for each model. The results indicate that the proposed fine-tuning approach consistently outperforms the standard fine-tuning and other baselines.
Authors also present ablations demonstrating their approach to be more robust w.r.t. semantics preserving code transformations and length of the input code.
Claims And Evidence: Claims made in the paper are supported by clear and convincing evidence.
However, for further validation of the results, it would be nice to have additional experiments on some other well-known existing code-editing benchmarks. Please see "Essential References Not Discussed".
Methods And Evaluation Criteria: Proposed methods and/or evaluation criteria make sense for the problem or application at hand.
Theoretical Claims: There are no theoretical claims in the paper as such.
Experimental Designs Or Analyses: Experimental designs seem sound and valid.
However, for further validation of the results, it would be nice to have additional experiments on some other well-known existing code-editing benchmarks. Please see "Essential References Not Discussed".
Supplementary Material: I have given the supplimentary material a quick read. It describes the prompts used, sample rules discovered by the model, and hyperparameter details.
Relation To Broader Scientific Literature: The key contributions are related and relevant to prior literature on using language models for code editing.
The paper utilizes benchmarks developed by the following prior literature on code editing for evaluating their proposed approach.
[1] Learning performance-improving code edits. arXiv preprint arXiv:2302.07867, 2023.
[2] Llm4decompile:Decompiling binary code with large language models. arXiv preprint arXiv:2403.05286, 2024.
[3] Cweval: Outcome-driven evaluation on functionality and security of llm code generation. arXiv preprint arXiv:2501.08200, 2025.
Essential References Not Discussed: The following code-editing benchmarks are fairly well known and should be discussed and included in experiments
[1] [CodeEditorBench: Evaluating Code Editing Capability of Large Language Models](https://arxiv.org/abs/2404.03543)
[2] [NoFunEval: Funny How Code LMs Falter on Requirements Beyond Functional Correctness](https://arxiv.org/abs/2401.15963)
[3] [Aider Code Editing Benchmark (including aider polyglot)](https://aider.chat/docs/benchmarks.html)
[4] [HumanEvalFix](https://arxiv.org/abs/2308.07124) (a code-editing variant of Human eval)
Other Strengths And Weaknesses: Strengths
* Paper should serve as an interesting read for audience interesting in language models for code editing
* The paper presents experiments considering 3 different code editing tasks and 3 recent language models (DeepseekCoder-1.3B, DeepSeekCoder-6.7B, and GPT4o-mini). Authors compare their fine-tuning approach with naive fine-tuning; zero-shot prompting; and chain-of-through prompting, for each task and for each model. The results indicate that the proposed fine-tuning approach consistently outperforms the standard fine-tuning and other baselines.
* Authors also present ablations demonstrating their approach to be more robust w.r.t. semantics preserving code transformations and length of the input code.
Weakness
* While experiments are fairly detailed, the paper currently ignores some well-known benchmarks specifically designed for code-editing. Covering these benchmarks should help strengthen the claims in the paper and add more credibility. (this is the only major weakness I observe in this paper.)
* The proposed approach assumes a training dataset in the target domain. (What about covering security vulnuerabilities that were not a part of the training dataset?). Investigating out-of-domain generalization of the proposed approach should be interesting.
* Paper currently does not provide examples corresponding to each of the three datasets. One representative example from each of the three code-editing tasks would be very helpful. It will help us understand the granularity of the input code (function level/ class level/ or file level) for each task.
Other Comments Or Suggestions: Comments
* Post-edited code may not always necessarily be equivalent to the pre-edited code in case of bug fixing. Therefore, s_i should be a function of both x_i and y_i and not x_i alone.
* If possible, the paper should included zero-shot & CoT performance of larger models like GPT4-o as well. It would be interesting to see if fine-tuning GPT4o-mini makes is as good as GPT4-o for code-editing.
Questions For Authors: * How robust is the proposed approach to noise in rule manifestations? During training rules are manifested while making use of both x_i and y_i. During testing, we don’t have y_i, so we might end up using irrelevant rules.
* How large is the discovered rule set for different code editing scenarios? Is it possible to share entire rule set for each of the three scenarios/datasets? Additionally, how large is the initial rule set G^0 and the final rule set G?
* Have authors considered extending CoT reasoning with their rule set? Explicitly augmenting CoT reasoning to use rules might give better results?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We really appreciate your time and effort in reviewing our paper and giving us constructive comments!
**Q1: More well-known benchmarks should be included.**
Thanks for pointing this out. We originally focused on individual tasks where the corresponding papers also proposed tailored solutions (e.g., PIE) to ensure we compare to the state-of-the-art baselines.
That said, we evaluated our finetuned DeepSeek-Coder 1.3B on the Code Polish task in CodeEditorBench. We follow their metrics by focusing on 1) accuracy: the %problems with correct edits; 2) OptScoreTime: the execution time improvement; and 3) OptScore, the improvement computed by the averaged time and memory. EditLord, even without extra finetuning on this dataset, outperforms the finetuned model by 22.5%, 1.8%, and 1.1%, respectively.
||Accuracy|OptScoreTime|OptScore|
|:-|:-|:-|:-|
|Finetuned|0.9%|0.03%|0.09%|
|EditLord|23.4%|1.83%|1.19%|
**Q2: How does EditLord work on out-of-domain generalization (e.g., unseen vulnerabilities)?**
We ensured our training and evaluation came from two data sources. As described in Sec 3.1, our training comes from SVEN, but our testing is from CWEval with unseen CWEs.
We add below a breakdown of the baseline and EditLord’s performance on seen/unseen CWEs. EditLord consistently generalizes better than the baseline, outperforming it by 7.5% and 38.1%, respectively.
We also add generalization tests on unseen languages (Python/Java) in performance optimization in CodeEditorBench. EditLord achieves improvement in both seen and unseen languages, outperforming it by 2.05% and 0.61%, respectively.
Along with the length generalization results in Sec 3.5, our added experiments here show that EditLord maintains strong generalization in various settings. We will include them in the draft.
|Security|Methods|Correct@k|||Security@k|||Correct & Sec@k|||
|:-|:-|:-|:-|:-|:-|:-|:-|:-|:-|:-|
|||k=1|k=10|k=50|k=1|k=10|k=50|k=1|k=10|k=50|
|Seen CWEs|Finetuned|24.3|38.4|41.7|12.8|44.0|50.0|7.7|21.6|25.0|
|Seen CWEs|EditLord|36.8|53.3|66.7|12.5|43.3|58.3|8.7|24.7|25.0|
|Unseen CWEs|Finetuned|24.1|35.0|40.0|8.6|14.8|22.5|4.6|11.5|17.5|
|Unseen CWEs|EditLord|29.6|48.5|57.5|12.1|23.8|30.0|7.0|16.9|22.5|
|Code Polish|Methods|Accuracy|OptScoreTime|OptScore|
|:-|:-|:-|:-|:-|
|Seen lang (cpp)|Finetuned|1.4%|0.02%|0.24%|
|Seen lang (cpp)|EditLord|28.3%|3.1%|2.29%|
|Unseen lang|Finetuned|0.7%|0.04%|0.02%|
|Unseen lang|EditLord|20.9%|1.18%|0.63%|
**Q3: The granularity of the input code (function/class/file level) for each task is unclear.**
The input code is at the file level for all tasks. This ensures the code can be compiled to measure the functional correctness. Please see detailed examples [here](https://anonymous.4open.science/r/EditLord-9C9B/example.pdf).
**Q4: How robust is EditLord to noise in rule manifestations during inference?**
Great point. We observed that learning rules for each sample often leads to noisy and repetitive rules, which can degrade performance. This motivated us to propose Alg.1 to disentangle the meta-rule learning and the per-sample rule manifestation. While there is no formal guarantee on the generated editing rules’ correctness, our evaluation shows our rule set consistently improves editing performance (Sec. 3.2). That said, generating provably correct rules in formal languages with formal verifiers is indeed an exciting future work.
We include some preliminary results on the robustness of EditLord against randomly shuffled rules for training. The following shows that perturbed rule sets lead to less than 2.4 performance changes in the decompilation task.
||Correct|Compile|Readability|||
|:-|-|-|-|:-|:-|
||||char|token|emb|
|EditLord|93.1|46.6|44.0|47.6|41.4|
|w/ shuffle|94.0|47.3|43.1|45.2|39.7|
**Q5: How large is the initial rule set G^0, and the final discovered rule set G? Can you share the entire rule set for each task?**
The initial rule set $G^0$ has 2.9K, 1.9K, and 1.2K rules for performance, decompilation, and security hardening, respectively, while the final rule set G has 221, 228, 237, respectively. Table 8 shows the example rules. Thanks for your interest. We will definitely release the full set once the paper is ready to be published.
**Q6: Will extending CoT reasoning with their rule set give better results?**
Yes, we included this study in the paper. As described in Fig.2, the *prompting* editing mode refers to CoT prompting with our meta-rule set. In Fig.4 (R=0 means CoT prompting without iteratively refining the generated code), we compare this CoT setting with zero-shot prompting w/o CoT (i.e., w/ vs w/o EditLord). Including our rules improves the CoT performance by 46% across all the tasks.
**Q7: Related work not discussed.**
Thanks for pointing out these benchmarks. We have included preliminary results on CodeEditorBench (Q1), showing EditLord’s potential to generalize to unseen benchmarks. We will add new results and discussions of these benchmarks in our paper. | null | null | null | null | null | null |
Learning Input Encodings for Kernel-Optimal Implicit Neural Representations | Accept (poster) | Summary: This paper first established a theoretical insight that the neural tangent kernel of implicit neural representation can approximate any positive semidefinite dot-product kernels. Developing on this insight, the paper propose a kernel alignment regularizer to improve the INR system. Experiments show the proposed method performs better than baseline methods in image reconstruction and phase retrieval tasks.
Claims And Evidence: 110-114: The neural tangent kernel associated with an INR (which is a multi-layer perceptron) captures the evolution of network predictions during training. So by regularizing the NTK, we can control the training of INR. With this insight, the paper introduces a regularizer Eqn. (11) and claims this regularizer can improve the INR.
The evidence is both theoretical and empirical. The paper shows the proposed regularized loss function yield better performance in image reconstruction and phase retrieval.
The experiments indeed support the claim made by the paper. However, I personally consider the experiments to be a little simple. The experiments are simple tasks and mainly synthetic. If the proposed method can be applied to other more challenging real-world problems, the paper will be much stronger.
Methods And Evaluation Criteria: The method proposed in the paper is simple and effective, and the theoretical foundation behind the regularization loss function is strong. So I believe the motivation and justification of the proposed method is valid. The proposed method is evaluated by two tasks and shows good results. But as mentioned in previous sections, the tasks seem to be too simple and work easily.
Theoretical Claims: I didn’t carefully check the proof, while the overall idea makes sense to me.
Experimental Designs Or Analyses: As mentioned in the previous sections, the considered tasks seem to be too simple.
Supplementary Material: No supplementary material is provided.
Relation To Broader Scientific Literature: The proposed method seems to have a strong relation to reconstruction tasks, which are the foundation of many important tasks, such as unsupervised learning through reconstruction, image super resolution, etc.. From this perspective, the paper provides new insights into a fundamental problems, so I consider this as a significant contribution.
It would be much stronger if the proposed method is shown to be also effective in some real-world tasks, rather than only the synthetic tasks presented in this paper.
Essential References Not Discussed: No
Other Strengths And Weaknesses: No
Other Comments Or Suggestions: No
Questions For Authors: No
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **All the Table/Figure Rx can be found in https://anonymous.4open.science/r/ICML-Re-3333**
**Q1:** The proposed method should be applied to other more challenging real-world problems.
**A1:** Thank you for your valuable suggestion. We have conducted additional experiments on more complex problems, specifically focusing on 3D neural fields and comparing our methods with Instant-NGP, as NeRF can be time-consuming. We performed experiments on the NeRF Synthetic datasets with 25, 50, and 100 view perspective samples. As shown in [Table R3](https://anonymous.4open.science/r/ICML-Re-3333/Table_R3.pdf), the average PSNR for 25 view perspectives indicates that the PEAK algorithm outperforms the vanilla Instant-NGP. Notably, the improvement is more pronounced with fewer training samples, demonstrating that PEAK effectively enhances generalization capability. Furthermore, [Figure R3](https://anonymous.4open.science/r/ICML-Re-3333/Figure_R3.pdf) shows that PEAK reduces artifacts caused by sparser samples, indicating its ability to leverage the internal structure of the data to improve performance in downstream tasks with limited data. We will include this 3D experimental data in the revised manuscript to further demonstrate the effectiveness and generalizability of our method. | Summary: The paper proposes two theoretically-motivated changes to implicit neural representation architecture and training, based on comparisons to the infinite width neural tangent kernel. The first change is a regularization strategy to encourage alignment with the optimal NTK, and the second is a trainable encoding that can be added before an INR to improve kernel alignment. Since the optimal kernel is not computable (due to both lack of access to the true data distribution and computational limitations of evaluating and inverting a large matrix), the idea is to encourage the INR to approximate this optimal kernel without computing it directly. Note that the proposed changes can be applied to any INR as a plug-in modification.
Claims And Evidence: The claims of improved performance are well substantiated by experiments. The claims of matching the optimal kernel are substantiated with a toy experiment in Figure 2, though in general it seems the proposed method approximates a class of kernels that includes the optimal kernel, but may not exactly match the optimal kernel since that is in general not identifiable from a finite dataset.
Methods And Evaluation Criteria: Evaluation on image inpainting (with different types of linear image corruption) and phase retrieval (a classic nonlinear inverse problem, also here for images), is quite compelling. These are reasonable examples to test the method and substantial improvement is shown compared to existing INRs, both quantitatively and qualitatively.
Although the quality of results obtained with the proposed method PEAK are impressive, it would be informative to also compare model size (ideally keeping the number of trainable parameters fixed across all models compared) and training/inference time for each method compared. Since PEAK requires adding a trainable embedding layer, I am concerned that it may gain an unfair advantage by increasing model size and/or training time.
Theoretical Claims: The theoretical claims include a characterization of some properties that must be satisfied by the optimal kernel method for a dataset, and algorithmic and approximation choices to encourage an INR to approximately satisfy these same properties (including a theorem that such an INR exists). Since the optimal kernel itself is not identifiable from a discrete dataset, I would encourage the authors to relax some of the language around alignment with the optimal kernel since it seems the proposed method is instead endowing the INR with some properties that are also shared by the optimal kernel. Nonetheless, the theoretical contribution is valuable as most INR architectures make no attempt at theoretical motivation or characterization. Figure 2 does also validate in a toy setting that this approximation can induce an INR to mimic the optimal kernel in a setting where the optimal kernel is known and computable, which is a compelling illustration of the main idea.
Some notation is not clear. Specifically:
- What is A? In Theorem 3.3 A would seem to be a subset of the real numbers reflecting the range space of a kernel. Then A appears in section 3.4 where it would appear to be a function of two arguments, sometimes bold and sometimes not bold. From context my guess is that the bold version is a vectorized version of the non-bold version of A, but that neither A in section 3.4 is related to the A in theorem 3.3. Eventually the bold function A is defined in line 266 (right column), but this is after the reader has been seeing it without definition for almost a page. It would be preferable to explain at first use that A is a function to be learned, that must satisfy certain properties.
- The embedding function gamma is sometimes denoted with the internal parameters a_j listed explicitly, and sometimes with these parameters implicit in the function. This is a minor issue but at first glance can make it difficult to see that these are all the same function. Since gamma is also introduced and described before the practical choice for its definition/parameterization is given (in line 257, right column), I would suggest mentioning at first use that gamma is, like A, a function to be learned that must satisfy certain properties.
- The circled plus notation is used (e.g. on line 247, left column) without definition. The same notation can be used for multiple operations (e.g. direct sum, exclusive or, and dilation) so it warrants precise definition here.
Experimental Designs Or Analyses: Please refer to my comments on methods and evaluation criteria. Specifically, I’d like to see some comparison or discussion of relative model sizes and training/evaluation times in addition to the provided comparisons of quality. I encourage the authors to release their code (upon publication, if not before).
Supplementary Material: The supplementary material contains more details of the relevant theoretical background (on neural tangent kernels and kernel regression), as well as proofs. One proof is of the NTK dot product property (though I consider this background material since the dot product property has been shown before and is leveraged e.g. in the Fourier Features Let Networks Learn paper). The other proof is of the nonnegative power series expansion of a dot product kernel, which is not clearly stated whether this is a novel result or a convenient restatement of a known result. The supplement also includes some ablation studies and details on the model architectures; these should be at least referenced (with brief discussion) in the main paper.
Relation To Broader Scientific Literature: Related work is adequately discussed.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: N/A; see the main review.
Other Comments Or Suggestions: Figure 1 is difficult to understand. Figure 1(a) could be improved by plotting with alpha<1 so that overlapping dots/lines can be distinguished. Figure 1(b) is not clear what it is trying to show (what do arrows mean? What are the level sets showing? What is the black triangle? etc.).
Overall the writing is clear, though there are occasional typos/minor grammatical mistakes. E.g. the sentence on line 214 (left column) is not a complete sentence, though the meaning is still clear.
Questions For Authors: My primary question for the authors is about the model sizes and training/inference times for each of the methods compared, to determine whether the improved performance comes at a cost. My leaning towards acceptance assumes that this information will be shared during the rebuttal/revision.
I also have some questions about notation in the theoretical contributions; these can be addressed by revising the exposition there.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: **All the Table/Figure Rx can be found in https://anonymous.4open.science/r/ICML-Re-3333**
**Q1:** A comparison of model size and training/inference time, with a fixed number of trainable parameters, is necessary for a fair evaluation.
**A1:** We have conducted a comparison regarding parameter numbers and training time. Our results demonstrate that PEAK converges quickly and achieves the highest PSNR at the same training time, as seen in [Figure R1](https://anonymous.4open.science/r/ICML-Re-3333/Figure_R1.pdf). Additionally, as shown in [Figure R2](https://anonymous.4open.science/r/ICML-Re-3333/Figure_R2.pdf), PEAK maintains the highest PSNR with an equivalent number of parameters compared to other methods. These findings indicate that our PEAK excels in both efficiency and performance.
**Q2:** It may be beneficial to relax the language around alignment with the optimal kernel.
**A2:** Thank you for your insightful feedback. We agree that complete alignment with the optimal kernel is unrealistic due to its unidentifiability from discrete data. Our method aims to endow the INR with properties shared by the optimal kernel, rather than asserting full alignment. In the revised paper, we will adjust our wording to more accurately reflect the actual capabilities of the method and avoid overemphasizing alignment.
**Q3:** Some notation is not clear.
(1) The notation for the variable A in Theorem 3.3 and Section 3.4 appears inconsistent and requires clarification.
(2) The notation for the embedding function $\gamma$ requires clarification.
(3) The notation for the circled plus is used without definition and can represent multiple operations.
**A3:** Thank you for pointing out the confusion regarding the notations.
(1) We sincerely apologize for the oversight. In fact, the $A$ in Theorem 3.3 and Section 3.4 represents entirely different concepts. In Theorem 3.3, $A$ denotes a subset of the real numbers, reflecting the range space of a kernel, whereas in Section 3.4, $\mathbf{A}:\mathcal{X}\times \mathcal{X}\rightarrow \mathbb{R}$ (denoted in bold) is a function that measures the similarity between input points. To enhance clarity, we will change the notation in Theorem 3.3 to signify a set, and we will move the definition of the function $\mathbf{A}$ in Section 3.4 to its first mention in the revised manuscript.
(2) The initial definition of $\gamma$ is provided at line 166 in the right column. We will review the text to ensure clarity and reinforce the understanding that $\gamma$ is a learnable function with specific properties.
(3) The $\oplus$ and $\otimes$ represent the direct sum and direct product, respectively. We will add more detailed explanations following these symbols.
**Q4:** Encourage the release of the code upon publication, if not before.
**A4:** We will make our code publicly available upon publication to facilitate reproducibility and further research in this area.
**Q5:** The supplement on ablation studies and model architecture details should be included in the main paper.
**A5:** We appreciate your suggestion regarding the supplementary material on ablation studies and model architecture details. We will ensure to reference this material in the main paper and summarize the key findings to provide readers with a clearer understanding of our experimental design.
**Q6:** Improvements are needed for the clarity of Figure 1.
**A6:** Thank you for your feedback on Figure 1. We will follow your suggestions to improve its clarity in the revised version. In Figure 1(a), we will reduce the transparency to better distinguish overlapping dots and lines. In Figure 1(b), the "orange arrows" indicate the training progress, i.e. training INR with the vanilla loss function and PEAK, resulting in $f_{\boldsymbol{\theta}}(\gamma(\cdot))$ and $f_{\hat{\boldsymbol{\theta}}}(\hat{\gamma}(\cdot))$, respectively. The "green arrows" represent the application of the NTK theorem to calculate the corresponding NTK values, where $f_{\boldsymbol{\theta}}(\gamma(\cdot))\rightarrow K$ (depicted as the black triangle) and $f_{\hat{\boldsymbol{\theta}}}(\hat{\gamma}(\cdot))\rightarrow \hat{K}$ (depicted as the orange star). The "blue arrows" illustrate the theoretical analysis that introduces the optimal kernel $K^*$ (blue star). The vanilla INR's corresponding $K$ (black triangle) is far from the $K^*$ (blue star). Our KAR aligns the $\hat{K}$ with $K^*$ in the kernel space, guiding the INR to optimize $\gamma$. And the level sets represent the kernel space. We will add further explanation and simplify Figure 1(b) to enhance clarity in the revised version.
**Q7:** Some typos/minor grammatical mistakes. E.g. the sentence on line 214 (left column) is not complete.
**A7:** Thank you for pointing out the incomplete sentence. We will conduct a thorough review of the manuscript to correct these typographical and grammatical errors. | Summary: The paper summarises NTK-theory related contributions on INRs, and derives the optimal kernel for INRs under certain conditions. It then proposes an algorithm, named PEAK, to approximate a "Kernel Alignment Regularizer" and apply it to an INR, so that its kernel is encouraged to move towards the optimal one, improving its generalisation performance. Experiments show that the proposed approach achieves a similar kernel to the optimal one in a simple function approximation scenario. More practical experiments on image reconstruction and phase retrieval show that the proposed approach helps a simple ReLU-based MLP to achieve better results than with Fourier features or hash encoding.
##Update after rebuttal
The rebuttal addressed most of my concerns. I am inclined towards acceptance, and expect the authors to incorporate the new results/experiments/considerations in the final version of the paper in case of acceptance.
Claims And Evidence: The theoretical claims and supported by proofs and analyses, however the experimental claims about the effectiveness of the proposed approach in improving the generalisation of INRs lack experimental support. See below.
Methods And Evaluation Criteria: The proposed method seems appropriate and well grounded.
Theoretical Claims: I checked the claims to the best of my abilities, however I could not verify the proofs due to my limited expertise in NTK theory
Experimental Designs Or Analyses: Yes. While I find the experiments themselves to be appropriate for the evaluation of generalisation capabilities of the method, I believe that the shown results are not sufficiently convincing.
E1) SIREN [1] is mentioned in the paper but not used for comparisons in the experiments. Other more recent methods such as MFN[2], BACON [3], Gauss [4], WIRE [5], FINER [6] and SAPE [7] should be compared to, potentially also showing whether the proposed framework can be applied to such methods and improve them. Currently, the experiments convince the reader that the framework works on ReLU MLPs, but they do not support a practical use of the framework.
E2) On a similar note, time and memory requirements of the algorithm are not discussed, which would be needed to justify its practicality.
E3) Additional experiments on different tasks would also be appropriate, such as 3D shape and neural fields, the latter of which would greatly benefit from better generalisation capabilities.
E4) The supplementary shows some sensitivity to hyper parameter choice. This should be discussed in a limitations section in the main paper.
E5) Since INRs have been shown to have a bias towards low frequencies, how does the proposed regularizer affect this? An analysis would be interesting.
E6) Additionally, a second (compact) INR is used to compute the function A. How many additional parameters does this introduce? Does the proposed approach work better than a vanilla INR that has as many parameters as the total of f + g?
[1] Implicit Neural Representations with Periodic Activation Functions, Sitzmann et al.
[2] Multiplicative Filter Networks, Rizal Fathony, Anit Kumar Sahu, Devin Willmott, J Zico Kolter
[3] BACON: Band-limited Coordinate Networks for Multiscale Scene Representation, David B. Lindell, Dave Van Veen, Jeong Joon Park, Gordon Wetzstein
[4] Beyond Periodicity: Towards a Unifying Framework for Activations in Coordinate-MLPs, Sameera Ramasinghe, Simon Lucey
[5] WIRE: Wavelet Implicit Neural Representations, Saragadam et al.
[6] FINER: Flexible spectral-bias tuning in Implicit NEural Representation by Variable-periodic Activation Functions, Zhen Liu, Hao Zhu, Qi Zhang, Jingde Fu, Weibing Deng, Zhan Ma, Yanwen Guo, Xun Cao
[7] SAPE: Spatially-Adaptive Progressive Encoding for Neural Optimization, Hertz et al.
Supplementary Material: Yes. Section B.
Relation To Broader Scientific Literature: The theoretical contributions seem well contextualised in the literature. However the experiments do not show relations to prior experimental work.
Essential References Not Discussed: See the experimental section of the review.
Other Strengths And Weaknesses: Strengths:
S1) The paper is overall well written
S2) The method seems original and potentially interesting to use in practice
Weaknesses:
W1) The experiments do not convince the reader about the validity of the method except with a very basic baseline. The number of additional parameters, as explained above, may also present a weakness as it is not currently discussed.
W2) Limitations are not discussed.
Other Comments Or Suggestions: Page 4 Line 212-215 column 1: sentence is not finished, the "while" could be removed.
Section 4.2: missing reference to figure 3, which is not referenced anywhere
Questions For Authors: I would like the authors to address my concern about experimental validity, in case I missed some points or scope.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **All the Table/Figure Rx can be found in https://anonymous.4open.science/r/ICML-Re-3333**
**Q1:** The paper should compare PEAK with SIREN, MFN, BACON, Gauss, WIRE, FINER and SAPE, and explore whether the framework can be applied to and improve these methods.
**A1:** Thank you for the concern regarding the experimental validity. Actually, we have shown that our PEAK algorithm is not sensitive to the choice of INRs with different activation functions, as illustrated in Appendix Figure 6(c). Furthermore, we have applied PEAK to the INRs you mentioned, except for SAPE, in *[Tables R1](https://anonymous.4open.science/r/ICML-Re-3333/Table_R1.pdf) and [R2](https://anonymous.4open.science/r/ICML-Re-3333/Table_R2.pdf)*. The exclusion of SAPE is due to its significantly different training strategy, which requires additional fine-tuning for a fair comparison. For more details about these experiments and our motivations, please refer to ***A1 to Reviewer QEM1***.
**Q2:** The time and memory requirements of the algorithm are not discussed.
**A2:** As shown in *Figure R1*, for the "Baboon" image reconstruction task, our proposed PEAK achieves a PSNR of 30 dB in 2 seconds, which benefits from the kernel alignment providing additional self-similarity to accelerate convergence. As shown in *Figure R2*, PEAK achieves a higher PSNR compared to other methods while using the same number of parameters. We will include a more comprehensive discussion of these results in the revised version.
**Q3:** Additional experiments on tasks like 3D shape representation and neural fields.
**A3:** Thank you for the suggestion. We have conducted more experiments on neural fields, comparing our methods with Instant-NGP, as NeRF can be time-consuming. Specifically, we performed experiments on the NeRF Synthetic datasets with 25, 50, and 100 view perspective samples, respectively. As shown in *Table R3*, the average PSNR for 25 view perspectives indicates that the PEAK algorithm outperforms the vanilla Instant-NGP. Notably, the improvement is more pronounced with fewer training samples, demonstrating that PEAK effectively enhances generalization capability. *Figure R3* shows that PEAK reduces artifacts caused by sparser samples, indicating its ability to leverage the internal structure of the data to improve performance in downstream tasks with limited data. We will include this 3D experimental data in the revised manuscript to further demonstrate the effectiveness and generalizability of our method.
**Q4:** The supplementary shows some sensitivity to hyperparameter choice. This should be discussed in a limitations section.
**A4:** Due to space constraints, we only discuss the polynomial degree of $\gamma$ in the main text, as it is unique to PEAK. The influence of other parameters (e.g., the regularization coefficient $\lambda$) is a common consideration for all regularization-based methods. The ablation study on activation functions demonstrates that PEAK is generally insensitive to these choices. While the output dimension $r$ of the regularization network appears to affect the final results, all configurations still outperform the baselines. We will provide the PSNR values of baselines in the supplementary material to avoid potential misunderstandings.
**Q5:** An analysis of the proposed regularizer's impact on low-frequency bias in INRs is needed.
**A5:** As shown in *Table R4* and *Figure R4*, we conducted experiments using a synthetic image. We sampled a $256\times 256$ grid from $[-1,1]\times [-1,1]$ as $\mathcal{X}$ and generated $\mathcal{Y}$ using $\mathbf{y}_i=\sin(50\pi\sin(\frac{\pi}{3}\cdot\left\\|\mathbf{x}_i\right\\|_2))\in\mathcal{Y}$. We then trained MLP, Fourier, and our proposed PEAK algorithm on $\mathcal{X}\times\mathcal{Y}$. This synthetic image represents a frequency gradient transitioning from low (outer) to high (center, $(0,0)$). The results indicate that the MLP struggles to learn higher frequencies even after 10,000 epochs. The Fourier shows some improvement by initially capturing low frequencies before gradually learning high frequencies, but this process is relatively slow. In contrast, our PEAK algorithm effectively addresses the low-frequency bias, enabling it to learn both low and high frequencies almost simultaneously.
**Q6:** Clarify the number of additional parameters introduced by the second (compact) INR and compare it to a vanilla INR with the same number of parameters.
**A6:** The second (compact) INR introduces around additional $\frac{1}{10}$ of the first (main) INR parameters. We have compared its performance against a vanilla INR with the same total number of parameters. As illustrated in *Figure R2*, our algorithm exhibits superior performance.
**Q7:** "While" should be removed; a reference to Figure 3 is missing.
**A7:** Thank you for pointing out the incomplete sentence and the missing reference to Figure 3. We will make the necessary corrections in the revised manuscript. | Summary: The paper studies Implicit Neural Representation (INRs) from a Neural Tangent Kernel (NTK) perspective. It introduces the *Kernel Alignment Regularizer* (KAR), which encourages alignment between the INR’s NTK and an optimal kernel and *Plug-in Encoding for Aligned Kernels* (PEAK). PEAK is a method to integrate KAR with INR architectures with learnable input encodings. The authors show that PEAK improves image reconstruction and phase retrieval tasks compared to simple baselines (MLPs, Fourier features, and Hash encoding).
**Update after rebuttal.** Please refer to [my last comment](https://openreview.net/forum?id=Cx80t5FAQJ¬eId=zeP46QL4uf).
Claims And Evidence: The paper makes convincing claims.
Methods And Evaluation Criteria: The methodology is reasonable, but some stronger baselines are lacking, e.g., SIREN, WIRE.
Theoretical Claims: Proofs appear correct. Several theoretical results were already known and not properly referenced.
Experimental Designs Or Analyses: The experiments show that PEAK improves over baselines, but it is unclear if the chosen baselines are strong enough. Moreover, the per-image results in tables in the main are unnecessary and could be aggregated.
Supplementary Material: I checked the supplementary material (appendix).
Relation To Broader Scientific Literature: To the best of my knowledge, the proposed PEAK algorithm is novel. However, several theoretical results – presented as novel – are known or well-established in the literature. For instance, all results reported in Appendix A are not novel and not referenced. Moreover, the result in Theorem 3.1 expressing the optimal kernel in terms of the posterior mean kernel is a standard result in Gaussian process and Bayesian nonparametric methods. In particular (6) is the Bayesian optimal kernel which minimizes the MSE.
Essential References Not Discussed: The paper is not the first work studying INRs from an NTK perspective. In particular, Yuce et al. (2022) should be discussed.
Yuce, Gizem, et al. *A structured dictionary perspective on implicit neural representations*. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.
Other Strengths And Weaknesses: The paper is dense, making it hard to follow. For instance, proofs could be moved to the appendix to improve readability, while providing in the main text only clearer high-level intuitions.
Other Comments Or Suggestions: (L217) Notice that, for fully-connected networks, depth does not help generalization in the NTK regime (Bietti & Bach, 2020).
Questions For Authors: 1. What motivated the choice of the current baselines?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: **Q1:** The choice of the current baselines needs clarification.
**A1:** We appreciate your inquiry regarding the choice of baselines. Our PEAK algorithm is designed to find an encoder $\gamma(\mathbf{x})$ that enhances the generalization ability of the composed function $f_{\boldsymbol{\theta}}(\gamma(\mathbf{x}))$. Consequently, we focus on these baselines that also work with $\gamma(\mathbf{x})$. The vanilla MLP with $\gamma(\mathbf{x})$ serves as a benchmark for basic performance. The Fourier feature and Hash encoding are among the most influential methods for improving high-frequency representation capability and convergence speed. In response to your feedback, we have extended Table 1 to include a broader range of baselines in [Table R1](https://anonymous.4open.science/r/ICML-Re-3333/Table_R1.pdf), such as SIREN, MFN, BACON, Gauss, WIRE, and FINER. While these more recent methods show improved high-frequency representation ability in their original studies, their generalization ability remains limited, as shown in [Table R1](https://anonymous.4open.science/r/ICML-Re-3333/Table_R1.pdf). Additionally, our KAR can be directly applied to these baselines in a plug-and-play manner. We further evaluate the improvement of KAR on the aforementioned baselines in [Table R2](https://anonymous.4open.science/r/ICML-Re-3333/Table_R2.pdf). The results indicate that KAR enhances the performance of all these baselines, and the numerical results are consistent and not sensitive to the choice of baseline, as KAR estimates the same optimal kernel under the same sampling pattern and image.
**Q2:** Some theoretical results were not properly referenced.
**A2:** Thanks for your insightful suggestions from the perspective of a peer in this specialized field. Our main contributions are the introduction of KAR regularization and the PEAK algorithm, both derived from Theorem 3.3. Given the wide-ranging applications of INR across various research domains, our goal is to present the core ideas in a manner that is accessible to researchers with varying levels of prior knowledge, without necessitating consultation of the original literature. Therefore, we have distilled the essential part of these Theorems. In the revised manuscript, we will include proper citations in Appendix A to facilitate readers in finding the related works.
**Q3:** The per-image results in tables in the main are unnecessary and could be aggregated.
**A3:** Thank you for pointing this out. We believe that including more image results is important as the performance of INRs is closely tied to the specific characteristics of each image. For instance, in Table 1, PEAK shows the highest improvement on the "Baboon" image with a missing patch, due to its left-right symmetry. This symmetry allows PEAK to enhance generalization by effectively learning internal self-similarities within the signal. A more detailed discussion on this will be provided in the revised version.
**Q4:** The relationship of the work to the findings of Yuce et al. (2022) should be discussed.
**A4:** NTK is a useful mathematical tool for analyzing the dynamics of INRs, and many researchers have utilized it to illustrate the properties of INRs. Yuce et al. (2022) analyze INRs from a dictionary learning perspective, which differs significantly from our approach. In contrast, Tancik et al. (2020) provide an earlier analysis of INRs from the NTK perspective, proposing the Fourier feature encoder to improve high-frequency representation capabilities, which we have cited in our paper. To our knowledge, our PEAK is the first work to employ kernel alignment for guiding encoder design. In the revised manuscript, we will include a citation to clarify the similarities and differences between our work and that of Yuce et al. (2022).
**Q5:** The readability of the paper could be improved.
**A5:** We recognize that the paper's density may pose a challenge to readers. To enhance readability, we will move technical proofs to the appendix and focus on providing clearer high-level intuitions in the main text, such as summarizing key concepts and illustrating them with examples. Additionally, we will work on simplifying the exposition of our methods and results.
**Q6:** Notice that, for fully connected networks, depth does not help generalization in the NTK regime (Bietti & Bach, 2020).
**A6:** Thank you for your valuable comment regarding the findings of Bietti & Bach (2020). We agree that while depth may not theoretically improve generalization in the NTK regime for fully connected networks, in practical applications—especially when the network width is not infinite—depth can still positively influence model performance. In fact, there is very limited research that relies solely on a single-layer INR for realistic applications. Therefore, we propose an encoder learning algorithm rather than modifying the activation function in a single-hidden-layer INR, as done by Simon et al. (2022).
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their answers and the additional results provided. However, since the updated manuscript cannot be reviewed, I believe the paper requires another round of review to properly assess (i) whether all prior theoretical results are properly referenced and clearly presented as such, and (ii) the new baselines and experimental findings.
---
Reply to Comment 1.1.1:
Comment: Thank you for your thorough review of our manuscript. We would like to clarify our position regarding the request for another round of review:
(i) **Referencing Theoretical Results**: The NTK theory referenced in the main text (L122, left side) has been appropriately cited, and we have not intentionally overlooked relevant works to obscure our theoretical contributions. Additionally, we have cited one of the most representative works, Tancik et al. (2020), in multiple places (e.g., L46, right side; L166, right side) and have thoroughly discussed its relationship to our work. Specifically, we have compared the performance of the fixed input Fourier mapping (Fourier) proposed based on theoretical analysis with our learnable mapping in Tables 1, 2 and Figures 4, 5.
(ii) **New Baselines and Experimental Findings**: The experiments we provided in the rebuttal are extensions of the original experiments. Most of the "new baselines" you mentioned involve modifications to the activation functions used in INRs. In fact, our original manuscript already included a comparison of up to 20 different activation functions in Appendix Figure 6(c), which encompasses the mentioned new baselines, including SIREN (corresponding to the sine activation function) and WIRE (corresponding to the Gabor activation function). The experimental results demonstrate that our algorithm is not sensitive to the specific network architecture. This experimental finding was already presented in the original manuscript, and the additional results in Tables R1 and R2 further reinforce this finding rather than indicating a new one.
In summary, the theoretical discussions and experimental results provided in the rebuttal are consistent with those in the original manuscript, serving to enhance the robustness of our arguments. Based on these points, we believe that:
1. The original manuscript has adequately cited the most relevant works (e.g., Tancik et al. 2020, Jacot et al. 2018).
2. The additional experiments conducted were in response to the reviewer's request to further clarify the performance of our method, without introducing new baselines, altering conclusions, or presenting novel findings.
We kindly request a fair evaluation of our original manuscript in light of the points we have clarified. | null | null | null | null | null | null |
Learning Utilities from Demonstrations in Markov Decision Processes | Accept (poster) | Summary: This paper considers learning a utility function from demonstration using inverse reinforcement learning and risk sensitive RL. The reward function is assumed to be known. The utility function mapping cumulative rewards to a scalar value is to be inferred. The authors proof the partial identifiability of utility function and improved identifiability with multiple environments. Two algorithms are proposed: 1) CATY-UL is used to classify whether a utility function lies in the compatibility set (i.e., compatible with observed behavior), 2) TRACTOR-UL is used to find a utility function that lies in the compatibility set. Practical implementation uses a discretization based approach for policy evaluation and utility representation and update. Experiments on real and simulated data show TRACTOR-UL's ability to find compatible utility functions.
Claims And Evidence: This paper proposes a method to learn utility functions from demonstration. The results demonstrate the claimed capability.
Methods And Evaluation Criteria: The proposal to focus on learning utility with known reward in line 201 makes sense. The definition of compatibility in (3) makes sense and is the main driver of the algorithms. Learning from multiple environments to improve identifiability makes sense.
Theoretical Claims: I checked the counter examples used to proof partial identification, i.e., proposition 4.1-4.5. I briefly checked the derivations of the sample complexity bounds, i.e., theorem 5.1.
Experimental Designs Or Analyses: The experimental design is sound, as the main focus is showing the ability to search compatible utility functions.
A few questions:
* Is the purpose of including human data only to show that human data is not Markovian? Numerical experiment could have been conducted completely in simulation.
* Did the authors demonstrate improved identifiability with more environments, i.e., larger N? Sorry if I missed it but it was hard to find.
Supplementary Material: I briefly reviewed the proofs and additional experiments.
Relation To Broader Scientific Literature: This paper address the problem of model risk-sensitivity in inverse RL. Prior work has mainly focused on average return criteria, ignoring the full return distribution. This paper addresses the gap.
Essential References Not Discussed: I am not aware of essential references being missed.
Other Strengths And Weaknesses: **Weakness**
* This paper is quite difficult to read, perhaps due to a lot of notations. Ultimately the results and algorithms are pretty straightforward. I think the authors could present it in a much easier way.
Other Comments Or Suggestions: NA
Questions For Authors: * On line 18 in algorithm 2, what does the $\Pi$ symbol mean? Is it a policy or a projection operator?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the Reviewer for recognizing the validity of our proposal, specifically of learning a utility with known reward and to use demonstrations from multiple environments to reduce identifiability issues. Below, we answer to the Reviewer's comments and questions.
> Is the purpose of including human data only to show that human data is not Markovian? Numerical experiment could have been conducted completely in simulation.
Yes. Since one of the main contributions of the paper is to present the *first* model of human behavior that complies with non-Markovian policies in MDPs (see Eq. (1)), then, we have included human data to provide some empirical evidence on the claim that human behavior is inherently non-Markovian. This is precisely the goal of Experiment 1.
> Did the authors demonstrate improved identifiability with more environments, i.e., larger N? Sorry if I missed it but it was hard to find.
From the *theoretical* viewpoint, our Proposition 4.5 demonstrates that the feasible set of utilities $\mathcal{U}$ reduces its size as demonstrations from multiple environments are observed, up to the limit in which it contains only the expert's utility $\mathcal{U}=\{U^E\}$. Simply put, a policy in an environment represents a constraint in the set of utilities $\mathfrak{U}$, thus, adding more environments we are adding more constraints, and the feasible set $\mathcal{U}$, i.e., the intersection of these subsets, reduces its size.
From an *empirical* perspective, as mentioned in Experiment 2, we have observed that, increasing the number of environments $N$, the empirically-best step size reduces, which may be a symptom that the feasible set $\mathcal{U}$ is smaller, and, thus, we have to be more "precise" for spotting it inside the set of all utilities $\mathfrak{U}$. Anyway, note that Theorem 5.2 holds with a choice of step size $\alpha$ that decreases with $N$ (see line 2728), thus, our previous conjecture may be wrong.
Nevertheless, we remark that an increment of $N$ does not necessary improve the identifiability of the expert's utility $U^E$, although it cannot worsen it, and, thus, this complicates the analysis of the relationship between $N$ and the identifiability of $U^E$. Indeed, intuitively, identifying $U^E$ using demonstrations from $N+M$ environments is "easier" than using only $N$ environments as long as the additional $M$ environments provide constraints that do not already appear in the first $N$ environments. For this reason, analyzing how the identifiability of $U^E$ improves as $N$ increases is not immediate from a technical perspective. Thus, we leave it for future works.
> This paper is quite difficult to read, perhaps due to a lot of notations. Ultimately the results and algorithms are pretty straightforward. I think the authors could present it in a much easier way.
We agree with the Reviewer that the algorithms and the results in Section 5 could be presented using a simpler notation. However, note that the additional notation is necessary for providing a sketch of the proofs of Theorems 5.1 and 5.2 in the main paper. We will try to adjust this trade-off by moving the notation not strictly necessary to the appendix, in order to improve the clarity and the readability of the paper.
> On line 18 in algorithm 2, what does the $\Pi$ symbol mean? Is it a policy or a projection operator?
Yes, $\Pi_{\overline{\underline{\mathfrak{U}}}_L}$ denotes the Euclidean projection onto set $\overline{\underline{\mathfrak{U}}}_L$, as defined on line 083.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for the responses. I don't have more questions. The paper is solid other than being very dense. I am increasing the score. | Summary: This paper introduces a framework for learning utility functions from expert demonstrations inMDPs, where the utility function captures the agent’s risk sensitivity. As the utility may not be identifiable in a single environment, the authors consider learning from demonstrations in multiple environments. The paper proposes two methods and provide sample complexity guarantees for both. Finally, the authors conduct experiments on toy environment (both with real-world participants and simulated).
Claims And Evidence: The claims made in the submission are sufficiently supported.
Methods And Evaluation Criteria: The conducted experiments are in very small toy environments which seem inspired by simple financial decision-making situations. As a result, the experiments are not fully convincing, but still make sense for the motivation of learning risk-sensitive utility functions.
Theoretical Claims: I did not check the correctness of the proofs.
Experimental Designs Or Analyses: I skimmed through the additional experimental details in Appendix F. The experimental design with the participants makes sense, though, it is not clear who the 15 participants are or whether these participants are representative/unbiased.
One concern is the lack of baselines. The authors do not compare against existing methods in their experiments.
Supplementary Material: I did not review the code in the supplmentary material.
Relation To Broader Scientific Literature: This paper differentiates itself by explicitly modeling the risk attidude of individuals, whereas most prior work on IRL assumes risk-neutral experts (i.e., individuals). It is closely related to the literature on risk-senstive IRL (Majumdar et al. (2017), Singh et al. (2018), Chen et al. (2019), Ratliff & Mazumdar (2020), Cheng et al. (2023)) and primarily differs from the existing work in that it does not assume that the expert is Markovian.
Essential References Not Discussed: As far as I can tell, related work is appropriately addressed by the authors.
Other Strengths And Weaknesses: ### Strengths
1. The problem of learning risk-sensitive utility functions is well-motivated and addresses an important limitation of traditional IRL, which typically assumes risk-neutral behavior.
2. The paper answers several fundamental questions about the problem setup, including the identifiability of utility functions.
### Weaknesses
1. The experiments lack comparisons to baselines from existing work on risk-sensitive IRL or utility learning. Could you please clarify whether existing methods are unsuitable for direct comparison or are there other reasons for having no baselines?
2. The experiments are conducted on small toy environments, making it unclear how well the proposed methods scale and whether they can be applied in practice. Additionally, the dependence on the horizon $H$ appears to be a bottleneck and could potentially limiting the practicality of the approach in problems with longer horizons.
Other Comments Or Suggestions: 1. Even though the paper is overall well-written, it is still quite tedious to parse due to the sometimes seemingly exessive use of notation. It is nearly impossible to keep track of everything and you might want to consider finding ways to reduce the notational burden for the sake of the reader.
Questions For Authors: See above.
**Post-Rebuttal Update**
I maintain my original score and I am still in favor acceptance.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We are glad that the Reviewer appreciated the significance of the problem setting considered and the analysis conducted. Below, we report answers to the Reviewer's comments.
> On the experiments conducted.
The ultimate goal of the paper is to introduce a new problem setting, and to characterize its properties and challenges (e.g., identifiability issues). The presentation of algorithms CATY-UL and TRACTOR-UL serves the main objective of demonstrating that, from a theoretical viewpoint, the utility learning problem can be solved efficiently even in the worst case, which is not obvious given the non-Markovianity of the expert's policy. This result paves the way to more practical utility learning algorithms, whose development would be "risky" in absence of theoretical guarantees of polynomial sample and computational complexity (e.g., see Theorems 5.1, 5.2). For these reasons, we believe that, although interesting and important in order to comprehensively corroborate the proposed model and algorithms, an extensive empirical validation that goes beyond the simple tabular settings considered in Section 6 has limited relevance for the scope of this work.
In brief, we stress that since the paper is the first to consider this problem setting and since it already provides significant contributions, we conducted only illustrative experiments in the tabular setting. The development of more complex and practical algorithms able to scale should be conducted in future works.
> On the participants to the experiments.
The 15 participants are **lab members**, as mentioned at the beginning of Section 6 (line 399). We will highlight this fact in the paper.
> On the absence of baselines.
Concerning **Experiment 1**, the objective is to understand which model of human behavior is the most appropriate. Since the main novel feature of the model of behavior that *we* propose in the paper (see Eq. (1)) is the *non-Markovianity* of the expert's policy, then, in the experiment, we compare with the baseline represented by the *Markovian* policy.
The goal of **Experiment 2** is to test the efficiency of TRACTOR-UL at recovering a utility function under the newly-proposed UL problem setting. Since no IRL algorithm in literature, neither the common ones like [1,2,3] nor the risk-sensitive ones like [4,5,6], aims to learn a utility function of this kind, then comparing the efficiency with which existing (risk-sensitive) IRL algorithms converge to their learning targets with the efficiency with which TRACTOR-UL converges to a utility function would not be much meaningful.
> On the dependence on $H$.
We agree with the Reviewer that the dependence on $H^4$ and $\frac{1}{\epsilon^4}$ in the theoretical guarantee of Theorem 5.2, although being polynomial, can be prohibitive in problems with very long horizons for which we require accurate estimates. However, note that this guarantee holds *in the worst case* and using the discretization approach that, while offering the advantage of simplifying the theoretical analysis of the algorithm, might be less efficient in practice than other methods, e.g., estimating the utility of the expert through function approximation and some fixed set of basis functions. We leave this interesting direction to future works.
> On the notation.
We agree with the Reviewer that part of the notation introduced for instance in Section 2 and Section 5 is rather cumbersome and marginal for conveying the main ideas of the paper. However, it is necessary for providing a sketch of the proofs of Theorems 5.1 and 5.2 in the main paper. We will try to adjust this trade-off by moving the notation not strictly necessary to the appendix, in order to improve the clarity and the readability of the paper.
[1] Andrew Y. Ng and Stuart J. Russell. Algorithms for inverse reinforcement learning.
[2] Brian D. Ziebart. Modeling purposeful adaptive behavior with the principle of maximum causal entropy
[3] Deepak Ramachandran and Eyal Amir. Bayesian Inverse Reinforcement Learning.
[4] Lillian J. Ratliff and Eric Mazumdar. Inverse risk-sensitive reinforcement learning.
[5] Sumeet Singh, Jonathan Lacotte, Anirudha Majumdar, and Marco Pavone. Risk-sensitive inverse reinforcement learning via semi- and non-parametric methods.
[6] Haoyang Cao, Zhengqi Wu, and Renyuan Xu. Inference of utilities and time preference in sequential decision-making. | Summary: The paper introduces a new risk-sensitive model for inverse reinforcement learning in MDPs, explicitly accounting for the non-Markovian policies induced by risk-sensitive utility functions. The main contributions include formulating the Utility Learning problem to learn an agent's risk attitude, characterizing its partial identifiability, and proposing two algorithms, CATY-UL and TRACTOR-UL, for efficient utility learning from finite demonstrations. The authors validate their methods through theoretical analysis and proof-of-concept experiments.
Claims And Evidence: The paper provides strong theoretical justifications for its claims, with formal propositions showing the limitations of identifiability in the single-environment setting. The propositions supporting the value of multi-environment data (Proposition 4.5) are correct but somewhat weak, as they confirm possibility rather than providing explicit complexity or a lower bound on the number of required environments.
Methods And Evaluation Criteria: The methods and evaluation criteria chosen (partial identifiability, regret, and compatibility metrics) make sense and are relevant to the problem of learning risk-sensitive utilities. The paper provides both bounds on the feasibility of learned utilities and empirical validations.
Theoretical Claims: The theoretical claims, particularly the identifiability results and regret bounds for algorithms, appear sound. I also checked the correctness of Proposition 4.1, Proposition 4.2, and Theorem 5.1; no issues were identified in the proofs. However, the complexity of the augmented state space approach is not well analyzed.
Experimental Designs Or Analyses: The experimental results offers simple proof-of-concept demonstrations with real human data, validating the non-Markovian nature of human decision-making. However, the experiments are preliminary and somewhat limited in scope, particularly lacking evaluation in larger-scale or varied environments.
Supplementary Material: I skimmed the supplementary material, which are clear and comprehensive. They present proofs of theoretical results and additional algorithmic details and pseudo-code.
Relation To Broader Scientific Literature: The paper situates itself within inverse reinforcement learning and risk-sensitive MDP literature, highlighting how it generalizes previous IRL models (Ng & Russell, 2000) and connects to expected utility theory. However, the authors should more explicitly discuss connections to the broader literature on risk-sensitive MDPs, including Expected Risk Measure (ERM) MDPs and Conditional Value-at-Risk (CVaR) MDPs, which are not sufficiently discussed. Some references related to Maximum Entropy IRL (Ziebart, 2010) seems to be missing as well.
Essential References Not Discussed: Essential references related to risk-sensitive MDPs, particularly Online Risk-sensitive MDP, including ERM and CVaR-based, were not fully cited or adequately discussed. Incorporating such references would improve clarity and contextualize this work further within the broader risk-sensitive reinforcement learning literature. To name a few
Online ERM-MDP
- Fei, Y., Yang, Z., Chen, Y., Wang, Z., and Xie, Q. Risk-sensitive reinforcement learning: near-optimal risksample tradeoff in regret. In Proceedings of the 34th International Conference on Neural Information Processing Systems, NIPS ’20, Red Hook, NY, USA, 2020. Curran Associates Inc. ISBN 9781713829546.
- Fei, Y., Yang, Z., Chen, Y., and Wang, Z. Exponential bellman equation and improved regret bounds for risksensitive reinforcement learning. Advances in neural information processing systems, 34:20436–20446, 2021.
- Liang, H., & Luo, Z. Q. (2024). Bridging distributional and risk-sensitive reinforcement learning with provable regret bounds. Journal of Machine Learning Research, 25(221), 1-56.
Online CVaR-MDP
- Bastani, O., Ma, J. Y., Shen, E., & Xu, W. (2022). Regret bounds for risk-sensitive reinforcement learning. Advances in Neural Information Processing Systems, 35, 36259-36269.
- Wang, K., Kallus, N., and Sun, W. Near-minimax-optimal risk-sensitive reinforcement learning with cvar. In International Conference on Machine Learning, pp. 3586435907. PMLR, 2023.
- Wang, K., Liang, D., Kallus, N., and Sun, W. Risk-sensitive rl with optimized certainty equivalents via reduction to standard rl. arXiv preprint arXiv:2403.06323, 2024.
Other Strengths And Weaknesses: Strengths:
- Novel problem formulation clearly motivated by real-world risk-sensitive behaviors.
- Strong theoretical contributions on identifiability.
- Clear, well-written manuscript.
Weaknesses:
- Limited empirical validation; experiments are preliminary and conducted only in simple settings.
- Ambiguity in demonstrating the complexity and requirements for multi-environment utility identifiability.
- Missing key discussions and comparisons with related risk-sensitive formulations (ERM, CVaR).
Other Comments Or Suggestions: - Clarify and simplify Section 2’s notation, as some elements appear dense and are not essential for the main text.
- More clearly state computational complexity and practical scalability concerns.
Questions For Authors: Please see the above concerns/weakness. In addition
- Proposition 4.5 currently states only the possibility of unique identifiability. Can you provide insight into how many environments typically might be required for practical identifiability?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the Reviewer for recognizing the novelty of the proposed problem formulation, and the strength of our theoretical contributions on the identifiability problem of utilities. Below, we answer to the Reviewer's comments and questions.
> On the comparison with literature.
We thank the Reviewer for the references. We will incorporate a discussion on both ERM-MDPs and CVaR-MDPs in the paper. Although the primary focus of our work is the *inverse* problem, we agree with the Reviewer that including a discussion on algorithms for solving the *forward* problem can improve the quality and clarity of the paper.
> On the empirical validation.
The ultimate goal of the paper is to introduce a new problem setting, and to characterize its properties and challenges (e.g., identifiability issues). The presentation of CATY-UL and TRACTOR-UL serves the main objective of demonstrating that, from a theoretical viewpoint, the UL problem can be solved efficiently even in the worst case, which is not obvious given the non-Markovianity of the expert's policy. This result paves the way to more practical UL algorithms, whose development would be "risky" in absence of theoretical guarantees of polynomial sample and computational complexity (see Theorems 5.1, 5.2). For these reasons, we believe that, although interesting, an extensive empirical validation of the proposed algorithms that goes beyond the simple tabular settings considered in Section 6 has limited relevance for the scope of this work.
In brief, we stress that, since the paper is the first to consider this problem setting and since it already provides significant contributions, then we conducted only illustrative experiments in the tabular setting. The development of more complex and practical algorithms able to scale should be conducted in future works.
> On the notation.
We agree with the Reviewer that part of the notation introduced in Section 2 is rather cumbersome and marginal for conveying the main ideas of the paper. However, it is necessary for providing a sketch of the proofs of Theorems 5.1 and 5.2 in the main paper. We will try to adjust this trade-off by moving the notation not strictly necessary to the appendix, in order to improve the clarity and the readability of the paper.
> On the computational complexity.
For the subroutines:
- EXPLORE: *time* = $\mathcal{O}(N\tau)$; *space* = $\mathcal{O}(SAHN)$.
- ERD: *time* = $\mathcal{O}(H\tau^E+H/\epsilon_0)$; *space* = $\mathcal{O}(H/\epsilon_0)$.
- PLANNING: *time* = $\mathcal{O}(S^2AH^2/\epsilon_0)$; *space* = $\mathcal{O}(SAH^2/\epsilon_0)$.
- ROLLOUT: *time* = $\mathcal{O}(KH)$; *space* = $\mathcal{O}(K)$.
Thus:
- **CATY-UL**: *time* = $\mathcal{O}(N\tau+MN(H\tau^E+S^2AH^2/\epsilon_0))$, where $M$ denotes the number of input utilities to which CATY-UL is applied; *space* = $\mathcal{O}(SAHN+SAH^2/\epsilon_0)$.
- **TRACTOR-UL**: *time* = $\mathcal{O}(N\tau+NH\tau^E+T(NS^2AH^2/\epsilon_0+NKH+Q_{time}))$, where $Q_{time}$ represents the number of iterations of the optimization solver adopted for the Euclidean projection; *space* = $\mathcal{O}(NH/\epsilon_0+SAH^2/\epsilon_0+K+Q_{space})$, where $Q_{space}$ is the space used for the projection.
Note that these complexities are polynomial in all the quantities of interest $S,A,H$, $N,$ $\frac{1}{\epsilon}$, $\log\frac{1}{\delta},Q_{time},Q_{space}$, and this holds even if we replace $\epsilon_0,\tau^E,\tau,T,K$ with the values that provide the theoretical guarantees in Theorems 5.1, 5.2.
We remark that the assumption made in Theorem 5.2 that the Euclidean projection is *exact* is made just for simplifying the theoretical analysis, but it is *not necessary*. If the Reviewer desires details on the proof, we can provide them.
We will add all these considerations to the paper.
> On the multi-environment utility identifiability.
This is an interesting point. Intuitively, it is not trivial to compute a minimum number of environments $N\ge\overline{N}$ that suffices for the "practical" identifiability of the expert's utility $U^E$, for two reasons:
- It depends on how much "*informative*" are the observed $N$ environments. For instance, if the $N$ environments provide similar constraints in the space of utilities $\mathfrak{U}$, then identifying $U^E$ remains difficult.
- Depending on what we want to do with the expert's utility $U^E$ once that we have recovered it, we might be satisfied with less accurate estimates. For instance, if we want to recover $U^E$ for transferring it to a difficult environment, then, intuitively, we need a very good estimate $\widehat{U}\approx U^E$, because the target environment is difficult, and so we expect $N$ to be large; instead, if the goal is to do planning in a simple environment, then we can tolerate $N$ to be small.
For these reasons, analyzing how the identifiability of $U^E$ improves as $N$ increases is not immediate from a technical perspective and we leave it for future works.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for their detailed response and clarifications, which address most of my concern. I maintain my positive evaluation for this paper. | Summary: The paper considers a specific type of risk-sensitive MDPs where the risk sensitivity is captured via a continuous, strictly increasing utility function. Given a set of optimal expert demonstrations and the expert reward, the aim is to recover the expert's utility function. The authors first provide a few impossibility results, showing that similar to the IRL, the utility learning problem is ill-posed in general. However, they show that if given access to all deterministic optimal policies under any transition kernel, we could identify the expert's utility function. Motivated by this observation, they provide two algorithms for learning from expert data collected in multiple environments: CATY-UL, a classification-based method to check the approximate compatibility of a given utility, and TRACTOR-UL, a more practical algorithm that outputs a single candidate utility, much like traditional IRL methods. Finally, the applicability of TRACTOR-UL is showcased on a tabular toy problem.
Claims And Evidence: Generally, the claims are clear, and complete proofs are provided in the appendix.
Methods And Evaluation Criteria: The problem definition, the algorithms, and the theoretical results are clearly presented and consistent.
Theoretical Claims: I reviewed the proofs of Proposition 4.5 and Theorem 5.2, and they seemed to be sound.
Experimental Designs Or Analyses: The experimental validation is limited to a toy problem, which may have limited practical relevance. Nevertheless, the results are well-documented, and the authors considered human expert data.
Supplementary Material: Except for the proofs mentioned above, I didn't have the time to go through the appendix in detail.
Relation To Broader Scientific Literature: While IRL in the risk-neutral setting has been addressed extensively, and some risk-sensitive approaches exist, the specific setting of identifying the utility given the reward seems to be novel.
Essential References Not Discussed: The authors stress the ability of their approach to induce non-Markovian behavior. The claim on line 409 that this is the first IRL model to induce non-Markovian behavior is too strong. Majumdar et al. (2017) already address IRL with coherent risk measures such as CVaR, which leads to non-Markovian policies (Chow, 2017). Moreover, most risk measures, except for ERM, EVaR, and time-consistent ones [Chow, Theorem 1.3.8], inherently yield non-Markovian policies.
- Majumdar, Anirudha, et al. "Risk-sensitive Inverse Reinforcement Learning via Coherent Risk Models." Robotics: science and systems. Vol. 16. 2017.
- Chow, Yinlam. Risk-sensitive and data-driven sequential decision making. Diss. Stanford University, 2017.
Other Strengths And Weaknesses: Strengths:
- The paper introduces an interesting problem setting by focusing on learning the utility function for a known reward.
- The theoretical contribution providing both identifiability results and convergence guarantees is solid.
Weaknesses:
- The paper is quite notation-heavy, which can make it difficult to follow. If you can reduce the density of math symbols in the main text, that could significantly improve the overall readability.
Other Comments Or Suggestions: Typos: in line 197, 281, 381: contained into -> contained in
Questions For Authors: When learning from a single environment, Theorem 5.2 basically guarantees that for the recovered utility, the expert is $\varepsilon$ optimal. Is it possible to also say something in terms of Hausdorff distance to the set of feasible utilities?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We are glad that the Reviewer appreciated the novelty and the significance of the problem setting introduced, and that the Reviewer recognized the solidity of the theoretical results presented. Below, we report answers to the Reviewer's comments.
> The authors stress the ability of their approach to induce non-Markovian behavior. The claim on line 409 that this is the first IRL model to induce non-Markovian behavior is too strong. Majumdar et al. (2017) already address IRL with coherent risk measures such as CVaR, which leads to non-Markovian policies (Chow, 2017). Moreover, most risk measures, except for ERM, EVaR, and time-consistent ones [Chow, Theorem 1.3.8], inherently yield non-Markovian policies.
We agree with the Reviewer that the claim on line 409 is imprecise, and we will change it to "... the *first IRL model that contemplates non-Markovian policies **in MDPs***". Indeed, as we explain in Section 7, Majumdar et al. (2017) consider the much simpler "prepare-react model" as environment intead of an MDP.
Also, we note that, even though most risk measures yield non-Markovian policies, there is no IRL algorithm for MDPs in the literature that models the expert's policy as the result of the optimization of a risk measure, and, as such, as non-Markovian. Indeed, as mentioned in Section 7 and explained in Appendix A, works like [1] model the expert's policy as Boltzmann rational, i.e., as stochastic Markovian.
> The paper is quite notation-heavy, which can make it difficult to follow. If you can reduce the density of math symbols in the main text, that could significantly improve the overall readability.
We agree with the Reviewer that the main text is notationally dense. We will try to simplify some passages to improve the readability in the final version of the paper, moving the notation not strictly necessary in the main paper to the appendix.
> Typos: in line 197, 281, 381: contained into -> contained in
We thank the Reviewer for pointing out, we have fixed them.
> When learning from a single environment, Theorem 5.2 basically guarantees that for the recovered utility, the expert is optimal. Is it possible to also say something in terms of Hausdorff distance to the set of feasible utilities?
Answering to this question is not trivial. The reason is that, for the feasible utility set, it is not immediate to find an explicit representation for the feasible utility set $\mathcal{U}$ that permits to construct an estimator $\widehat{\mathcal{U}}$ for which it is simple to carry out a sample complexity analysis.
Consider the sample complexity analysis conducted for the estimation of the feasible reward set [2,3,4,5]. In [2,3], it is proved that each reward $r$ in the feasible reward set $\mathcal{R}$ can be parameterized as a function of the optimal value $V_h$ and advantage $A_h$ functions as:
$$
r_h(s,a)=V_h(s)-\sum\limits_{s'\in\mathcal{S}}p_h(s'|s,a)
V_{h+1}(s')+1\\{\pi_h^E(s)=a\\}A_h(s,a).\qquad (1)
$$
Thus, using as estimator for $\mathcal{R}$ the set $\widehat{\mathcal{R}}$ where each reward $\widehat{r}\in\widehat{\mathcal{R}}$ can be parametrized as a function of the optimal value $\widehat{V}_h$ and advantage $\widehat{A}_h$ functions in the estimated MDP as:
$$
\widehat{r}\_h(s,a)=\widehat{V}\_h(s)-\sum\limits\_{s'\in\mathcal{S}}\widehat{p}\_h(s'|s,a) \widehat{V}\_{h+1}(s')+1\\{\widehat{\pi}\_h^E(s)=a\\}\widehat{A}\_h(s,a),
$$
then it is possible to bound the Hausdorff distance in max norm by the 1-norm of the transition models:
$$
\mathcal{H}(\mathcal{R},\widehat{\mathcal{R}})\le \max_{s,a,h}\\|\widehat{p}\_h(\cdot|s,a)-p_h(\cdot|s,a)\\|_1,
$$
as long as $\widehat{\pi}^E=\pi^E$ with high probability (see also [4,5]). Then, the analysis follows rather directly by applying standard concentration inequalities.
To adapt this analysis to sets of utilities, we require a simple representation of the feasible utility set $\mathcal{U}$ analogous to that in (1). However, it is not immediate to us how to obtain it, since utilities are objects rather different from reward functions. Nevertheless, studying the feasible utility sets and the problem of learning them is an interesting future research direction.
[1] Ratliff, L. J. and Mazumdar, E. Inverse risk-sensitive reinforcement learning.
[2] Metelli, A. M., Ramponi, G., Concetti, A., and Restelli, M. Provably efficient learning of transferable rewards.
[3] Lindner, D., Krause, A., and Ramponi, G. Active exploration for inverse reinforcement learning.
[4] Metelli, A. M., Lazzati, F., and Restelli, M. Towards theoretical understanding of inverse reinforcement learning.
[5] Zhao, L., Wang, M., and Bai, Y. Is inverse reinforcement learning harder than standard reinforcement learning? | null | null | null | null | null | null |
Geometric Feature Embedding for Effective 3D Few-Shot Class Incremental Learning | Accept (poster) | Summary: This paper investigates few-shot class incremental learning for 3D object classification using foundation models. Building on the work of FoundationModel (Ahmadi et al.), the authors employ a frozen, pre-trained large-scale 3D encoder (Uni3D) to extract generalizable features for each point. They then construct enhanced text embeddings based on prompts generated from category names, which are combined with geometric features to calculate similarity and assign the final label. A key contribution of the paper is the method for constructing abstract geometric features using spectral clustering and Laplacian eigenmaps, as well as the way to fuse the text embeddings from the prompts with the geometric features through transformer. Experimental evaluation across multiple datasets and settings demonstrates that the proposed method achieves clear improvements in performance.
Claims And Evidence: The evidence provided to support the claims in the paper is insufficient in certain areas, as some details are lacking. For instance, the specific configuration used when removing a module in Table 3 is not clearly explained, leaving this aspect of the experiment unclear.
Methods And Evaluation Criteria: Yes, the proposed method is evaluated both within-dataset incremental learning and cross-dataset incremental learning, in line with the protocols established by existing methods.
Theoretical Claims: The paper does not present any theoretical claims.
Experimental Designs Or Analyses: Yes, the overall experimental settings are valid, as they follow established protocols from existing methods. However, the ablation study in Table 3 lacks sufficient support, as the detailed configuration for removing a module is not clearly provided.
Supplementary Material: Yes. I have reviewed all parts.
Relation To Broader Scientific Literature: The key idea of this paper is closely related to the concept of cross-modal feature fusion, specifically integrating text and visual cues, which has been explored in prior research.
Essential References Not Discussed: N.A.
Other Strengths And Weaknesses: **Strengths**:
1. The proposed method is sound and well-constructed.
2. The paper demonstrates clear improvements across multiple datasets and FSCIL settings.
3. The ablation studies are comprehensive.
**Weaknesses**:
1. It would be beneficial to include visualizations of the basis vectors (and their evolution as new classes are introduced) or the distribution of geometric features.
2. Implementation details of the transformer encoder are not provided.
3. In Table 3, the detailed configuration for removing one module is not clearly explained.
Other Comments Or Suggestions: N.A.
Questions For Authors: 1. What principles guided the design of the five settings presented in Figure 1?
2. How does the observation in lines 87-93 lead to the conclusion that "one challenge to address to advance FSCIL on 3D point clouds" is "enhancing the model’s ability to learn robust feature representations"?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely appreciate your insightful feedback, which has guided us in refining the manuscript and addressing key concerns. Below, we provide detailed responses to each of your questions, supported by additional analyses and clarifications.
**Q1: Clarification of Ablation Studies in Table 3**
**A1**: Thank you for prompting us to clarify this section. Table 3 systematically evaluates the contributions of two critical components in the geometric feature extraction module:
- **Dynamic Geometric Projection Clusters**: These clusters are constructed via spectral clustering and Laplacian eigenmaps to encode shared geometric structures.
- **Attention Weights for Basis Vectors**: Learnable weights prioritize cluster centers relevant to incremental tasks.
The ablation settings are:
- Row 1: Raw point cloud features without DGPC.
- Row 2: DGPC with equal weights for all basis vectors.
- Row 3: DGPC + learnable attention weights.
Key findings are now presented more succinctly:
- DGPC Necessity: Removing DGPC (Row 1) degrades average accuracy by 2.7%, as raw features fail to integrate geometric-textual semantics.
- Attention Mechanism: Adding learnable weights (Row 3) improves harmonic accuracy by 2.1% over Row 2, demonstrating adaptive weighting’s role in suppressing noise and outliers.
The results validate that DGPC and attention weights synergistically enhance stability and discriminability. These results are now contextualized in Section 4.4, and additional visualization examples are available in Figure 2 of [the anonymous link](https://anonymous.4open.science/r/6061-FB07/6061.pdf). We previously did not express this clearly, but based on your feedback, we will restructure the table and discussion for better interpretability.
**Q2: Visualization of Dynamic Projection Clusters**
**A2**: To validate the effectiveness of dynamic geometric feature projection clusters during incremental learning, we visualize the geometric features extracted by DGPC at each incremental stage, as shown in Figure 2 of [the anonymous link](https://anonymous.4open.science/r/6061-FB07/6061.pdf). Visualizations reveal that DGPC-enhanced features exhibit tighter intra-class clustering and clearer inter-class separation. These results demonstrate that DGPC effectively encodes task-invariant geometric priors, enabling robust feature extraction across incremental phases. We have integrated key examples into the main text in our revised version.
**Q3: Implementation Details of the Transformer Encoder**
**A3**: The Transformer encoder comprises 2 standard layers, each with 8-head self-attention. We have enhanced the implementation details section in the revised version to include more comprehensive information.
**Q4: Design Principles of Figure 1**
**A4**: We sincerely appreciate your guidance in improving Figure 1's clarity. In light of your thoughtful suggestions, we have reorganized and optimized Figure 1 (detailed in Figure 1 of [the anonymous link](https://anonymous.4open.science/r/6061-FB07/6061.pdf)) to improve its clarity. It now contains 7 comparisons:
1. **SOTA Baselines**: Methods (1)-(2) adopt strategies from C3PR [1] and FoundationModel [2].
2. **Cross-Combination Strategies**: Methods (3)-(7) integrate various prompt styles with distinct training strategies.
For detailed explanations of Figure 1, we sincerely invite you to consult our response to Reviewer 8rPH's Question 2.
**Q5: Linking Observations to Robust Feature Learning**
**A5**: We appreciate your guidance in strengthening this connection. The experiments (lines 87-93) demonstrate that complex prompt designs yield inferior performance compared to simple prompts. This observation highlights a critical limitation: existing methods overly rely on manually crafted text semantics while failing to autonomously extract geometry-aware robust features from 3D point clouds. Consequently, models exhibit excessive sensitivity to textual variations and struggle to adapt to distribution shifts in incremental phases with limited samples.
3D-FLEG addresses this by:
1. **Geometric Feature Embedding**: Explicitly encoding spatial structures into prompts via dynamic projection clusters, bypassing dependency on complex text engineering.
2. **Unified Optimization**: Forcing joint alignment between geometric features and text semantics during incremental training, enabling the model to prioritize discriminative cross-modal patterns from sparse data.
By incorporating supplementary experiments, enhanced visualizations, and expanded explanations, we sincerely hope to have addressed all raised concerns. Please let us know if you feel any additional adjustments would better address your concerns.
**References**
[1] Canonical shape projection is all you need for 3d few-shot class incremental learning, ECCV 2024.
[2] Foundation Model-Powered 3D Few-Shot Class Incremental Learning via Training-Free Adaptor, ACCV 2024. | Summary: The paper proposes 3D-GLEG, a method to improve 3D few-shot class incremental learning by incorporating geometric features into the learning process. The authors propose two modules: a geometric feature extraction module and a geometric feature embedding module. By leveraging geometric information, 3D-FLEG achieves superior performance on four datasets, ModelNet, ShapeNet, ScanObjectNN and CO3D.
Claims And Evidence: 1. The claim that Laplacian Eigenmaps can extract geometric structure from point cloud data is not well supported.
2. The claim that AdaptiveAvgPool1d for fine-grained feature extraction sounds not true for me. Based on my understanding, average pooling is not typically used for fine-grained feature extraction. Instead, it tends to extract global, smoothed features.
3. The claim that the geometric feature embedding module can ensure that data from both modalities interact on similar levels of abstraction is not fully supported according to section 3.4. How do equations 5 and 6 ensure that the point cloud features and text features have similar levels of abstraction? They are only in the same dimensional space. The authors only use a simple text prompt template that includes class names, which I believe contains abstract features. However, point cloud features should contain more detailed features.
Methods And Evaluation Criteria: The proposed method and evaluation criteria are appropriate for the FSCIL problem. The evaluation on ModelNet, ShapeNet, ScanObjectNN and CO3D provides a solid benchmark, covering both real-world and synthetic 3D datasets. The metrics, including accuracy, harmonic mean accuracy, and relative accuracy drop, effectively measure both new class adaptation and forgetting mitigation.
Theoretical Claims: The paper does not provide formal theoretical proofs but makes claims about the effectiveness of Laplacian eigenmaps in preserving geometric structures and the dynamic geometric feature projection clusters in improving feature representation. While the method is conceptually plausible, the paper does not rigorously prove that the transformed features are explicitly geometry-aware.
Experimental Designs Or Analyses: The paper evaluates the method on four datasets and uses multiple metrics. The results indicate improved performance over baseline methods. However, there are some potential issues:
1. No direct ablation study that analyzes the contribution of Laplacian eigenmaps or whether they truly enhance geometry-awareness and how.
2. More detailed discussion of why the method performs better across datasets.
Supplementary Material: I reviewed the appendices, including the dataset partition, explanation of evaluation metrics, additional results and experiments on the number of basis vectors. I have no doubt about these.
Relation To Broader Scientific Literature: For 3D FSCIL problem, previous works like Microshape proposed a universal description language to reduce domain discrepancies, and C3PR adapted CLIP to handle FSCIL task. This paper solves the problem by integrating geometric features, reducing reliance on foundation models and complicated training strategies.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: 1. It is confusing that both the ground-truth label and final feature representations computed from U are represented with the notation y.
2. Some of the Microshapes is spelled as "Micrpshapes"
Questions For Authors: 1. For Figure 1, can the authors elaborate more on each prompt-training and training strategies and also add the citations if necessary? Since there are no detailed descriptions for the alignment module, how can the authors conclude that their method, embedding geometric features, is simpler than the alignment module?
2. Can the authors explain how they get the initial features, $featBase_i$, as they also mentioned the point cloud features $featPoint_i$?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your insightful feedback, which has helped us significantly improve the clarity and rigor of our manuscript. Below are our detailed responses:
**Q1: Claims (Laplacian Eigenmaps, AdaptiveAvgPool1d, geometric feature) are not well supported**
**A1: (1) Laplacian Eigenmaps for Geometric Structure Extraction**
Theoretically, Laplacian Eigenmaps minimizes $\sum_{i,j} (y_i - y_j)^2 W_{ij}$, where $y_i, y_j$ are low-dimensional representations of data points $x_i, x_j$, and $W_{ij}$ reflects their proximity ($W_{ij}=e^{-\frac{\|x_i-x_j\|^2}{t}}$ if neighbors; otherwise, $W_{ij}=0$). This ensures nearby points in the original space remain close in the reduced space.
Using the graph Laplacian $L=D-W$, the problem reduces to minimizing $y^T L y$. The eigenvectors of $L$ capture the manifold's structure, preserving local geometry effectively. A similar theoretical analysis appears in [1].
Besides, we have additionally conducted an ablation study to validate the role of Laplacian Eigenmaps as given below, demonstrating a 1.8% improvement in accuracy when utilized. We have incorporated the comprehensive theoretical proof and analysis in our revised paper.
|Laplacian eigenmaps||Average|Accuracy|(%)|
|:-:|-|-|-|-|
||Session0|Session1|Session2|Session3|
|✗|**93.8**|91.2|86.4|85.0|
|✓|**93.8**|**91.9**|**87.5**|**86.8**|
**A1: (2) AdaptiveAvgPool1d for Fine-Grained Features**
We apologize for the confusion caused by the description of "fine-grained" features. You are right that traditional average pooling (AvgPool) typically extracts globally smoothed features. AdaptiveAvgPool1d in 3D-FLEG differs by dynamically adjusting pooling windows to capture local geometric statistics rather than fixed-window averaging. Specifically, it quantifies geometric attribute distributions across localized regions along the channel dimension, enabling fine-grained pattern extraction [2], where "fine-grained" refers to the statistical representation of local geometric details. We have revised our manuscript to clarify this distinction and articulate our design rationale.
**A1: (3) Modality Abstraction Alignment**
We have strengthened Section 4.3 to clarify the modality abstraction alignment:
A dual-pooling strategy (Eq. 5) extracts multi-scale geometric features, bridging the gap between text and point cloud details. The Transformer encoder (Eq. 6) then refines these features, emphasizing geometry relevant to text prompts and reducing noise.
Cross-entropy loss (Eq. 8) ensures consistency in the shared semantic space, aligning geometric and textual abstractions.
**Q2: Cross-Dataset Superiority**
**A2:** Thank you for prompting this critical analysis. 3D-FLEG’s cross-dataset superiority stems from its geometry-centric design:
- **Dynamic Projection Clusters** capture task-invariant geometric patterns, which generalize across synthetic-to-real domains.
- **Geometric Feature Embedding** directly fuses these features with text semantics, bypassing domain-specific text variations.
This synergy enables 3D-FLEG to achieve 7% higher accuracy in cross-dataset tasks. We have revised Section 4.3 to clarify this mechanism.
**Q3: Symbol Confusion and Misspellings**
**A3:** Thank you for highlighting these inconsistencies. We have reviewed the manuscript and made corrections to the symbols and typographical errors to improve clarity and rigor.
**Q4: Detailed Descriptions of Prompt and Training Strategies in Figure 1**
**A4:** We appreciate your suggestion to improve Figure 1. We have detailed the experimental setup principles with relevant citations. For further details, please see our response to Reviewer 8rPH's Q2.
The "Alignment Module and Dual Cache System" requires caching five samples per class, causing significant computational and memory overhead. In contrast, our geometric embedding module integrates geometric features directly with text prompts, eliminating the need for complex alignment training and caching.
**Q5:Clarification on Initial Features ${featBase}_i$ and ${featPoint}_i$**
**A5:** To address this ambiguity, we have revised Section 3 to explicitly define:
${featBase}_i$ : Features extracted from base-class data using the frozen Uni3D encoder. These features are used to construct dynamic geometric projection clusters.
${featPoint}_i$ : Features processed during incremental training from the same Uni3D encoder. These are dynamically reprojected through DGPC using learnable attention weights to extract geometric features.
Your feedback has guided us in refining both the theoretical foundations and presentation of our work. All revisions have been incorporated into the manuscript.
**References**
[1] Laplacian eigenmaps for dimensionality reduction and data representation, Neural computation 2003.
[2] Point cloud segmentation of overhead contact systems with deep learning in high-speed rails, Journal of Network and Computer Applications 2023. | Summary: The paper proposes a model called 3D-FLEG for the 3D few-shot class incremental learning task. The model has a geometric feature extraction module that obtains geometric features through clustering and Laplacian eigenmaps, and it includes a geometric feature embedding module to fuse these geometric features with text features, considering modality heterogeneity.
Claims And Evidence: The claim that the reliance on text prompts and training strategies limits the robustness and performance of few-shot class incremental learning is reasonable and well supported.
Methods And Evaluation Criteria: The proposed method is well explained, and the evaluation criteria and datasets used are appropriate for the task.
Theoretical Claims: There are no theoretical claims.
Experimental Designs Or Analyses: The main experiments are comprehensive and demonstrate the effectiveness of the proposed method. Nonetheless, some ablation studies are missing. For instance, the Geometric Feature Extraction Module uses spectral clustering as its first step, but the paper does not specify the number of clusters used in this step and analyze how varying this parameter affects performance. Additionally, it would be useful to know how the model's performance varies if the prompt style is changed, for example, to GPT-generated prompts.
Supplementary Material: I reviewed the additional results provided in the supplementary material.
Relation To Broader Scientific Literature: The proposed designs complement the broader literature and introduce new designs.
Essential References Not Discussed: The following papers are related to the paper's context and should be cited—including computing geometric features via clustering, improving generalization on novel classes, and fusing multimodal knowledge for novel class learning:
+ ICCV 2023, Generalized Few-Shot Point Cloud Segmentation Via Geometric Words
+ CVPR 2024, Rethinking Few-shot 3D Point Cloud Semantic Segmentation
+ ICLR 2025, Multimodality Helps Few-shot 3D Point Cloud Semantic Segmentation
Other Strengths And Weaknesses: The paper is clearly written and easy to follow. The motivations for the design choices are clear and reasonable.
Other Comments Or Suggestions: N/A
Questions For Authors: Please see the Experimental Designs part about the missing ablations. Providing the analysis on the mentioned ablation studies will further enhance the quality of the paper.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your thoughtful and constructive feedback. We deeply appreciate your insights, which have guided us in refining our manuscript. Below, we outline the specific revisions made in response to your concerns:
**Q1: Ablation Studies on the Number of Clusters**
**A1**: We deeply appreciate your guidance on this critical aspect. Honestly speaking, we have added a detailed ablation analysis of the number of clusters in Table 6 of Appendix D (also given below), demonstrating how it affects model performance. Note that the cluster numbers are the same as the **Basis Vectors Count** in our experiments, as each cluster center is adaptively mapped to a corresponding basis vector within the projection space during dynamic projection cluster construction.
As shown in Table 6, aligning cluster numbers with the base model's feature dimension (1024 in our case) balances information retention for accuracy and the redundancy of basis vectors. We have also included additional sensitivity analyses on cluster update rates as given in Fig. 6 of Appendix D.
We have revised our manuscript to emphasize these findings more clearly in the main paper.
| |**Average**| ||**Accuracy(%)**|**Harmonic**|**Accuracy**|**(%)**|
|:---------------------:|------|------|------|------|-----|------|------|
| **Basis Vectors Count**| 0 | 1 | 2 | 3 | 1 | 2 | 3 |
| 256 | 93.4 | 91.5 | 86.4 | 85.0 | 86.8 | 76.2 | 76.8 |
| 512 | 93.2 | 91.5 | 86.3 | 85.9 | 87.0 | 76.2 | 77.1 |
| 1024 | 93.8 | 91.9 | 87.5 | 86.8 |87.0 | 77.4 | 77.5 |
| 2048 | 93.6 | 91.6 | 87.5 | 86.7 |87.3 | 77.7 | 77.7 |
| 4096 | 93.8 | 91.9 | 87.8 | 86.4 |87.0 | 77.7 | 78.3 |
**Q2:Impact of Prompt Style on Model Performance**
**A2**: Thank you for highlighting the importance of prompt-style analysis.
We implemented a more comprehensive comparison of the 7 experimental configurations.
They include:
1. **SOTA Baselines**: Methods (1)-(2) adopt strategies from C3PR [1] and FoundationModel [2].
2. **Cross-Combination Strategies**: Methods (3)-(7) integrate various prompt styles with distinct training strategies.
From that, we recorded and provided a new Figure 1 (now visible in Figure 1 of [the anonymous link](https://anonymous.4open.science/r/6061-FB07/6061.pdf)) in our revised version.
As shown in Figure 1, the performance of our geometric embedding (Method 7) remains stable across prompt variations, proving reduced dependency on prompt quality.
This demonstrates that geometric feature embedding alleviates reliance on prompt quality by encoding structural priors, ensuring stable performance even under variations in prompt style. Compared with "Alignment Module + Dual Cache System" (Method 2), which caches 5 samples per class, our strategy replays only a single sample and achieves 7% higher accuracy with lower memory overhead.
**Q3:Cited related papers**
**A3**: We sincerely appreciate the suggestion. The following works have been incorporated to strengthen our related works review:
1. **Geometric Word-Based Segmentation [3]**: Validates cluster-driven feature learning, aligning with our dynamic projection cluster design. (We have added it to Section 3.3 in our revised paper.)
2. **3D Few-Shot Generalization [4]**: Highlights domain adaptation challenges, motivating our geometry-centric approach for cross-dataset robustness. (We have added it to Section 3.4 in our revised paper.)
3. **Multimodal Fusion [5]**: Supports our cross-modal alignment strategy via joint geometric-textual optimization. (We have added it to Section 3.4 in our revised paper.)
These references are now cited in relevant sections as indicated in the corresponding bracket.
We hope our revised works have addressed your concerns.
Should further clarifications or adjustments be needed, we are fully committed to incorporating your guidance.
Thanks for your feedback, as it is invaluable in refining our work.
**References**
[1] Canonical shape projection is all you need for 3d few-shot class incremental learning, ECCV 2024.
[2] Foundation Model-Powered 3D Few-Shot Class Incremental Learning via Training-free Adaptor, ACCV 2024.
[3] Generalized Few-Shot Point Cloud Segmentation Via Geometric Words, ICCV 2023.
[4] Rethinking Few-shot 3D Point Cloud Semantic Segmentation, CVPR 2024.
[5] Multimodality Helps Few-shot 3D Point Cloud Semantic Segmentation, ICLR 2025.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed rebuttal. Now my concerns have been addressed and I would update my recommendation to accept.
---
Reply to Comment 1.1.1:
Comment: Dear reviewer,
We sincerely appreciate your time and constructive feedback throughout the review process. We are delighted to hear that our rebuttal has addressed your concerns and that you now recommend acceptance.
Your insightful comments have significantly strengthened our paper, and we are grateful for your valuable contribution to improving our work.
Best wishes,
All authors | null | null | null | null | null | null | null | null |
Not All Tokens Matter All The Time: Dynamic Token Aggregation Towards Efficient Detection Transformers | Accept (poster) | Summary: This paper proposes Dynamic DETR to reduce token redundancy within the encoder for improving the efficiency of DETR-like object detectors. This problem has also been studied by previous work such as Sparse DETR and Focus-DETR. Compared to these existing efforts, this work proposes a finer-grained sparsification approach, using different retention ratios for different feature pyramid levels and designing different sparsification strategies for different levels. Another improvement is the introduction of an additional loss function to distill the feature representation of the original encoder. The proposed method was validated on multiple DETR variants and demonstrated advantages over established methods.
Claims And Evidence: The claim that “the distribution of important tokens across different hierarchies” follows a certain pattern has been validated in only one sample (Figure 2). Is this phenomenon universal? It is better to define some quantitative indicator, which would increase the credibility of this claim.
Methods And Evaluation Criteria: Yes, the evaluation makes sense.
Theoretical Claims: No, it's not a theoretical paper.
Experimental Designs Or Analyses: The experimental designs are generally sound.
Supplementary Material: Yes, Appendix A and B
Relation To Broader Scientific Literature: No, not related to the broader scientific literature
Essential References Not Discussed: No, the existing related works have been well discussed
Other Strengths And Weaknesses: Strengths:
1. The motivation makes sense. The proposed method uncovers the need to use different sparsification thresholds for different levels, which was neglected by previous works.
2. The experiments are adequate. The proposed method was validated on multiple DETR variants such as D-DETR, DINO and DAB-DETR.
Weaknesses:
1. There is some lack of clarity in the experimental setup.
- The paper does not indicate what device the FPS in Table 1 was measured on.
- For which backbone is Figure 1(a) obtained?
2. There are many writing issues:
- The figure captions are too long. It is recommended to move part of the content into the main text for better readability and clarity.
- Figure 1(a) provides limited information. It would be more informative to include a comparison of FLOPs before and after applying the proposed method.
- In Figure 1(b), both the horizontal (FLOPs) and vertical (FPS) axes represent efficiency. To enhance clarity and intuitiveness, it is recommended to replace one of the axes with AP.
Other Comments Or Suggestions: see the weaknesses part above
Questions For Authors: To what extent does the proposed method increase training costs? How does this compare to other counterparts?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We appreciate your valuable suggestions, and next we respond to each comment as follows.
## W1: There is some lack of clarity in the experimental setup.
- *The paper does not indicate what device the FPS in Table 1 was measured on.*
- *For which backbone is Figure 1(a) obtained?*
**Response**: Sorry for our unclear descriptions about the experimental setup.
- The FPS of all models in both the initial submission and the rebuttal phase is measured on a single RTX 3090, with the server load kept consistent.
- The backbone for Figure 1(a) is ResNet-50.
## W2: There are many writing issues.
- *The figure captions are too long. It is recommended to move part of the content into the main text for better readability and clarity.*
- *Figure 1(a) provides limited information. It would be more informative to include a comparison of FLOPs before and after applying the proposed method.*
- *In Figure 1(b), both the horizontal (FLOPs) and vertical (FPS) axes represent efficiency. To enhance clarity and intuitiveness, it is recommended to replace one of the axes with AP.*
**Response**: We sincerely appreciate your detailed comments on writing and figure clarity.
- Regarding the figure captions, we will refine them by moving excessive details into the main text to improve readability.
- For Figure 1(a), we acknowledge its limited informativeness and will incorporate a FLOPs comparison before and after applying the proposed method to intuitively showcase the effectiveness of our approach.
- In Figure 1(b), we will revise the visualization to present Params vs. AP, providing a more balanced perspective on both efficiency and accuracy.
Thanks again for your insightful suggestions, and we will incorporate these modifications to enhance the overall clarity and readability.
## Q1: To what extent does the proposed method increase training costs? How does this compare to other counterparts?
**Response**: Following your insightful suggestions as well as the recommendations from previous reviewers, we take DINO as the baseline detector and compare the training costs of our method and other competitors. Specific results are as follows.
Tab. T1. Training cost comparisons between Ours (Dynamic DINO) and other efficient solutions with ResNet-50 on the LVIS val-set, where the memory are captured when batch size=2.
| Model | AP | AP$_{\mathrm{50}}$ | AP$_{\mathrm{75}}$ | AP$_{\mathrm{r}}$ | AP$_{\mathrm{c}}$ | AP$_{\mathrm{f}}$ | FLOPs (G) | FPS | GPU Memory (G) |
|-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| DINO| 26.1 | 34.5 | 27.5 | 8.3 | 24.1 | 36.1 | 247.1 | 19.8 | 24244|
| Sparse DINO| 22.9 | 32.0 | 24.2 | 8.4 | 21.3 | 30.9 | 151.7 | 21.2 | 22680|
| Lite DINO| 20.2 | 28.0 | 21.4 | 3.0 | 17.5 | 30.8 | 160.0 | 16.0 | 42348|
| Focus DINO| **23.7** | **32.9** | **25.2** | **10.2** | **21.7** | 31.9 | 168.2 | 20.4 | 22640|
| Dynamic DINO| 23.4 | 31.8 | 25.0 | 7.7 | 20.8 | **33.4** | **146.6** | **22.5**| **22557**|
||
Tab. T2. Training cost comparisons between Ours (Dynamic DINO) and other efficient solutions with Swin-Transformer on the COCO val-set, where the memory are captured when batch size=2.
| Model | AP | AP$_{\mathrm{50}}$ | AP$_{\mathrm{75}}$|FLOPs (G) | FPS | GPU Memory (G) |
|-|:-:|:-:|:-:|:-:|:-:|:-:|
|DINO|51.5|70.2|56.5|252.3|14.0|27892|
|Sparse DINO|49.6|68.4|54.1|137.0|18.0|**24696**|
|Lite DINO|48.3|66.1|52.8|151.0|16.8|26371|
|Focus DINO|**49.9**|68.2|54.3|156.9|15.3|31933|
|Dynamic DINO|**49.9**|68.8|54.3|**149.4**|**18.2**|27764|
||
Tab. T3. Training cost comparisons between Ours (Dynamic DINO) and other efficient solutions with MobileNet-V2 on the COCO val-set, where the memory are captured when batch size=2.
|Model|AP|AP$_{\mathrm{50}}$|AP$_{\mathrm{75}}$|FLOPs (G)|FPS|GPU Memory (G) |
|-|:-:|:-:|:-:|:-:|:-:|:-:|
|DINO|25.9|38.7|27.5|172.6|24.0|15304|
|Sparse DINO|19.8|33.8|20.7|67.5|27.2|15810|
|Lite DINO|18.8|29.2|19.9|78.4|**36.2**|33950|
|Focus DINO|20.0|31.8|20.7|79.8|24.7|14141|
|Dynamic DINO|**21.7**|**33.1**|**23.1**|**61.5**|36.0|**14082**|
||
From the above results, it can be observed that the proposed Dynamic Token Sparsification strategy generally maintains a similar or even lower training cost compared to the baseline detector. In comparison to other counterparts, our method demonstrates clear advantages in both accuracy and speed. | Summary: This paper introduces Dynamic DETR designed to enhance the computational efficiency of DETR-based methods. The study identifies the encoder as the primary computational bottleneck and proposes a dynamic token sparsification strategy to reduce redundant tokens, effectively lowering computational complexity while preserving detection performance.
The proposed method features two key modules:
1. **Proximal Aggregation**: Merges tokens based on spatial adjacency to retain local structural details.
2. **Holistic Aggregation**: Ranks token importance and aggregates less important tokens into their most similar important counterparts, ensuring efficient representation.
Compared to the original DETR model, Dynamic DETR reduces computational cost by approximately 40% to 50% in terms of FLOPs, while only causing a minor 0.5% to 1% drop in AP.
Claims And Evidence: The authors claim that the encoder is the main computational bottleneck in DETR models. The paper provides FLOPs analysis (Figure 1) showing that the encoder contributes significantly to the total computational cost.
In addition, the authors claim that token importance dynamically changes across different encoder stages and support this claim through statistical analysis of token importance distribution at various levels.
Methods And Evaluation Criteria: Dynamic DETR incorporates Proximal Aggregation, Holistic Aggregation, and Center-distance Regularization to enhance token sparsification.
- Proximal Aggregation employs a window-based approach to determine proximity, ensuring that token merging preserves local structural integrity.
- Holistic Aggregation merges less important tokens into their most similar important tokens, effectively reducing redundancy while maintaining essential semantic information.
- Center-distance Regularization ensures that token representations remain statistically consistent before and after sparsification, preserving feature integrity and model stability.
Theoretical Claims: Not applicable, the article does not have proofs for theoretical claims.
Experimental Designs Or Analyses: The paper mainly conducts experiments based on COCO2017. Although multiple DETR variants are tested, there is a lack of verification of generalization capabilities on other datasets (such as ADE20K, Cityscapes, etc.).
Supplementary Material: The supplementary material mainly consists of ablation experiments, which illustrate the selection of model hyperparameters and Token Aggregation Strategies. Visualizations also validate the sparse structure of tokens at different levels.
Relation To Broader Scientific Literature: This study focuses on designing a dynamic token sparsification strategy to address the encoder computation bottleneck in DETR. Unlike previous lightweight approaches, it introduces a more adaptive and efficient mechanism for token selection.
Essential References Not Discussed: The paper provides a comprehensive discussion of previous related works.
Other Strengths And Weaknesses: **Strengths**
1. Compared to static sparsification methods, Dynamic DETR adopts a dynamic strategy, which more effectively balances computational cost and detection accuracy.
2. It is applicable to multiple DETR variants, including Deformable DETR, DINO, and DAB-DETR, demonstrating its broad compatibility.
**Weaknesses**
1. While Dynamic DETR reduces the overall FLOPs, the token importance computation and matching strategy introduce extra operations. A breakdown of this overhead in terms of FLOPs or latency would provide better clarity.
2. The paper primarily evaluates the method on COCO2017, lacking experiments on other datasets such as ADE20K and Cityscapes.
3. Compared to Lite DETR (Li et al., 2023), the improvements in AP are relatively small, despite similar FLOPs reduction. Given that Lite DETR achieves comparable performance, the advantages of Dynamic DETR in real-world applications should be further clarified.
Other Comments Or Suggestions: 1. Adding the corresponding symbols from the equations in the paper to Figure 3 would improve clarity and make it easier for readers to follow the method.
Questions For Authors: please refer to the Strengths And Weaknesses
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We really appreciate your constructive comments. We respond to each comment as follows.
## W1: A breakdown of token importance computation and matching strategy in terms of FLOPs or latency would provide better clarity
**Response**: To quantify this overhead, we provide a detailed analysis of the FLOPs and Latency for these two components in **Tab. T1**.
Tab. T1. FLOPs and latency analysis of our method based on DINO, and the results are based on COCO val-set.
|Model|AP| AP$_{\mathrm{50}}$|AP$_{\mathrm{75}}$|FLOPs (G)|FPS|Latency (s)|
|-|:-:|:-:|:-:|:-:|:-:|:-:|
|DINO| 50.9 | 68.9 | 55.3|244.5|14.4|0.069|
|w/o Token Imp. (Random Selecting)|46.6|68.3|50.1|134.7|27.8|0.036|
|w/o Mat. (Prox Agg.&Holi Selecting)|49.8|69.9|54.2|140.2|24.8|0.040|
|Dynamic DINO|50.2|69.2|54.7|141.7|23.2|0.043|
||
When we discard the token importance in Eq. (10), namely randomly assigning tokens as important, the performance drops drastically. Meanwhile, turning the matching scheme off leads to a ~1% AP drop but improves FPS by 10.6. Moreover, both components contribute significantly to parameter reduction.
## W2: The paper primarily evaluates the method on COCO2017, lacking experiments on other datasets such as ADE20K and Cityscapes
**Response**:
To ensure fair comparisons, we follow the data setups of prior works (*Sparse detr, Lite detr, and Focus detr*), where COCO is primarily used for evaluation and analysis. ADE20K and Cityscapes are designed for segmentation tasks, making them less directly aligned with our study. Meanwhile, following the insightful advices from yours and previous reviewers, we conduct experiments on VOC and LVIS—two widely used object detection benchmarks beyond COCO.Note that all the models are with a ResNet-50 as the backbone and trained for 12 epochs.
Tab. T2. Performance of DINO and various efficient solutions on the VOC2007 val-set.
| Model| mAP|FLOPs (G)|FPS|
|-|:-:|:-:|:-:|
| DINO|65.7| 241.6|15.5|
| Sparse DINO|62.5|141.4|19.6|
| Lite DINO |38.1|151.0|21.3|
| Focus DINO|51.4| 153.6|20.2|
| Dynamic DINO|**63.8**|**135.2**|**21.1**|
||
Tab. T3. Performance of DINO and various efficient solutions on the LVIS-1.0 val-set.
|Model|AP|AP$_{\mathrm{50}}$|AP$_{\mathrm{75}}$|AP$_{\mathrm{r}}$|AP$_{\mathrm{c}}$|AP$_{\mathrm{f}}$|FLOPs (G)|FPS|
|-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|DINO|26.1|34.5|27.5|8.3|24.1|36.1|247.1|19.8|
|Sparse DINO| 22.9|32.0|24.2|8.4|21.3|30.9|151.7|21.2|
|Lite DINO| 20.2 | 28.0|21.4|3.0|17.5|30.8|160.0|16.0|
|Focus DINO| **23.7**| **32.9** | **25.2**|**10.2**|**21.7**|31.9|168.2|20.4|
|Dynamic DINO| 23.4 | 31.8|25.0 |7.7|20.8|**33.4**|**146.6**|**22.5**|
||
As exhibited in **Tab. T2** and **Tab. T3**, the results on VOC and LVIS datasets further showcase the superiority and generality of our dynamic strategy. Moreover, we plan to explore the potential of dynamic token sparsification in pixel-level dense prediction tasks in future.
## W3: Given that Lite DETR achieves comparable performance, the advantages of Dynamic DETR in real-world applications should be further clarified
**Response**:
First of all, we are sorry for the incorrect parameter descrition about DINO and Dynamic DINO in the initial submission, while the corrected verison is as follows. Our anonymous code please refers to [here](https://anonymous.4open.science/r/Dynamic-DETR-4D7F)
Tab. T4. Corrected efficiency of DINO, Lite DINO, and Dynamic DINO on the COCO val-set.
|Model|AP|AP$_{\mathrm{50}}$|AP$_{\mathrm{75}}$|FLOPs (G)|FPS|
|-|:-:|:-:|:-:|:-:|:-:|
|DINO|50.9|68.9|55.3|244.5|14.4|14.4|
|Lite DINO|**50.4**|-|54.6|151.0|**23.2**|
|Dynamic DINO|50.2|69.2|**54.7**|**141.7**|**23.2**|
||
For COCO, our method slightly lead Lite DETR. However, on the LVIS and VOC datasets and when using MobileNet as the backbone, our method significantly outperforms Lite DETR in terms of AP, Params, and FPS. Noting that Lite DETR suffers from performance degradation under shorter training schedules, showing the efficiency of our dynamic strategy in reducing training costs.
Moreover, to verify the potential of our dynamic token sparsification in real-time detection tasks, we investigate the performance of several efficient detectors when integrated a lightweight backbone MobileNet-V2, and the results are shown in **Tab. T5**.
Tab. T5. Performance of DINO and various efficient solutions with MobileNet-V2 on the COCO val-set, where the output channels are set to 256 for convergence.
|Model|AP|AP$_{\mathrm{50}}$|AP$_{\mathrm{75}}$|FLOPs (G)|FPS|
|-|:-:|:-:|:-:|:-:|:-:|
|DINO|25.9|38.7|27.5|172.6|24.0|
|Sparse DINO|19.8|33.8|20.7|67.5|27.2|
|Lite DINO|18.8|29.2|19.9|78.4|**36.2**|
|Focus DINO|20.0|31.8|20.7|79.8|24.7|
|Dynamic DINO|**21.7**|**33.1**|**23.1**|**61.5**|36.0|
||
Dynamic DINO significantly boosts the inference performance from 24.0 to 36.0 while the accuracy also outperforms its counterparts by a large margin. This competitive result shows the potential adaptability of our method for practical scenarios.
---
Rebuttal Comment 1.1:
Comment: I appreciate the thorough reply from the authors. The majority of my questions have been clarified. The results on the LVIS and VOC datasets, in particular, provide strong evidence of the advantages of Dynamic DINO compared to Lite DINO. I am therefore inclined to slightly increase my score.
---
Reply to Comment 1.1.1:
Comment: We extend our sincere gratitude to the reviewer for the valuable time and insightful feedback. In the revised version, we will incorporate the breakdown of the latency analysis, results on additional datasets, and further discussions on the advantages of Dynamic DETR in real-world applications. Once again, we truly appreciate your thoughtful comments and recognition. | Summary: It is known in the object detection literature that the detection transformers are notorious for their long and computationally demanding training requirements. To partially address this issue, this work proposes a novel token aggregation strategy for detection transformers based on the recent token merging strategies. In particular, the work aims to exploit the redundancy of the tokens at different feature levels dynamically by merging them. Experimental results on COCO2017 aim to highlight the efficiency gains of the proposed method.
Claims And Evidence: The main motivation behind the proposed method is motivated well and discussed in a detailed and objective manner. It also follows the established token merging literature [A, B, C] for computer vision.
Main claims of the work could be listed as follows:
**1.** Dynamic DETR performs at least on-par with the base models and other efficient DETR varieties.
**2.** Dynamic DETR consistently outperforms the efficiency of the models compared to the base models and other efficient DETR varieties.
**3.** Dynamic DETR framework finds _the_ sweet spot between the performance and efficiency.
With respect to these claims, the authors present:
**1.** The authors provide experimental results on COCO2017 _minival-set_. In these results, it is evident that the Dynamic DETR is almost always ~1 AP behind the base model, while performing on-par with Lite DETR [D] and Focus-attention DETR [E] in both of the presented settings.
**2.** The authors provide the FLOPs on COCO2017 _minival-set_. In these results, it is clear that the Dynamic DETR is more efficient with respect to FLOPs compared to base models, while being almost the same as Lite DETR [D] on D-DETR baseline and improving Lite DETR [D] by 3% on the DINO baseline (albeit the FPS gain is almost the same as Lite DETR [D]).
**3.** Based on the aforementioned two results, Dynamic DETR slightly compromises the performance while providing reasonable efficiency improvements compared to the base models.
Following from these claims and presented evidence, it can be seen that Dynamic DETR is not an objectively better method compared to Lite DETR [D] (D-DETR efficiency results and DINO performance results on Table 1). In addition, while I acknowledge that the earlier works (e.g [A]) were also constrained to the FLOPs, the discussions in the work are limited solely with FLOPs, while the broader discussion on efficiency, and finding the sweet spot between efficiency and performance is a multi-faceted discussion, involving both number of training iterations and memory costs.
Based on these discussions, the claims of the work are not strongly supported.
[A] Bolya, Daniel, et al. "Token Merging: Your ViT But Faster." The Eleventh International Conference on Learning Representations.
[B] Bolya, Daniel, and Judy Hoffman. "Token merging for fast stable diffusion." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2023.
[C] Yuan, Xin, Hongliang Fei, and Jinoo Baek. "Efficient transformer adaptation with soft token merging." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.
[D] Li, Feng, et al. "Lite detr: An interleaved multi-scale encoder for efficient detr." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2023.
[E] Zheng, Dehua, et al. "Less is more: Focus attention for efficient detr." Proceedings of the IEEE/CVF international conference on computer vision. 2023.
Methods And Evaluation Criteria: - Basing the experiments on COCO2017 is valid as it is frankly the most established object detection benchmark. However, the rationale behind the usage of minival-set is not very clear.
- In addition, object detectors (and thus DETRs) are utilized in broad range of domains, often involving long-tailed and challenging cases. Therefore, performing analyses/discussions on other established datasets involving much more classes and much denser annotations, such as LVIS [A] could be helpful for the work. From a token merging point-of-view, these cases could be more challenging given the dense nature of object annotations, though it would also be impressive Dynamic DETR works on it too.
Theoretical Claims: The nature of the paper is mostly empirical without detailed theoretical claims. However, the design choices made in the paper are mostly motivated and explained by relevant examples and visualizations.
Experimental Designs Or Analyses: Other than the aforementioned issue on the usage of minival-test, the rationale behind it and a short description of exactly what it corresponds to, the experiments seem sound.
Supplementary Material: The supplementary material includes more visualizations of queries with different methods including Dynamic DETR and some discussion on design options for token aggregation strategies. It supports the empirical analyses on the main paper.
Relation To Broader Scientific Literature: Object detection is one of the primary and most well-established computer vision tasks. Detection transformers are the trailblazing response of the detection community to the surge of transformers, though they are known to have various efficiency issues. Thus, the scope of the work is interesting to the broader audiences of detection and efficiency communities.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: The idea proposed in the work is an extension of token merging for the detection literature. It contains various novel design choices to make this extension work in an efficient and reasonably performant manner. The writing is also mostly clear, although the flow of thought was a bit counter-intuitive for me between Sections 3.2 and 3.3, since the exact definitions of concepts in 3.2 are defined later.
Other Comments Or Suggestions: - There is a minor typo on Pg.5: hyperparamt -> hyper parameter
- Some figures, such as Figure 1 have very small font size. They are otherwise carefully designed and neat.
Questions For Authors: - Where do you think your work stands with respect to other efficiency concerns, such as the number of training iterations and memory imprint?
- Do you think your work is complementary, synergic or orthogonal to other efficient DETR methods, such as [A] from ECCV 2024?
[A] Yavuz, Feyza, et al. "Bucketed Ranking-Based Losses for Efficient Training of Object Detectors." European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2024.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks for your thoughtful comments. Below we response to these concerns.
## C1: Comparable performance with Lite DETR
**Response**:
First of all, we are sorry for the incorrect parameter descrition about DINO and Dynamic DINO in the initial submission, while the corrected verison is as follows. Our anonymous code please refers to [here](https://anonymous.4open.science/r/Dynamic-DETR-4D7F)
Tab. T1. Corrected efficiency of DINO, Lite DINO, and Dynamic DINO on the COCO val-set.
|Model|AP|FLOPs (G)|FPS|
|-|:-:|:-:|:-:|
|DINO|50.9|244.5|14.4|
|Lite DINO|**50.4**|151.0|**23.2**|
|Dynamic DINO|50.2|**141.7**|**23.2**|
||
To throughly explore the advantages of our Dynamic strategy to Lite DETR, we take DINO as the baseline detector and compare the performance between these two methods as follows.
Our method outperforms Lite-DETR across different datasets and backbone networks, with even greater advantages under a shorter training schedule, further demonstrating its efficiency.
- On COCO val-set, Swin-T, 12 epochs.
|Model|AP|FLOPs (G)|FPS|GPU Memory (G)|Training Hours (h:m)|
|-|:-:|:-:|:-:|:-:|:-:|
|DINO|51.5|252.3|14.0|27892|**23:27**|
|Lite DINO|48.3|151.0|16.8|**26371**|24:20|
|Dynamic DINO|**49.9**|**149.4**|**18.2**|27764|23:48|
||
- On LVIS val-set, ResNet-50, 12 epochs.
|Model|AP|FLOPs (G)|FPS|GPU Memory (G)|Training Hours (h:m)|
|-|:-:|:-:|:-:|:-:|:-:|
|DINO|26.1|247.1|19.8|24244|14:59|
|Lite DINO|20.2|160.0|16.0|42348|13:48|
|Dynamic DINO|**23.4**|**146.6**|**22.5**|**22557**|**12:15**|
||
- On COCO val-set, MobileNet-V2, 12 epochs.
|Model|AP|FLOPs (G)|FPS|GPU Memory (G)|
|-|:-:|:-:|:-:|:-:|
|DINO|25.9|172.6|24.0|15304|
|Lite DINO|18.8|78.4|**36.2**|33950|
|Dynamic DINO|**21.7**|**61.5**|36.0|**14082**|
||
- On VOC2007 val-set, ResNet-50, 12 epochs.
|Model|AP|FLOPs (G)|FPS|
|-|:-:|:-:|:-:|
|DINO|65.7|241.6|15.5|
|Lite DINO|38.1|151.0|**21.3**|
|Dynamic DINO|**63.8**|**135.2**|21.1|
||
## C2: The usage of minival-set
**Response**: Sorry for our unclear descriptions. All the results in our paper are evaluated on COCO val-set, which is strictly consistent with previous works (*Sparse detr, Lite detr, and Focus detr*).
## C3: Experiemnts on LVIS dataset
**Response**: We perform experiments on LVIS datasets to verify the generalizability and robustness of our approach.
Tab. T2. Performance of DINO and various efficient solutions on the LVIS-1.0 val-set.
| Model | AP | AP$_{\mathrm{50}}$ | AP$_{\mathrm{75}}$ | AP$_{\mathrm{r}}$ | AP$_{\mathrm{c}}$ | AP$_{\mathrm{f}}$ | FLOPs (G) | FPS | GPU Memory (G) |
|-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| DINO| 26.1 | 34.5 | 27.5 | 8.3 | 24.1 | 36.1 | 247.1 | 19.8 | 24244|
| Sparse DINO| 22.9 | 32.0 | 24.2 | 8.4 | 21.3 | 30.9 | 151.7 | 21.2 | 22680|
| Lite DINO| 20.2 | 28.0 | 21.4 | 3.0 | 17.5 | 30.8 | 160.0 | 16.0 | 42348|
| Focus DINO| **23.7** | **32.9** | **25.2** | **10.2** | **21.7** | 31.9 | 168.2 | 20.4 | 22640|
| Dynamic DINO| 23.4 | 31.8 | 25.0 | 7.7 | 20.8 | **33.4** | **146.6** | **22.5**| **22557**|
||
As exhibited in **Tab. T3**, our Dynamic DINO lags Focus DINO slightly by 0.3% points, but outperforms it in inference speed by 2.1 FPS. To sum up, the proposed Dynamic token aggregation strategy significantly reduces the parameters of the baseline model (DINO), while also exhibiting a smaller performance loss compared to other efficient solutions.
## C4: Writing, typos and figures
**Response**: We sincerely appreciate your detailed comments on writing and figure clarity.
We will clarify the flow between Sections 3.2 and 3.3 to ensure a more intuitive progression of ideas. Moreover, the typos will be corrected and the font size in Figure 1 will be enlarged for better readability.
## C5: Comparison between ours and other competitors in training iterations and memory imprint
**Response**: Considering the training iterations of these methods are similar, we report the memory info of several setups (see **Tab. T2, T3, and T4** and **the responses to C1**). In comparison to other counterparts, our method shows best overall performance.
Tab. T3. Swin-Transformer on the COCO val-set.
|Model|AP|FLOPs (G) | FPS | GPU Memory (G) |
|-|:-:|:-:|:-:|:-:|
|DINO|51.5|252.3|14.0|27892|
|Sparse DINO|49.6|137.0|18.0|**24696**|
|Lite DINO|48.3|151.0|16.8|26371|
|Focus DINO|**49.9**|156.9|15.3|31933|
|Dynamic DINO|**49.9**|**149.4**|**18.2**|27764|
||
Tab. T4. MobileNet-V2 on the COCO val-set.
|Model|AP|FLOPs (G)|FPS|GPU Memory (G) |
|-|:-:|:-:|:-:|:-:|
|DINO|25.9|172.6|24.0|15304|
|Sparse DINO|19.8|67.5|27.2|15810|
|Lite DINO|18.8|78.4|**36.2**|33950|
|Focus DINO|20.0|79.8|24.7|14141|
|Dynamic DINO|**21.7**|**61.5**|36.0|**14082**|
||
## C6: Integration with BR Loss
**Response**: The BR Loss could consitently facilitate the final performance of both DINO and Dy-DINO, see **Tab. T5**.
Tab. T5. Swin-Transformer on the COCO val-set.
| Model|DINO|DINO+BR|Dy-DINO|Dy-DINO+BR|
|-|:-:|:-:|:-:|:-:|
|AP|51.5|52.2|49.9|50.3|
||
---
Rebuttal Comment 1.1:
Comment: I appreciate the detailed response of the authors to the issues I raised. In particular, the authors have provided further empirical evidence related to how Dynamic DETR compares against Lite DETR where they have shown that it indeed provides non-trivial gains over more settings. In addition, the authors have provided detailed discussions related to different efficiency measures, including not FLOPs this time but also memory imprints.
Based on these points, I will be raising my score accordingly.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your thoughtful reconsideration and for your encouraging comments. We're glad that our additional analysis and clarifications addressed your concerns, and we will incorporate these additions into the revised version. Again, we truly appreciate your willingness to engage deeply with our work. | Summary: The paper **"Not All Tokens Matter All The Time: Dynamic Token Aggregation Towards Efficient Detection Transformers"** proposes a novel framework called **Dynamic DETR**, aiming to address the computational efficiency bottleneck in **Detection Transformers (DETRs)**. DETRs require high computational resources, especially in the encoder, which becomes a major bottleneck. Existing methods generally adopt **static token sparsification strategies**, ignoring the differences in token importance across different layers and encoder blocks, leading to performance degradation. Dynamic DETR significantly reduces computational costs while maintaining high detection accuracy by dynamically adjusting token density and incorporating a multi-level token sparsification strategy. The main contributions include:
**Dynamic Token Aggregation:** Dynamically adjusts token density to reduce redundancy and computational complexity.
**Multi-level Token Aggregation:** Employs neighbor-based aggregation at lower levels to preserve spatial details and global aggregation at higher levels to capture contextual information.
**Representational Center-distance Regularization:** Ensures consistency in feature distribution before and after sparsification through regularization, improving detection performance.
Claims And Evidence: The claims made in the paper are well-supported by extensive experiments. The authors conducted experiments on the **COCO2017 dataset**, demonstrating the effectiveness of **Dynamic DETR**. The results show that **Dynamic DETR significantly reduces FLOPs (by 39.7%-53.9%)** across multiple DETR variants while incurring only **a slight performance drop (AP decrease of 0.5%-1.0%)**. Additionally, ablation studies confirm the effectiveness of dynamic token aggregation, multi-level aggregation strategies, and representational center-distance regularization.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are reasonable and well-suited for object detection tasks. **Dynamic DETR** effectively reduces redundant tokens while maintaining detection accuracy through dynamic token density adjustment and multi-level aggregation. The experiments are conducted on the **COCO2017 dataset**, with evaluation metrics including **AP, AP50, and AP75**, ensuring comprehensive assessment.
Theoretical Claims: The paper does not propose theoretical proofs, so there is no need to verify the correctness of theoretical claims.
Experimental Designs Or Analyses: The experimental design and analyses are reasonable and effective. The authors evaluate Dynamic DETR on **multiple DETR variants**, demonstrating its **generalizability and effectiveness**. Additionally, ablation studies confirm the contribution of each component, ensuring the reliability of the experimental results.
Supplementary Material: There are no supplementary materials provided in this paper.
Relation To Broader Scientific Literature: This work is closely related to existing **DETR improvement methods** and **token sparsification algorithms.** Existing DETR variants (e.g., **Deformable DETR, DAB-DETR**) improve detection performance but still have **high computational costs**. **Dynamic DETR** addresses this limitation by **introducing a dynamic token aggregation strategy**, significantly reducing computational costs while maintaining high detection accuracy, filling the gap left by previous methods.
Essential References Not Discussed: The paper cites a large number of relevant references covering DETR variants and token sparsification algorithms. No critical missing references were identified.
Other Strengths And Weaknesses: **strengths:**
- **Dynamic DETR effectively reduces computational costs** while maintaining high detection accuracy by dynamically adjusting token density.
- **Multi-level token aggregation strategy and representational center-distance regularization** enhance model robustness and generalization ability.
- **Comprehensive experimental design**, validating the method's effectiveness and generalizability across multiple DETR variants.
**Weaknesses:**
- The paper **does not discuss the real-world deployment performance** of Dynamic DETR, such as its effectiveness in **real-time detection tasks.**
- Although **Dynamic DETR performs well on COCO**, its performance on **other datasets (e.g., Pascal VOC) remains unverified.**
Other Comments Or Suggestions: - The experimental results and analyses are comprehensive, but the authors could further discuss **the potential of Dynamic DETR in practical applications.**
- It is suggested that the authors explore **Dynamic DETR's applicability in other Transformer architectures** in future work to verify its generalizability.
Questions For Authors: 1. **How does Dynamic DETR perform in real-world applications, such as real-time detection tasks? Are there any plans to conduct related experiments?**
- **Impact**: Understanding Dynamic DETR’s real-world performance helps assess its deployment potential.
2. **How does Dynamic DETR perform on other datasets (e.g., Pascal VOC)? Are there plans for cross-dataset validation?**
- **Impact**: Cross-dataset validation can further demonstrate the generalizability and robustness of Dynamic DETR.
3. **Is Dynamic DETR applicable to other Transformer architectures (e.g., ViT)? Are there plans to explore this aspect?**
- **Impact**: Investigating Dynamic DETR’s applicability in other Transformer architectures can further verify its **generalizability and scalability.**
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely appreciate your selfless dedication and thoughtful comments. Below we response to these concerns.
## W1&Q1: The paper does not discuss the real-world deployment performance of Dynamic DETR, such as its effectiveness in real-time detection tasks
**Response**:
To verify the potential of our dynamic token aggregation in real-time detection, we investigate the performance of several efficient detectors when integrated a lightweight backbone MobileNet-V2, and the results are shown in **Tab. T1**.
Tab. T1. Performance of DINO and various efficient solutions with MobileNet-V2 on the COCO val-set, where the output channels are set to 256 for convergence and trained for 12 epochs.
|Model|AP|AP$_{\mathrm{50}}$|AP$_{\mathrm{75}}$|FLOPs (G)|FPS|
|-|:-:|:-:|:-:|:-:|:-:|
|DINO|25.9|38.7|27.5|172.6|24.0|
|Sparse DINO|19.8|33.8|20.7|67.5|27.2|
|Lite DINO|18.8|29.2|19.9|78.4|**36.2**|
|Focus DINO|20.0|31.8|20.7|79.8|24.7|
|Dynamic DINO|**21.7**|**33.1**|**23.1**|**61.5**|36.0|
||
Dynamic DINO significantly boosts the inference performance from 24.0 to 36.0 while the accuracy also outperforms its counterparts by a large margin. This competitive result shows the potential adaptability of our method for practical scenarios.
Moreover, to enable a fair and comprehensive comparison, our experiment setups strictly align with pioneering works [R1, R2, R3], which primarily focus on benchmark evaluations and do not explicitly consider deployment performance. Future work could explore lightweight adaptations and hardware-friendly implementations to further bridge the gap between research and practical deployment.
- [R1] Roh, B., et al. Sparse detr: Efficient end-to-end object detection with learnable sparsity. ICLR, 2022.
- [R2] Li, Feng, et al. Lite detr: An interleaved multi-scale encoder for efficient detr. CVPR, 2023.
- [R3] Zheng, Dehua, et al. Less is more: Focus attention for efficient detr. ICCV, 2023.
## W2&Q2: Although Dynamic DETR performs well on COCO, its performance on other datasets (e.g., Pascal VOC) remains unverified.
**Response**: To demonstrate the generalizability and robustness of our Dynamic DETR, as suggested by your valuable comment, we perform experiments on VOC and LVIS datasets, two of the most commonly used benchmarks beyond COCO for object detection. The results are as exhibited as **Tab. T2** and **Tab. T3**. Note that all the models are with a ResNet-50 as the backbone and trained for a bunch of 12 epochs.
Tab. T2. Performance of DINO and various efficient solutions on the VOC2007 val-set.
| Model| mAP|FLOPs (G)|FPS|
|-|:-:|:-:|:-:|
| DINO|65.7| 241.6|15.5|
| Sparse DINO|62.5|141.4|19.6|
| Lite DINO |38.1|151.0|**21.3**|
| Focus DINO|51.4| 153.6|20.2|
| Dynamic DINO|**63.8**|**135.2**|21.1|
||
Tab. T3. Performance of DINO and various efficient solutions on the LVIS-1.0 val-set.
| Model | AP | AP$_{\mathrm{50}}$ | AP$_{\mathrm{75}}$ | AP$_{\mathrm{r}}$ | AP$_{\mathrm{c}}$ | AP$_{\mathrm{f}}$ | FLOPs (G) | FPS |
|-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| DINO|26.1|34.5|27.5|8.3|24.1|36.1|247.1|19.8|
| Sparse DINO| 22.9 | 32.0|24.2|8.4|21.3|30.9|151.7|21.2|
| Lite DINO| 20.2 | 28.0|21.4|3.0|17.5|30.8|160.0|16.0|
| Focus DINO| **23.7** | **32.9** | **25.2**|**10.2**|**21.7**|31.9|168.2|20.4|
| Dynamic DINO| 23.4 | 31.8|25.0 |7.7|20.8|**33.4**|**146.6**|**22.5**|
||
Consistent with the performance on the COCO dataset, the proposed dynamic token aggregation significantly reduces the parameters of the baseline model (DINO), while also exhibiting a smaller performance loss compared to other efficient solutions. Specifically, as exhibited in **Tab. T2**, our Dynamic DINO scores 63.8% AP on the VOC dataset, which is 1.9% points lower than the baseline DINO but with a 36% improvement in FPS, and excels other competitors by a large margin both in accuracy and efficiency. For the LVIS results in **Tab. T3**, our Dynamic DINO lags Focus DINO slightly by 0.3% points, but outperforms it in inference speed by 2.1 FPS.
In summary, the results on VOC and LVIS datasets further showcase the superiority and generality of our Dynamic token sparisification strategy.
## Q3: Is Dynamic DETR applicable to other Transformer architectures (e.g., ViT)? Are there plans to explore this aspect?
**Response**: To further explore the generalizability of Dynamic DETR, we conducted additional experiments using Swin Transformer as the backbone. The results in **Tab. T4** demonstrate that our approach remains effective across different Transformer architectures, highlighting its adaptability.
Tab. T4. Performance of DINO and various efficient solutions with Swin-Transformer on the COCO val-set.
| Model | AP | AP$_{\mathrm{50}}$ | AP$_{\mathrm{75}}$|FLOPs (G) | FPS |
|-|:-:|:-:|:-:|:-:|:-:|
|DINO|51.5|70.2|56.5|252.3|14.0|
|Sparse DINO|49.6|68.4|54.1|137.0|18.0|
|Lite DINO|48.3|66.1|52.8|151.0|16.8|
|Focus DINO|**49.9**|68.2|54.3|156.9|15.3|
|Dynamic DINO|**49.9**|68.8|54.3|**149.4**|**18.2**|
||
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response. You have conducted substantial justification and experiments, and I have raised my score accordingly.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your recognition and for taking the time to go through our work in detail. We're glad that the additional justifications and experiments helped clarify our approach. We sincerely appreciate your revised score and constructive feedback! | null | null | null | null | null | null |
Conformal Anomaly Detection in Event Sequences | Accept (poster) | Summary: The paper introduces CADES, a novel anomaly detection method for continuous-time event sequences under the conformal inference framework. The authors propose two new non-conformity scores tailored to event sequences based on the time-rescaling theorem, which address the non-identifiability issues of existing test statistics. CADES combines these scores with Bonferroni correction to conduct statistical hypothesis testing and provides theoretical guarantees on the false positive rate (FPR). Extensive experiments on synthetic and real-world datasets demonstrate that CADES outperforms state-of-the-art methods while maintaining FPR control.
Claims And Evidence: The claims are well-supported by both theoretical guarantees and empirical results.
- The paper provides rigorous proofs for the validity of the p-values used in CADES and offers theoretical guarantees on calibration-conditional FPR control.
- The experimental results show that CADES achieves superior detection performance compared to state-of-the-art methods while controlling the FPR at a pre-specified level.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the problem of anomaly detection in event sequences.
- The application of conformal inference and the design of two non-conformity scores effectively address the limitations of existing methods, particularly their inability to control the FPR and the non-identifiability issues.
- The evaluation criteria on synthetic and real-world datasets, including AUROC, FPR control, and runtime comparisons, are appropriate for assessing the performance of anomaly detection methods.
Theoretical Claims: The theoretical claims about the validity of the p-values used in CADES and the guarantees on calibration-conditional FPR are backed by rigorous proofs. The authors also provide additional theoretical insights into the conditions under which these guarantees hold, especially with regard to the size of the calibration set.
Experimental Designs Or Analyses: The experimental designs and analyses are sound and well-executed. It is worth noting that the summary and visualization of the experimental results are neat and clear.
- The paper conducts experiments on synthetic datasets to evaluate CADES' performance in GOF testing for the SPP and addressing the non-identifiability issues, as well as on real-world datasets to demonstrate its practical applicability in detecting various types of anomalies and controlling the FPR.
- The ablation studies verify the effectiveness of combining two non-conformity scores and using two-sided p-values.
Supplementary Material: I have carefully examined all supplementary materials.
Relation To Broader Scientific Literature: The key contributions of this paper lie in its exploration of conformal inference and anomaly detection in event sequences. The paper effectively bridges the gap between these two domains, making a significant advancement, which could also inspire future work on anomaly detection in other types of sequential data.
Essential References Not Discussed: No. The paper covers the essential references related to conformal inference, event sequence anomaly detection, and temporal point processes.
Other Strengths And Weaknesses: Strengths:
- **Originality**: The paper establishes a novel connection between conformal inference and anomaly detection in event sequences. It combines two newly designed non-conformity scores with Bonferroni correction to conduct statistical hypothesis testing and addresses the non-identifiability issues of existing approaches.
- **Theoretical Analysis**: The paper offers theoretical support for FPR control of anomaly detection in event sequences. Furthermore, the paper provides guarantees on calibration-conditional FPR control, which are stronger than the marginal FPR guarantees.
- **Evaluation**: The experimental evaluations are thorough and well-designed. The results demonstrate that CADES outperforms state-of-the-art methods on both synthetic and real-world datasets while effectively controlling the FPR at a pre-specified level.
- **Presentation**: The paper is well-organized. It presents a clear explanation of the motivation, methods, and experiments. This makes the paper easy to follow.
Weaknesses:
- While the authors provide theoretical guarantees on FPR control, the practical implication and significance of these guarantees in real-world scenarios could be explained in more detail.
Other Comments Or Suggestions: Suggestions:
- The paper could explore the potential for extending the proposed method to other types of sequential data, such as time series or spatial-temporal data, to broaden its applicability.
- It would be clearer to denote the new observation $X$ in the problem statement of Section 2 as $X_{\text{test}}$.
Questions For Authors: Please refer to weaknesses and suggestions.
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: Thank you for your appreciation of our work and suggestions. We provide answers to your concerns as follows:
**Q1**: While the authors provide theoretical guarantees on FPR control, the practical implication and significance of these guarantees in real-world scenarios could be explained in more detail.
**R1**: FPR control guarantees are crucial for safety-critical applications, such as cybersecurity, finance, and healthcare. For example, in electronic health records (EHRs), maintaining low false positive rates is vital. High false positives could lead to incorrect diagnoses or unnecessary treatments, potentially compromising patient safety. Our method ensures that detected anomalies are more likely to be genuine, helping healthcare professionals make more reliable decisions. We will add this content in our paper.
**Q2**: The paper could explore the potential for extending the proposed method to other types of sequential data, such as time series or spatial-temporal data, to broaden its applicability.
**R2**: In this work, we primarily focus on anomaly detection in event sequences. Unlike other types of sequential data, event sequence data is asynchronous and of variable length, commonly modeled using temporal point processes. As you suggested, we plan to extend our method to other types of sequential data as future work.
**Q3**: It would be clearer to denote the new observation $X$ in the problem statement of Section 2 as $X_{\text{test}}$.
**R3**: Thank you for pointing this out. We have already made the replacement in our manuscript. | Summary: The paper is the first to extend conformal inference to anomaly detection in event sequences and proposes a novel method called CADES, which provides statistical guarantees of validity. Notably, CADES addresses the severe non-identifiability issues found in previous methods by developing two new powerful non-conformity scores. The authors provides both rigorous theoretical analysis of FPR and thorough evaluations, demonstrating CADES' strong performance in event sequence anomaly detection.
Claims And Evidence: The claims are supported by rigorous theoretical analysis and superior empirical performance on both synthetic and real-world datasets.
Methods And Evaluation Criteria: The proposed method is built on a statistically sound framework (conformal inference), and the evaluation criteria used (AUROC, TPR and FPR) are appropriate for the anomaly detection task.
Theoretical Claims: I have checked the correctness of all proofs supporting the theoretical claims, which are both clear and solid.
Experimental Designs Or Analyses: The experiments are well-designed and the results show that CADES outperforms existing methods on both synthetic and real-world datasets. The ablation study further supports the importance of using the two proposed non-conformity scores and two-sided p-values.
Supplementary Material: I have reviewed all the supplementary material in detail.
Relation To Broader Scientific Literature: The paper clearly explains how it is related to previous work. The paper draws a new connection between conformal inference and event sequence anomaly detection, and then proposes an effective detection method with statistical guarantees.
Essential References Not Discussed: While the paper cites relevant works well, it would be helpful to also discuss related work on applying conformal inference to time series (see Weaknesses for detailed references).
Other Strengths And Weaknesses: Strengths:
- The idea of detecting anomalies in event sequences with conformal inference is novel and interesting. The paper is well-written and the presentation is coherent.
- The motivation is clear. This paper provides theoretical guarantees and empirical validation for FPR control, and proposes new non-conformity scores to address previous non-identifiability issues.
- The effectiveness of CADES is thoroughly evaluated on experiments, showcasing superior performance compared to existing approaches.
Weaknesses:
I don't find any obvious weaknesses. It would be more comprehensive if the literatures on applying conformal inference to time series were discussed.
[1] Xu C, Xie Y. Conformal prediction interval for dynamic time-series. ICML, 2021.
[2] Zaffran M, Féron O, Goude Y, et al. Adaptive conformal predictions for time series. ICML, 2022.
Other Comments Or Suggestions: No further comments or suggestions.
Questions For Authors: What would the experimental performance be like if the KL divergence in Eq.(7) and Eq.(8) is replaced by the $L_2$ norm?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your recognition and feedback. We provide answers to your concerns as follows:
**Q1**: It would be more comprehensive if the literatures on applying conformal inference to time series were discussed.
**R1**: Thank you for your valuable suggestion and the references you provided. We will incorporate a discussion of them in the paper. In this work, we specifically apply conformal inference to event sequences. Although both time series and event sequences are types of sequential data, they differ significantly [1]. Specifically, in time series, time serves only as the index to order the sequence of values for the target variable. In event sequences, time is treated as a random variable representing the timestamps of asynchronous events, commonly modeled using temporal point processes. In addition, the references you mentioned primarily focus on prediction tasks, while our work addresses the anomaly detection task.
**Q2**: What would the experimental performance be like if the KL divergence in Eq.(7) and Eq.(8) is replaced by the $L_2$ norm?
**R2**: We conducted experiments by replacing the KL divergence with the $L_2$ norm on real-world datasets, referring to this method as CADES-$L_2$. The results demonstrate that CADES with KL divergence outperforms CADES-$L_2$ in terms of AUROC.
- AUROC (\%) results
| Dataset | CADES-$L_2$ | CADES |
| --- | --- | --- |
| LOGS - Packet corruption (1\%) | 90.75 | 96.48 |
| LOGS - Packet corruption (10\%) | 94.72 | 99.48 |
| LOGS - Packet duplication (1\%) | 92.03 | 92.88 |
| LOGS - Packet delay (frontend) | 93.66 | 98.15 |
| LOGS - Packet delay (all services) | 93.49 | 99.33 |
| STEAD - Anchorage, AK | 97.95 | 99.31 |
| STEAD - Aleutian Islands, AK | 99.82 | 99.95 |
| STEAD - Helmet, CA | 97.46 | 99.30 |
Reference:
[1] Xiao S, Yan J, Yang X, et al. Modeling the intensity function of point process via recurrent neural networks. AAAI, 2017. | Summary: This paper proposes a novel test procedure based on conformal inference for detecting anomalous event sequences, with rigorous control over the false positive rate (FPR), a crucial factor for deploying anomaly detection methods in safety-critical applications. Specifically, it designs two new non-conformity scores tailored to event sequences, which capture complementary sensitivities to different abnormal patterns. By combining these scores with Bonferroni correction, the proposed method CADES overcomes the non-identifiability limitations of existing methods. Theoretically, this paper proves the validity of CADES and provides guarantees on calibration-conditional FPR. Experimental results validate the effectiveness of CADES across multiple benchmark datasets.
Claims And Evidence: This paper provides clear and convincing evidence supporting its claims. The theoretical analysis of the proposed method, including guarantees on the marginal FPR and calibration-conditional FPR, is solid. The experiments demonstrate that CADES outperforms baseline methods, overcomes the non-identifiability limitations, and effectively controls FPR.
Methods And Evaluation Criteria: The proposed method and evaluation criteria are reasonable and effective for detecting anomalous event sequences.
Theoretical Claims: The authors provide rigorous theoretical analysis, proving the validity of the proposed test procedure and ensuring calibration-conditional FPR control.
Experimental Designs Or Analyses: The experimental design is thorough, using multiple benchmark datasets with various anomaly types. The results highlight the superior performance of CADES compared to existing methods, both in terms of AUROC score and FPR control.
Supplementary Material: I reviewed all supplementary materials, including related work, mathematical proofs, and experimental details.
Relation To Broader Scientific Literature: The authors successfully fill a gap in the literature by applying conformal inference to anomaly detection in event sequences. The two proposed non-conformity scores overcome the non-identifiability limitations of previous test statistics, thereby ensuring more accurate and robust detection results. The paper has the potential to inspire other researchers to further advance the field of anomaly detection.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: Strengths:
1. Applying conformal inference and neural temporal point processes to tackle the problem of anomaly detection in event sequences is both innovative and promising. This approach is backed by rigorous statistical theories, enhancing its reliability in real-world deployment.
2. The proposed two non-conformity scores, which overcome the non-identifiability limitations of existing test statistics, are well motivated and reasonable.
3. Extensive experiments across multiple benchmark datasets validate the effectiveness and reliability of the proposed method.
Weaknesses:
Some experimental results require further clarification, such as:
1. The reason behind the poor performance of the 3S statistic under the Uniform scenario in Section 4.1.
2. In Figure 7, why are the distributions of ID scores different in the RenewalA and SelfCorrecting scenarios?
Other Comments Or Suggestions: 1. Could you provide the computational time of the CADES method?
2. How would CADES perform in scenarios where event sequences have sparse data?
Questions For Authors: Please see above.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your insightful comments and questions. We provide answers to your concerns as follows:
**Q1**: It is suggested to explain the reason behind the poor performance of the 3S statistic under the Uniform scenario in Section 4.1.
**R1**: This is because the 3S statistic is not sensitive to relatively uniform spacings (i.e. inter-event times). Specifically, according to Proposition 1 in [1], the value of the 3S statistic for a standard Poisson process realization (i.e. an ID sequence) is around 2. In the Uniform scenario, the spacings of the OOD sequences are identical. For example, when the detectability parameter $\eta = 0.5$, the spacings are all 2. In this case, the 3S statistic for an OOD sequence is equal to 2, making it unable to distinguish between ID and OOD sequences. We will add this explanation in Section 4.1.
**Q2**: In Figure 7, why are the distributions of ID scores different in the RenewalA and SelfCorrecting scenarios?
**R2**: Since the bandwidth of the non-conformity score $s_{\text{arr}}(X)$ is selected differently in the RenewalA and SelfCorrecting scenarios, the value of $s_{\text{arr}}(X)$ is different, leading to differences in the distribution of ID scores.
**Q3**: Could you provide the computational time of the CADES method?
**R3**: During the training phase, for fairness, CADES employs the same neural TPP model as the baseline methods. Thus, its training time is similar to that of the baselines. We compare the inference runtimes of CADES and the baseline methods on real-world datasets in Appendix D.1, showing that CADES achieves comparable runtimes. For convenience, we also present the experimental results below:
- Inference runtimes (in seconds)
| Dataset | 3S statistic | MultiAD-$Q_+$ | CADES |
| --- | --- | --- | --- |
| LOGS | 24.49 | 30.36 | 38.32 |
| STEAD | 19.14 | 22.27 | 25.98 |
**Q4**: How would CADES perform in scenarios where event sequences have sparse data?
**R4**: Our method is flexible, allowing any neural TPP model to be plugged into CADES. Therefore, we can use more advanced neural TPP models or models specifically designed for sparse event data to capture the distribution of the normal training data, and then apply the proposed test procedure for inference on the test data.
Reference:
[1] Shchur O, Turkmen A C, Januschowski T, et al. Detecting anomalous event sequences with temporal point processes. NeurIPS, 2021. | null | null | null | null | null | null | null | null |
Training Software Engineering Agents and Verifiers with SWE-Gym | Accept (poster) | Summary: The paper introduces SWE-Gym, a novel training environment specifically designed for developing software engineering agents. The environment comprises 2,438 real-world Python task instances extracted from GitHub issues; each instance includes a codebase, an executable runtime environment with pre-installed dependencies, and a set of unit tests for verification. The authors leverage SWE-Gym to train language model (LM) based agents and verifiers, demonstrating significant improvements in task resolution rates on standard benchmarks such as SWE-Bench Lite and SWE-Bench Verified. Key contributions include:
• A large-scale, realistic dataset that bridges the gap between existing synthetic or limited real-world benchmarks, enabling end-to-end training of agents that can handle complex repository-level software tasks.
• The development and evaluation of two agent scaffolds—one based on general-purpose prompting (OpenHands) and another on a specialized workflow (MoatlessTools). Using these, the authors fine-tune LM agents via rejection sampling fine-tuning, which leads to substantial improvements (up to 19% absolute gains in resolve rate).
• A novel approach to inference-time scaling where a verifier model is trained to estimate the success probability of candidate agent trajectories. By sampling multiple solutions and selecting the best one according to the verifier, they further boost performance to new state-of-the-art levels (achieving 32.0% and 26.0% on SWE-Bench Verified and Lite, respectively).
Claims And Evidence: While many experimental claims, such as improved agent performance through fine-tuning and inference-time scaling, are well-supported, the submission’s claim that SWE-Gym is a significant contribution primarily due to its larger scale compared to SWE-Bench is not clearly justified. The paper does not provide sufficient evidence or ablation studies to demonstrate that a larger dataset—and the inclusion of executable environments—is necessary or offers unique benefits over existing datasets, leaving that claim problematic.
Methods And Evaluation Criteria: The choice of SWE-Gym over SWE-Bench as a primary training dataset is not convincingly justified. SWE-Bench already includes multiple verified versions and has undergone human validation, ensuring high-quality benchmarks for evaluating agent performance.
Theoretical Claims: The paper do not have theoretical claims.
Experimental Designs Or Analyses: The evaluation provides evidence that the training method is effective to some extent, yet there are experimental design concerns. Specifically, while the results suggest improvements using SWE-Gym, the study lacks direct comparisons with existing methods evaluated on SWE-Bench and does not explore the impact of training on subsets of SWE-Bench itself. This omission makes it difficult to isolate the unique contribution of SWE-Gym and assess whether similar gains could be achieved using parts of the established benchmark dataset.
Supplementary Material: N/A
Relation To Broader Scientific Literature: The paper is realted SWE-bench, SWE-agent and so on.
Essential References Not Discussed: The paper do not discuss related works which also evaluated on swe-bench:
[1] Yang J, Jimenez C E, Wettig A, et al. Swe-agent: Agent-computer interfaces enable automated software engineering[C]//The Thirty-eighth Annual Conference on Neural Information Processing Systems. 2024.
[2] Xia C S, Deng Y, Dunn S, et al. Agentless: Demystifying llm-based software engineering agents[J]. arXiv preprint arXiv:2407.01489, 2024.
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: 1. Could you clarify why a larger dataset (SWE-Gym) with executable environments is necessary, given that SWE-Bench already offers multiple, human-verified versions?
2. Have you conducted ablation studies or direct comparisons to evaluate the benefit of training on SWE-Gym versus using subsets of SWE-Bench?
3. Can you provide comparisons against existing methods evaluated on SWE-Bench to isolate the unique contributions of your training approach?
4. How do you justify the chosen evaluation criteria in light of the human verification already present in SWE-Bench, and what additional benefits does SWE-Gym provide?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: > Could you clarify why a larger dataset (SWE-Gym) with executable environments is necessary, given that SWE-Bench already offers multiple, human-verified versions?
We’d like to clarify that SWE-Bench doesn’t include executable environments or unit tests for its training split. This makes it impossible to use for learning algorithms that train models through real-world action execution and observation. And training on the test split will introduce data
contamination and invalidate our results.
SWE-Gym addresses this fundamental gap by providing a separate, complementary dataset with fully executable environments that allows us to train agents on real-world software engineering tasks while maintaining the integrity of SWE-Bench as a clean evaluation benchmark. This separation is crucial for accurately measuring progress of software engineering agents.
> Can you provide comparisons against existing methods evaluated on SWE-Bench to isolate the unique contributions of your training approach?
Because our training approaches are focused on training models in real-world software engineering tasks, this requires execution environments and test cases for training task instances. SWE-Bench only includes environments and test cases for its test set, not its training set. Thus, we did not conduct these ablation studies as training on any subset of SWE-Bench’s test set would be problematic.
> How do you justify the chosen evaluation criteria in light of the human verification already present in SWE-Bench, and what additional benefits does SWE-Gym provide?
We want to clarify that throughout our paper, we exclusively use SWE-Bench (both Verified and Lite versions) as our evaluation framework. This choice is deliberate as SWE-Bench provides human-verified test cases that serve as a reliable, standardized benchmark for measuring agent performance. SWE-Gym complements this by uniquely providing an effective training dataset with executable environments that enable end-to-end agent training without contaminating our evaluation data.
> Essential References Not Discussed and Comparisons against existing methods evaluated on SWE-Bench
We appreciate the reviewer pointing out these references. We would like to clarify that we have discussed both references in our paper: SWE-agent [1] in line 102 and Agentless [2] in line 093.
However, we acknowledge that our comparison with these works could be more comprehensive. In the next version of our paper, we will expand our analysis to include a more detailed comparison of our results with these frameworks. Essentially, although these approaches demonstrate that better prompts and agent scaffolds can enhance performance, our work shows that horizontally, end-to-end training—without relying on manual prompt design—can yield even greater improvements.
We will also present their performance on SWE-Bench to better contextualize our contributions.
Additionally, for concurrent works on SWE Agent training, we include a detailed comparison in appendix section A. | Summary: This paper introduces SWE-Gym, the environment for training software engineering (SWE) agents. SWE-Gym contains 2,438 real-world Python tasks from 11 popular GitHub repositories, each equipped with pre-installed dependencies, executable runtime environments, unit tests, and natural language task descriptions. The authors demonstrate SWE-Gym's effectiveness by using it to train language model-based agents through rejection sampling fine-tuning, achieving significant improvements in resolve rate on the SWE-Bench Verified and Lite test sets. The paper also explores inference-time scaling through verifiers trained on agent trajectories sampled from SWE-Gym, showing that when combined with fine-tuned SWE agents, they achieve state-of-the-art performance of 32.0% and 26.0% on SWE-Bench Verified and Lite respectively. The authors publicly release SWE-Gym, the trained models, and agent trajectories to facilitate further research.
Claims And Evidence: The claims made in the paper are generally well-supported by evidence.
Methods And Evaluation Criteria: The methods and evaluation criteria are appropriate and well-designed for the problem:
- Dataset construction: The authors detail a rigorous process for creating SWE-Gym, including repository selection criteria, versioning, and environment setup. The validation of instances using execution-based verification ensures high-quality training data.
- Evaluation metrics: The use of standard SWE-Bench metrics (resolve rate, empty patch rate) provides consistency with prior work. Additional metrics like "stuck in loop" percentage offer valuable insights into agent behavior improvements.
Theoretical Claims: The paper does not make formal theoretical claims requiring proof verification. The claims are empirical in nature, focusing on the effectiveness of SWE-Gym for training agents and verifiers.
Experimental Designs Or Analyses: - Agent training experiments: The authors clearly specify model sizes, training procedures, and hyperparameters. The comparison between different agent scaffolds (OpenHands vs. MoatlessTools) provides valuable insights into the effectiveness of SWE-Gym across different agent architectures.
- Verifier experiments: The authors explore different training data compositions for verifiers, showing how mixing on-policy and off-policy trajectories affects performance.
- Scaling experiments: The three different scaling approaches (trajectory, instance, and repository scaling) are well-designed to isolate the impact of different aspects of training data.
- Statistical rigor: The authors report standard deviations for key metrics (Table 3), enabling assessment of result reliability.
- **One minor concern** is that the computational budget constraints limited the number of training trajectories to 491, which may affect the generalizability of some findings.
Supplementary Material: No other supplementary materials.
Relation To Broader Scientific Literature: The authors appropriately cite relevant prior work and clearly articulate how SWE-Gym addresses a critical gap in the literature.
- SWE agent development: The authors contextualize their work within recent advances in SWE agents, highlighting the limitations of current approaches due to lack of suitable training environments.
- Agent scaffolds: The paper discusses different agent design philosophies (specialized workflows vs. general-purpose prompting) and evaluates SWE-Gym's effectiveness across both paradigms.
- Post-training methods: The authors connect their work to broader trends in LLM fine-tuning techniques, including trajectory filtering approaches.
- Verifier models: The paper builds on outcome-supervised reward modeling and applies it to the software engineering domain.
Essential References Not Discussed: The paper covers most essential references in the field.
Other Strengths And Weaknesses: Strengths:
- Practical contribution: SWE-Gym addresses a critical need in the field by providing a standardized, reproducible environment for training SWE agents.
- Comprehensive experimentation: The paper explores multiple dimensions (model size, agent scaffold, training data composition).
- Scaling analysis: The clear demonstration of scaling behaviors with both training data and inference compute.
Weaknesses:
- Limited exploration of alternative fine-tuning methods: While rejection sampling is effective, comparing with other approaches like PPO or DPO would strengthen the paper.
- Computational constraints: The limited number of training trajectories (491) may not fully reveal the potential of SWE-Gym, though the authors acknowledge this limitation.
- Task diversity analysis: While the paper mentions task distribution across repositories (Fig. 2), deeper analysis of how different task types benefit from training could provide additional insights.
Other Comments Or Suggestions: Consider expanding the discussion on how SWE-Gym could be extended to other programming languages beyond Python in future work.
Questions For Authors: See "Other Strengths And Weaknesses".
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: > computational budget constraints limited the number of training trajectories to 491, which may affect the generalizability of some findings.
We would like to emphasize that our results represent the state-of-the-art open-model results at the time of submission, and used a substantial compute budget exceeding $30K. The challenge of collecting more training trajectories stems from the inherent computational intensity of agent training research, particularly when working with real-world software tasks that require full execution environments.
Importantly, our scaling experiments in Sections 5.1 and 5.2 demonstrate consistent log-linear scaling behavior, which provides strong evidence for the generalizability of our findings beyond the current dataset size. Our experiments in Section 5.2 suggest that the task diversity of SWE-Gym is not a limiting factor in further model improvements.We believe these results offer valuable insights that will motivate and guide future open research in this direction.
Additionally, we note that these 491 trajectories are positive ones used for agent training. In fact, we used over 1000 trajectories (both positive and negative) for training the verifiers.
> Limited exploration of alternative fine-tuning methods
We appreciate the reviewer's insightful suggestion regarding alternative fine-tuning methods. We would like to clarify that the primary contribution of our work is the SWE-Gym dataset itself, which addresses a critical gap in the field by providing executable environments for real-world software engineering tasks. While we demonstrate the dataset's effectiveness through recently-proposed approaches for model improvement via agent training and test-time scaling, these implementations serve as solid baselines rather than exhaustive explorations of optimal training techniques.
As one of the first papers to study SWE agent training on real-world tasks, we deliberately established strong baseline models using well-understood training approaches (Zelikman 2022, Singh 2023, Pan 2024) to provide clear evidence of SWE-Gym's effectiveness. Our results show substantial performance improvements (up to 19% absolute gains in resolve rate) using these straightforward methods, which we believe validates SWE-Gym's value as a training resource.
We agree that exploring more sophisticated training paradigms represents a promising direction for future research. By releasing our dataset, trained models, and agent trajectories publicly, we aim to facilitate such explorations by the broader research community.
> How SWE-Gym could be extended to other programming languages beyond Python in future work.
We thank the reviewer for this valuable suggestion. In our future work section, we will add a comprehensive discussion on ways to extend SWE-Gym beyond Python. We envision two promising approaches: (1) establishing a collaborative community effort to systematically collect and curate datasets across multiple programming languages; or (2) developing specialized environment-setup language model agents that can automatically analyze repositories, identify dependencies, and construct executable environments for diverse programming languages.
> I see two addition as compared to boarded literature i) Addition of verifiers in fine-tuning of agent and ii) adding unit tests for part of training data
We thank the reviewer for pointing out these contributions. We will update the related works section to include a more detailed comparison of our results with these literature.
---
Rebuttal Comment 1.1:
Comment: Thank you for the author's reply. The author addressed most of my concerns. I read the paper again carefully and I think the community needs this dataset to handle real-world agent tasks (although the dataset is a bit small). Therefore, I raised my score to **4**. | Summary: The paper proposes SWE-Gym, which is a training environment for coding agents tasked to resolve GitHub issues.
They provide a collection of 2438 python-based SWE tasks.
They used filtered fine-tuning and showed improvement by fine-tuning LLMs in their training environment.
Finally, to show effectiveness authors compared resolved rates on swebench verified and lite benchmarks.
They also trained verifiers which will be also beneficial for training via RL.
Claims And Evidence: Yeah, claims are generally clear and backed with experiments.
Methods And Evaluation Criteria: They use Resolve Rate (%), Empty Patch Rate (%), stuck in the loop rate, pass@k and best@k as criteria for improvement which makes sense. I don't think pass@k with k>=2 is of much use though.
However, to better demonstrate the effectiveness, authors can also add precision and recall which can demonstrate the effectiveness of intermediate steps as well. This will also be convincing to demonstrate effectiveness due to the verifier's rewards.
Theoretical Claims: Experimental paper; No theoretical claims.
Experimental Designs Or Analyses: Yes.
Training with SWE-gym and Scaling agent performance, both seems reasonable.
Supplementary Material: Yes
Relation To Broader Scientific Literature: I see two addition as compared to boarded literature i) Addition of verifiers in fine-tuning of agent and ii) adding unit tests for part of training data
Essential References Not Discussed: None
Other Strengths And Weaknesses: Strengths:
- The paper is clearly written and easy to parse
- Enhancing agent's performance via verifier.
- Scaling experiments
Weakness:
- I think Novelty is limited given that swe-bench training data already provide training data (if unit tests are created for swe-bench, then I don't see what's the difference)
- Doesn't include precision and recall which are essential to understand the trajectories.
- Missing analysis on how untrained LLMs will perform on the training data? Is this training data verified or not, i.e., does it contain sufficient information for the issues to be resolved? It may be the case that LLM learns to hallucinate if trained on this since there may be training data which doesn't have sufficient information
- only ~2.5K training samples which I think are small for RL-type training
Other Comments Or Suggestions: In the abstract, can you add the model name and the number of parameters along with improvement numbers?
Questions For Authors: In Table 3, how many runs are used for showing the confidence intervals?
Would be good if confidence intervals could be added to other results or comment on them; given the stochasticity of LLMs, its difficult to conclude from a single-digit number.
###Update after rebuttal:
All the weaknesses are still intact and the author's response does not address them constructively.
- Especially, they agree that the difference to the SWE-bench is limited as was pointed out in the weakness initially that if unit tests are included in the swe-bench then there is no difference.
- I think the response to stochasticity (or confidence interval) should be "evidence with data" as compared to saying "We believe the gap is already enough". I would appreciate if the authors were scientifically rigorous.
- Also, authors agree that they need significant work, especially in coming up with metrics like precision/recall in their data set before it is useful (or better than existing benchmarks).
In response to this, I suggest rejecting the paper. It's also beneficial for authors to submit a complete and usable work.
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | Rebuttal 1:
Rebuttal: We thank Reviewer aRo7 for their insightful feedback. Below, we address the primary concerns and suggestions provided.
> I think Novelty is limited given that swe-bench training data already provide training data
To clarify, SWE-Bench doesn’t include unit tests or executable environments for their training data. Its training data is insufficient for effective agent training because it lacks both executable environments and test case for those instances. Thus, we put in the work to create a suitable training environment. We decided to focus on our own set of repositories, rather than the ones included in SWE-bench, to avoid contamination concerns.
Our work is one of the first to study *training* SWE agents on real-world software engineering tasks, which is a significant departure from existing work that focuses on evaluation or prompting-based agents. We also achieve state-of-the-art open-model results on SWE-Bench, with log-linear training and test-time scaling results.
These distinctions make SWE-Gym a novel and valuable contribution to the field of software engineering agents.
> Precision and Recall Analysis:
We appreciate the suggestion to include precision and recall metrics. While our evaluation follows established protocols from prior work (Cobbe et al., 2021), we agree that precision-recall curves would provide more comprehensive insights into our verifiers' performance. We are currently working on these curves and will soon follow up with the updated results.
Regarding intermediate step evaluation, developing effective process rewards for software engineering agents is still an open problem. We would welcome the reviewer's insights on potential approaches to this problem, and would love to incorporate these in the future work.
> Clarification Regarding pass@k:
We clarify that pass@k (k≥2) solely serves as a reference point to compare our learned verifiers against an oracle verifier that always selects the optimal solution. This metric was not used for our primary results or for comparisons with other methods.
> Is this training data verified or not, i.e., does it contain sufficient information for the issues to be resolved? It may be the case that LLM learns to hallucinate if trained on this since there may be training data which doesn't have sufficient information
Our data is as verified as in the original SWE-Bench paper. As described in Lines 152-186, following SWE-Bench’s task construction pipeline, we only keep around issues that have a gold-standard PR, which indicates that the issue was able to be resolved by a human programmer based on the information provided. Also, as shown in Table 9 in Appendix, claude 3.5 sonnet without any SWE-Gym specific training achieves the reasonable performance of 29.1% on SWE-Gym Lite within 50 turns, suggesting that a significant proportion of the task in SWE-Gym is solvable.
Furthermore, our ultimate evaluation of our trained model is on SWE-Bench, not SWE-Gym, which is quite out of distribution from SWE-Gym, so any repo-specific hallucination learned through SWE-Gym wouldn't explain our improved SWE-Bench performance.
We’d be happy to add follow-up experiments if the reviewer has any thoughts on validating this hypothesis.
> In Table 3, how many runs are used for showing the confidence intervals? Would be good if confidence intervals could be added to other results or comment on them; given the stochasticity of LLMs, its difficult to conclude from a single-digit number.
To mitigate the stochasticity of LLMs, we apply a consistent random seed and sampling temperature of 0 across all experiments in Table 3. We use a bootstrap test to estimate the standard deviation by sampling 1000 different subsets of the evaluation result. Regardless of confidence estimation, we believe the performance gap before/after fine-tuning is significant.
For experiments with higher stochasticity, we plot error regions in Figures 3 and 4 to represent confidence intervals, as described in lines 357-365. Following Lightman et al. (2023), we estimate the uncertainty as detailed in Appendix B.2.
> In the abstract, can you add the model name and the number of parameters along with improvement numbers?
We thank the reviewer for the suggestion. We will update the abstract to include the model name and the number of parameters.
> only~2.5K training samples
We kindly request the reviewer to refer to our first response to Reviewer BE56, where we discuss how our dataset is already effective for model training and achieves state-of-the-art results. Importantly, our experiments in Section 5.2 directly show that the task diversity of SWE-Gym is not a limiting factor in further model improvements. | null | null | null | null | null | null | null | null |
Maximum Entropy Reinforcement Learning with Diffusion Policy | Accept (poster) | Summary: This paper focuses on adapting diffusion-based policies to maximum entropy reinforcement learning (MaxEnt RL) for better exploration. The primary obstacles are: 1) policy evaluation involves computing the log-probability over the clean actions, which for diffusion policies is non-trivial; and 2) policy improvement requires aligning the diffusion with the Boltzmann distribution of the Q-value functions, which we cannot readily access. To tackle these challenges, this paper proposes Q-weighted Noise Estimation and borrowed the probability calculation method from previous literature, which leads to a coherent algorithm that accommodates diffusion-based policies while also strictly corresponding to MaxEnt RL in theory. The evaluation is conducted on classical MuJoCo locomotion tasks, and the proposed method, MaxEntDP demonstrates improvements over both diffusion-based and non-diffusion algorithms.
## update after rebuttal
The authors demonstrated in their last response that MaxEntDP achieves significantly better performance over SAC on dog tasks from DMControl, which validates the benefit of combining diffusion policies with the MaxEnt RL framework. In light of this, I decided to raise my score, and I encourage the authors to conduct experiments on the DMControl benchmark with other baseline algorithms as well.
However, some theoretical concerns still remain. As I mentioned in my initial review, the assumption that $\epsilon_\phi$ is a well-trained noise prediction network is too strong since the policy improvement and the policy evaluation steps are interleaved. This compromises the accuracy of the estimated log-probability and may cause potential instabilities during training.
Therefore, given these observations and discussions, I will keep my evaluation as Weak Accept.
Claims And Evidence: Yes, most of the claims in the paper are supported by theory or empirical evidence.
Methods And Evaluation Criteria: MaxEntDP inherits the popularized MaxEnt RL framework, which is known to be effective for online RL due to enhanced exploration brought by entropy regularization. As for the benchmarks, although Gym-MuJoCo is the most commonly used benchmark in online RL, it is comparatively simple due to dense rewards and lower DoF. I encourage the authors to also include results on tasks like MetaWorld or DMControl, where the task is more complex and the necessity of exploration is more pronounced.
Theoretical Claims: First of all, the theory largely builds upon the analysis from MaxEnt RL. For the policy improvement step, this paper adopts an importance-weighted sampling technique similar to iDEM to reversely sample from the posterior $p(a_0|a_t)$. To reduce the variance of importance sampling, importance re-sampling is used to derive a biased objective but with reduced variance. The variance reduction property has been demonstrated by previous literature.
For the policy evaluation step, the analytical calculation for log probability is also borrowed from previous literature. However, Corollary 3.4 holds under the assumption that $\epsilon_\phi$ is a well-trained noise prediction network. During RL training, **such an assumption may not hold** since the policy improvement and evaluation are interleaved and we cannot guarantee that the policy network is well-trained. Under this circumstance, Eq. 20 does not exactly correspond to the log-probability of the actual action distribution.
Experimental Designs Or Analyses: Yes, the experiments validate the main claims in this paper, including the superior performance of MaxEntDP, the variance reduction property of QNE, and the necessity of entropy in policy evaluation.
However, from current results it seems that the benefits of MaxEntDP is marginal, and in fact some baseline methods like SAC may achieve better performances if allowed to tune its hyper-parameters. I suggest to include experiments on harder online RL benchmarks, such as DMControl, and keeping the hyper-parameter tuning effort the same across baseline algorithms, to further validate the benefits such as improved exploration.
Supplementary Material: Yes.
Relation To Broader Scientific Literature: 1. Using importance sampling to sample from the posterior distribution is similar, although not the same, to the technique used in iDEM [1].
2. The analytical computation of the log-probability is inspired by ITDiffusion [2]. The authors changed the integration variable for numerical stability.
[1] Akhound-Sadegh, Tara, et al. "Iterated denoising energy matching for sampling from boltzmann densities." arXiv preprint arXiv:2402.06121 (2024).
[2] Kong, Xianghao, Rob Brekelmans, and Greg Ver Steeg. "Information-theoretic diffusion." arXiv preprint arXiv:2302.03792 (2023).
Essential References Not Discussed: The idea of sampling from an energy-based distribution, of which we only have the potential function but no samples, is also related to model-based diffusion [1].
[1] Pan, Chaoyi, et al. "Model-based diffusion for trajectory optimization." Advances in Neural Information Processing Systems 37 (2024): 57914-57943.
Other Strengths And Weaknesses: The development of the problem, the motivation, and the method are clear and easy to follow.
Other Comments Or Suggestions: My main concerns are discussed in "Theoretical Claims" and "Experimental Designs Or Analyses", please refer to these sections.
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback and suggestions. Here, we aim to address the questions raised in the review.
>**Q1: Include results on tasks like MetaWorld or DMControl.**
We compare MaxEntDP with SAC on DMControl and MyoSuite benchmarks. The results are shown in https://anonymous.4open.science/api/repo/pics-E459/file/DMC_myo.pdf?v=538c572b.
>**Q2: The importance-weighted sampling technique in the paper is similar to iDEM.**
We would like to emphasize that although the expression of our QNE method appears similar to iDEM [1], our method exhibits significantly lower estimation variance and steady performance improvement throughout the training process over the same method replaced with iDEM (Figure 3). In addition, our method does not require gradient computation
of the Q-function as in iDEM and thus is more computationally
efficient. These advantages demonstrate the superiority of our QNE method on diffusion policy optimization.
>**Q3: Concern on the assumption of Corollary 3.4.**
We use the target Q network to train the diffusion policy. Since we adopt the Exponential Moving Average (EMA) update to smooth the change of the target Q network, it is not difficult to learn a well-trained policy network.
>**Q4: Keep the hyper-parameter tuning effort the same across baseline algorithms.**
For fair comparison, we unify shared hyperparameters (batch size, discount factor, the depth and width of hidden layers, learning rate and replay buffer size) across baseline algorithms. Other hyperparameters strictly follow the settings specified in the original paper or codebase, which are already tuned by their authors.
>**Q5: One related work model-based diffusion [2] is missing.**
We will cite it in revised versions of the paper. The model-based diffusion proposes the Monte Carlo estimation for computing the score function and uses the Monte Carlo score ascent to generate samples following the Boltzmann distribution of a given function. This method is similar to our QNE method, however, QNE has several properties which matter in RL training:
1. We use a parameterized network to approximate the scaled score function, while the model-based diffusion needs to compute the score function using Monte Carlo estimation when generating samples. Therefore, sample generation of diffusion-based diffusion is time-consuming, which will slow the training speed of the RL algorithms.
2. We adopt the ancestral sampling in DDPM to generate samples, that are more diverse than that of Monte Carlo score ascent used in model-based diffusion.
3. We propose to modify the standard Gaussian to the truncated Gaussian in QNE to model the action distribution with a bounded action space. However, model-based diffusion can not address such a bounded distribution.
---
Rebuttal Comment 1.1:
Comment: I appreciate the author's response. However, I still have some remaining concerns:
1. It's still hard to interpret the benefit of the diffusion policy from the results on DMControl and MyoSuite. Why don't you include results on harder environments, such as humanoid tasks or dog tasks from DMControl?
2. The assumption of a fully converged policy network at every iteration still appears overly restrictive to me, although it seems that its effect on performance is negligible (Figure 6.b). Besides, what is the cost of computing log-probability? Is it computationally heavy?
3. For Q4, I was actually referring to the temperature coefficient $\beta$. Is it possible to auto-tune this hyperparameter like SAC?
---
Reply to Comment 1.1.1:
Comment: Thanks for your constructive feedback. Below we would like to address your remaining concerns.
>**Q1: Add experiments on harder environments, such as humanoid tasks or dog tasks from DMControl.**
We compare MaxEntDP with SAC on humanoid and dog benchmarks from DMControl, showing the results in https://anonymous.4open.science/api/repo/pics-E459/file/DMC_plus.pdf?v=9a37bdf8. Our MaxEntDP outperforms SAC on these challenging high-dimensional RL tasks.
>**Q2: The cost of computing log-probability.**
In the following table, we list the training time on HalfCheetah-v3 benchmark with different $N$, which is the sample number per timestep for probability estimation. Since the samples can be processed parallelly, probability computation does not cause a heavy burden in our experiments.
| $N$ | 0 | 10 | 20 | 50 | 100 |
| ----------------- | ------ | ------ | ------ | ------ | ------ |
| Training time (h) | 2.3 | 2.7 | 3.4 | 3.9 | 5.8 |
>**Q3(a): Tune temperature coefficients for SAC.**
We initialize SAC with the same temperature coefficient as MaxEntDP and keep it fixed during the training process. The corresponding results are displayed in https://anonymous.4open.science/api/repo/pics-E459/file/comparison_fixed_temp.pdf?v=a29784b9. MaxEntDP still outperforms the SAC variant with fixed temperature coefficients.
>**Q3(b): Is it possible to auto-tune temperature coefficients like SAC?**
SAC sets a target entropy ($-|\mathcal{A}|$) and updates the temperature coefficient based on the distance between the current entropy and its target. Since MaxEntDP also computes the log probability in the policy evaluation step, it is feasible to apply the same method to auto-tune the temperature coefficient. However, in our experiments, we cannot find a unified target entropy that can perform well in all environments. That may be because diffusion policy is much more complex than Gaussian policy, making the simple linear function unable to express the relation between the best target entropy and the dimension of the action space. We leave exploring suitable methods to auto-tune temperature coefficients for MaxEntDP to future work. | Summary: This paper introduces Maximum entropy Reinforcement Learning with Diffusion Policy (MaxEntDP). More specifically, this method proposes solutions to the well-known problems on how to approximate the target distribution composed of the exponential of the Q-function and how to calculate the log likelihoods of the marginal distribution of diffusion models. Both problems are essential to train in the maximum entropy RL framework and need careful treatment when using diffusion models.
## Updated after rebuttal
I adjusted my score after the authors have clarified my questions and concerns
Claims And Evidence: This work proposes a solution on approximating the likelihood of the marginal distribution of diffusion models and proposes an objective to train the policy's score function without evaluating the gradient of the Q-function.
The claims are backed up with mathematical derivations and proofs and ablation studies.
Methods And Evaluation Criteria: The considered benchmark makes sense, though there are more sophisticated tasks in RL that should also be considered, e.g., high-dimensional control tasks from the DeepMind control suite are harder learning tasks that usually have a longer horizon than the mujoco environments from gym.
Theoretical Claims: I did not check the proofs for Theorem 2.1
I checked the proofs to Theorem 3.1, Theorem 3.2, Theorem 3.3, which seem to be fine to me.
I also checked the proof for the iDEM derivation (A4) and have a concern:
Eq 59 pulls the gradient w.r.t. a_0 in front of w(a_0), but this is mathematically not correct as w(a_0) is depending on a_0 as well. How is this justified?
I also checked the proof for Theorem 3.5. but have difficulties understanding it: How is the replacement of the integration domain in Eq. 65 justified? I don't understand this step. Intuitively, it is a very strong change integrating from 0 to 1 instead of from -infinity to infinity. It is difficult to judge whether the approximation of calculating the log likelihood is a good approximation. My concern is also strengthened by the fact that the learning curves with the entropy bonus are actually not significantly better than the learning curves without the entropy (Fig. 5 in Section 5).
Experimental Designs Or Analyses: From my understanding there are no experimental environments designed specifically for this work.
Supplementary Material: I reviewed some of the proofs as mentioned before and skimmed over Appendix B.
Relation To Broader Scientific Literature: The work relates well to prior findings and distinguishes itself from prior works.
Essential References Not Discussed: To my knowledge, all essential references are discussed w.r.t. diffusion models. However, the paper lacks recent works in the field of maximum entropy reinforcement learning such as Bigger Regularized optimistic (BRO) [1] and CrossQ [2].
[1] Nauman, Michal, et al. "Bigger, Regularized, Optimistic: scaling for compute and sample efficient continuous control." The Thirty-eighth Annual Conference on Neural Information Processing Systems.
[2] Bhatt, A., et al. "CrossQ: Batch Normalization in Deep Reinforcement Learning for Greater Sample Efficiency and Simplicity." International Conference on Learning Representations (ICLR). 2024.
Other Strengths And Weaknesses: In general, the paper is well written and the reader can follow it. It is well-motivated and considers an important aspect of research for using diffusion models in reinforcement learning.
However, there are several points that need clarification:
- Lines 220 following mention that the distribution for the noisy actions p(a_t) is unknown but can be substituted with other distributions with full support. What does this mean? How is this justified given that the expectation is not correctly evaluated? Covering full support can be problematic for high-dimensional action spaces. This seems to be a major issue but was not discussed. Additionally, the considered environments do not consider high-dimensional action spaces, so it is hard to judge whether this is a bottleneck.
- The paper states that a truncated standard Gaussian is used to keep the sampled actions within the bounds. This, however, changes the likelihood to my understanding, but no correction is mentioned. For example, SAC applies the change of variables and corrects the likelihood in this way. This is however not considered here.
- While there are some tasks where the proposed method performs better compared to Gaussian-based policies, it does not perform better than other Diffusion-Based methods.
- More experiments on more sophisticated tasks with longer horizons and higher dimensional action spaces, such as those from the Deepmind control suite, myo suite, or the humanoid bench, would strengthen the paper.
- The learning curves plot the "training step" but it is never mentioned what this means. I am confused as RL papers usually plot the number of environment interactions instead of some other metric. This needs clarification.
- Minor note: K-L instead of KL in line 119 right column
Other Comments Or Suggestions: please see my comments from before.
Questions For Authors: please see my comments from before.
------ I adjusted my score after the authors have clarified my questions and concerns ------
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review and constructive feedback. Below, we will address your comments and hope that this clarifies the context of our work.
>**Q1: Consider more sophisticated RL tasks.**
We compare MaxEntDP with SAC on DMContorl and MyoSuite benchmarks. The results are shown in https://anonymous.4open.science/api/repo/pics-E459/file/DMC_myo.pdf?v=538c572b.
>**Q2: Justification of Eq 59.**
We are sorry for omitting the parentheses on $\bigtriangledown_{a_0} w(a_0)$ in Eq 59, which causes trouble in understanding. In fact, we apply the integration by parts formula (line 777) to derive Eq 59, i.e., $\int_{\Omega} v \nabla u \, d\Omega = \int_{\Gamma} u v n \, d\Gamma - \int_{\Omega} u \nabla v \, d\Omega$ (consider $w(a_0)$ as $v$ and $N(a_0)$ as $u$; the term $\int_{\Gamma} u v n \, d\Gamma$ is zero since $w(a_0)$ and $N(a_0)$ decay rapidly at infinity).
>**Q3: Justification of Eq 65.**
In Eq 65, we change the integration variable from $\alpha_t$ to $\sigma(\alpha_t)$ using the equation $\alpha_t=\log \frac{\sigma(\alpha_t)}{1-\sigma(\alpha_t)}$, as illustrated in line 807. Then the integration domain of Eq 65 has been changed to the varying range of the new integration variable $\sigma(\alpha_t)$, which is $(0,1)$.
>**Q4: The accuracy of the log-likelihood approximation.**
Due to the length limit of rebuttal, please refer to our response to the Q2 of reviewer fZmT.
>**Q5: The paper lacks recent work such as BRO and CrossQ.**
We will cite them in future versions. Briefly, the two methods propose some improvements to the SAC algorithm. BRO develops the advanced BroNet architecture, regularization, and optimistic upper-bound Q-value approximation. CrossQ removes the target networks and adopts Batch Normalization for high sample efficiency. Since the improvements of the two methods are also compatible with our MaxEntDP algorithm, it is interesting to see how the performance of MaxEntDP can be enhanced by combining these improvements.
>**Q6: Why can the noisy actions** $p(a_t)$ **be substituted with other distributions?**
As shown in Eq17, the training target $\epsilon^{*}$ of the noise prediction network depends only on its input $(a_t,\alpha_t)$, which means that we can minimize the L2 loss for each input point. Thus, the noisy action samples from other distributions can also be used for training. This can be seen as a kind of "off-policy" training. Moreover, in the paper we diffuse the action samples in the replay buffer to obtain the surrogate distribution of $p(a_t)$ (line 295). As the training progresses, the actions generated by the diffusion policy will be closer to the target distribution $p(a_0)$, then the diffused distribution of these actions will also be closer to the true distribution of $p(a_t)$, consequently, the training will be more and more "on-policy".
>**Q7: The reason for using a truncated standard Gaussian in RL tasks with bounded action spaces.**
We would like to point that MaxEntDP and SAC model the bouded policy distribution in different ways. SAC outputs an unbounded distribution and transforms it to a bounded one by applying a tanh function, while MaxEntDP directly models a bounded distribution following the common practices in the image generation domain. To learn this bounded distribution, we need to change the target distribution to $p(a_0)\propto\exp(\frac{1}{\beta}Q(a_0))I_{a_0\in[lb,ub]}$, where $I_{a_0\in [lb,ub]}$ is an indicator function to check if $a_0$ lies within the bound. Similar to lemma 3.1, we can obtain that the reverse transition distribution has be changed to $p(a_0|a_t)\propto\exp(\frac{1}{\beta}Q(a_0)) I_{a_0\in[lb,ub]}
\mathcal{N}(a_0|\frac{1}{\sqrt{\sigma(\alpha_t)}} a_t,\frac{\sigma(-\alpha_t)}{\sigma(\alpha_t)}I)$. Taking the indicator function and the Gaussian distribution together, we can derive a truncated Gaussian distribution with a bound of $[lb,ub]$. Then, the conditional distribution $p(a_0|a_t)$ can be seen as a truncated Gaussian distribution of $a_0$ weighted by the exponential of the Q-function. Therefore, for diffusion policy training with a bounded action space, we only need to modify the standard Gaussian distribution in the QNE method to a truncated Gaussian distribution.
>**Q8: Performance comparison with other Diffusion-Based methods.**
We would like to emphasize that none of the competing diffusion-based methods performs consistently well on all tasks (DACER struggles on Walker2d, and others underperform on HalfCheetah). However, our MaxEntDP displays consistent sample efficiency and stability across all tasks.
>**Q9: The meaning of "training step" in the learning curves; K-L in line 119.**
We use "training step" because both DACER and QVPO use it as the x-axis of the learning curve, which means the training steps of the actor/critic network. Since we use a UTD of 1, it is the same as the number of environment interactions. And we will modify the 'K-L' in line 119 in revised versions.
---
Rebuttal Comment 1.1:
Comment: Thank you for your responses, which have clarified most concerns. However, I still have some questions.
- Q6: I see the authors' point that 'other samples can be used for training'. The paper states that 'the true distribution of noisy actions ... may be inaccessible, we can substitute it with other distributions with full support'. My question is more related to the fact that another distribution with full support is used. How is it guaranteed that this distribution has full support? Given that an approximation to p(a_t) is used, shouldn't Eq. 18 involve some techniques to correct the expectation? Or did I misunderstand something in this case?
I appreciate the authors' efforts in running more experiments on the DMC suite. It seems that there is no big benefit against a Gaussian policy in this case. I would assume this is due to the rather low-dimensional tasks. Did the authors also analyze higher-dimensional tasks like the dog? Intuitively, a diffusion policy might show benefits for higher-dimensional learning tasks both in the observation and action dimensions.
---
Reply to Comment 1.1.1:
Comment: Thanks for your valuable feedback. Here we would like to answer your remaining questions.
>**Q1: How is it guaranteed that** $p(a_t)$ **has full support?**
In MaxEntDP, We use $a_t=\sqrt{\sigma(\alpha_t)}a_0+\sqrt{\sigma(\alpha_t)}\epsilon_t$ to obtain noisy action samples (line 295), where $a_0$ is an action sampled from the replay buffer and $\epsilon_t \sim \mathcal{N}(0,I)$. Since the distribution of $\epsilon_t$ has a full support over $\mathbb{R}^{|\mathcal{A}|}$, the distribution of $a_t$ also has a full support over $\mathbb{R}^{|\mathcal{A}|}$, i.e., we can sample any $a_t \in \mathbb{R}^{|\mathcal{A}|}$ at a non-zero probability.
>**Q2: Given that an approximation to** $p(a_t)$ **is used, should Eq. 18 involve some techniques to correct the expectation?**
When the network capacity is sufficient, whatever distribution of $p(a_t)$ with full support is used, the minimizer of Eq. 18 is the same, which is $\epsilon_{\phi}(a_t,\alpha_t)=\epsilon^*(a_t,\alpha_t)$ for all $(a_t,\alpha_t)$. Therefore, in this case, we can use another distribution of $p(a_t)$ with full support without correction. When the network capacity is insufficient, changing $p(a_t)$ also changes the minimizer of Eq. 18, therefore, just as you said, correction is needed to assign proper weights to each $(a_t,\alpha_t)$. However, because the true distribution of $p(a_t)$ is unknown, applying a correction to the weights of $a_t$ is intractable. In addition, in our experiments, we do not find that the lack of such correction causes trouble in diffusion policy optimization.
>**Q3: Add experiments on higher-dimensional tasks.**
We compare MaxEntDP with SAC on humanoid and dog benchmarks from DMControl, showing the results in https://anonymous.4open.science/api/repo/pics-E459/file/DMC_plus.pdf?v=9a37bdf8. Our MaxEntDP outperforms SAC on these challenging high-dimensional RL tasks. | Summary: This paper introduces MaxEntDP, a new online diffusion-based RL algorithm that integrates diffusion models into the maximum entropy framework. The method proposes a Q-weighted noise estimation for policy improvement and use numerical integration to estimate action probability for policy evaluation. Experiments are conducted on MuJoCo environment, comparing with classic MaxEnt algorithms and other online diffusion-based algorithms, verifying the effectiveness of the proposed method.
Claims And Evidence: Overall, the authors make several claims but only partially supported. First, in Section 3, the authors claim that MaxEnt RL with expressive diffusion model can capture multimodal behaviors. This is evidenced by the multi-goal 2D toy example. However, I am suspicious how this will generalize to high-dimensional tasks because the goals in the toy example is easy to explore. For example, in DDiffPG [1] it provides several high-dim multi-goal tasks, and it is interesting to see if MaxEnt RL can learn different solutions too. Second, the authors claim that by maximizing the entropy MaxEnt RL improves the exploration. However, there is no experiment or analysis to support it. It would be great to include some visualizations or state coverage in the experiment. Moreover, the authors say numerical integration provides an effective approximation and MaxEntDP achieves the optimal MaxEnt policy given sufficient model capacity. Both are not supported without any evidence or comparison.
[1] Li, Steven, et al. "Learning multimodal behaviors from scratch with diffusion policy gradient." Advances in Neural Information Processing Systems 37 (2024): 38456-38479.
Methods And Evaluation Criteria: The paper proposes a novel Q-weighted noise estimation for diffusion policy optimization and an approximation for the diffusion model action probability. One problem is that both parts require monte-carlo sampling, which possess a challenge on estimation accuracy. In appendix the authors provide ablations on the key hyper-paramaters, in which the high variance verifies the instability. Moreover, the paper provides two tricks to stabilize and improve the performance. The action selection for inference is a common approach in offline RL but unusual in online setup. However, most of baselines do not use this action selection for a fair comparison as shown in appendix.
The method is evaluated only on MuJoCo tasks, which has almost no exploration challenge. I would suggest to experiment on more challenging environments, e.g., myosuite to showcase the exploration benefit from the MaxEnt framework.
Theoretical Claims: I have checked the proofs in the paper.
Experimental Designs Or Analyses: The experiments confirm the effectiveness of the proposed method. However, it does not include a comparison with the current state-of-the-art BRO [2]. Additionally, its performance shows only a marginal improvement over existing diffusion-based baselines.
[2] Nauman, Michal, et al. "Bigger, regularized, optimistic: scaling for compute and sample-efficient continuous control." arXiv preprint arXiv:2405.16158 (2024).
Supplementary Material: I have checked the appendix, including the proofs and additional experiments.
Relation To Broader Scientific Literature: The paper is highly relevant to RL and diffusion model.
Essential References Not Discussed: One key claim in Section 3 and Section 5.1 is to learn a multi-modal policy, however one related work DDiffPG is missing [1].
When comparing to learning curve, the current SOTA BRO is missing [2].
Other Strengths And Weaknesses: The paper is well-written and easy-to-follow.
Other Comments Or Suggestions: N/A
Questions For Authors: See above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback and suggestions. Below we will address each concern raised in the review.
>**Q1: Can MaxEntDP learn different solutions in high-dimensional multi-goal tasks from DDiffPG? And provide evidence for improved exploration.**
We tested MaxEntDP and SAC on four versions of the AntMaze environment in DDiffPG (using dense rewards) and visualized the generated trajectories in Figure 1 of https://anonymous.4open.science/api/repo/pics-E459/file/antmaze.pdf?v=eebbbe1f. The results confirm that MaxEntDP can learn diverse behavior modes even in challenging high-dimensional RL tasks, while SAC fails to learn different solutions.
In addition, we visualized state coverage for MaxEntDP and SAC (see Figure 2 in the above link). The results show that MaxEntDP explores multiple behavior modes and exhibits broader state coverage. This highlights the advantage of using a diffusion policy for efficient exploration.
>**Q2: No evidence supporting numerical integration as an effective approximation.**
According to the Law of Large Numbers, numerical integration accuracy improves with larger diffusion steps $T$ and sample numbers $N$. To exhibit the accuracy of different $T$ and $N$, we conducted experiments on a 2D toy example (a mixture of four Gaussians) and presented the results in https://anonymous.4open.science/api/repo/pics-E459/file/probability.pdf?v=86f81805. As shown in the figure, our setting in the paper ($T=20, N=50$) provides an effective probability approximation. Moreover, when fewer samples ($T=20, N=10$) are used, despite some estimation errors, our method still assigns higher values to high-probability regions, which can be considered as an intrinsic curiosity reward to promote exploration on the action region with low policy probability.
>**Q3: The claim that MaxEntDP achieves the optimal MaxEnt policy with sufficient model capacity.**
We do not make such a claim in the paper. Instead, we argue that incorporating a diffusion policy into MaxEnt RL improves exploration and moves the policy closer to the optimal solution, as proposed in the abstract. The 2D toy example in the paper reveals that MaxEntDP can explore the state-action space efficiently and finally learn a multimodal policy. The Mujoco experiments show performance improvements over other generative models and diffusion-based approaches, supporting our main claim.
>**Q4: Concern about Monte Carlo sampling.**
Your concern is reasonable since a higher estimation accuracy requires more Monte Carlo samples. However, in our experiments, we find that a small sample number of 1000 (for both diffusion policy optimization and action probability estimation) is enough to obtain a good performance. Additionally, as the samples can be processed parallelly and GPU throughput continues to improve, Monte Carlo sampling is not a bottleneck.
>**Q5: Necessity of action selection and its application to all baselines.**
Action selection is crucial because our method approximates the exponential of the Q-function, generating both high- and low-return actions. While beneficial for exploration, this can reduce test-time performance. To mitigate this, we apply action selection (only in testing) to pick the action with the highest Q-value. Similar techniques are used in SAC (take the Gaussian mean) and EBFlow. To ensure fairness, we applied action selection to all diffusion-based baselines (with a candidate number of 10) and reported results in https://anonymous.4open.science/api/repo/pics-E459/file/comparison_diffusion_selection.pdf?v=a952c896. Our MaxEntDP continued to demonstrate high sample efficiency and stability across all tasks.
>**Q6: Add experiments in more challenging environments.**
We compared MaxEntDP with SAC on the DMControl and MyoSuite benchmarks. Results are shown in https://anonymous.4open.science/api/repo/pics-E459/file/DMC_myo.pdf?v=538c572b.
>**Q7: Include a comparison with BRO.**
BRO improves SAC with advanced network architectures, regularization, and optimistic Q-value estimation. Since BRO's enhancements can be applied to any baseline (non-diffusion and diffusion-based) and MaxEntDP, a direct comparison would be unfair. However, integrating BRO's improvements into MaxEntDP is an interesting future direction.
>**Q8: Performance gains of MaxEntDP over existing diffusion-based baselines.**
None of the competing diffusion-based methods consistently outperforms across all tasks—e.g., DACER struggles on Walker2d, and others underperform on HalfCheetah. In contrast, MaxEntDP exhibits consistent sample efficiency and stability across all tasks.
>**Q9: Missing reference to DDiffPG.**
Thanks for your valuable suggestion. We will cite it in future revisions. While DDiffPG explicitly distinguishes different behavior modes and learns a Q-function for each mode, MaxEntDP employs a single Q-function, making it simpler and more computationally efficient. | null | null | null | null | null | null | null | null |
Gaussian Mixture Flow Matching Models | Accept (poster) | Summary: This paper introduces GM-Flow, a novel variant of flow matching that explicitly parameterizes the entire velocity distribution using a mixture of Gaussians, rather than learning only the mean velocity as in conventional approaches. Unlike CFG, which extrapolates class-conditional and unconditional velocities, GM-Flow employs GM morphing to obtain the class-guided velocity, effectively avoiding extreme cases that lead to oversaturation in generated images. Additionally, the paper proposes a GM solver for sampling. The method has been validated through large-scale experiments on ImageNet, demonstrating strong scalability.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: See Weaknesses
Experimental Designs Or Analyses: Yes.
Supplementary Material: No.
Relation To Broader Scientific Literature: This method extends conventional flow matching and addresses a key limitation by explicitly parameterizing the velocity distribution, offering a more robust approach to handling class-guided velocity.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: ## Strengths
1. The GM morphing approach is quite novel.
2. The performance of the method is quite impressive and the method seems to scale well to large-scale datasets and models.
## Weaknesses and Questions
1. This paper lacks a direct experimental comparison with GMS (Guo et al. 2023).
2. When using GM-SDE and perform spectral sampling, it seems to entail inverse KR transport. If I understand correctly, this step needs to be solved numerically, am I correct? In this case, the numerical steps should also be considered as a part of the computational complexity, instead of simply use NFE as the complexity metric. Please provide more detailed explanation on how the spectral sampling is performed and how complex it is.
Most of my weaknesses/questions come from the Section 3.2. This section needs to be expanded, or a section in Appendix should be added to further elaborate the content in Section 3.2 in details.
3. Why constructing a Gaussian mask using two surrogate Gaussians instead of the full GMs? Is it due to computational complexity when utilizing the full GMs?
4. In Equation (7), how to interpret $g(u)$ in the context of "class-conditional guidance"? I mean, CFG basically add a shift $w(\mu_c - \mu_u)$ to the unconditional "basis vector" $\mu_u$, right? But in Equation (7), the "basis vector" is $\mu_c$, and the shift is $\tilde{w} (\mu_c - \mu_u) / ||\mu_c - \mu_u|| \sqrt{D}$. What is the intuition behind this design of guided Gaussian?
5. In Equation (7), why do you scale the shift $\Delta \mu_n$ by $s_c$?
6. In Equation (7), how do you obtain $\mu_c$ and $\mu_u$ from the model? Do they obtained by minimizing KL divergence between the surrogate and the learned GM $q_\theta(u|x_t)$?
6. You said you use the orthogonal projection trick to obtain better sampling quality, is this trick applied to Equation (7)? Do you apply it on $\Delta \mu_n$?
8. In Equation (8), why do they have $ \mathcal{N}(\mathbf{u}; \mathbf{\mu}_c, s_c^2 I) $ in the denominator? The Appendix C only illustrate how to compute the $\frac{g(u) q _{\theta}(u|x _t, c)}{Z}$. If there is a Gaussian PDF in the denominator, will Equation (8) still yield a GM, and still analytically computable? My thought is that $ g(u) / \mathcal{N}(u; \mu_c, s_c^2 I) $ is proportional to another gaussian, so in the end it is a gaussian multiplying with a GM, thereby producing another GM. Am I correct? **Please provide the analytical solution of Equation (8).**
Other Comments Or Suggestions: Please see the section of [Other Strengths and Weaknesses].
Questions For Authors: 1. **Please upload the code during the rebuttal.**
2. In Eq (6), what is $\mathbf{x}$, shouldn't it be $\mathbf{u}$? and what is $D$, the dimension of $\mathbf{x}$? In previous sections you use $d$.
3. What do "DDPM small" and "DDPM large" mean in Figure 4 (a)? The ancestral sampling of DDPM with different standard deviations $\sigma_t = \beta_t$ or $\tilde{\beta}_t $ of the isotropic Gaussian? Please explain how you apply DDPM sampling scheme to flow models in details.
4. Is $\mathcal{L}_{\text{spec}}$ used for training even if GM-ODE will be used for sampling? How are the two losses $\mathcal{ L } _ {\text{trans }}$ and $\mathcal{L} _ {\text{spec}}$ weighted?
5. In the paragraph "$u$-to-$x_0$ reparametrization", in Line 192-193, the second parameter of the GM component should be $s_x^2$. The same for the $q_\theta(x_0|x_t)$ on the right hand side of the same page, in the paragraph "GM-SDE solver".
6. In Appendix A.4, is there any typo in the definition of $\mathbf{\mu}_{s_k}$? Please double check it.
7. In Figure 3, is $s$ shared across all spatial locations? If this is the case, then what does "the mean of per-pixel GM variances" refer to? Additionally, please explain the modelling process of $\mathbf{s}_{\text{F}}$ more detailed.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review. We have uploaded a **revised manuscript** and essential **code** in this anonymous link (full code will be released upon publication):
https://anonymous.4open.science/r/anonymous_gmflow-63FE
backup: https://limewire.com/d/CgAn9#jkBxDmC3qh
### **Weaknesses and Questions**
> 1. Comparison with GMS.
We’ve made a direct comparison with GMS and reported the results on unconditional CIFAR-10 image generation in Appendix B.3 of the revised manuscript. We train GMFlow with $K=2$ from scratch using the same U-Net backbone as GMS. We choose CIFAR-10 because re-training an ImageNet model using the backbone of GMS is expensive and time-consuming.
Here are the FID results. GMFlow significantly outperforms GMS and other moment-matching methods in few-step sampling. The competitor results are from the original GMS paper.
|**NFE**| 10 | 25 | 50 | 100 |
|-|-|-|-|-|
| DDPM large | 205.31 | 84.71 | 37.35 | 14.81 |
| DDPM small | 34.76 | 16.18 | 11.11 | 8.38 |
| SN-DDPM | 16.33 | 6.05 | 4.19 | 3.83 |
| GMS | 13.80 | 5.48 | 4.00 | 3.46 |
| **GM-SDE 2 (ours)**| 9.11 | 4.16 | 3.79 | 3.76 |
> 2. Computational complexity of KR transport
We would like to point out that KR transport is highly efficient since it's basically 1D CDF mappings. In the paper, we stated that GMFlow incurs only 0.005 sec of overhead per step, which is minimal compared to the total inference time of 0.39 sec per step (most of which is spent on DiT). This includes the KR transport. The computation complexity of per-pixel KR transport is $O(K\cdot C^2)$ ($C$ is the channel size). We have added more details on spectral sampling in Figure 9 of the revised manuscript.
> Section 3.2 needs to be expanded
Thank you for the suggestion. In the revised manuscript, we have fully rewritten and expanded Section 3.2 to make it more clear. We kindly refer the reviewer to the revised manuscript for more details.
> 3. Why construct a Gaussian mask using two surrogate Gaussians?
The reasons are twofold:
- The division of a GM PDF by another GM PDF doesn't have a closed-form solution, and would require slower numerical approximations.
- If a raw GM PDF is put into the denominator, it's likely to be unstable since the denominator can be very small in some regions.
We have already experimented with full GM formulations using adaptive importance sampling, and they are slow and unstable.
In contrast, using Gaussians is simple and performs well in our experiments.
> 4. Intuition behind this design of guided Gaussian.
In the revised manuscript (Line 183–196), we have added two paragraphs explaining the intuition behind the design.
> 5. Why do you scale the shift $\Delta \mu_n$ by $s_c$?
This is to satisfy bias–variance decomposition, such that when $\tilde{w} = 1$, all the energy in the variance $s_c$ is converted into the bias $s_c \Delta \mu_n$ , making the hyper-parameter $\tilde{w}$ more meaningful. Please refer to the revised manuscript (Line 183–189) for more details.
> 6. How do you obtain $\mu_c$ and $\mu_u$?
In the revised manuscript (Line 174–175 and Appendix A.1), we added that we approximate the conditional and unconditional GM predictions as isotropic Gaussian surrogates by matching the mean and total variance of the GM. This is equivalent to minimizing the KL divergence.
> 7. Orthogonal projection trick
We apply this to $\mu_\text{c} - \mu_\text{u}$, prior to normalization.
> 8. Please provide the analytical solution of Equation (8).
Your idea is correct. The appendix has discussed conflation of two Gaussians **with powers**. A Gaussian PDF divided by a Gaussian PDF is basically multiplication with power 1 and -1, and the result is still a Gaussian. The code is simply implemented as
```
gm_output = gm_mul_gaussian(gm_cond, gaussian_mul_gaussian(gaussian_guided, gaussian_cond, 1, -1))
```
where all the operations are analytical.
### **Additional Questions**
> 2. and 5. (Typos)
Thank you for pointing these out! These typos been fixed in the revised manuscript.
> 3. Please explain how you apply DDPM sampling scheme to flow models in details.
Added in the revised manuscript (Appendix A.5).
> 4. Is $\mathcal{L}_\text{spec}$ used for training even if GM-ODE will be used for sampling?
Yes, we use the same model for both ODE and SDE sampling. The spectrum MLP is completely isolated and does not back-propagate to the main model. Therefore, the loss weights do not matter.
> 6. Is there any typo in the definition of ${\mu_s}_k$?
We confirmed that it's correct. This is the same as converting the flow velocity to the score function, i.e., $s_t(x_t) = -\frac{1}{\sigma_t} ( x_t + \alpha_t u)$.
> Is $s$ shared across all spatial locations? What does "the mean of per-pixel GM variances" refer to?
Yes, $s$ is shared. The GM variance refers to the GM’s total variance divided by $D$, which is also dependent on $\{\mu_k\}$. We have added more details in the revised manuscript (Fig. 9).
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed explanation. Your responses have addressed my concerns, and I have updated my score accordingly.
I have carefully reviewed the entire paper, including the Appendix. In my view, the paper tackles an important problem in flow matching with a novel and well-founded methodology.
I believe it has the potential to make a significant impact, and I recommend it for acceptance at ICML.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for carefully reviewing all the technical details and providing such constructive feedback. | Summary: This paper proposes Gaussian mixture flow matching (GMFlow) model, which captures the flow
velocity distribution rather than only predicting its mean based on single-Gaussian assumption. In
addition, the paper utilizes the Gaussian mixture sampling framework to provide probabilistic
guidance via Gaussian mixture Morphing, which alleviate the image over-saturating problem by
avoiding global mean extrapolation. Corresponding SDE and ODE solvers are proposed based on
analytical velocity distributions. Finally, the paper experiments on the 2D checkerboard
distribution and on large-scale class-conditioned ImageNet dataset to show the effectiveness of
the proposed method.
Claims And Evidence: - The novelty of the paper appears to be limited. The GM idea to approximate the reverse
transition kernel of diffusion process is adopted in [1], which is the case of K=2.
- The theoretical analysis of using Gaussian mixture compared to single and bimodal
Gaussian remains to be established. The discretization error reduction could be further
explored with details including the approximation effect of the number of mixing
components K, similar to the Theorem 7 of [2].
- The minimum of the cross-entropy training loss function (Equation (5) and (6)) is not 0
and unknown due to the intractability of the entropy term. The error variance of this loss
can be sub-optimal compared to the modified flow matching loss in [3] that achieves
minimum 0 and reduces the error variance.
Methods And Evaluation Criteria: See the Questions part.
Theoretical Claims: I've read the proof for most theorems but not checked them carefully.
Experimental Designs Or Analyses: Also see the questions.
Supplementary Material: I have gone over most of the theorems in the supplementary material, not in great details though.
Relation To Broader Scientific Literature: The paper applies the GM framework to GM morphing, which is an appealing direction in
CFG to mitigate OOD problems.
Essential References Not Discussed: No, they are adequately discussed.
Other Strengths And Weaknesses: Strenghts:
- The paper is well organized with clear motives and detailed implementation
considerations.
- The approach to capture the multimodality of velocity distribution by using GM structure
and KL (with the cross-entropy part) training objective is straightforward and analytically
feasible.
- The comparisons and ablation studies are sufficient in small and large scale experiments.
Weaknesses:
- To avoid mode collapse problems of GM models, spectral sampling for image generation
is a detour compared to standard diffusion generation.
- The codes of experiments are not released.
Other Comments Or Suggestions: The theoretical and empirical understanding of choosing suitable number of mixing
components K remains unknown. The theoretical analysis of using Gaussian mixture
compared to single and bimodal Gaussian remains to be established.
Questions For Authors: - In Fig. 5 and Table 2, could the authors provide some explanations on why the best
Precision drops from K=8 to K=16? It seems to me that larger K should increase the
velocity distribution approximation capability and obtain better results. In addition, why
doesn’t the authors report the results of K=2, which correspond to the case of [1]?
- The effect of larger K is evident in the 2D checkerboard experiment presented by Figure
8(a), while Fig. 5 showed that increasing K has limited positive and even negative effect. Is
this because multimodality is not apparent in the large-scale class-conditioned ImageNet
experiment? If yes, is there any indicator/scheme (heuristic or analytic) for choosing
suitable K in experiments?
- Which “base model” is GMFlow compared to in the “Inference time” paragraph (line
376)? What does “simple” exactly mean in “based on simple arithmetic operations” (line
374), since the calculations in other SDE and ODE solvers are also based on arithmetic
operations that are not complex?
- Is it possible to design another modified flow matching loss similar to [2] that achieves
minimum 0 and lower error variance?
- Is it possible to scale the experiment to large dimensions and preserving spatial
correlations in a more natural way without using spectral sampling?
[1] Hanzhong Guo, Cheng Lu, Fan Bao, Tianyu Pang, Shuicheng Yan, Chao Du and
Chongxuan Li. Gaussian Mixture Solvers for Diffusion Models. In NeurIPS, 2023. arXiv
preprint arXiv:2311.00941.
[2] Tom Huix, Anna Korba, Alain Durmus and Eric Moulines. Theoretical guarantees for
variational inference with fixed-variance mixture of gaussians. In ICML, 2024. arXiv
preprint arXiv:2406.04012.
[3] Gleb Ryzhakov, Svetlana Pavlova, Egor Sevriugov and Ivan Oseledets. Explicit Flow
Matching: On The Theory of Flow Matching Algorithms with Applications. arXiv preprint
arXiv:2402.03232.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review. We have uploaded a **revised manuscript** and essential **code** in this anonymous link (full code will be released upon publication):
https://anonymous.4open.science/r/anonymous_gmflow-63FE
backup: https://limewire.com/d/CgAn9#jkBxDmC3qh
> The GM idea is adopted in [1]
Please note that [1] is essentially a moment matching method, which only converts its moment predictions into a bimodal GM during inference. This is fundamentally different from our GMFlow formulation, which directly learns GM parameters. In comparison, [1] employs three L2 losses for three moments, whereas we use a single loss for all Gaussian components.
Moreover, our formulation can generalize to more GM components, and we further propose few-step ODE sampling with analytical substeps and probabilistic guidance.
> Theoretical analysis of using GM
Gaussian mixtures are well known to yield better approximation with more components [2, 4]. In diffusion SDE sampling, it is also widely recognized that improving the accuracy of the reverse transition distribution can greatly improve few-step sampling. Taken together, these observations already support the claim that using more Gaussian components should lead to reduced SDE sampling errors.
The practical performance ultimately depends on how the network learns the GM parameters, a process that is difficult to analyze theoretically. Therefore, we believe that our empirical validation through experiments is more important here.
- [2] Huix et al. Theoretical guarantees for variational inference with fixed-variance mixture of gaussians.
- [4] Bishop. Mixture Density Networks
> Minimum of the cross-entropy training loss
Cross-entropy equals the sum of the data entropy and the KL divergence. Although the data entropy is intractable, it is solely determined by the data and independent of the model, thus it can be ignored in the loss function. Note that Eq. (6) omits all irrelevant constant terms.
Our goal is to minimize the KL divergence between the predicted GM and the ground truth distribution, which has a minimum of 0 when the GM perfectly fits the ground truth.
> Flow matching loss in [3] achieves minimum 0 and reduces the error variance
In general, the flow matching loss does not have a tractable form. While [3] analyzes special cases with analytic velocity fields, our work focuses on practical applications.
Theoretically, the "error variance" of stochastic flow matching loss equals the sum of denoising distribution variance and the squared error of the velocity. The former is determined by the data, whereas the latter can be reduced to near 0 with enough capacity.
Different from standard flow matching, our goal is not just to learn accurate local velocity, but also to capture the underlying denoising distribution so that a global velocity field can be analytically derived for multi-substep GM-ODE sampling.
> Spectral sampling for image generation is a detour
We want to clarify that spectral sampling is just an **optional extension** to standard diffusion sampling with **minimal overhead**.
- **Optional**: It's not used in our ODE solvers. In our SDE solvers, it's an optional trick to improve FID, whereas Precision is not affected, as shown in our ablation studies (Table 4).
- **Extension**: When setting $K=1$ and using a uniform spectrum, spectral sampling is equivalent to standard diffusion sampling. Therefore, **we consider it a more general approach instead of a detour**.
- **Minimal overhead**: The total overhead of GMFlow is less then 2% in inference time.
> Choosing suitable K
With pixel-wise factorization, multimodality is indeed less apparent. From a practice point of view, we choose the best $K$ based on the evaluation metrics of interest. We also found that early-step training NLL can serve as a potential indicator.
> Why the best Precision drops from K=8 to K=16
We believe this is due to numerical errors in spectral sampling, which grow larger with increasing $K$. We have tested the case without spectral sampling, where both $K=8$ and $K=16$ yield the same Precision of 0.946, with NFE=8.
> Results of K=2
We have added an experiment using $K=2$ on the CIFAR-10 dataset (training on ImageNet is beyond the time frame of rebuttal), and our few-step results are much better than GMS. Please refer to our response to **Reviewer A5dP**.
> Inference time
Sorry for the confusion. We have rewritten this paragraph for improved clarity: GMFlow adds only 0.005 sec of overhead per step (batch size 125, A100 GPU) compared to its flow-matching counterpart, which is minimal compared to the total inference time of 0.39 sec per step---most of which is spent on DiT.
> Scale the experiment to large dimensions and preserving spatial correlations
This will be one of our future directions. We have recently conducted preliminary experiments using patch-wise factorization instead of pixel-wise factorization, which show promising results. | Summary: The authors present a new formulation of diffusion models, termed as Gaussian mixture flow matching (GMFlow). Unlike existing diffusion models, GMFlow models the PDF of velocity by predicting the parameters of a Gaussian mixture (GM) distribution. Based on this formulation, GMFlow can generate high-quality images with fewer sampling steps. To avoid over-saturation artifacts, the authors propose a probabilistic guidance via GM morphing. In addition, the authors designs specific SDE/ODE solvers for GMFlow. They validate the effectiveness of the proposed method on both 2D toy datasets and ImageNet.
## update after rebuttal
I appreciate their thorough response and the rebuttal has addressed my concerns. I raise my score accordingly.
Claims And Evidence: Strengths
+ The claims regarding the limitations of diffusion and flow models are correct and wildly recognized in the field of generative models.
+ The proposed method is reasonable and easy to understand. It enhances the representation capacity of flow models and enables high-quality generation with fewer steps.
Weaknesses
None
Methods And Evaluation Criteria: Strengths
+ The proposed method is both intuitive and reasonable. Modeling the PDF of velocity enhances the representation capacity of flow models, leading to more effective modeling of complex distribution. Probabilistic guidance and SDE/DOE solvers are specifically designed to address the challenges of expensive sampling steps and over-saturation artifacts.
+ The evaluation benchmark is reasonably appropriate for assessing the effectiveness of the proposed method.
Weaknesses
- The proposed method relies on a pre-defined number of mixtures, which can limit the types of distributions. In addition, using an excessive number of mixtures leads to high costs and requires a large model for effective representation.
Theoretical Claims: I have check the correctness of any proofs for theoretical claims. And I do not find any major errors.
typos: In Equation (6), $x$ and $\mu_k$ are related to $x_t$, and since they are derived from the density of $\mathcal{N}(\mu; \mu_k, s^2)$, it would be better written as $\| \mu(x_0, x_k) - \mu_k(x_k)\|^2$.
Experimental Designs Or Analyses: Strengths
+ The superior results on both 2D toy datasets and ImageNet demonstrate the effectiveness of GMFlow. This paper also provides exhaustive ablation studies to assess each key component in Table 4.
+ The qualitative results across different sampling steps (Figure 2,6, and 8) make the efficiency of GMFlow more apparent and intuitive.
Weaknesses
- The evaluations on visual generation are insufficient. It is better to evaluate the proposed method across different model sizes and image sizes. For model architecture, DiT-L/2 is an option. And, the evaluating on ImageNet $512 \times 512$ would help demonstrate the generality of the proposed method.
- Altough Inception Score (IS) is mentioned in Sec. 4.2 (Evaluation protocal), no results are provided. It would be beneficial to include the IS metric to comprehensively assess the proposed method.
Supplementary Material: I have reviewed the supplementary material. The authors provide additional theoretical analysis and quality results.
Relation To Broader Scientific Literature: To my knowledge, the proposed method in this paper is new.
Essential References Not Discussed: To my knowledge, there is a work that should be discussed. The authors should include a discussion of GIVT [1] in the related work.
[1] Tschannen, Michael, Cian Eastwood, and Fabian Mentzer. "Givt: Generative infinite-vocabulary transformers." European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2024.
Other Strengths And Weaknesses: None
Other Comments Or Suggestions: None
Questions For Authors: I acknowledge the novelty of the proposed method, but I also have concerns regarding the pre-defined number of mixtures and the insufficient evaluations. Thus, I am inclined to rate this paper as weak accept.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review. We have uploaded a **revised manuscript** and essential **code** in this anonymous link (full code will be released upon publication):
https://anonymous.4open.science/r/anonymous_gmflow-63FE
backup: https://limewire.com/d/CgAn9#jkBxDmC3qh
> Pre-defined number of mixtures can limit the types of distributions. Using an excessive number of mixtures leads to high costs and requires a large model
With pixel-wise factorization, a small number of Gaussians is enough to capture the distribution of a single latent pixel. The experiments also reveal that image generation metrics generally saturate at $K=8$.
On the other hand, GMFlow only expands the output channel of the final layer to accommodate the mixture components. With $K=8$, currently, the final layer has an output channel size of $(4 + 1) \times K = 40$, where 4 is the latent channel size. So the computational cost of increasing K is still minimal when compared to that of the remaining layers in DiT, especially considering the hidden dimensions of DiT are usually much larger.
> typos
Thank you for pointing this out. We have fixed the typos in the revised manuscript.
> Evaluating the proposed method across different model sizes and image sizes
We have added an experiment on unconditional CIFAR-10 image generation (please refer to our response to **Reviewer A5dP**), which employs different architecture (U-Net), model size (53M), and image resolution (32x32) from the DiT used in the paper.
Unfortunately, training 512-res GMFlow and baseline methods are too computationally expensive and time-consuming, and is beyond the time frame of the rebuttal.
> IS metric
Following previous work [3, 4], we employ Precision as the quality measurement instead of IS. This is because IS metric does not capture the image quality under a large guidance scale---in particular, IS is not sensitive to over-saturation. In fact, both our GMFlow model and flow matching baseline reach IS > 500 under the highest guidance scale evaluated, despite that the baseline has clearly worse visual quality.
- [3] Sadat et al. Eliminating Oversaturation and Artifacts of High Guidance Scales in Diffusion Models
- [4] Kynkäänniemi et al. Applying Guidance in a Limited Interval Improves Sample and Distribution Quality in Diffusion Models
> Discussion of GIVT
Thank you for suggesting this interesting paper. In the revised manuscript, we have added a paragraph discussing GIVT and other generative models using GMs. | Summary: This paper proposes a Gaussian mixture (GM) flow matching (FM) model. The traditional FM model uses a Gaussian modeling velocity field, while the proposed GMFlow method in this paper uses a Gaussian mixture modeling velocity field. The author shows that GMFLow can produce better results with fewer steps. The author also proposes an SDE/ODE sampling algorithm suitable for GMFlow, as well as a probabilistic CFG algorithm, which can alleviate the over-saturation problem caused by traditional CFG.
## update after rebuttal
I thank the authors for their response and I will maintain my score as Accept.
Claims And Evidence: The paper claims that GMFlow has a lower number of sampling steps and that probabilistic guidance can alleviate oversaturation, which has been supported by experiments.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: I have currently checked the correctness of Theorem 3.1 in the main paper.
Experimental Designs Or Analyses: The experimental design is mostly reasonable, especially the 2D experiment clearly shows the advantage of GMFlow in few steps.
The experiment in Table 2 is not perfect. The results show that increasing K has benefits, but the boundary is not seen.
Supplementary Material: I have only reviewed Section B of the Supporting Material. I will review the complete Supplementary Material later.
Relation To Broader Scientific Literature: No.
Essential References Not Discussed: There is relatively little discussion of related work in this paper, and it is suggested to increase the discussion on the combination of GM with other generative models such as VAE.
Other Strengths And Weaknesses: Strengths
+ The method in this paper is novel and promising to me
+ The method proposed in this paper has many technical contributions. In addition to combining GM and Flow matching, it also proposes probabilities guidance and a new SDE/ODE sampler
Weaknesses
- Table 3 is the main experimental result, and FID is not used as a reference indicator
Other Comments Or Suggestions: - In Theorem 3.1, w.r.t. with
- In Line 70, $x_T \approx \epsilon$ does not take into account VE diffusion
Questions For Authors: - Will the results be better if $sI$ is replaced by a diagonal matrix?
- Does pixel-wise factorization mean that each latent pixel is treated as a Gaussian component? How does this relate to K?
- Does pixel-wise factorization limit the method to a fixed resolution setting?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review. We have uploaded a **revised manuscript** and essential **code** in this anonymous link (full code will be released upon publication):
https://anonymous.4open.science/r/anonymous_gmflow-63FE
backup: https://limewire.com/d/CgAn9#jkBxDmC3qh
> Boundary of experiment in Table 2
Unfortunately due to limited computing resources, we were not able to conduct experiments for K values beyond 16. That being said, as shown in Fig. 5 (or Fig. 7 in the revised manuscript). we have shown that the FID and Precision metrics generally saturate when $K\ge 8$. The NLL values in Table 2 do not directly reflect image generation quality, and they are more relevant to some downstream applications (e.g., score distillation-like test-time optimization), which is not the main focus of this work. We are happy to add more discussions in the final draft.
> Discussion on the combination of GM with other generative models such as VAE
Thank you for the suggestion. In the revised manuscript, we have added a paragraph discussing GM GANs and autoregressive Transformers. In the context of modern generative AI, however, VAEs alone are often regarded not as generative models but rather as representation compressors. Some GM VAE papers are focused on clustering instead of generation [1, 2]. We are happy to add more related works in the final draft.
- [1] Dilokthanakul et al. Deep Unsupervised Clustering with Gaussian Mixture Variational Autoencoders.
- [2] Jiang et al. Variational Deep Embedding: AnUnsupervised and Generative Approach to Clustering
> FID in Table 3
In Table 3, we aim to evaluate the saturation of different methods at their best precision, which typically requires large CFG values that are beyond reasonable ranges for FID comparison. The main comparisons on both FID and Precision are presented in Fig. 4 (or Fig. 6 in the revised manuscript).
> In Line 70, $x_T \approx \epsilon$ does not take into account VE diffusion
Thank you for pointing this out. The manuscript states "A typical diffusion model", which does not cover all cases. In practice, mainstream diffusion models are trained with VP schedules, and they can be rescaled into VE diffusions during sampling (which is how the popular EDM Euler solver is implemented).
> Will the results be better if $sI$ is replaced by a diagonal matrix?
We have tried predicting pixel-wise variances instead of the global $s$ in image generation. The cons outweigh the pros: it makes training less stable since it's more likely to have small variances in the denominator of the loss function, and the benefits for NLL can be equally achieved by increasing $K$.
> Does pixel-wise factorization mean that each latent pixel is treated as a Gaussian component? How does this relate to K?
Yes. Each latent pixel (4 channels) is a 4-D GM of $K$ components. Please refer to the network architecture in Fig. 3 (or Fig. 4 in the revised manuscript).
> Does pixel-wise factorization limit the method to a fixed resolution setting?
We don’t think so. We think the opposite might be true: without pixel-wise factorization, the entire latent grid would be treated as a high-dimensional GM, which would complicate diverse resolution generation due to varying dimensions of GM. We think pixel-wise factorization makes diverse resolution generation easier since the per-pixel dimensions are fixed. | null | null | null | null | null | null |
EARTH: Epidemiology-Aware Neural ODE with Continuous Disease Transmission Graph | Accept (poster) | Summary: This paper introduces a new epidemic forecasting framework, EARTH, which combines neural ODEs with traditional compartmental models. The core idea behind EARTH is to address common forecasting challenges such as irregular data sampling and missing values by integrating two main modules: a local transmission model based on Epidemic-Aware Neural ODEs and a Global-guided Local Transmission Graph that incorporates global trends. Through experiments on real-world datasets, like COVID-19 data, the authors demonstrate that EARTH achieves superior performance compared to existing methods, suggesting that it could play a crucial role in enhancing the accuracy of epidemic prediction.
## update after rebuttal
The authors' responses have addressed my concerns. After reviewing the comments as well as the discussions, I'd like to raise my score to 5.
Claims And Evidence: The paper’s claims are supported by a variety of evidence that extends beyond just experimental results. 1) Experimentally, the model is evaluated on several real-world datasets—including COVID-19 data—with consistent improvements over baselines in metrics such as RMSE and MAE. 2) Detailed ablation studies demonstrate that both the neural ODE component and the global transmission graph are crucial to achieving high performance, as their removal significantly degrades the results. 3) Additionally, hyperparameter sensitivity tests confirm that the model performs robustly across a range of settings, and clear visualizations of predicted versus actual trends further validate its practical applicability.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are well-suited to the problem. Additionally, using real-world datasets like COVID-19 data with standard metrics such as RMSE and MAE offers a robust framework for evaluation.
Theoretical Claims: I reviewed the theoretical claims, especially those related to the integration of neural ODEs with classical epidemic models. The manuscript’s mathematical formulation is generally free from notable errors, and the conceptual foundations are sound and well-motivated. While the discussion primarily leverages established principles without extensive formal proofs, the theoretical framework is sufficiently robust, and the empirical results further reinforce its validity. Additionally, including an analysis of computational complexity could provide deeper insights into the scalability and practical applicability of the proposed methods, but this is a minor point.
Experimental Designs Or Analyses: The experimental designs and analyses in this paper are robust and well thought out. The authors employ multiple real-world datasets, including COVID-19 data, and standard metrics such as RMSE and MAE to evaluate performance. Comprehensive ablation studies in Table 3 effectively isolate the contributions of each model component, while sensitivity analyses from Figure 5 confirm the stability of the model across various hyperparameters. The clear visualizations in Figure 3 and Figure 4 further support the validity of the results by illustrating how the model captures key trends.
Supplementary Material: I reviewed the supplementary material and found that the authors provide anonymous code to validate their method, and the appendix includes detailed descriptions of the datasets used.
Relation To Broader Scientific Literature: In terms of machine learning, the paper draws on work related to neural differential equations and graph neural networks. While these techniques have been applied successfully in other domains, their application to epidemic forecasting is a significant innovation. By leveraging these advanced techniques, the paper contributes to the growing body of research on using deep learning to enhance predictive modeling in epidemiology. This approach also connects to broader trends in the literature about the fusion of classical and modern machine learning techniques to address real-world problems.
Essential References Not Discussed: The paper already does a good job of addressing relevant literature. It connects well with existing research in epidemic modeling and machine learning.
Other Strengths And Weaknesses: Strengths:
1) The paper introduces an innovative approach by combining neural ODEs with traditional epidemic models and incorporating a Global-guided Local Transmission Graph, which effectively captures both continuous dynamics and spatial transmission patterns.
2) The authors provide clear motivation for integrating machine learning with classical epidemiological frameworks, addressing limitations of traditional models in real-world epidemic forecasting.
3) This work advances the field by demonstrating how modern deep learning techniques can be seamlessly merged with established epidemic models, paving the way for more accurate and dynamic forecasting tools.
4) By bridging the gap between classical epidemiological modeling and contemporary machine learning, the paper makes a significant contribution to computational epidemiology and opens up new research avenues for future advancements in the field.
Weaknesses:
1) There is no detailed analysis of the computational complexity, which could help assess the method's scalability and efficiency in large-scale applications.
2) The model may be sensitive to certain hyperparameters (such as learning rate, hidden layer dimensions, regularization coefficients, etc.). Although the author conducted experiments, different hyperparameter settings may affect the model's generalization ability and stability. Further experiments can explore the impact of hyperparameter adjustment on model performance.
Other Comments Or Suggestions: Please ref to weaknesses.
Questions For Authors: - How are these missing rates introduced into the data? Are they intentionally sampled deletions from the original data (such as random deletions) to simulate missingness, or do they originate from missing data in real-world scenarios?
- In Figure 4, are there any isolated nodes with almost no connections to other nodes? If there are, what does this mean?
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: # Response to Reviewer XKTr
We sincerely thank you for your thorough review and positive assessment of our work. We are grateful for your recognition of EARTH's
novelty and contributions to epidemic forecasting. We address your questions below:
> `Weakness 1`: No detailed analysis of the computational complexity.
We appreciate this valuable suggestion. The computational complexity of EARTH can be broken down into:
- **Epidemic-Aware Neural ODE (EANO)**: The computational complexity of our neural ODE solver is primarily determined by $O(T_{\text{ODE}} \times (N \times d^2 + |E| \times d))$, where $T_{\text{ODE}}$ represents the average number of solver steps needed for integration, $N$ is the number of regions, $d$ is the hidden dimension, and $|E|$ is the number of transmission edges in our graph. The term $N \times d^2$ comes from node-level feature transformations applied at each solver step, while $|E| \times d$ reflects the message passing operations between connected regions during epidemic propagation.
- **Global-guided Local Transmission Graph (GLTG)**: This component has a theoretical complexity of $O(N^2 \times T_{\text{hist}}^2 + N^2 \times d)$. The first term arises from computing pairwise temporal similarities using Dynamic Time Warping across $N$ regions with historical sequences of length $T_{\text{hist}}$, while the second term corresponds to the generation of the full adjacency matrix with feature transformations. In practice, we reduce this cost using FastDTW for efficient similarity calculation and by maintaining a sparse graph structure that retains only the most relevant region connections.
- **Cross-Attention Mechanism**: The complexity for our cross-attention operation between epidemic states and global features is $O(N \times d^2)$, which is notably more efficient than typical attention mechanisms that scale with $O(N^2 \times d)$. This efficiency stems from our design choice to constrain attention to the three epidemic states (S,I,R) per region rather than attending across all regions.
We will include this detailed analysis in the revised version.
> `Weakness 2`: Model sensitivity to hyperparameters.
Thank you for this important observation. We have conducted additional experiments to evaluate EARTH's sensitivity to key hyperparameters on the Australia-COVID dataset with a horizon of 5:
| Hidden Dimensions | 16 | 32 | 64 | 128 | 256 |
|-------------------|-----|-----|-----|------|------|
| RMSE | 187.3 | 176.5 | 156.8 | 159.2 | 172.8 |
| MAE | 42.64 | 36.27 | 30.12 | 31.95 | 38.76 |
| Learning Rate | 5e-5 | 1e-4 | 5e-4 | 1e-3 | 5e-3 |
|---------------|------|------|------|------|------|
| RMSE | 179.1 | 167.5 | 160.3 | 156.8 | 163.2 |
| MAE | 41.56 | 36.28 | 32.41 | 30.12 | 33.95 |
These results demonstrate that EARTH performs optimally with hidden dimensions of 64 and a learning rate of 1e-3, which aligns with our main experimental setup in Table 1. We will incorporate these detailed sensitivity analyses in the revised manuscript.
> `Question 1`: How are these missing rates introduced into the data?
Thank you for this question about our experimental methodology.
For the controlled experiments in Table 2, we artificially introduced missingness by randomly removing data points from the complete dataset at rates of 10%-40%. This random deletion strategy follows standard practice in the literature for evaluating model robustness to missing data.
> `Question 2`: In Figure 4, are there any isolated nodes with almost no connections to other nodes? If there are, what does this mean?
This is an insightful question. What appears as isolated nodes in Figure 4 is simply a result of our visualization approach - we only display the top-3 highest weighted edges for each region to maintain visual clarity. In our Local Transmission Graph, we first use Dynamic Time Warping to select top-k similar nodes based on temporal epidemic patterns, then adaptively learn the normalized weighted edges during training using the mechanisms described in Equations 9-10, allowing EARTH to automatically determine appropriate information sharing between regions based on epidemic similarity. We will clarify this visualization choice in the revised manuscript.
---
Rebuttal Comment 1.1:
Comment: Thanks for your rebuttal. My concerns have been well addressed, and I've checked other reviewers' feedback as well. I suggested that the author include complexity analysis and hyperparameter studies in the revised manuscript. We're at an inflection point for AI in computational epidemiology, and the work may have a potential impact. I keep my score and vote for acceptance.
---
Reply to Comment 1.1.1:
Comment: # Response to Reviewer XKTr
Dear Reviewer XKTr,
**Thank you again for recognizing the innovation and contribution of our work and for your willingness to support its acceptance!**
Best regards,
Authors | Summary: The authors tackle the problem of effectively forecasting epidemics and propose the so-called EARTH method, which combines an epidemiology-aware neural ODE with a continuous disease transmission graph. More specifically, they leverage a **neural ODE-based component** (EANO) based on the **common epidemic SIR mechanism** to capture spatial spreading during disease evolution while enabling continuous modeling. They also introduce a **GNN-based component that captures global epidemic trends** (GLTS) and a cross-attention mechanism that combines global patterns with local transmission patterns. The proposed method is evaluated on real-world datasets capturing COVID-19 and influenza diseases against other baseline methods, showing performance improvements for few horizon lengths under node regression metrics (such as RMSE).
## update after rebuttal
The authors have addressed some of my initial concerns, particularly those related to the significance of results and experimental design details. However, critical issues around the theoretical positioning (against the relevant first works in the field), selection and interpretability of learned epidemic rates, and justification for explicit SIR-based modeling remain only partially resolved and should be clearly reflected in the revised manuscript. Time/memory cost comparisons against baselines are still missing and can be significant. I revise my recommendation to weak accept, assuming the authors will incorporate these clarifications and adjustments into the final version.
Claims And Evidence: The following claims of the paper are problematic:
- Introduction, page 2: *“We are the first to harmonize the neural ODE with the epidemic mechanism [...] patterns”*. The authors claim that they are the first to combine the Neural ODE method with approximate system equations for epidemic spreading. However, it seems that relevant works in this field have already combined ODEs from epidemic compartmental models with advances in neural ODE solvers, such as in [1].
- Methodology, page 4: *“By substituting the traditional SIR’s two simple rates [...] our model can derive more detailed representations of the disease spread and recovery processes.”*
The infection $\beta$ and recovery $\gamma$ rates in SIR models are fundamental parameters in epidemic modeling and have been extensively studied to develop realistic simulations of disease spread. However, the authors do not showcase how learning these parameters improves performance.
1. Kosma, C., Nikolentzos, G., Panagopoulos, G., Steyaert, J. M., & Vazirgiannis, M. (2023). Neural ordinary differential equations for modeling epidemic spreading. Transactions on Machine Learning Research.
Methods And Evaluation Criteria: This study leverages common baselines and benchmark datasets in the field. Evaluation metrics (point-wise deviation) are also common.
Theoretical Claims: No formal proofs are provided for different parts of the proposed method. However, this may not be necessary, as the approach is application-driven and builds upon existing modelization concepts.
Learning $\beta$ and $\gamma$ without constraints is not theoretically substantiated (e.g., initialization, bounds). Several relevant studies rely on specific choices for these parameters [1,2] to ensure they capture meaningful spreading dynamics.
1. Sha, H., Al Hasan, M., & Mohler, G. (2021, October). Source detection on networks using spatial temporal graph convolutional networks. In 2021 IEEE 8th International Conference on Data Science and Advanced Analytics (DSAA) (pp. 1-11). IEEE.
2. Gao, F., Zhang, J., & Zhang, Y. (2022, May). Neural enhanced dynamic message passing. In International Conference on Artificial Intelligence and Statistics (pp. 10471-10482). PMLR.
Experimental Designs Or Analyses: The common experimental design choices with the cited studies in section 5.1 are unclear. More specifically, different horizon lengths/window sizes are used in studies (Liu et al., 2023, Kamarthi & Prakash) and the proposed method, while it seems that a common choice in time series forecasting is to consider multiples of the input window length (e.g., 1W, 2W, 3W,...).
Supplementary Material: All parts have been checked.
Relation To Broader Scientific Literature: The main focus of this paper is to combine neural ODEs on epidemic priors and GNNs to perform more accurate epidemic forecasting on graphs. The contributions are more prominent in terms of experimental results compared to baseline methods for example real-world epidemic datasets, while in terms of architectural design, the authors built upon common concepts and existent modules in time series spatial-temporal modeling.
Essential References Not Discussed: Several methods in the area of physics-informed modelization of epidemic spreading (particularly based on compartmental models, e.g., SIR, SEIR) could be included in the introduction/related work of the paper to support the positioning of its main concepts. Some examples:
- *SIR Neural ODEs on Networks:* Kosma, C., Nikolentzos, G., Panagopoulos, G., Steyaert, J. M., & Vazirgiannis, M. (2023). Neural ordinary differential equations for modeling epidemic spreading. Transactions on Machine Learning Research.
- *SIR-based Embedding Layers:* Zheng, Y., Jiang, W., Zhou, A., Hung, N. Q. V., Zhan, C., & Chen, T. (2024). Epidemiology-informed Graph Neural Network for Heterogeneity-aware Epidemic Forecasting. arXiv preprint arXiv:2411.17372.
- *Physics-Informed Neural Networks:* Cai, M., Em Karniadakis, G., & Li, C. (2022). Fractional SEIR model and data-driven predictions of COVID-19 dynamics of Omicron variant. Chaos: an interdisciplinary journal of nonlinear science, 32(7).
- *Dynamic Message Passing for SIR combined with GNNs:* Gao, F., Zhang, J., & Zhang, Y. (2022, May). Neural enhanced dynamic message passing. In International Conference on Artificial Intelligence and Statistics (pp. 10471-10482). PMLR.
Other Strengths And Weaknesses: *Strengths:*
- The paper is well-written and easy to follow.
- The experimental results consider several baseline modes and thorough ablation studies of the proposed method's main components.
*Weaknesses:*
- **S1** - *Presentation and Design choices of the epidemic compartment:* In real-world scenarios, epidemic data is often noisy. If $\beta$ and $\gamma$ are learned directly from data without constraints, the model might overfit short-term trends rather than capturing realistic transmission dynamics. Necessary assumptions to derive this form of equations (3), intuition, and limitation of the compartmental model in practical applications are not discussed.
- **S2** - *Positioning against works explicitly combining ML and compartmental modes:* Based on the missing references above, the presentation of related works fails to showcase the limitations of existing approaches and design choices on incorporating compartmental models/spreading priors to the loss function/architectural structure of the methods.
- **S3** - *Experimental Choices not well-justified/missing:*
1. It is unclear why the authors choose h=5,10,15 as horizon lengths for their experiments, given a historical window of 20 timestamps, which constitutes the task rather easy. A study of the performance impact with larger (>20) forecasting horizons would be interesting.
2. The type (e.g., runge-kutta) and selection of step of the solver chosen in the neural-ode are not mentioned.
3. Standard deviations are not mentioned.
4. The downsampling method followed for synthetically creating irregular timestamps in the datasets is unclear but can have a significant impact on the distortion of the underlying continuous dynamics.
- **S4** - *Computational Analysis is missing:* The proposed method relies on computationally heavy components, including the neural-ode solver and the dtw used to extract A. In practical applications, for increasing dataset sizes (and graphs’ nodes/edges) and increasing windows/horizons, some methods can become very ineffective regarding time/memory costs.
Other Comments Or Suggestions: Not applies.
Questions For Authors: Based on aforementioned weaknesses, the following aspects need enhancement and further clarification:
1. S1 - theoretical explanations
2. S3 - experimental choices
3. S4 - computational analysis
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: # Response to Reviewer rfdg
We sincerely thank you for your thorough review and hope our responses below will help improve your assessment of our work:
> `Weakness S1 (Theoretical Claims & Claims And Evidence)`: Presentation and Design choices of the epidemic compartment
Our approach addresses these concerns in several ways:
1. Multi-dimensional features vs. scalar rates: Unlike traditional SIR models using scalar rates (Sha et al.; Gao et al.), our "detailed representations" claim refers to using multi-dimensional features. This enables more nuanced transmission dynamics through: 1) Higher expressivity for complex spatio-temporal patterns, 2) Greater flexibility for heterogeneous transmission across regions, and 3) Enhanced representation of time-varying dynamics. Our experiments with an EARTH variant using traditional single-value scalar rates showed:
|Model Variant|h=5 (R)|h=5 (P)|h=10 (R)|h=10 (P)|
|-------------|-------|-------|---------|---------|
|w/o Feature|178.6|34.24|198.5|44.18|
|EARTH|156.8|30.12|177.6|38.62|
2. Safeguards against overfitting: We implement three strategies: 1) Temporal cross-validation with training/validation/testing spanning different epidemic waves, 2) Multi-scale integration balancing local mechanics with global patterns via cross-attention, and 3) Time-continuous formulation smoothing noise in discrete observations. These safeguards are validated in Tables 1&2, where EARTH maintains stable performance even with 40% missing data.
3. Theoretical considerations: Our work builds on classical compartmental models (Grassly & Fraser, 2008) but addresses limitations through: 1) Learnable features capturing spatio-temporal heterogeneity vs. fixed scalar rates, 2) Neural ODE formulation adapting to evolving transmission patterns, and 3) Continuous-time approach handling irregular or missing data. These innovations balance epidemiological principles with data-driven flexibility.
> `Weakness S2 (Claims And Evidence & Essential References)`: Positioning against relative works
We acknowledge the need for clearer positioning relative to Kosma et al. (2023) and similar works. While these combine ODEs/ML with epidemic models, our key contribution lies in: 1) Integrating neural ODEs with GNNs through our GLTG mechanism for continuous evolution of node features and edge weights, 2) Adaptively learning connections through DTW and integrating global/local information via cross-attention, and 3) Demonstrating superior performance with missing data, overcoming limitations of fixed-parameter approaches.
> `Weakness S3 (Experimental Designs)`: Experimental Choices not well-justified/missing
We appreciate these important observations about experimental design clarity:
1. Horizon lengths: We selected forecast horizons based on: public health decision-making needs, alignment with previous works, and balancing prediction accuracy with utility. Additional experiments on US-Region with different historical windows and larger horizons:
|Method|Win|h=20|h=25|h=30|
|------|---|----|----|-----|
|STGODE|20|1836|2153|2607|
|EpiColaGNN|20|1645|1947|2378|
|EARTH|20|**1528**|**1812**|**2219**|
|STGODE|40|1682|1976|2391|
|EpiColaGNN|40|1476|1763|2134|
|EARTH|40|**1374**|**1642**|**1983**|
2. Solver type: We use Runge-Kutta 4th order (rk4) with absolute tolerance 1e-9 and relative tolerance 1e-7 for balance between numerical stability and efficiency.
3. Standard deviations: Results are averaged over 5 runs with different random seeds. We will add variance information in the revised manuscript.
4. Downsampling: Random sampling was used to create missing data points, simulating real-world reporting patterns.
> `Weakness S4`: Computational Analysis
The computational efficiency of EARTH can be analyzed as:
- Epidemic-Aware Neural ODE: Complexity $O(T_{\text{ODE}} \times (N \times d^2 + |E| \times d))$, where $T_{\text{ODE}}$ is solver steps, $N$ is regions, $d$ is hidden dimension, and $|E|$ represents edges. $N \times d^2$ corresponds to node-level transformations while $|E| \times d$ captures cross-region exchange.
- Global-guided Local Transmission Graph: Traditional construction would incur $O(N^2 \times T_{\text{hist}}^2 + N^2 \times d)$ complexity, with first term from DTW computation and second from adjacency matrix generation. We implement key optimizations: (1) FastDTW reducing temporal similarity calculation from $O(T_{\text{hist}}^2)$ to $O(T_{\text{hist}})$, (2) one-time DTW matrix computation and reusage during training, (3) retaining only top-k connections per region leveraging epidemic transmission sparsity where $|E| \ll N^2$, and (4) compressed sparse matrices with memory scaling as $O(|E|)$.
- Cross-Attention Mechanism: Our design constrains attention to three epidemic states per region, achieving $O(N \times d^2)$.
These optimizations enable EARTH to avoid $O(N^2)$ complexity, supporting large-scale epidemic forecasting applications.
---
Rebuttal Comment 1.1:
Comment: I sincerely thank the authors for their replies, which address some of my concerns. However, some aspects are only superficially tackled in the provided justifications:
**[Claims about Novelty/Originality \& Theoretical Claims]**
- **Concerns about the Proper Positioning of the Work against prior works in SIR-based Neural ODEs.** I remain unconvinced about whether the authors have identified the issue in positioning their work compared to the suggested references concerning the combination of the SIR ODE system and Neural ODEs in predicting epidemics on networks. In their response, it is still unclear how they will adjust their novelty justification in the introduction (their claim about contribution (1) on page 2 is not accurate) when it comes to conceptualizing SIR Neural ODEs compared to the first works in this field. For instance, the method by Kosma et al., 2023, inherently integrates message passing through the multiplication of the learnable state vectors with the adjacency matrix A within the Neural ODE solver of the approximate SIR system (similar to GNNs). The method by Zheng et al., 2024 extends the Neural ODE with GNNs that capture spatio-temporal heterogeneity.
*Do the authors imply that their primary methodological contribution lies in leveraging specific layers tailored to spatiotemporal evolution within the Neural SIR ODE solver rather than the SIR network-based Neural ODE mechanism itself?*
- **Lack of Constraints on $\beta$ and $\gamma$ rates can Lead to Uninterpretable Dynamics.** I appreciate the experiments provided by the authors (comparing fixed vs learnable rates - although the rate values chosen are not mentioned). State vectors can be learnable even for fixed epidemic rates or bounded learnable epidemic rates. Therefore, the author's reply does not address my core concern regarding the physical interpretability and realism of the learned dynamics. Without proper constraints (followed in the references I suggested), the model may learn negative or excessively large values for the epidemic rates, leading to unphysical behaviors such as unbounded growth, unrealistic decay, or violations of conservation laws (i.e., the sum S+I+R should remain constant over time). In my opinion, this significantly undermines the interpretability and physical reliability of the approach.
- **Lack of Constraints Undermines the Reasons for Explicit (SIR-based) ODE Modeling.** If the learned parameters ($\beta$, $\gamma$) do not adhere to realistic conditions for the epidemics, this weakens the motivation for using an explicit ODE form for the SIR dynamics over a generic Neural ODE combined with GNNs (e.g., GNODE (Poli et al., 2019)).
-- Poli, M., Massaroli, S., Park, J., Yamashita, A., Asama, H., & Park, J. (2019). Graph neural ordinary differential equations. arXiv preprint arXiv:1911.07532.
**[Other]**
I appreciate the authors' efforts in their responses. The details regarding the experimental design (such as solver parameters and number of runs) and computational analysis are precise, and I suggest the authors include these in the revised paper. Indeed, experiments with larger horizon lengths demonstrate the proposed method's advantages in terms of mean performance. However, I expected to see ranges for standard deviations and numerical comparisons for computational aspects (e.g., time and memory costs) relative to baselines. Additionally, explicitly mentioning the random downsampling strategy and its limitations against other methods is a crucial aspect that needs to be added to the manuscript.
Based on the above points, I believe it is appropriate to maintain my scores for now.
---
Reply to Comment 1.1.1:
Comment: # Response to Reviewer rfdg
Dear Reviewer rfdg,
We sincerely appreciate your continued engagement with our work. We address your remaining points below and hope these clarifications will help improve your assessment of our work:
> `Claims about Novelty/Originality & Theoretical Claims`: Positioning against prior works
Thank you for your feedback. We appreciate the chance to clarify:
- **First end-to-end integrated framework of adaptive graph learning within epidemiology-aware neural ODE**: While our work is inspired by network SIR mechanisms, our primary contribution is being the first (to the best of our knowledge) to integrate adaptive graph neural ODEs with epidemic mechanisms in a unified, end-to-end framework. We will revise our contribution statement on page 2 for clarity.
- **Dynamic vs. static graph evolution**: A key difference from Kosma et al. (2023) is our continuous-time adaptive graph. GN-ODE uses static adjacency matrices, while EARTH leverages semantic similarity-based neighbor discovery (via DTW) and learns evolving connectivity patterns. This better captures epidemic dynamics with time-varying transmission patterns. Comparative experiments demonstrate the effectiveness of EARTH:
|Method|h=5 (R)|h=5 (P)|h=10 (R)|h=10 (P)|
|------|-------|-------|--------|--------|
|GN-ODE|201.2|39.86|246.7|54.39|
|EARTH|156.8|30.12|177.6|38.62|
- **Orthogonal contributions to concurrent work**: Regarding Zheng et al. (2024), we acknowledge this is nice but very recent work (public on arXiv on Nov. 2024). While HeatGNN offers heterogeneous modeling and PINN-inspired loss, EARTH focuses on *continuous-time* adaptive graph evolution, Neural ODE based formulation to ensure *irregular data handling*, and a dual-branch architecture with cross-attention fusion for both local and global views.
We hope these clarifications highlight the differences and advances of our work, we will add these nice references and further elaborate the distinctions in our revised version.
> `Lack of Constraints on epidemic rates`: Physical interpretability of dynamics
Thank you for your concern. We clarify that our approach is **inspired by epidemic mechanisms to guide model design**, not strictly adhere to traditional compartmental constraints. This offers several advantages:
- **Physically-grounded parameterization**: Our model uses transformation matrices $W_{trans}$ and $W_{recov}$ to parameterize transition rates, creating an implicit epidemic flow structure with flexibility. Sigmoid activations on graph edge weights ensure non-negative transmission, preventing negative disease spread.
- **Balancing mechanistic insight with data adaptability**: Our data-centric design allows the model to adapt to real-world data patterns that may not strictly follow SIR dynamics, accounting for delays, testing limits, and behavioral changes not captured by basic compartmental models.
- **Extensible framework**: For example, the model can incorporate output layers to explicitly model epidemic rates and states, with constraints. We will explore these possibilities in our revised manuscript, though we note that such extensions build upon rather than diminish the novelty of our end-to-end integrated framework.
> `Other`: Experimental considerations
We thank the reviewer for acknowledging our analyses. Due to **space limitations** in the rebuttal, we will provide more detailed descriptions in the revised manuscript. We would like to further clarify as follows:
- Standard deviations: We will modify the original table to include them, the results show EARTH's stability:
|Method|h=5(R)|h=5(P)|h=10(R)|h=10(P)|h=15(R)|h=15(P)|
|------|------|------|-------|-------|-------|-------|
|STGODE|310.5±18.3|66.32±7.0|392.2±30.0|91.05±12.1|571.3±41.5|159.2±16.0|
|EpiColaGNN|204.3±22.6|36.86±6.5|345.4±40.2|68.39±12.5|886.0±95.4|296.5±28.0|
|EARTH|156.8±15.5|30.12±5.3|177.6±28.5|38.62±14.3|225.3±36.3|56.32±15.5|
- Computational Aspects: Our neural ODE model, though slightly slower, is memory-efficient regardless of sequence length—suitable for large-scale use. It handles irregular timestamps and maintains strong performance. More analysis will be included in the revision.
- Random downsampling: Chosen to ensure unbiased performance under missing data, avoiding assumptions that could favor certain models. While other strategies (e.g., systematic, stratified) are options, random sampling offers a clean, generalizable baseline widely used in time-series work [1].
[1]: Graph Neural Controlled Differential Equations for Traffic Forecasting.
We sincerely thank you for your valuable review. This may be an important moment to promote computational epidemiology to a broader community, and we believe encouraging cross-disciplinary work can help bridge AI capabilities with public health needs. We are deeply grateful for the opportunity to have our work's strengths reconsidered by you.
Best regards,
Authors | Summary: The paper proposes EARTH, an Epidemiology-Aware Neural ODE with a Continuous Disease Transmission Graph, as a novel framework for epidemic forecasting. The authors integrate neural ODEs with epidemiological mechanisms, capturing both continuous-time disease transmission and global infection trends. The Global-guided Local Transmission Graph and cross-attention fusion mechanism are introduced to enhance epidemic forecasting accuracy. Through extensive experiments on real-world datasets (COVID-19 and influenza), EARTH is shown to outperform state-of-the-art methods.
Claims And Evidence: The claims in the paper are mostly supported by clear evidence.
Methods And Evaluation Criteria: The proposed methods are effective and novel, and the evaluations are justified.
Theoretical Claims: These claims are supported by experiments rather than formal proofs.
Experimental Designs Or Analyses: The experimental design is robust and well-executed, providing strong support for the paper’s claims. The authors conduct extensive evaluations on real-world datasets, demonstrating that EARTH significantly outperforms SOTA methods in terms of forecasting accuracy, peak time error, and robustness.
Key strengths include:
- Comprehensive evaluation across multiple datasets, showcasing the model's ability to handle real-world data effectively.
- Ablation studies that validate the contributions of the EANO and GLTG and cross-attention mechanism to model performance.
Supplementary Material: I checked most of the supplementary materials, including implemention details and supplementary experimental results.
Relation To Broader Scientific Literature: Traditional models like SEIR and mechanistic models have been widely used, but they often struggle with real-world complexities. EARTH advances this field by leveraging data-driven learning while maintaining interpretability through epidemiological structures.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Pros:
(a) The integration of neural ODEs with epidemiological models represents a great innovation. By combining disease transmission mechanisms with deep learning, EARTH effectively captures epidemic dynamics beyond traditional mechanistic and deep learning approaches.
(b) EARTH accounts for irregular sampling intervals and missing data, making it highly applicable to real-world epidemic data where reporting can be inconsistent.
(c) The paper provides comprehensive experiments across multiple epidemic datasets, showing that EARTH significantly outperforms previous approaches
Cons:
(a) The combination of neural networks and epidemiology: Similar neural network methods, such as neural ODEs, have already been applied in many fields, especially in time series analysis. However, their application to specific issues in epidemiology is not yet widespread.
Other Comments Or Suggestions: NA
Questions For Authors: (a) The motivation section points out that the existing epidemic prediction methods have failed to fully capture the complexity of the dynamic evolution and regional transmission patterns of epidemics, especially when dealing with global infection trends and regional transmission changes. Considering this motivation, could you please explain in detail how to dynamically learn and integrate global and regional transmission patterns through these methods to address the challenges mentioned in the motivation?
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: # Response to Reviewer uXQM
We sincerely thank you for your positive assessment of our work and for recognizing the innovation in integrating neural ODEs with epidemiological models. We appreciate your thorough evaluation and answer your questions below:
> `Cons`: On applying neural ODEs to epidemiology
While neural ODEs have been applied in various domains, EARTH goes beyond straightforward application:
- Integration with epidemic mechanism: Our Network SIR-inspired architecture (Equations 5, 11) explicitly models the transition dynamics between susceptible, infectious, and recovered populations.
- Integration of GNN with neural ODEs: Our approach uniquely combines graph neural networks with neural ODEs through the GLTG mechanism. This integration allows the neural ODE to operate on evolving graph structures where both node features and edge weights change continuously in time. The dynamic transmission patterns captured by our graph-based ODE enable more realistic modeling of how disease spreads across regions compared to standard neural ODEs that operate on fixed graphs or no graph structure at all.
- Flexibility for irregular data: Unlike conventional time series models, our continuous formulation naturally handles the irregular reporting and missing data common in epidemic monitoring.
Our ablation studies validate these design choices, with significant performance drops when these specific components are removed.
> `Question`: Dynamic integration of global and regional patterns
EARTH integrates global and regional patterns through:
- Semantic connections via DTW: We identify regions with similar epidemic trajectories using Dynamic Time Warping rather than relying solely on geographic proximity.
- Adaptive transmission learning: Global trends guide regional transmission via a dynamic graph structure that evolves throughout the epidemic timeline, capturing how policies and behaviors change transmission patterns.
- Multi-scale modeling: We capture both local disease dynamics and cross-regional dependencies within a unified framework.
- Continuous-time formulation: By modeling in continuous time, EARTH handles irregular observation intervals and can forecast at arbitrary time points.
This approach enables effective forecasting even when regional reporting patterns change, a common challenge in epidemic monitoring.
---
Rebuttal Comment 1.1:
Comment: Thanks for the clear clarifications and the helpful dialogue. In light of the other reviewers' feedback, I remain convinced that this paper is a solid addition to the community and will raise my positive evaluation.
---
Reply to Comment 1.1.1:
Comment: # Response to Reviewer uXQM
Dear Reviewer uXQM,
**Thank you for your supportive feedback and for raising your positive evaluation of our work.** We greatly appreciate your thoughtful review and recognition of our paper's contributions to the community. Your insights have been valuable in improving our manuscript.
Best regards,
Authors | Summary: The paper presents EARTH (Epidemiology-Aware Neural ODE with Continuous Disease Transmission Graph), a novel framework for epidemic forecasting that integrates neural ordinary differential equations with epidemiological mechanisms. The authors address challenges in current approaches by modeling the continuous-time nature of epidemics, capturing dynamic regional transmission patterns, and considering irregular sampling intervals. EARTH consists of two key components: an Epidemic-aware Neural ODE (EANO) that captures disease transmission patterns, and a Global-guided Local Transmission Graph (GLTG) that models global infection trends to guide local transmission dynamics. The model uses a cross-attention mechanism to integrate global epidemic coherence with local nuances of disease transmission. Experiments on COVID and influenza datasets demonstrate EARTH's superior performance compared to state-of-the-art methods in forecasting real-world epidemics.
Claims And Evidence: EARTH's superior performance over existing methods is demonstrated through comprehensive experiments across three datasets (Australia-COVID, US-Regions, US-States) with quantitative metrics (RMSE, Peak Time Error)
The effectiveness of individual components (EANO and GLTG) is validated through ablation studies showing performance degradation when components are removed
Robustness to irregular sampling intervals is shown through experiments with different missing rates (0-40%)
Limited explanation of why neural ODEs specifically are better than existing approaches for continuous-time modeling
The claim that EARTH captures "more detailed representations of disease spread" lacks qualitative analysis or interpretability studies
Performance gains across datasets vary significantly, with more modest improvements in some scenarios
Methods And Evaluation Criteria: The EARTH model is built on a neural ordinary differential equation (ODE) framework, designed to model disease transmission in a continuous and dynamic manner. Unlike traditional compartmental models such as SEIR, which assume discrete transitions between disease states, EARTH integrates graph-based epidemiological modeling to represent evolving interactions among individuals. This allows for more accurate disease spread simulations. The model constructs a continuous disease transmission graph, capturing real-time interactions between susceptible and infected populations. It leverages deep learning techniques to adapt and refine its predictions based on observed infection data.
The method involves training the neural ODE using historical epidemiological data and real-world disease progression patterns. The model continuously updates its parameters based on observed changes in infection rates, making it adaptive to different disease dynamics. The framework is designed to be scalable and generalizable, capable of modeling multiple infectious diseases such as COVID-19, influenza.
The evaluation of EARTH includes comparisons against traditional epidemiological models, such as SEIR and graph-based models, to assess its effectiveness. The model is tested on real-world datasets, including reported cases from multiple infectious diseases, ensuring its applicability beyond theoretical scenarios. Further sensitivity analyses and broader disease databases would help.
The EARTH model primarily focuses on COVID and operates within homogeneous time periods, meaning it learns patterns based on relatively stable disease transmission dynamics. However, infectious disease spread is often highly dynamic, influenced by factors such as travel, spillover events, and new variants, which can introduce sudden shifts that challenge predictive models.
While machine learning excels at pattern recognition, it may struggle when underlying transmission mechanisms change abruptly, such as with novel introductions from travelers or zoonotic spillovers. If the model has not been trained on data that reflects such disruptions, it may fail to capture these shifts accurately.
A potential limitation of EARTH is whether it accounts for heterogeneous transmission periods—for example, distinguishing between pre-COVID, pre- COVIDvaccine, post-vaccine, and variant-driven waves in COVID-19. Additionally, external shocks, such as government interventions, behavioral changes, or superspreader events, may not be fully captured in a data-driven framework unless explicitly modeled.
It would help to explain how next steps could make this approach more robust, especially as external factors may not be predicted.
Integration of external factors like mobility data, vaccination rates, etc can help but this will may depend a lot on the pathogen, so this may not be disease agnostic. Climate shocks, novel spillovers, antimicrobials resistance and travel patterns will make a big difference. If EARTH lacks mechanisms to handle these shifts, it may overfit to past trends and fail to generalize to new outbreaks or changing epidemic conditions. Addressing these issues could enhance its real-world applicability for public health decision-making.
COVID is a very distinct example, where data was better collected and the mechanisms impacting its spread will not represent other diseases.
Theoretical Claims: Works with Neural ODEs as an approach building on the SIR ODE framework
Graph-Based Transmission Modeling: The theoretical foundation incorporates graph neural networks (GNNs) to capture spatial and temporal dependencies in disease spread. The paper provides justification for using graph structures, emphasizing their ability to model heterogeneous interactions in populations.
The authors discuss the model's ability to generalize across different epidemiological scenarios but will need empirical evidence.
Some limitations
The theoretical claims assume consistent data availability and stationary transmission patterns, which may not hold in real-world scenarios where data availability vary greatly, especially in early explosive outbreaks where predictions are most needed, and also with external shocks (e.g., new variants, travel-driven outbreaks).
The model assumes that the graph structure remains representative of disease spread over time, but the theoretical justifications for handling dynamic changes in population movement and behavior could be further elaborated.
Experimental Designs Or Analyses: Comparison with Standard Existing Approaches
- evaluates EARTH against traditional models like SEIR and graph-based models to assess its predictive performance.
The paper does demonstrate improvements over classical compartmental models for COVID datasets.
This work used Real-World Datasets. The model is tested on real-world datasets for COVID and influenza outbreaks. While these datasets provide meaningful benchmarks, a broader set of infectious disease cases (especially with different transmission dynamics) would enhance the robustness of the findings. COVID and influenza like illnesses create a very different type of outbreak than many others.
The training process involves learning from historical epidemiological data to improve forecasting. However, potential overfitting is a concern, especially given the homogeneous time periods used in the study.
The analysis used metrics - RMSE (Root Mean Square Error) - and Peak Time Error, which calculates the
MAE (Mean Absolute Error) that are useful for comparing outbreak predictions. For many outbreaks, there may be different metrics, as peak may not be a particular
Supplementary Material: looked through the code and data (which is simply standard data)
Relation To Broader Scientific Literature: We are at a pivotal moment in epidemiologic models. There is a real possibility for being able to advance the field of epidemic modeling. There are certainly many approaches, but this is an important possible avenue.
Essential References Not Discussed: Would place this in a wider context
https://www.nature.com/articles/s42256-024-00895-7
https://www.nature.com/articles/s41586-024-08564-w
https://dl.acm.org/doi/10.1145/3650215.3650396
https://ieeexplore.ieee.org/document/10039594
Would also describe this as focused on COVID/flu modeling
https://covid19-projections.com/about/
CovidSim by Neil Ferguson
Agent Based Models
Other Strengths And Weaknesses: Strength showed improvements and should be using this approach more and more
Weakness: only evaluated for COVID, may not be robust in face of heterogenous datasets, with abrupt epidemic changes or with different diseases
Other Comments Or Suggestions: Consider the risks of the model failing to account for outbreaks in other contexts, with other diseases.
Questions For Authors: Would address other contexts, other diseases
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: # Response to Reviewer Q6Uq
We thank you for your positive assessment and for recognizing the importance of our work in epidemic modeling. We address your concerns below:
> `Question & Weakness`: Only evaluated for COVID-19, with potential limitations for heterogeneous datasets and different diseases.
We address this concern from several perspectives:
- Dataset selection: We focused on COVID-19 and influenza datasets due to their availability, spatiotemporal coverage, and real-world significance. This enables fair comparison with prior works [1,2], evaluating EARTH's effectiveness. [1]: Epidemiology-aware Deep Learning for Infectious Disease Dynamics Prediction. CIKM 2023. [2]: PEMS: Pre-trained Epidemic Time-series Models. ArXiv 2023.
- Disease-agnostic design: EARTH's architecture is **generalizable** across epidemic types through: (a) SIR-inspired neural ODE framework capturing universal transmission-recovery dynamics. (b) Dynamic graph structure adapting to varying contact patterns. (c) Continuous-time formulation accommodating diverse incubation periods and transmission characteristics.
- Orthogonal Contributions: We recognize potential improvements like heterogeneous population flow modeling through Heterogeneous GNN. These represent **incremental improvement** rather than fundamental limitations, as our contribution lies in the integration of GNN with neural ODEs.
- Cross-disease validation: We tested EARTH on dengue fever using OpenDengue dataset, which exhibits dramatic fluctuations (e.g., 160,265 cases in 2010 declining to 25,503 in 2017 from COLOMBIA). Results validate robustness:
|Methods|VIETNAM|ARGENTINA|MALAYSIA|COLOMBIA|
|---|---|---|---|---|
|SIR|1865|627.9|128.8|753.6|
|DCRNN|1254|401.3|215.5|432.1|
|STGODE|1196|383.2|187.4|492.7|
|ColaGNN|1078|464.6|142.3|304.8|
|EARTH|921.5|312.5|110.1|261.0|
> `Concern 1 (Methods And Evaluation Criteria & Theoretical Claims)`: The cases with external interventions.
On EARTH's robustness to transmission dynamics changes:
- Continuous-time modeling: Unlike discrete models assuming fixed patterns, EARTH's ODE formulation adapts to changing dynamics by modeling underlying disease propagation mechanisms rather than statistical patterns.
- Capturing pandemic phases: EARTH models distinct epidemic phases through: (a) Time-dependent parameterization allowing varying transition rates across stages, (b) Context-aware graph structure evolving differently during various phases (lockdown vs. reopening), (c) Global features capturing regime changes across regions.
- External factors integration: EARTH's design allows exogenous incorporation, for example, (a) Mobility data as edge weights reflecting contact patterns, (b) Government interventions as additional node features, (c) Vaccination rates through susceptible population parameters modulation.
While robust to moderate shifts, unprecedented disruptions remain challenging for **any data-driven approaches**. We can explore methods like change-point detection or causal intervention modeling.
> `Concern 2 (Claims And Evidence:)`: Limited explanation of neural ODEs advantages.
Neural ODEs offer key advantages for epidemic modeling: naturally incorporate epidemic principles within ODE framework, fusing mechanistic understanding with data-driven flexibility; operate in continuous time, handling irregular reporting without interpolation errors; ensure physical consistency while modeling complex patterns beyond traditional compartmental models. Ablation studies show 12.4% RMSE degradation when replacing neural ODE with standard GNN.
> `Concern 3 (Claims And Evidence)`: Lack of analysis for "detailed representations" claim
EARTH uses multi-dimensional feature vectors for finer granularity disease dynamics versus conventional single-value methods. Figure 4 shows this capability, revealing semantic relationships between regions with similar epidemic trajectories.
> `Concern 4 (Experimental Designs Or Analyses)`: Potential overfitting.
We use temporal cross-validation with training/validation/testing spanning different epidemic phases, ensuring generalization. Our model incorporates dropout in GNN layers and L2 regularization. Table 2 shows stability even with 40% missing data, demonstrating robustness.
> `Concern 5 (Experimental Designs Or Analyses)`: There may be different metrics and further sensitivity analyses.
We conducted additional experiments with CORR (Pearson's Correlation Coefficient) and sensitivity analysis. EARTH performs well across metrics and remains stable with different parameters:
|Method|Dropout Rate|Learning Rate|
|---|---|---|
|DCRNN|0.1: 0.82, 0.3: 0.83, 0.5: 0.81|1e-4: 0.81, 1e-3: 0.83, 1e-2: 0.80|
|ColaGNN|0.1: 0.86, 0.3: 0.87, 0.5: 0.84|1e-4: 0.85, 1e-3: 0.87, 1e-2: 0.84|
|EARTH|0.1: 0.91, 0.3: 0.92, 0.5: 0.90|1e-4: 0.90, 1e-3: 0.92, 1e-2: 0.89|
> `Suggestion 1`: Essential References Not Discussed
We will incorporate suggested references in our revised manuscript.
---
Rebuttal Comment 1.1:
Comment: Thank you. I appreciate your reply and my response is still accept. But for future work, I would try to work on some of the more abrupt changes, beyond what the COVID and dengue datasets contain. Mechanistic approaches allow for evaluating Black Swans so to speak in epidemic models, where shocks may drive outbreaks. I think trying to explore more these heterogenous transmission periods will be crucial in making these approaches tell us what we could not know. This is the part where I said the model: may struggle when underlying transmission mechanisms change abruptly, such as with novel introductions from travelers or zoonotic spillovers. If the model has not been trained on data that reflects such disruptions, it may fail to capture these shifts accurately.
---
Reply to Comment 1.1.1:
Comment: # Response to Reviewer Q6Uq
Dear Reviewer Q6Uq,
**Thank you for your thoughtful response and for supporting the acceptance of our work!**
We agree that addressing abrupt shifts in transmission, especially beyond typical datasets, is a key challenge for the overall community. Your point about mechanistic approaches for evaluating unexpected shocks is highly relevant, and we plan to explore this further in future work. At the same time, we believe our approach provides valuable insights for generalizing across diseases and settings.
Thanks again for your support and constructive feedback.
Best regards,
Authors | null | null | null | null | null | null |
Spurious Correlations in High Dimensional Regression: The Roles of Regularization, Simplicity Bias and Over-Parameterization | Accept (poster) | Summary: This submission investigates the extent to which spurious correlations are used for learning in two models, linear ridge regression and random feature models, under the setting that the input dimension grows proportionally with the sample size. The key definition is $\mathcal{C}(\hat{\theta})$ in (3), and for $\hat{\theta}$ with and without regularisation, its non-asymptotic concentration results are proven (Theorems 4.1, 4.2). Section 5 is dedicated to investigating the role of regularisation in the behaviour of the above quantity and the test loss, and Section 6 is dedicated to showing an asymptotic similarity of random feature models with regularised linear ridge regression with a specific regularisation parameter.
Claims And Evidence: The claims are supported by proofs in the Appendix and experiments. The experiments are well-aligned with the claims, and even though I couldn't go through all of the proofs, I couldn't find any errors. I had a question in one of the proofs, and it is written in the "Questions for Authors" section, and I would be grateful if the authors could answer it.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: I checked the proofs of Theorems in Section 4, and some in Section 5. I couldn't find any errors. The only problem was that some proofs were very hard to follow, as they refer to (Han and Xu, 2023) without citing any of their notations and results, and I had to spend a lot of time looking into that paper, because the results are in that paper are not how it is used in the proofs here. I think it would be better if the authors cited the results precisely, stating precisely what simplifications were made.
Some minor comments are listed in the "Other Comments or Suggestions" section.
Experimental Designs Or Analyses: The experiments look good.
Supplementary Material: The proofs, as discussed above.
Relation To Broader Scientific Literature: The related literature is discussed. This paper takes a slightly different aim at the problem of spurious correlations, in that the previous works seem to have focused on how to mitigate this problem, but this paper aims to characterise the extent of learning from spurious correlations and what role the regularisation parameter plays. Compared to what I imagine approaches like data augmentation does, which should not be model-specific, the results in this paper are specific for linear regression.
Essential References Not Discussed: Not anything that I know of!
Other Strengths And Weaknesses: The paper was a pleasure to read - very well written (except some minor comments listed in the next section). I loved the fact that after every result, there was a paragraph starting with "In words", explaining clearly the significance of the result.
Other Comments Or Suggestions: 102R: "is independently" -> "is independent" or "is chosen independently"
146L: "conditional to" -> "conditional on"
136R: "it can connected" -> "it can be connected"
242L: In (16), I don't think there is any need for bars, I think $S^\Sigma_x=\text{Cov}(y\mid x)=E_{y|x}[(y-E_{y|x}[y])(y-E_{y|x}[y])^\top]$ suffices. If you insist on using $\bar{x}$ to denote a particular value of the variable $x$, then I think it should also be $S^\Sigma_{\bar{x}}$ on the left-hand side.
In Section 3, in $f(\theta,z)$, the parameter $\theta$ comes before the input $z$, but in Section 6, this is reversed. Moreover, in Section 4, $f(\theta)$ is used without the input argument. I think it would be good to make this consistent, and even when the input argument is not explicitly present, write $f(\theta,\cdot)$.
625: $\epsilon$ should be $\mathcal{E}$.
188R: The projection operator $P_y$ is introduced here and again on 647. Perhaps redundant?
677: Full stop missing in (35).
683: In (36), in going from the second line to the third, $x^\top\theta^*_x$ turns into $[x^\top,\mathbf{0}^\top]^\top\theta^*$, but the second transpose shouldn't be there, it should be $[x^\top,\mathbf{0}^\top]\theta^*$.
688: "independent with" -> "independent from"
690: $\lVert\theta^*\rVert_2\leq1$ is not in Assumption 4.1, but on 166L. Also, you probably don't mean $\lVert\Sigma\rVert_\text{op}$-Lipschitz, but that you have $\lVert\mathcal{C}(\cdot)\rVert_\text{op}\leq\lVert\Sigma\rVert_\text{op}$? Slightly strange to talk about Lipschitz continuity of linear maps, as they are always Lipschitz continuous.
[Hastie et al., 2019] should be [Hastie et al., 2022].
Questions For Authors: 146L: Covariance being zero does not imply independence either. Perhaps better to replace "as the covariance between $y_i$ and $x_i$ is in general non-zero" to "as $y_i$ and $x_i$ are in general not independent"?
662: In (32), could you please cite which exact form of Weyl's inequality you used, and how? I tried to derive (32) from the basic form of Weyl's inequality on wikipedia but I couldn't immediately get there.
206R: This sentence sounds strange, as it says it will "estimate" the empirical value by the "true" value, although I think I see what the authors mean, as the quantity of interest is actually $\mathcal{C}(\hat{\theta}_\text{LR}(\lambda))$, the amount of spurious correlation learned by the trained model $\hat{\theta}$. Still, I don't think it's the right choice of words, especially because we do not have access to the true value $\mathcal{C}^\Sigma(\lambda)$.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for the remarkable care in reviewing our work and for the positive evaluation. We address concerns below.
---
**Lack of clarity when citing (Han&Xu2023)**:
To address this and ease the comparison of our claims with the results in Han&Xu2023, we will add to the appendix a discussion about the notation in the proof of Theorem 4.3. Specifically, we will connect our definition of the Gaussian sequence model $\hat\theta^\rho$ with the definition in Equation (1.5) in Han&Xu2023, and our definition for the test function $\varphi$ (see our line 693) with their notation ($\textup{g}$) in their Theorem 2.3.
---
**Other Comments**:
- 102R, 146L, 136R: Thanks for pointing these typos out, we will fix them in the revision.
- 242L: While, in general, the conditional covariance depends on the particular value of $\bar x$, the definition of the Schur complement of the matrix $\Sigma$ does not rely on any specific instance of the random variable $x$. In the multivariate Gaussian case, it turns out that the conditional covariance is also independent of the particular instance $\bar x$, but we opted for leaving this notation at first as we did not consider it a trivial fact. If the reviewer finds this confusing, we are happy to remove this notation in the revision.
- The arguments of $f(\theta, x)$ are sometimes swapped: Thanks for spotting this, we will fix it.
- 625, 677, 683, 688: Thanks for pointing out these typos, we will fix them in the revision.
- 647: Thanks for noticing this. While it is true that the notation of $P_y$ also appears in the body in line 188R, in line 647 we are providing the proof for a statement prior to that part, and we opted for redundancy to avoid confusion.
- 690: Thanks for spotting this typo. Also, when we write that $\mathcal C(\cdot)$ is $\|| \Sigma \|| _{op}$-Lipschitz, we mean that the Lipschitz constant of $\mathcal C(\cdot)$ is upper bounded by the value of $\|| \Sigma \|| _{op}$. If the reviewer finds this notation confusing, we can elaborate more on this statement in the revision of the work.
- Hastie&al.2022: thanks for noticing the typo, we will fix it in the revision.
---
**Questions for Authors**:
- 146L: We thank the reviewer for pointing this out. While the implication holds for Gaussian distributions, in 146L we still did not state Assumption 4.1 and the wording at this point could be misleading. We will fix it in the revision.
- 662: We thank the reviewer for the question. Let us consider the inequality taken from the wikipedia page on Weyl’s inequality:
$$ \lambda_{i+j-1}(A+B) \leq \lambda_{i}(A) + \lambda_{j}(B),$$
where $A$ and $B$ are two $d \times d$ symmetric matrices and $\lambda_{j}(\cdot)$ denotes the $j$-th largest eigenvalue. Then, one could set $A = n \Sigma - Z^\top Z$ and $B = Z^\top Z$. Taking $i = 1$ and $j = d$ gives our Equation (32).
- 206R: We thank the reviewer for pointing this out. Indeed the sentence might lead to confusing interpretations. We propose to rephrase it as: “Thus, for large $d, n$, we can theoretically analyze $\mathcal C(\hat \theta_{\textup{LR}}(\lambda))$ via the deterministic quantity $\mathcal C^\Sigma(\lambda)$, which, as highlighted by Equation (12), depends on $\theta^*$, the covariance of the data $\Sigma$, and the regularization $\lambda$ via the parameter $\tau(\lambda)$ introduced in (13).”
---
Rebuttal Comment 1.1:
Comment: Dear authors,
Thank you for your comments, I found them very adequate and thank you for promising to make the corrections I suggested. I maintain my (very) positive evaluation of this submission.
Best,
reviewer | Summary: This paper quantifies the notion of spurious correlations -- where the feature of an image which determines it's classification correlates with another feature which does not determine the label -- and sudies the effect of spurious correlations for linear ridge regression and random-feature ridge regression. Precisely, they define spurious correlations as the covariance between the spurious feature and the label when the informative feature is sampled independently. They first demonstrate that with enough training data and zero ridge, the spurious correlations learned by a linear regression model will become small. Increasing the ridge parameter, they observe that spurious correlations can improve test error when sampling in-distribution, and a proof is provided in the special case of isotropic covariance for the informative feature. They then study spurious correlations in an over-parameterized random feature model. By proving that the random feature model converges to a linear model with an increased ridge parameter, they show that spurious correlations are larger, but similar in their behavior, for random-feature models.
Claims And Evidence: The claims made in this paper regarding spurious correlations in linear regression are well-supported by theoretical proofs and numerical experiments. The claim I take issue with is the claim that over-parameterization increases spurious correlations. This is because the scaling limit used to study the random feature model here is very limited. The joint requirements that $p = \omega (n \log^4(n))$ and $\log(p) = \Theta (\log n)$ restrict the scaling to a very narrow regime where p grows only slightly faster than linearly with n. This can also be recovered effectively by taking the proportional limit $p, n \to \Infinity$ with $p/n = \gamma$ and then taking the limit $\gamma \to \infty$, This second limit will destroy the variance induced by the random projection to a set of random features, which might have interesting effects on the spurious correlations learned. In this limit, the higher-order Hermite coefficients of the nonlinearity (captured by $\tilde{\mu}$) have an identical effect as random i.i.d. noise applied to the features would have. This additional contribution doesn't really interact with the structure of the data in a meaningful way
Also, in other scaling limits, such as $p \sim n^q$ as studied by Lu et. al in (https://arxiv.org/abs/2403.08160), the learning curves no longer reduce to the linear case with a renormalized ridge parameter. To fully answer the question of how overparameterization affects spurious correlations, these faster scaling limits would need to be examined.
Methods And Evaluation Criteria: yes
Theoretical Claims: I did not check the proofs, but the results are reasonable and consistent with the existing literature.
Experimental Designs Or Analyses: N/A
Supplementary Material: I did not review the supplementary material.
Relation To Broader Scientific Literature: Spurious correlations are a universal problem in machine learning. Thus, this relates broadly to the scientific literature on machine learning. The technical contributions in this paper are also closely related to recent work on deterministic equivalents and random matrix methods for linear and random feature regression models.
Essential References Not Discussed: Random feature models beyond the proportional regime: https://arxiv.org/abs/2403.08160
These results also relate to work on linear regression with (possibly) noisy feature maps. See, for example,
https://arxiv.org/abs/2102.08127
Other Strengths And Weaknesses: Strengths: Rigor, clarity of exposition, addresses an important, universal problem in the
Weakness: The main weakness is the novelty of the results. All follow from known estimates of the in-distribution and out-of-distribution prediction error for the proportional limit of linear and random feature models, except for some additional pointwise concentration guarantees in a narrow scaling regime.
Other Comments Or Suggestions: none
Questions For Authors: Is it necessarily correct to assume that the informative feature and spurious feature live in orthogonal subspaces of the input space? How would your results change if instead of setting z = [x, y], you set z = x + y?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for recognizing the rigor, clarity and importance of our results. We address concerns below.
---
**Joint requirements $p = \omega(n)$ and $\log p = \Theta(\log n)$**:
We note that the regime $p = \omega(n)$ and $\log p = \Theta(\log n)$ formally includes all scalings where $p \sim n^q$ for q > 1, as the latter is equivalent to $\log p = q \log n$. Thus, our assumptions include the faster scalings mentioned in the **claims and evidence** section of the review.
In case the reviewer referred to a polynomial regime also between $d$ and $n$ (if $n = \Omega(d^l)$, with $l \geq 2$, the RF model learns more than the linear component of the target as indicated in Hu&al.2024 – the paper referenced by the reviewer), it is indeed true that the higher order component of the features would behave qualitatively differently, making the RF model qualitatively different from a regularized linear regression. We have opted to focus on the proportional regime $n = \Theta(d)$ due to its popularity in the literature (see e.g. Mei&Montanari2020, Hastie&al.2020) and due to its closeness to standard datasets in deep learning.
---
**Relation to work on linear regression with (possibly) noisy feature maps**:
We thank the reviewer for bringing to our attention the paper by Loureiro et al. ("Learning curves of generic features maps for realistic datasets with a teacher-student model"), which has indeed a similar setting to ours. Using the notation $z = [x, y]$ as in our paper, they consider a teacher-student setting where the labels are defined as a function of the feature $x$ (see their Eq. (1.2)), while the estimator $\hat \theta$ is obtained via ERM using only the (correlated) features $y$ (see their Eq. (1.3)). Then, their work is focused on studying the training and generalization error of the model that has access only to the partial information. Our setting, instead, looks at the ERM on both features, where the model has direct access also to the core features $x$. Due to the similarity with their setting, we will mention this related work and remark the differences in the revision of the paper.
---
**Setting where $z = x + y$ instead of $z = [x, y]$**:
We thank the reviewer for the insightful comment. Let us consider the model
$$z = x + y, g = x^\top \theta^* + \varepsilon.$$
Then, we have
$$\mathcal C = \text{Cov} \left( (\tilde x + y)^\top \hat \theta, x^\top \theta^* \right) = \hat \theta^\top \Sigma_{yx} \theta^*,$$
and this quantity could be studied via the analysis in Han&Xu2023 as in our current setting, considering that the covariance of the data will take the form
$$\Sigma_{zz} = \Sigma_{xx} + \Sigma_{yy} + \Sigma_{xy} + \Sigma_{yx}.$$
In a nutshell, we expect the analysis for this setting to provide a qualitative behaviour similar to that unveiled in the current version of our work. In fact, the experiments on Color-MNIST (which does not strictly follow the model $z = [x, y]$, as the color overlaps with the core feature pattern as in the model $z = x + y$) suggest that our conclusions hold beyond the setting of orthogonal features. We remark that in the setting $z = [x, y]$ the optimal solution $\hat \theta = \theta^*$ gives $\mathcal C = 0$, while this is not necessarily the case in the setting $z = x + y$.
We will add a discussion on this point in the revision.
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing my questions. I will maintain my (already high) score of 4. | Summary: This paper investigates spurious correlations in high-dimensional regression, focusing on the effects of regularization, simplicity bias, and over-parameterization. Using linear regression, the study quantifies how regularization influences the reliance on spurious correlations, revealing a trade-off where increasing regularization reduces test loss but strengthens spurious dependencies. It also demonstrates that models exhibit simplicity bias, favoring spurious features with dominant eigenvalues in their covariance structure, as these features offer an easier shortcut for prediction. The analysis introduces a formal measure of spurious correlations and links it to data covariance properties, particularly through the Schur complement, which captures the statistical dependence between core and spurious features.
To examine over-parameterization, the paper extends its analysis to random feature regression, showing that such models behave like regularized linear regression, even in the absence of explicit regularization. This result explains why spurious correlations persist in over-parameterized models, as the implicit regularization effect does not eliminate them. Theoretical results are complemented by numerical experiments on Gaussian synthetic data, Color-MNIST, and CIFAR-10, validating the key claims. The findings provide a rigorous statistical foundation for understanding spurious correlations and their interaction with model complexity, offering insights that can inform mitigation strategies for improving robustness and fairness in machine learning.
## Update after rebuttal
Thank you for the detailed rebuttal and thoughtful clarifications. I acknowledge the authors' responses and appreciate the effort in addressing the distinctions with related work, the discussion of applicability to deep networks, and the consideration of potential extensions. After reviewing the rebuttal, I will keep my original score.
Claims And Evidence: The claims in the submission are largely supported by rigorous theoretical analysis and numerical experiments, making the evidence clear and convincing in most cases. The authors derive precise mathematical characterizations of spurious correlations, leveraging results from high-dimensional statistics, regularized linear regression, and random feature models. Their theoretical findings, such as the trade-off between regularization and spurious correlations and the equivalence between over-parameterized models and regularized regression, are well-grounded in established techniques. Additionally, the numerical experiments on Gaussian synthetic data, Color-MNIST, and CIFAR-10 align with the theoretical results, further strengthening their validity.
Methods And Evaluation Criteria: Yes, they are reasonable. The paper employs linear regression and random feature models, which are well-suited for studying spurious correlations in high-dimensional settings. The evaluation is based on both theoretical analysis and numerical experiments, using Gaussian synthetic data, Color-MNIST, and CIFAR-10, which are appropriate for validating the claims. While the analysis focuses on simplified models, the chosen methods effectively capture the core statistical phenomena under investigation.
Theoretical Claims: Skimmed through it; they look sound at a high level. The proofs follow standard techniques in high-dimensional statistics, leveraging tools like Schur complements, concentration inequalities, and random matrix theory. Key results, such as the trade-off between regularization and spurious correlations and the equivalence between random feature models and regularized regression, appear well-structured and logically derived. A more detailed verification would be needed to confirm full correctness, but no obvious issues stand out.
Experimental Designs Or Analyses: They look sound at a high level. The experiments on Gaussian synthetic data, Color-MNIST, and CIFAR-10 align well with the theoretical claims, providing empirical validation for key results. The analyses appear thorough, with appropriate comparisons and visualizations. While the study focuses on relatively simple models, the chosen datasets and methodologies effectively illustrate the impact of regularization, simplicity bias, and over-parameterization on spurious correlations.
Supplementary Material: No, I didn't check the supplementary material.
Relation To Broader Scientific Literature: The key contribution is quantifying the amount of spurious correlations learned in high-dimensional regression with respect to regularization, simplicity bias, and over-parameterization. This builds on prior work in machine learning robustness, generalization in over-parameterized models, and implicit bias in deep learning, extending these ideas with a rigorous statistical characterization. The study connects to research on shortcut learning, generalization in empirical risk minimization (ERM), and random feature models, providing a more precise understanding of how spurious correlations emerge and persist. By linking these phenomena to covariance structures and regularization effects, the paper contributes valuable insights to ongoing discussions on fairness, bias mitigation, and model interpretability in modern machine learning.
Essential References Not Discussed: Yes, the following work is not cited and discussed while it also studies a similar setting. Furthermore, the paper significantly relates to other papers of Bombari et al. Even many of the proofs (e.g., see Lemma C.3-C.5) rely on the mentioned papers. There should be a more apparent discussion of how the current work is distinguished from the mentioned work.
> Bombari et al., 2024: "How Spurious Features are Memorized: Precise Analysis for Random and NTK Features"
—This work also investigates the role of spurious correlations in over-parameterized models, particularly focusing on random features and Neural Tangent Kernel (NTK) models. Given the conceptual overlap and shared proof techniques, the current paper should clarify how its contributions extend or differ from Bombari et al.'s findings.
Other Strengths And Weaknesses: Strengths
1. The paper is well-written and easy to follow, presenting complex statistical concepts in a clear and structured manner.
2. By characterizing a deterministic object $\( \mathcal{C}^\Sigma(\lambda) \)$ to quantify spurious correlations, the paper provides a rigorous analysis of how regularization strength $\( \lambda \)$, data covariance $\( \Sigma \)$, and over-parameterization influence the learning of spurious features.
Weaknesses
1. The analysis primarily focuses on linear regression and random feature regression, making the setting simplistic and potentially limited in capturing the behavior of more complex models like deep neural networks.
2. The paper lacks a detailed comparison with prior work by Bombari et al., making it difficult to fully assess its technical contributions and novel challenges addressed. A clearer distinction from existing literature would strengthen the paper’s positioning.
Other Comments Or Suggestions: -
Questions For Authors: - Is it possible to extend the current findings to settings with feature learning (e.g., two-layer neural networks with both layers trained) rather than using fixed random features? Would the implicit bias of gradient-based optimization affect the spurious correlation analysis?
- Can the analysis be extended beyond the given data assumptions? Many real-world datasets exhibit heavy-tailed or structured dependencies—how would this impact the theoretical guarantees?
- How does this work differ fundamentally from Bombari et al. (2024)? Several proofs (e.g., Lemmas C.3–C.5) rely on techniques from Bombari et al., but a direct comparison is missing. Could you clarify the key distinctions and novel contributions?
- Does the identified trade-off between regularization and spurious correlations hold across different training objectives?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive comments and the several interesting suggestions for extensions. We answer questions and address concerns below. We will incorporate the discussions in the revision.
---
**Comparison with (Bombari et al., 2024):**
Our work concerns the problem of spurious correlations, where a trained model is tested on a **newly sampled** data-point independent from training data, and we crucially use the independence between $\tilde x$, $x$, $y$ and $\hat \theta$ (see Proposition 4.2 and Theorem 4.3). In contrast, **(Bombari et al., 2024) do not consider the problem of spurious correlations**, but rather the setting where spurious features in the training set are memorized by an over-parameterized model. This is discussed in the first paragraph of their introduction, and it is quantitatively apparent in their definition of memorization in Equation (3.7), where the covariance is computed comparing the trained model evaluated on a spurious feature **contained in the training set** $y_i$ and the corresponding label $g_i$.
In other words, the setting of our work is related to **robustness to distribution shift**, while (Bombari et al., 2024) focus on a setting where the individual training data are **memorized**, raising potential **privacy concerns**. Thus, the two works look at qualitatively very different problems.
This difference is reflected in the proof strategies. While our work shares with (Bombari et al., 2024) an approach based on concentration of measure (and, consequently, also technical lemmas), the proof techniques are fundamentally different. Our work relies on the characterization of the ridge estimator $\hat \theta$ provided by Han&Xu2023 for linear regression, and it transfers the insights to random features via a point-wise equivalence principle. In contrast, the argument of (Bombari et al., 2024) is based on showing concentration of the auxiliary quantity $\mathcal F(z_i^s, z_i)$ that serves as a proxy to characterize the amount of memorization for an individual sample.
---
**Setting might not capture the behaviour of deep neural networks**:
While it is true that our analysis covers only high-dimensional regression, Figure 5 (left) shows a degree of similarity for shallow networks. For more complex and deep models, we point to the empirical results in Sagawa&al.2020, where higher penalty terms are shown to decrease test accuracy on ResNet50 and Bert. While it is hard to provide an exact predictive theory for deep models, we do believe our approach to capture important statistical aspects of the phenomenon of spurious correlations in more general settings than the one we precisely study.
---
**Extension to feature learning**:
Following Ba&al.2022 and Moniri&al.2023, one could extend our results to the setting where the target is not a linear function of the inputs and one step of gradient descent on the feature map improves the representation. We also note that our experiments with neural networks show concordance in the qualitative behavior of high-dimensional regression and 2-layer networks with both layers being trained.
---
**Heavy tailed data**:
The recent work by Adomaityte&al.2024 (“High-dimensional robust regression under heavy-tailed data: asymptotics and universality”) considers heavy tailed data in high-dimensional regression: the covariates are isotropic Gaussian with variance sampled from a distribution with heavy tails. Note that our problem setup requires a non-isotropic covariance, so one would have to first generalize their analysis accordingly. Then, a possible direction would be to investigate how different tail weights (between core and spurious features) favor learning of spurious correlations.
---
**Different training objectives**:
Empirically, the identified trade-off between regularization and spurious correlations has been verified in prior work (Sagawa&al.2020) looking at models trained on classification tasks.
Theoretically, work by Montanari&al.2023 (“The generalization error of max-margin linear classifiers”) and Deng&al.2020 (“A model of double descent for high-dimensional binary linear classification”) provides the asymptotics for the generalization error of max-margin linear classifiers, also in the setting where classification is performed on a set of random features. For classification, we could define $\mathcal C$ as in our Equation (3), taking the $\text{sign}$ of the output of the model (which for classification represents the prediction of the model at test time).
Equation (5.6) in Montanari&al.2023 provides a set of fixed point equations giving the limit deterministic value of the maximum margin and the prediction error (see their Theorem 3), also for non-isotropic covariates. Then, to extend our results to this setting, one approach could be to follow the strategy as in part c) of the proof technique in their Section 5.3, with the difference of computing $\mathcal C$ instead of the generalization error. | Summary: The paper characterizes the learning of spurious features in linear regression as function of $\ell_2$ regularization strength and spurious feature simplicity. They also show that under overparametrization incurred by random features the effect of regularization is modified in a way that explains empirical results obtained from neural networks. The paper tests these hypotheses on synthetic and semi-synthetic datasets.
## Update after rebuttal
I thank the authors for their response and maintain my recommendation for acceptance.
Claims And Evidence: The authors support their claim of characterizing learning in linear regression under spurious correlations theoretically in a convincing manner, with some additional empirical support.
Methods And Evaluation Criteria: This is the weakest part of the paper, since the paper does not evaluate its predictions on any real bona fide regression task, but rather repurposes two classification tasks for this.
Theoretical Claims: I have checked the proofs of Proposition 4.2 and Theorem 4.3, and did not see any problems.
Experimental Designs Or Analyses: In addition to not using a original high-dimensional regression task, the authors' use of Colored MNIST and CIFAR-10 tasks diverge from the way they are commonly used, without explicit justification (and previous uses are not cited, . For binary Colored MNIST dataset the authors only work with a subset of the dataset, and experiment only with a single value of correlation between core and spurious features. They also create a spurious correlation dataset out of CIFAR-10, without referencing or reusing a very common variant of CIFAR-10 that's frequently used in the literature called Corrupted CIFAR-10.
Supplementary Material: I checked Appendix B for proofs and Appendix E for details on the datasets.
Relation To Broader Scientific Literature: The paper is positioned appropriately within the relevant literature, and the paper's motivations are clearly presented. However, I would appreciate a more involved discussion of their results in relation the existing results in the literature, especially the implications of their work regarding feature learning order and interference between features based on difficulty and spurious correlation strength (c.f. Pezeshki et al. 2021, Qiu et al. 2024)
Essential References Not Discussed: I am not aware of any major, relevant papers that the current paper fails to cite.
Other Strengths And Weaknesses: The paper sets out a clear motivation and proceeds to systematically demonstrate its claims, supported by the fact that the paper is well-written, making the authors' arguments easier to follow.
Other Comments Or Suggestions: - Given the fact that "test loss" can often refer to loss under an unbiased test distribution (i.e. OOD risk) when studying spurious correlations, the authors should take care to remind the reader that their test loss is ID. The difference between the two can be reinforced by assigning a designated notation for OOD test loss.
- Overloading of $\lambda$ for eigenvalues and regularization coefficient is somewhat confusing, please change the notation for one if possible.
- Use of $y$ to denote spurious features and $g$ to denote labels creates an unnecessary cognitive load since it directly contradicts common usage in the previous work. I would recommend using $s$ and $y$ respectively, but ultimately it's in authors' discretion.
- 034L: "Gaussian dataset" -> "synthetic Gaussian dataset"?
- Theorem 4.3: Please alert the reader _beforehand_ that they are not supposed to have an intuition re. $\mathcal{C}^{\Sigma}(\lambda)$, and that this property will be studied in the following section. Otherwise Theorem 4.3 is needlessly confusing to digest in the first read.
- Please explicitly mention and discuss the implications of the fact that Proposition 4.2 and Theorem 4.3 have different assumptions regarding the data and sample dimensionality relationship.
Questions For Authors: Not a question per se, but to summarize my points above: Improving experimentation (see above) and a more in-depth discussion of the results in light of literature would improve the paper the most.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive evaluation and helpful comments. We address concerns below.
---
**Improving experimentation**:
Following the reviewer’s suggestion, we will add the following experiments to the revision.
In https://ibb.co/21W8qbLJ, we consider Color-MNIST including all digits (rather than a subset). We train a 2-layer network on all classes with one-hot encoding and MSE loss. Odd (even) digits are red (blue) with probability $(1+\alpha)/2$, and blue (red) with probability
$(1-\alpha)/2$. To compute $\mathcal C$, at test time we consider the parity of the logit with the highest value, with respect to the color of the image. The two figures correspond to two values of $\alpha$-s and follow a similar profile as Figure 5 (left), showing the same qualitative behaviour of $\mathcal L$ and $\mathcal C$ with respect to $\lambda$ for the full Color-MNIST dataset (i.e., in the multi-class setting).
In https://ibb.co/zWKBHk5k, we repeat the experiment in Figure 2 (right) for multiple values of $\alpha$, reporting $\mathcal L$ and $\mathcal C$ with respect to $\lambda$ for linear regression. The curves behave as expected: for any value of $\lambda$, as $\alpha$ decreases, $\mathcal C$ decreases. Furthermore, the (in-distribution) test loss decreases as $\alpha$ increases, in agreement with our discussion at lines 347-350 (left).
We note that the choice of considering a data split in predictive and spurious features is conceptually similar to Section 5.1 of “Invariant risk minimization” by Arjovsky et al. As for our CIFAR-10 implementation, our experiment is designed to verify our claims on the simplicity bias in a controllable setting. In fact, introducing a tunable amount of noise allows us to modify $\lambda_{\max}(\Sigma_{yy})$ and use it as an independent variable in Figure 4. Nevertheless, considering image backgrounds as spurious features was done in the seminal cited work by Xiao&al.2020. Besides, the theoretical results suggesting that higher values of the regularizer $\lambda$ can be associated with higher values of $\mathcal C$ are also supported by the numerical evidence in Table 1 of Sagawa2020a.
We thank the reviewer for mentioning the Corrupted CIFAR-10 dataset considered e.g. in “Avoiding spurious correlations via logit correction” by Liu et al. In https://ibb.co/TBg7Khwn, we train a 2-layer network and a random feature model on the classes ‘trucks’ and 'boats’, enforcing a correlation in the training set with respectively the textures ‘brightness’ and ‘glass_blur’, as in the available data for C-CIFAR-10 (here we use correlation $\alpha = 0.95$). In both figures, we see a mild increase in $\mathcal C$ as $\lambda$ initially increases, until the later decrease predicted by Proposition 5.1. The profiles are also qualitatively similar to the ones of Figure 5 (left).
---
**More discussion in light of literature**:
The main difference with (Qiu et al., 2024; Pezeshki et al., 2011) is that these works mainly focus on how spurious correlations evolve during training, while our work studies spurious correlations at convergence.
To further connect with (Qiu et al., 2024; Pezeshki et al., 2011), we briefly discuss the intuition coming from our results from a dynamical perspective. In Section 5, we argue that $\lambda_{\max}(\Sigma_{yy})$ is related to $\mathcal C$, suggesting a measure of the simplicity of the feature $y$. In linear regression, solving gradient flow ($d \theta = - \nabla_\theta \mathcal L(\theta)dt$) gives
$$\theta(t) = (1 - e^{ - (X^\top X + n \lambda I ) t}) \hat \theta.$$
Thus, the components of $\hat \theta$ aligned with the top eigenspaces of $X^\top X$ converge earlier than the others. Hence, if $X^\top X \sim n \Sigma$, it is natural to expect that spurious features are learned faster the easier they are, and that they would prevail with respect to the core features (according to our bound in Proposition 5.1).
---
**Other comments/suggestions**:
We thank the reviewer for the detailed comments, which will improve the clarity of the revision:
- We will explicitly note that the test loss is in distribution after Equation (2).
- We will clarify the usage of the notation $\lambda$, as well as of $y$ to denote spurious features and $g$ to denote labels.
- We will fix the typo in line 34L.
- We will add the suggested remark before Theorem 4.3.
- We will discuss the difference between the two different scaling regimes considered in Theorem 4.3 and Proposition 4.2. | null | null | null | null | null | null |
Graph Transformers Get the GIST: Graph Invariant Structural Trait for Refined Graph Encoding | Reject | Summary: This paper proposes a graph structural encoding method named Graph Invariant Structural Trait (GIST), aiming to improve Graph Transformers' ability to encode structural information. GIST captures structural features based on the intersection cardinality of pairwise nodes' k-hop neighborhoods. Empirical evaluations on multiple standard benchmarks demonstrate that integrating GIST into Graph Transformers enhances performance, surpassing several state-of-the-art models.
Claims And Evidence: The claims made in the paper, particularly regarding the effectiveness of GIST in capturing complex substructures and long-range dependencies, are generally supported by experimental evidence. However, it seems that there is one paper (HDSE) that does the same thing [1]. I think this paper should at least be discussed in the paper.
[1] Enhancing Graph Transformers with Hierarchical Distance Structural Encoding.
Methods And Evaluation Criteria: The chosen methods and benchmark datasets are appropriate for assessing graph classification capabilities.
Theoretical Claims: I examined the theoretical claims briefly.
Experimental Designs Or Analyses: The experimental design of this paper is sound overall. The authors evaluate their proposed method across standard and relevant benchmarks.
Supplementary Material: No code provided.
Relation To Broader Scientific Literature: The paper’s key contributions closely relate to GRIT, Subgraphormer, and HDSE.
Essential References Not Discussed: [1] Enhancing Graph Transformers with Hierarchical Distance Structural Encoding.
Other Strengths And Weaknesses: Strengths:
1. Clear exposition of the proposed model.
2. Extensive experiments and rigorous benchmarking.
Weaknesses (updated after rebuttal):
1. The current set of datasets is still limited; including additional LRGB and OGB datasets would enhance the comprehensiveness of the evaluation.
2. The experimental results would benefit from clearer visual examples.
3. The MoleculeNet benchmark setup omits critical details on reported baseline results like GRIT; without supplementary material or code, their validity cannot be verified.
Other Comments Or Suggestions: See weaknesses.
Questions For Authors: See weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: ### **[W1 - The theoretical analysis lacks clarity in terms of its practical implications for model improvements]**
In Section 4, we present a theoretical analysis showing that GIST is invariant under graph isomorphism. This means GIST consistently captures the true structure of a graph, regardless of how its nodes are ordered. As a result, when used with a graph transformer, GIST prevents node ordering from influencing the model’s behavior.
While the vanilla attention mechanism without positional encoding is also insensitive to node order, **incorporating traditional sequential position encodings reintroduces order sensitivity** as clearly stated in Graphormer by Ying et al., NeuRIPS 2021. This highlights the need for an attention bias like GIST that both encodes structural information and preserves invariance to node permutations.
### **[W2 - Datasets are not large-enough]**
First, we would like to note our work **mainly focus on graph classification task**, and highlight a **potential discrepancy in the definition of a "large-scale graph classification dataset."** According to multiple recent works on graph transformers, the **ZINC-full dataset (250K graphs)** is widely regarded as a "large-scale" benchmark for graph classification and is one of the most commonly used datasets in this context. Here is supporting evidence:
- **GRIT by Ma et al., ICML 2023**, a widely recognized SOTA graph transformer, states: *"We also conduct experiments on the larger datasets ZINC-full graphs (~250,000 graphs),"*
- **FragNet by Wollschlager et al., ICML 2024**, a recent SOTA GNN-based method, explicitly refers to ZINC as a *"large-scale molecular benchmark."*
- The **HDSE baseline by Luo et al., NeuRIPS 2024**, which was **suggested by the reviewer**, claims to apply *"HDSE to large-scale graphs."* However, their **largest graph-level task** is the Peptides dataset (~15K graphs), and they do not include ZINC-full in their experiments.
We also surveyed recent graph transformer and graph-level works from **ICML 2024**, and to the best of our knowledge, **none of them included experiments on graph classification datasets larger than ZINC-full**. Given this context, we believe our choice of dataset is consistent with standard practices in the field.
Nevertheless, we here provide additional experiment on PCQM4Mv2, **one of the largest-scale graph regression benchmark of 3.7M graphs**, from widely adopted OGB challenge in Table 1. Due to time constraints and the large size of **PCQM4Mv2**, GIST's training has not yet fully converged. Nevertheless, the current results already surpass many baselines, and we anticipate even stronger performance as training continues over the next few days. We also provide experiment results one two node classification datasets to showcase the applicability of our proposed method to different graph tasks.
**Table 1**: Performance of GIST on Cluster, Pattern, and PCQM4Mv2 datasets.
| Datasets | Cluster | Pattern | PCQM4Mv2 |
|-|:-:|:-:|:-:|
| GIST | **0.7906** | **0.8693** | **0.089** |
| GPS | 0.7802 | 0.8668 | 0.094 |
| SAN | 0.7669 | 0.8658 | - |
| GatedGCN | 0.7384 | 0.8557 | - |
### **[W3 - Essential discussion of related work, HDSE]**
We thank the reviewer for their meticulous review of our work and agree that HDSE follows a similar trajectory in capturing substructures within graphs. However, we would like to highlight that HDSE's structural bias, **Graph Hierarchy Distance**, differs from our proposed bias, **Graph Invariant Structural Trait (GIST)**. While both approaches enhance graph transformers, **GIST consistently outperforms HDSE across various datasets (Table 2)**. We attribute this to GIST’s ability to capture **higher-order structural relationships** through the **information exchange point**, as discussed in **Observation 2 of the Motivation section**. We will make sure to incorporate this discussion into the final version of our paper.
**Table 2**: Performance comparison of GIST vs. HDSE
| Methods | ZINC | Peptides-struct | Peptides-func |
|-|:-:|:-:|:-:|
| GIST | **0.055** | **0.2442** | 0.6783 |
| HDSE | 0.059 | 0.2457 | **0.7156** |
### **[W4 - Visualization for experimental results]**
Our tables and **color-coded rankings (top-1,2,3)** follow **standard practices in the field—including the HDSE baseline suggested by the reviewer**.
Since the suggestion *"The experimental results could be supplemented with visual examples"* is broad, we are unsure what specific visualization is desired. If the reviewer means bar charts, we are happy to provide them—otherwise, please clarify, and we will accommodate accordingly.
We hope our rebuttal clarifies concerns on dataset scale, related work, and results. We’ve added PCQM4Mv2 experiments to demonstrate scalability and are open to further experiments or visualizations as needed. Given these improvements, may we kindly ask the reviewer to reconsider our contributions and rating?
---
Rebuttal Comment 1.1:
Comment: Thanks for your careful response. Some key questions still remain for me:
**(1) Datasets are not large-enough**
Thank you for the additional experiments. I did not mean to suggest using the OGB PCQM4Mv2 dataset, as it is indeed too large and likely infeasible given time constraints. Rather, since you have already included *Peptides-struct* and *Peptides-func*, I was wondering why other datasets from the LRGB benchmark—such as *PascalVOC-SP*, *COCO-SP*, and *PCQM-Contact*—were not considered. Some datasets from OGB could also be relevant. Including a broader range of datasets would strengthen the empirical evaluation and better demonstrate the generality of your method.
**(2) Essential discussion of related work**
The paper claims to compare against *state-of-the-art baselines* for graph classification. However, in the current version, GRIT (a 2023 method) is mentioned as such. There are several 2024 models that achieve strong performance on benchmark datasets and should at least be acknowledged when discussing the state-of-the-art. For example:
- [1] reports 0.012 MAE on ZINC-Full,
- [2] reports 0.014 MAE on ZINC-Full,
- [3] reports 0.046 MAE on ZINC, and
- [4] reports 0.7311 AP on Peptides-func.
I don’t mean to suggest that your method needs to outperform these baselines, but rather that referencing some recent strong methods would make the claim of state-of-the-art comparison more balanced and complete.
[1] An end-to-end attention-based approach for learning on graphs, arXiv, Feb 2024
[2] Topology-Informed Graph Transformer, arXiv, Feb 2024
[3] Graph Attention with Random Rewiring, arXiv, Jul 2024
[4] Spatio-Spectral Graph Neural Networks, NeurIPS 2024
**(3) Visualization for experimental results**
Apologies if my previous comment was unclear. Since GIST can be naturally integrated into graph transformers, I suggest providing visualizations that illustrate how attention patterns change after incorporating GIST. This could help readers better understand the mechanism and benefit of the integration. (In the reply, the provided visualization was difficult to interpret—please clarify how these visualizations were generated and explain the specific role and significance of GIST in them)
**(4) Concerns Regarding MoleculeNet Benchmark Results**
Upon reviewing Section 5.4 in light of the authors’ rebuttal regarding graph-level tasks, I noticed that the experimental setup for the MoleculeNet benchmark lacks sufficient detail. Specifically, the source of the reported baseline results—GRIT on datasets such as BBBP, Tox21, ToxCast, SIDER, ClinTox, BACE, MUV, and HIV—is unclear. The original GRIT paper does not report results on these datasets, and *since no supplementary material or code was provided*, it is difficult to verify how these results were obtained. (April 7 edit)
Overall, I find this work promising, and with a more thorough analysis of GIST and a clearer positioning in relation to recent literature, I believe it can be significantly strengthened.
---
Reply to Comment 1.1.1:
Comment: ### **`1. "I was wondering why other datasets from the LRGB benchmark—such as...—were not considered."`Because they are not graph classification datasets, nor are they considered very large-scale.**
We thank the reviewer for the clarification. We did not include coverage for the 3 LRGB datasets the reviewer mentioned because:
- **They are not graph classification ones** — the domain focus of our work, which we have already featured all graph classification datasets within LRGB.
- **Nor are they considered very large-scale** — see below.
**T1: Dataset Stats**
|Dataset|# of Graphs|
|-|-|
|PascalVOC-SP|11,355|
|COCO-SP|123,286|
|ZINC-full|249,456|
|PCQM4Mv2|3,746,619|
Thus, when we initially saw the reviewer comment:
> W2 ***"The study lacks experiments on large-scale datasets."***
**we naturally turned to one of the largest graph-level datasets, OGB PCQM4Mv2, to meet that concern**. Especially since we already included **ZINC-full**, a large-scale benchmark used in established works like GRIT and FragNet.
---
That said, **we appreciate the opportunity to further demonstrate GIST's generalizability beyond graph classification**. Below, we present **results on PascalVOC-SP (as requested)**, along with two additional **node-level datasets** to cover a broader range of tasks.
**Table 2: GIST on Node-Level Tasks**
|Dataset|Cluster|Pattern|PascalVOC-SP|
|-|-|-|-|
|GIST|**0.7906**|**0.8693**|**0.3789**|
|GPS|0.7802|0.8668|0.3748|
|SAN|0.7669|0.8658|0.3230|
|GatedGCN|0.7384|0.8557|0.2873|
We hope these results further highlight the versatility and effectiveness of GIST.
---
Moreover, since we already invested significant effort and compute into **PCQM4Mv2**, we may as well present our final results on this:
**Table 3: GIST on PCQM4Mv2**
|Dataset|PCQM4Mv2|
|-|-|
|GIST|**0.0844**|
|GPS|0.0852|
|GRIT|0.0859|
We also refer the reviewer to our responses to **W7&8 from reviewer `nG8X`** for a broader discussion on generalization. In total, we have evaluated our method on **12 datasets in the main paper** and **4 additional datasets during the rebuttal** covering tasks spanning graph-level, node-level, long range, and large-scale scenarios. **We believe it is fair to argue results on such 16 datasets present a thorough and well-rounded evaluation of GIST and is well beyond the coverage standard done by most related work (and acknowledge by all other reviewers)**. We sincerely hope t reviewer will see our virtue in this regard, too.
---
### **`2. "The paper claims to compare against state-of-the-art baselines for graph classification. However, in the current version, GRIT (a 2023 method) is mentioned as such."` Sorry for the nitpick, but we never claimed (or at least not meant to claim) GRIT as the sole SOTA baseline. But we will certainly add more discussion about such works.**
Around `L298 - L315`, our writing reads:
> *We benchmark the performance of our method against recent state-of-the-art baselines across multiple categories, including Graph Transformers, Graph Neural Networks (GNNs), hybrid models combining ... as well as pretrained graph models: <works listed>*
**Where we have then listed around multiple works as SOTA baselines.** Though we did mention GRIT as SOTA at a few times, we never meant it is the only SOTA baseline. On the broader scale, while we indeed did not feature [1-4], we argue we featured works with similar recency (e.g., Subgraphormer & FragNet), making up a fair representation of *SOTA baselines for graph classifications.* And we hope being an understanding reviewer as you are, you would see our perspective, especially given [1, 2] have no cited opensourced implementations and [3] is just accepted at ICLR.
**That said, we again appreciate the opportunity to discuss more work as non-baseline but related work, and we find many suggestions from the reviewer particualrily good.** We plan to add the following discussion (with much more details) in the updated version:
> SE2GNN [4] and TIGT [2] follow a similar trajectory as Subgraphormer by enhancing GNNs with substructure awareness. SE2GNN tackles the long-range aggregation problem using global spectral filters. The key distinction between these methods and GIST lies in the backbone architecture—GNNs vs. transformers. GRASS [3] extends GRIT by incorporating random rewiring, while ESA [4] proposes a new graph transformer architecture. Both GRASS and ESA are orthogonal to our method, where interesting combinations shall be explored.
We hope the reviewer would find it helpful.
---
### **`3. "I suggest providing visualizations that illustrate how attention patterns change after incorporating GIST."` Sure!**
We don't have much char left so pls allow us to be brief and direct: Thank you for clarifying, it is a great suggestion and we are impressed by similar vis done in HDSE. We follow a similar style and present our vis here: https://anonymous.4open.science/r/GIST_Visualization-B756/README.md.
Thanks again! | Summary: In this paper, authors propose a new Graph Invariant Structural Trait (GIST) for higher-order structural relationship modeling within graphs and utilize randomized hashing to accelerate the corresponding calculation. The usage of GIST in the graph transformer has proven to be effective through experiments on several datasets.
Claims And Evidence: Yes. Most of the claims are reasonable and clear.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Authors should provide more detailed calculation process for the complexity claimed in Sec.3.3.
Experimental Designs Or Analyses: Yes. Authors conduct extensive experiments on different datasets and provide corresponding ablation study and analysis. However, since the calculation of GIST seems to bring considerable amount of computations, authors should also include the efficiency metrics in the performance comparison.
Supplementary Material: Yes. Theorem proof.
Relation To Broader Scientific Literature: Compared to previous methods, GIST explicitly models the higher-order structural information within the graph by estimating k-hop pairwise node intersections, which is helpful for accurately capturing the various substructures.
Essential References Not Discussed: No, according to my knowledge, related works have been well cited.
Other Strengths And Weaknesses: Strengths:
1. The design of GIST is reasonable, which can capture the inherent structures within the graph.
2. Authors provide detailed experiments to evaluate their design.
Weaknesses:
1. Whether GIST has practical value remains questionable since it seems to introduce extra calculation overhead. Authors can provide the execution time comparison to demonstrate its efficiency.
2. The writing of the paper can be improved; unclear descriptions should be avoided. E.g., on Page.3, “each node has d_n associated node features”, it’s ambiguous whether the number of features or the dimension of features is d_n. Besides, Fig.1 is very hard to understand without enough explanation.
Other Comments Or Suggestions: No.
Questions For Authors: No.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: ### **[W1 - Exra overhead cost of GIST computation]: Sure. Here are an analysis of GIST computation overhead cost and training efficiency of our proposed method.**
We kindly direct them to our new analysis on the one-time pre-computation overhead of GIST features and the overall training efficiency of GIST in our response to W2 of reviewer `L3nR`. We would like to emphasize that **GIST computation incurs only a one-time cost at the beginning**, and this overhead is minimal. We also provide experimental results of GIST on **PCQM4Mv2 (~3.7M graphs), one of the largest graph-level challenges in the community**, to further reinforce its scalability and efficiency in our response to W2 of reviewer `1uvt`.
### **[W2 - Typo. Figure 1 could be difficult to interpret.]: May we kindly ask which part of Figure 1 is confusing the reviewer? In the mean time, we elaborate Figure 1 again here.**
We thank the reviewer for their careful reading of our paper. We will clarify any ambiguous technical terms in the final version. Regarding the specific points raised:
1. The statement **"Each node has d_n associated node features"** is intended to mean **"Each node has d_n-dimensional associated node features,"** which we specify mathematically as **"$x_v \in \mathbb{R}^{d_n}$"**. We will make this and related definitions clearer in the final version.
2. Since we have provided a **13-line explanation** in the caption of **Figure 1** and further details in the **Motivation section (Observation 1)**, may we kindly ask the reviewer to specify which part is unclear so that we can provide a more precise clarification?
3. While reiterating the purpose of **Figure 1** poses a risk of redundancy, we are happy to elaborate again here: Using the same graph, subfigure **1a** illustrates **a part of the 4-hop GIST features** for two nodes *$(u, v_1)$* that belong to the **same substructure**. In contrast, subfigure **1b** depicts **a part of the 4-hop GIST features** for two nodes *$(u, v_2)$* that belong to **different substructures**. The GIST feature is a tensor where each cell **$(k_u, k_v)$** encodes the number of nodes in the neighborhood that are exactly **$k_u$** hops from node **$u$** and **$k_v$** hops from node **$v$**. This comparison highlights how GIST features enable the transformer to **distinguish nodes belonging to different substructures** within the same graph, guiding the attention mechanism to differentiate node pairs more effectively. Please refer to our *Observation 1 in Motivation section* for a more detailed discussion. We hope this clarifies the reviewer’s concerns regarding **Figure 1**.
### **[Sec 3.3 clarification]**
We further elaborate on the **theoretical complexity** of **exact GIST feature computation**, which we addressed in **Section 3.3** by introducing an efficient **estimation algorithm** using MinHash and HyperLogLog. For empirical validation of the algorithm's efficiency, please refer to our response to W2 of reviewer `L3nR` for detailed results.
A **naive approach** to compute the **exact** number of nodes in the **$(k_u, k_v)$-neighborhood intersection** for a node pair $(u,v)$, denoted as $C_{k_u,k_v}(u,v)$, follows the pseudocode below:
```pseudo
counts = 0
FOR x_u in N_ku(u) do
FOR x_v in N_kv(v) do
if x_u == x_v do
counts += 1
RETURN counts
```
### **Time Complexity Analysis:**
1. The **worst case** (e.g., a fully connected graph) results in a worst-case complexity of **$O(n^2)$** for computing a single $C_{k_u,k_v}(u,v)$.
2. Since GIST requires computing $k^2$ such values per node pair $(u,v)$, the complexity increases to **$O(k^2 n^2)$** per node pair.
3. Given that a graph $G$ with $n$ nodes has **$n^2$ node pairs**, the total complexity becomes **$O(k^2 n^4)$** for exact GIST computation, which is impractical for a large number of graphs.
To **overcome this infeasibility**, we propose a **low-complexity estimation algorithm** (**Algorithm 1** of our paper) to approximate **GIST features efficiently**. We highlight that the **naive** $O(n^2)$ **approach** to compute the **exact** number of nodes in the **$(k_u, k_v)$-neighborhood intersection** for a node pair $(u,v)$ can be **efficiently estimated in constant time** $O(1)$ using Algorithm 1. With that, we eliminate the **$O(n^2)$ bottleneck** from the overall complexity. As a result, the final time complexity for computing **GIST features across the entire graph** is reduced to **$O(k^2 n^2)$**, making our method significantly more scalable for a large number of graphs.
We hope the additional results and discussion provide the reviewer with a clearer understanding of the mechanism behind our proposed method. Given these clarifications and improvements, we kindly ask the reviewer to consider whether our contributions warrant a higher rating.
---
Rebuttal Comment 1.1:
Comment: Thanks for the efforts. So, in Fig.1, $I(2, 2)$ means $I_{2, 2}(u, v)$ as introduced in Sec.3.1, authors should keep the consistency of the expression and avoid unnecessary omissions. Anyway, additional information has addressed most of my concerns. As for the efficiency problem, I am more concerned about the prediction time comparison with baselines, rather than the precomputing and training time alone. To sum up, I am willing to improve the rating from 2 to 3.
---
Reply to Comment 1.1.1:
Comment: ### **`W1 - GIST inference efficiency (prediction time):` Sure, here we provide comparison of inference time between GIST and baselines.**
We thank the reviewer for acknowledging the merit of our method and raising the score from 2 to 3. We also appreciate the reviewer’s clarification on inference efficiency. **Table 1** below reports the inference time of **GIST and other baselines** across four datasets, where inference time is **a one-time structural encoding pre-processing time + model prediction time for a single batch of size 32 (zinc) or of size 16 (petides-struct, func)**. For any given graph—whether in training or testing—GIST features require only a one-time precomputation. Our results show that **GIST’s inference time is on par with other graph transformers**, demonstrating its efficiency in real-world applications.
**Table 1**: Inference Time (in seconds)
| Datasets | ZINC | ZINC-full | Peptides-struct | Peptides-func |
|-|:-:|:-:|:-:|:-:|
| GIST | 0.3 | 0.3 | 0.7 | 0.7 |
| GRIT | 0.03 | 0.03 | 0.1 | 0.1 |
| HDSE | 0.42 | 0.42 | 1.1 | 8.6 |
We will ensure that all these thoughtful discussions and suggestions from the reviewer (e.g., notation consistency) are incorporated into the later version. While the reviewer has already improved the rating—which we sure appreciate—we shamelessly venture to ask for a further improvement if inference efficiency is the only remaining concern; as we believe it should be well-addressed with the inference results above.
---
Last, we want to take this opportunity to highlight **someconcerns raised by multiple reviewers that have already been acknowledged by reviewer `e5VJ`**, such as:
- **W1 on "one-time computation overhead" and "inference efficiency"**: Both `L3nR` and `e5VJ` raised similar concerns about GIST’s efficiency. We thank `e5VJ` for recognizing the effectiveness of our estimation algorithm in mitigating GIST's pre-computation overhead. We hope that our not supplied inference time results further clarify GIST’s **practical viability**, given its minimal additional overhead and efficient end-to-end performance. | Summary: Graph classification, a fundamental machine learning task with broad scientific applications, has been advanced by Transformers, which address oversmoothing/oversquashing limitations of traditional GNNs using attention mechanisms. However, effectively encoding graph structural information within Transformers' all-to-all attention remains challenging. To tackle this, the proposed Graph Invariant Structural Trait (GIST) captures critical substructure details via pairwise node intersections, enhancing structural encoding in graph Transformers and outperforming state-of-the-art methods in experiments.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Yes.
Experimental Designs Or Analyses: Yes.
Supplementary Material: Yes.
Relation To Broader Scientific Literature: No.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths:
1. developing expressive graph transformers is important
2. the paper is well written
3. experiments show the method outperforms the baselines
Weaknesses:
1. how to guarantee the method can ggregating diverse substructures information
2. how to theorectically verify the proposed method could be more expressive than other methods
3. better to use larger-scale datasets
4. how does the theorectial analysis contributes? Simple methods like transformers without position encodings could also produce graph-invariant representations. So do set-like methods.
5. how about hyperparameter sensitivity?
6. Could it be combined with pther structural encodings?
7. Could it be applied in GraphLLM or GFM?
8. Could it be applied to node or link-level tasks?
Other Comments Or Suggestions: See my comments above.
Questions For Authors: See my comments above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed feedback. Given the 5k char limitation and 8 distinct questions raised by the reviewer, unfortunately many of our responses will be condensed and citing other replies. Should the reviewer be interested in an elaboration in any particular response, please let us know and we will be more than happy to accomodate.
### **[W1 - How to guarantee the method can aggregate diverse substructures information]**
GIST encodes $k$-hop substructures between node pairs (Definition 3.2), enabling a broad representation of **substructural relationships** through **all-to-all comparisons**.
When integrated with graph transformers, GIST’s all-to-all substructure information enhances the attention mechanism. Specifically, with GIST, each node’s embedding is updated by incorporating **structural interactions with every other node**, allowing for a more structure-aware aggregation of representations. This facilitates **effective propagation of structural patterns** across the graph.
As shown in Figure 1, GIST helps differentiate substructures, guiding **diverse attention aggregation**—a capability further visualized in Figure 2 with learned GIST features on **ZINC**.
### **[W2 - How to theoretically verify the proposed method could be more expressive than other methods]**
It is difficult to **rigorously prove** that one graph feature is universally more expressive than another. For instance, Proposition 3.2 in GRIT shows that GD-WL with RRWP is **at least as expressive as** GD-WL with SPD and highlights a **special case** where RRWP outperforms SPD. However, this does **not** imply that RRWP is **strictly superior** across all scenarios—only in rare, impractical graph structures.
This underscores a broader issue: *theoretical arguments on graph feature expressiveness often lack generality and rigor*, even in existing works. Thus, we prioritize **empirical analysis** to assess practical effectiveness, a stance **acknowledged by reviewers `e5VJ` and `1uvt`**.
### **[W3 - Datasets are not large enough]**
Please refer to **W2 in our response to reviewer `1uvt`** for a detailed discussion on large-scale datasets.
### **[W4 - Importance of structural-invariance in GIST]**
We refer reviewer nG8X to our detailed discussion on the importance of structural-invariance in GIST in our response to W1 of reviewer `1uvt`.
### **[W5 - Hyperparameter sensitivity analysis]**
We direct the reviewer to our new ablation studies in our response to W2 of reviewer `L3nR`.
### **[W6 - Combination with other structural encoding]**
We have already explored the combination of GIST with RRWP and SPD previously, but neither improved performance. Here, we present one result on **GIST + RRWP**, where RRWP is concatenated with GIST as a structural bias:
$[x \| y] \in \mathbb{R}^{k^2 + 2k + \text{steps}}$
As shown in **Table 5** in our response to reviewer `L3nR`, **GIST alone outperforms the combination**. While both encode structural information, GIST has been shown empirically to be a stronger structural bias. Thus, incorporating RRWP may introduce noise into GIST’s learning process, leading to a slight drop in transformer performance.
### **[W7,W8 - Application to GraphLLM, GFM, node- and link-level]**
We are not too sure whether the reviewer is expecting some Yes/No answers or is actually requesting us to do it. In short, **the answers to these questions are generally a "Yes, GIST has the potential to extend to [x]"; but, respectfully, we believe it can be fairly argued that such applications are clearly out of scope.** This is evident by the fact that most established prior works on graph transformers — such as GRIT, HDSE, and Subgraphormer — do not explore such extensions like GLLM/GFM.
The underlying reason is such an extension would very much deserve a paper of its own, potentially requiring extensive pre-training and pipeline efforts and being mindful of typical GFM challenges like dim mismatches and hyperspace misalignments ([Galkin et al., ICML 2024](https://arxiv.org/pdf/2310.04562)), making such explorations worthy of their own research and it is impossible to complete during rebuttal period. What we can provide, is **GIST outperformed pre-trained graph models**, as showcased in **Table 5 in our paper**.
On the task end, our method primarily targets graph classification as clearly stated in various places of our writing. While we appreciate the reviewer’s suggestion for other graph tasks, **we must note there are countless research focusing solely on graph classification advancement, and contribution in this regard is well-recognized.**
Still, to **showcase generalizability**, we present results on **two node-level datasets and one large-scale graph regression task** (see **Table 1** in W2 of our response to reviewer `1uvt`). We hope this additional evidence will prompt the reviewer to reconsider the assessment.
---
Rebuttal Comment 1.1:
Comment: Most concerns have been addressed, and the score has been raised accordingly. | Summary: This paper is aimed to effectively encode graph structure into formation within the attention mechanism. Authors propose a new structural encoding based on Graph Invariant Structural Trait (GIST) to capture substructures within a graph by estimating pairwise node intersections. Both theoretical analysis and empirical results indicate the effectiveness of the proposed GIST.
Claims And Evidence: Yes, the experiments present convincing results.
Methods And Evaluation Criteria: Yes, the proposed method in this paper can make sense.
Theoretical Claims: N/A
Experimental Designs Or Analyses: Most of the experimental designs are reasonable.
Supplementary Material: N/A
Relation To Broader Scientific Literature: This paper focuses on adding structure-aware bias to the attention mechanism in graph Transformers. It may be helpful to design effective graph Transformers.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths:
(1) This paper introduces a new method referred to GIST that encodes graph structure using pairwise k-hop substructure vectors which can be efficiently calculated by estimating the interaction cardinality between the k-hop neighborhoods of node pairs.
(2) The movitation is clear and highlighted.
(3) Empirical results on standard graph classification benchmarks showcase consistent performance improvements, demonstrating the effectiveness of the proposed GIST.
Weaknesses:
(1) The novelty of this paper is somewhat limited, since the general idea to add structural bias to the attention mechanism is not very novel.
(2) The ablation study is insufficient due to that only the ZINC dataset is considered.
(3) Lack of efficiency experiments and parameter sensitivity analysis.
Other Comments Or Suggestions: There are several typos, such as:
Line 204, “requires first compute the cardinality of xxx” -> “requires to first compute the cardinality of xxx”.
The character of “graph Transformer” is not consistent in the whole paper.
Questions For Authors: (1)Can the authors provide more ablation study, such as on different datasets.
(2)Can the authors provide some efficiency experiments and the parameter sensitivity analysis?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: ### **[W1 - The novelty of this paper is somewhat limited, since the general idea to add structural bias to the attention mechanism is not very novel.]: Our novelty lies in the GIST features, which offer a more effective way to encode structural information.**
We agree with the reviewer that adding structural bias to attention is not a new problem in graph transformers. However, it remains a **central and unresolved challenge** to determine which structural bias effectively captures complex graph structures, hence improving graph transformers. Prior works have proposed various biases, such as shortest path in Graphormer or RRWP in GRIT, yet **none have successfully captured both** substructures and the higher-order interactions between them, as we pointed out in our Motivation section. These two aspects are crucial for learning effective graph representations, as highlighted in [FragNet](https://arxiv.org/pdf/2406.08210). To the best of our knowledge, our work is **the first to address both challenges**, providing novel structural representations for effective graph representation learning in graph transformers, evidenced with our outstanding performance result.
### **[W2, W3 - Insufficient experiments on ablation study and efficiency]: Sure, here are 3 more ablation studies, training efficiency analysis, and GIST computation overhead report across 3 different datasets.**
We thank the reviewer for raising this question. To address it, we provide the results across three datasets:
1. We present ablation studies on different **$k$-hops** (**Table 1**), different numbers of **MinHash functions** (**Table 2**), and different values of **HyperLogLog’s $p$** (**Table 3**). Overall, GIST demonstrates **robustness** across various hyperparameter settings. We would like to note that **higher values of HyperLogLog’s $p$** and **a greater number of MinHash functions** reduce the **error in estimating $k$-hop intersection cardinality**, leading to **improved performance** of GIST. This trend is consistently reflected in the tables.
2. We present an **efficiency experiment on training time** (**Table 4**), which shows that **GIST's training time is comparable to or even lower than other graph transformers**. Notably, our method **does not require extensive pretraining** like other pretrained graph models, yet it **outperforms most of them** on MoleculeNet benchmarks, as demonstrated in **Table 5 of our paper**.
3. An analysis of the one-time computation overhead of GIST features (**Table 4**). We would like to emphasize that **GIST computation incurs only a one-time cost at the beginning**, and this overhead is minimal. As highlighted in **Section 3.3**, we use an **efficient estimation algorithm** of GIST features using MinHash and HyperLogLog, allowing us to approximate the $k$-hop substructure intersection with a **constant number of operations**. Theoretically and empirically (Table 4), this design ensures that our method remains computationally efficient while preserving the effectiveness of structural representation.
**Table 1**: Ablation study on different values of $k$-hops
| $k$ | 1 | 2 | 3 | 4 | 5 |
|-|:-:|:-:|:-:|:-:|:-:|
| ZINC | 0.100 | 0.058 | 0.054 | 0.065 | 0.063 |
| Peptides-struct | 0.2832 | 0.2471 | 0.2444 | 0.2478 | 0.2518 |
| Peptides-func | 0.6446 | 0.6420 | 0.6790 | 0.6754 | 0.6857 |
**Table 2**: Ablation study on different numbers of *MinHash functions*
| # MinHash functions | 32 | 64 | 128 | 256 |
|-|:-:|:-:|:-:|:-:|
| ZINC | 0.071 | 0.069 | 0.069 | 0.058 |
| Peptides-struct | 0.2511 | 0.2538 | 0.2447 | 0.2444 |
| Peptides-func | 0.6502 | 0.6418 | 0.6519 | 0.6857 |
**Table 3**: Ablation study on HyperLogLog data structure with different values of *p*
| *p* | 4 | 6 | 8 | 10 |
|-|:-:|:-:|:-:|:-:|
| ZINC | 0.065 | 0.065 | 0.058 | 0.062 |
| Peptides-struct | 0.2566 | 0.2545 | 0.2444 | 0.2466 |
| Peptides-func | 0.6170 | 0.6124 | 0.6857 | 0.6771 |
**Table 4**: One-time pre-computation and Training time of GIST (hour:min)
| Datasets | ZINC | ZINC-full | Peptides-struct | Peptides-func |
|-|:-:|:-:|:-:|:-:|
| GIST precomputation | 00:03 | 01:08 | 00:12 | 00:12 |
| GIST Training Time | 11:09 | 55:21 | 05:40 | 05:30 |
| GRIT Training + Precomputation Time | 16:30 | 104:57 | 07:15 | 06:42 |
| GraphGPS Training + Precomputation Time | 13:30 | - | - | - |
| SAN Training + Precomputation Time | 32:15 | - | - | - |
**Table 5**: Performance of GIST + RRWP
| Datasets | ZINC | Peptides-struct | Peptides-func |
|-|:-:|:-:|:-:|
| GIST + RRWP | 0.088 | 0.2490 | 0.6453 |
| GIST | 0.055 | 0.2442 | 0.6783 |
| RRWP | 0.059 | 0.2460 | 0.6988 |
### **[W4 - Typo]**
We thank the reviewer for carefully reading our paper. We will correct the typo in **L204** and ensure consistent usage of "graph transformers" throughout the paper.
We hope the additional results and discussion can help the reviewer better understand the mechanism of our proposed method, and maybe warrent us a higher rating.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed responses. Some of my concerns have been addressed, but my concerns about the innovation have increased. I think the innovation is quite limited, since it seems that the substructures and interactions between nodes can be carefully captured in an existing method using the Adaptive Graph Transformer (AGT) [1]. Therefore, the core innovation may be quite limited.
Based on the above consideration, I think this paper requires further improvement.
[1] Ma X, Chen Q, Wu Y, et al. Rethinking structural encodings: Adaptive graph transformer for node classification task[C]//Proceedings of the ACM web conference 2023. 2023: 533-544.
---
Reply to Comment 1.1.1:
Comment: As much as we appreciate the reviewer being open to providing an updated review — for better or worse — during the rebuttal period.
## **We must respectfully, but also firmly, note the reviewer’s assessment presents a gross misunderstanding of both works and indicates an extreme lack of familiarity with the field — supposing this review is faithfully given, which we honestly doubt.**
AGT is a 2023 work with 17 cites and gated behind a paywall. After paying, we realize there are several straightforward indicators showing:
* AGT targets a different task than ours.
* AGT promotes a core idea we explicitly argue against.
* Existing literature clearly shows AGT does not *solve* structural awareness — a widely acknowledged open problem in Graph Transformers — where our work contributes.
For a good faith discussion, we present strong evidence in all 3 regards and invite the reviewer to reevaluate for proper ICML review quality.
---
### **`1. GIST and AGT focus on different tasks (Graph vs Node Classification). This is evident from just reading GIST’s abstract and AGT’s title.`**
**GIST focuses on GRAPH CLASSIFICATION**, as made clear by the *first sentence* of our Abstract and Introduction:
> Graph classification is a core ML task...
> Graph classification is a fundamental problem...
with countless explicit statements like the following:
> ... GIST effectively captures structural information critical for graph classification.
> RQ 1: How well does GIST facilitate the learning and differentiation of substructures in graph classification tasks?
In contrast, **AGT focuses on NODE CLASSIFICATION**, as its title makes clear: *"Adaptive Graph Transformer for **Node Classification Task**"*
Notably, AGT itself highlights this distinction:
> *"... recent GTs mainly focus on graph-level tasks like... (i) What kind of information is needed for the node-level tasks?... (iii) How to design powerful GTs for the node-level tasks?"*
**This exact comment from the AGT authors alone dismisses the notion that AGT invalidates graph-level studies like ours.**
---
### **`2. AGT aggregates SIMILAR substructures, while we focus on DIVERSE ones — a conceptual contrast repeatedly emphasized and recognized by all other reviewers.`**
Even if one entertains a technical comparison (setting task aside), AGT and GIST adopt fundamentally different philosophies.
AGT argues it is best to learn from **SIMILAR substructures**:
> *"We propose ... to adaptively enhance the message exchange between nodes with high structural similarity."*
> *"For node pairs with low structural similarity, the connection would be weakened..."*
In contrast, GIST highlights the importance of learning from **DIVERSE substructures**, with paragraph like:
> **Challenge 2. Aggregating Diverse Substructures Information**
and explicit statements like:
> ... it is equally important for structural encodings to enable the aggregation of information across diverse substructures, rather than restricting it to similar or localized patterns.
> ... highlights how different substructure compositions lead to distinct intersection patterns, enabling...
**This difference is major, clear, and recognized by all other reviewers. In our opinion, it is `impossible to miss` for any reasonable reader who gives even minimal attention to both works.**
---
### **`3. Structural awareness (SA) in Graph Transformers (GT) is far from solved — AGT contributes, but does not disqualify future work in this direction.`**
1. Numerous publications since AGT continue to explore structural awareness/encoding (SA/SE) for GTs. Such as GRASS, MoSE (ICLR25); S2GNN, HDSE, N2C-Attn (NeurIPS24); Subgraphormer, FragNet, CoBFormer (ICML24); GRIT (ICML 23).
2. Several works explicitly call out the SE challenge as unsolved. E.g., *"Graph Positional and Structural Encoder,"* a 24 June paper from Rampášek's lab — who first-authored the well-recognized GraphGPS — states:
> *"...designing ... structural encodings that work optimally for ... is a challenging and unsolved problem..."*
This confirms the ongoing relevance of research like ours. GIST takes a **large and positive step** by being the first to introduce **intersection feature-based SE** for GTs.
---
Thus, **the reviewer’s assessment**
> *"it seems that the substructures and interactions between nodes can be carefully captured in an existing method using AGT. Therefore, the core innovation may be quite limited."*
**would serve as a reason to reject most, if not all of the listed works that contribute to SA in GTs.**
## **In the most respectful way possible, this argument fails even the most basic level of sanity check and should not appear in the review process of ICML.**
We respectfully invite the reviewer to **revisit** or provide **more detailed evidence** articulating how exactly AGT limits the innovation of our work — and by extension, a significant body of research in SA for GTs post AGT's appearance. | null | null | null | null | null | null |
A Comprehensive Framework for Analyzing the Convergence of Adam: Bridging the Gap with SGD | Accept (poster) | Summary: The authors propose a new theoretical framework with fairly weak assumptions, within which they are able to establish convergence rates for Adam.
Claims And Evidence: N/A
Methods And Evaluation Criteria: No simulation sutdy nor application on real datasets.
Theoretical Claims: Given the short time available, it was not possible to fully and rigorously review all the proofs. However, based on what I was able to check, the proofs appear to be correct.
Experimental Designs Or Analyses: N/A
Supplementary Material: N/A
Relation To Broader Scientific Literature: The main contribution rely on a new set of weak assumptions to obtain theoretical results for ADAM algorithm. More precisely, they obtain analogous rate of convergence as in the literature, but with weaker conditions.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths:
The authors successfully obtain results for Adam under remarkably weak assumptions (smoothness and ABC inequality).
Given the short time available, it was not possible to fully and rigorously review all the proofs. However, based on what I was able to check, the proofs appear to be correct and well-detailed.
I also appreciate the effort made to enhance the readability of the proofs, particularly through the use of the dependency graph.
Weaknesses:
While it is now widely accepted that simulations are not strictly necessary to demonstrate that Adam works, it would have been valuable to present an application example that could not have been theoretically addressed by previous works but can now be handled before moving on to simulations.
Other Comments Or Suggestions: No comments or suggestions.
Questions For Authors: No question.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Dear Reviewer LTUb,
We sincerely appreciate your thorough evaluation of our manuscript and your positive feedback. Your recognition of our theoretical framework and the establishment of convergence rates for Adam under weak assumptions is highly encouraging.
We acknowledge your suggestion to include an application example demonstrating the practical implications of our theoretical findings. While our primary focus has been on the theoretical aspects, we understand the value of illustrating how our results can address scenarios previously unmanageable by earlier works. In response, we plan to incorporate a relevant application example in our revised manuscript to highlight the practical applicability of our theoretical contributions.
Thank you once again for your insightful comments and for your recommendation to accept our work. Your feedback has been instrumental in enhancing the quality and impact of our manuscript.
Sincerely,
Authors of Paper 1314 | Summary: The paper studies the convergence properties of Adam under smooth nonconvex settings. The paper presents convergence results in the sense of almost sure, $L_1$ and non-asymptotic, under relaxed noise assumption, i.e. the ABC inequality. The non-asymptotic convergence result is in the order of $O(1/\sqrt{T})$, which is generally consistent with that of SGD.
Claims And Evidence: Most of the claims are generally clear.
1. I do have a question for Theorem 3.1. According to Theorem 3.1, the notation $O()$ doesn't omit the dpendence on dimensionality $d$. I am wondering if this is possible with your assumptions? Or you just simply miss this?
2. Also regarding $O()$ in Theorem 3.1, could you please also specify the dependence on $1-\beta_1$?
Methods And Evaluation Criteria: No experiments in the paper.
Theoretical Claims: I didn't check the whole proof for the theoretical claims due to its complexity. Basically the results are reasonable.
Experimental Designs Or Analyses: No experiments in the paper.
Supplementary Material: No.
Relation To Broader Scientific Literature: This paper moves the convergence results for Adam, which is popular in the literature, a step forward.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths:
1. The paper is technically solid, making a progress in obtaining better convergence results for Adam under more relaxed settings.
Weakness:
1. I don't think hiding so much details of the convergence result is appropriate. As I mentioned in Claims and Evidence part, I think it's quite likely that the authors hide the dependence on dimensionality (which can be very large in practice). Also, the dependence on $\beta_1$ and $\beta_2$ and others are also important, as this can imply the role of momentum for Adam as well as some possible suggestions for the parameter choices. Thus I think the authors should definitely give a formal statement of Theorem 3.1 at least in the appendix.
2. The choice for $\beta_2$ seem to be kind of restricted.
Other Comments Or Suggestions: 1. For Theorem 3.1, it seems kind of weird to state the convergence result in high-probability form, since it has a dependence on the probability as $O(1/s^2)$, while standard high-probability convergence results are usually in the order of $O(log(1/s))$. I think just equation (42) is good enough for the statement.
2. I suggest the authors to distribute some space at least in the appendix to aggregate the definitions for the values. It's really hard to follow the proof or even find out the detailed results of the theorems with a lot of defined values like $C,C_1,...$ with very seperate definitions.
3. Why you are not using the form with numbers noting the lines?
Questions For Authors: 1. What is the dependence on dimensionality and $\beta_1$ in Theorem 3.1? Could you provide a formal version?
2. If you do have additional dimensionality dependence, is it possible to extend your results to some other smoothness settings as in [1,2,3], which can potentially remove the additional dependence and fill this gap between SGD and Adam?
3. Can your proof also extend to a more general smoothness case, e.g. $(L_0,L_1)$-smoothness?
[1] Bernstein J, Wang Y X, Azizzadenesheli K, et al. signSGD: Compressed optimisation for non-convex problems. International Conference on Machine Learning. PMLR, 2018: 560-569.
[2] Liu Y, Pan R, Zhang T. AdaGrad under Anisotropic Smoothness. arXiv preprint arXiv:2406.15244, 2024.
[3] Xie S, Mohamadi M A, Li Z. Adam Exploits $\ell_\infty $-geometry of Loss Landscape via Coordinate-wise Adaptivity. arXiv preprint arXiv:2410.08198, 2024.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **Rebuttal to Reviewer nypN**
Dear Reviewer nypN,
Thank you very much for your thoughtful feedback and constructive comments on our manuscript. We sincerely appreciate the time and effort you have put into reviewing our work. We are grateful for your insights, and we have carefully addressed each of your points below in the hope of clarifying our contributions and improving the manuscript.
### 1. **Dependence on Dimensionality and $\beta_1$ in Theorem 3.1**
You raised a question regarding the dependence on dimensionality and $\beta_1$ in Theorem 3.1. Specifically, you asked about the formal version of the sample complexity result in the theorem.
**Response:** In Theorem 3.1, the sample complexity result concerning $1-\beta_1$ and the dimension $d$ is of the order $\mathcal{O}\left(\frac{d}{(1-\beta_1)^2}\right)$. This result is consistent with previous works on the convergence of Adam, such as [1]. We would like to emphasize that while it is possible to remove the dependence on the dimension $d$, we cannot avoid reintroducing the dependence on the inverse of the smoothing factor, which is $\mathcal{O}(\text{poly}(1/\mu))$. This is a well-known consensus in previous studies [2]. We hope this clarification addresses your concern.
### 2. **Extension of Results to Other Smoothness Settings**
You asked whether our results could be extended to other smoothness settings, as in [3, 4, 5], and whether this could potentially remove the additional dependence on dimensionality, helping to bridge the gap between SGD and Adam.
**Response:** We believe that it is highly probable to extend our results to other smoothness settings, and we are excited about exploring this in future work. However, we must admit that this area was not covered in our previous research, and therefore, we cannot provide a definitive answer at this moment. Nonetheless, we plan to address this issue in our future research, where we will explore the possibility of extending our results to more general smoothness assumptions and examine whether the additional dependence on dimensionality can be removed.
### 3. **General Smoothness Cases (e.g., $L_0-L_1$ Smoothness)**
You inquired whether our proof can extend to more general smoothness cases, such as $L_0-L_1$ smoothness.
**Response:** We are currently investigating this direction and have made some progress. Specifically, we have made the following two extensions so far:
1. **For $(L_0-L_0.5)$ smooth functions:** We can derive convergence results for Adam under the traditional second-moment-based ABC inequality. However, the sample complexity still shows dependence on the inverse of the smoothing factor $1/\mu$, though we can eliminate the dependence on the dimension $d$.
2. **For $(L_0-L_1)$ smooth functions:** At this stage, our methods are unable to extend the convergence results under the traditional second-moment-based ABC inequality. However, we can obtain convergence results under the traditional second-moment Bounded Variance condition. It is worth noting that, as of now, there are no known results for Adam’s convergence under $(L_0-L_1)$ smoothness with the traditional second-moment Bounded Variance condition.
We are continuing to investigate these extensions and will include them in future work. We greatly appreciate your interest in this aspect and will strive to address it in subsequent studies.
### 4. **Additional Comments and Acknowledgements**
We sincerely appreciate your review, which has helped us identify areas for clarification and potential improvement. We value your suggestions and will ensure that these aspects are thoroughly explored in our future research. Your feedback has significantly contributed to the refinement of our manuscript, and we hope that the revisions we have made have addressed your concerns effectively.
If you have any further questions or suggestions, please do not hesitate to reach out. We are more than happy to discuss any aspects of our work in greater detail. Once again, thank you for your careful review and constructive feedback.
We look forward to your final assessment.
[1] Bohan Wang, Jingwen Fu, Huishuai Zhang, Nanning Zheng, and Wei Chen. Closing the gap be-
tween the upper bound and lower bound of Adam’s iteration complexity. Advances in Neural
Information Processing Systems, 36, 2024a.
[2] Haochuan Li, Alexander Rakhlin, and Ali Jadbabaie. Convergence of Adam under relaxed assump-
tions. Advances in Neural Information Processing Systems, 36, 2024.
[3] Bernstein J, Wang Y X, Azizzadenesheli K, et al. signSGD: Compressed optimisation for non-convex problems. International Conference on Machine Learning. PMLR, 2018: 560-569.
[4] Liu Y, Pan R, Zhang T. AdaGrad under Anisotropic Smoothness. arXiv preprint arXiv:2406.15244, 2024.
[5] Xie S, Mohamadi M A, Li Z. Adam Exploits
$\mathcal{l}_\infty$-geometry of Loss Landscape via Coordinate-wise Adaptivity. arXiv preprint arXiv:2410.08198, 2024.
Sincerely,
Authors of Paper 1314
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed reply, which basically addressed my questions. It's good to hear that the extensions are generally possible. I understand the authors' point on dependence on dimension $d$ and momentum factor $\beta_1$, but I still want to emphasize why I think it should be explicitly shown in the statements here.
- Adam is widely used in large-scale experiments, which means that $d$ can be extremely large. The explicit dependence on $d$ suggests that the convergence rate is actually not desirable for large-scale experiments. I understand that previous results do have the additional dependence on $d$ as well, but I disagree with what you refer to as a "well-known consensus" by [2]. I don't think they have proof for your claim, i.e., you have to bear this additional explicit dependence on $d$ or $poly(1/\mu)$ for Adam.
If you introduce $poly(1/\mu)$ to the convergence rate, it intuitively encourages us to select large $\mu$, and the algorithm turns out to be more similar to SGD. If you think this is the only way to eliminate the explicit dependence on $d$, then why don't we directly use SGD? Why is Adam so popular in practice?
- $\beta_1$ means the incorporation of momentum. Since your result depends on $1/(1-\beta_1)$, it seems that basically choosing $\beta_1=0$ results in the best rate. This is not the case in practice, right?
Anyway, I agree with the authors' contribution on the technical side and fully understand that these points are not considered by some existing work as well, but I still want to emphasize these points as somehow remaining problems of the results that might be improved in the future. For now, I think the paper is qualified, and I would keep my score since it's already positive.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer nypN,
Thank you very much for your positive feedback and detailed comments. We appreciate your support and valuable suggestions, which are very helpful for improving our work.
Best regards,
Authors of Paper 1314 | Summary: This paper presents a unified analytical framework for understanding Adam’s convergence under weaker assumptions than those typically used. Specifically, the authors rely on standard L-smoothness and ABC inequality for stochastic gradients to show that Adam achieves non-asymptotic and asymptotic convergence.
Claims And Evidence: The main claim of this paper is the convergence of Adam with not very strong conditions. The claims are supported by rigorous math proofs.
Methods And Evaluation Criteria: This is a theoretical paper so is not applicable to this question.
Theoretical Claims: I follow the proof sketch and check some proofs of main lemmas. They seemed to be correct. However, the authors discuss more about the the assumptions, especially for ABC inequality since it is not standard in the analysis. The authors can highlight how to apply this assumption in the proof and also explain briefly why we don't need previous strong assumptions. These can provide more theoretical insights.
Experimental Designs Or Analyses: This is a theoretical paper so is not applicable to this question.
Supplementary Material: The appendix is well organized. Section A of appendix compares the gradient assumption with previous ones. However, the relatively weak assumption used in this paper is a main difference comparing with previous work so there should be a short paragraph discussing this difference in the main body of the paper, instead of in the appendix. They can also discuss the parallel results in SGD analysis using similar assumptions.
Relation To Broader Scientific Literature: Understanding Adam is important since it achieves great success in LLM training.
Essential References Not Discussed: The authors do cite key references on Adam’s analysis under different assumptions.
Other Strengths And Weaknesses: This is a solid theoretical paper analyzing Adam and achieves good results in both non-asymptotic and asymptotic settings. My concern is mainly on the presentation of the results and is already pointed out in previous questions. One more concern is the definition of hyperparameter $\beta_{2,t}$ converging to 1 looks unnatural to me.
Other Comments Or Suggestions: No other comments.
Questions For Authors: In my understanding, the behavior of Adam is quite different from SGD. How the author bridging the gap with SGD in there paper? Only in the part of establishing descent inequality and the final results?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Dear Reviewer,
We sincerely appreciate your thorough evaluation of our manuscript and your insightful feedback. Your recognition of our theoretical framework is highly encouraging.
**Incorporation of Assumption Comparisons into the Main Text:**
We acknowledge your suggestion to move the discussion comparing our assumptions, particularly the ABC inequality, with previous ones from the appendix to the main body of the paper. We agree that this adjustment will enhance the clarity and accessibility of our work. In the revised manuscript, we will integrate this discussion into the main text, providing a concise comparison and highlighting the theoretical insights gained from using the ABC inequality. Additionally, we will discuss parallel results in stochastic gradient descent (SGD) analyses that employ similar assumptions to further contextualize our contributions.
**Clarification on the Behavior of Adam When $\beta_{2,t}$ Does Not Approach 1:**
Regarding your concern about the definition of the hyperparameter $\beta_{2,t}$ converging to 1, we appreciate the opportunity to clarify this point. In scenarios where $\beta_{2,t}$ does not approach 1, Adam's convergence behavior differs. Specifically, under such conditions, Adam may only ensure that the gradient converges to a small neighborhood around zero rather than exactly to zero. To achieve convergence of the gradient to zero, it is necessary for $\beta_{2,t}$ to approach 1. This requirement has been highlighted in previous studies, such as the work by [1] on the convergence of Adam. In our current paper, we focused on aligning Adam's convergence results with those of SGD, which led us to adopt the condition where $\beta_{2,t}$ approaches 1. We acknowledge that this aspect was not explicitly discussed in our manuscript, and we will address this omission in future research by exploring scenarios where $\beta_{2,t}$ does not approach 1.
Thank you once again for your valuable comments and suggestions. Your feedback has been instrumental in improving the clarity and depth of our work.
[1] Zhang Y, Chen C, Shi N, et al. Adam can converge without any modification on update rules[J]. Advances in neural information processing systems, 2022, 35: 28386-28399.
Sincerely,
Authors of Paper 1314
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed reply from the authors, which resolves my concerns. I will remain the score. | Summary: In the past several years, many efforts have been made to understand the convergence of Adam-like algorithms under different noise assumptions. This paper is a novel paper among these works and is based on an even weaker version of the noise condition called the ABC condition. Under the ABC assumption, the authors provide a non-asymptotic rate of $O(\log T/\sqrt{T})$ for sample complexity, independent from the smoothing factor $\mu$. Additionally, they demonstrate an asymptotic convergence and of the gradient norm to zero in both the sense of almost sure and expectation. These results match the best rates so far with a weaker condition and advance the theoretical understanding of adaptive methods.
Claims And Evidence: Not applicable.
Methods And Evaluation Criteria: Not applicable.
Theoretical Claims: The major theoretical conclusions are reasonable, and the key steps in the proofs appear to be correct.
Experimental Designs Or Analyses: Not applicable.
Supplementary Material: I have briefly checked the proof in the appendix. I am not sure about the proof details but they appear to be convincing.
Relation To Broader Scientific Literature: This work is entirely theoretical and does not present any negative broader scientific or societal impacts. The relationship to closely related work is discussed in the "weaknesses" part below.
Essential References Not Discussed: The authors appropriately cite the most relevant prior work and provide a clear and detailed discussion of how their contributions relate to and advance the existing literature.
Other Strengths And Weaknesses: **Strength**
This paper is generally well-written and easy-to-follow. It gives a clear comparison of the assumptions with closely related work and makes it easy for the reader to understand. From a contribution perspective, compared with prior results, this paper achieves a nearly matching rate and establishes asymptotic convergence for Adam under a weaker ABC condition by leveraging advanced tools from functional analysis. This is valuable to the optimization community.
**Weakness**
However, the novelty of this work is somehow questionable. While the results in this paper indeed rely on a weaker assumption compared to prior work, the gap between the ABC condition and the affine variance condition or e is not substantial. As a result, the findings, though technically sound, are not entirely surprising. It would be more helpful if the authors could provide more convincing arguments showing that their technique is indeed novel from existing work, particularly [Hong and Lin, 2024]. I will change my score if they clarify how their approach is fundamentally different from existing methods.
Other Comments Or Suggestions: Not applicable.
Questions For Authors: No further questions.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear Reviewer JhWZ,
Thank you for your thoughtful and constructive feedback on our paper. We greatly appreciate the time and effort you’ve put into reviewing our work. We carefully considered your comments, and we would like to address the main concern regarding the differences between our approach and the paper by Hong and Lin (2024), as well as clarify the analytical techniques and assumptions used in our paper.
In the paper by Hong and Lin (2024), the authors introduce an affine noise variance assumption that differs from the traditional second-moment-based affine noise variance assumption:
$$
\mathbb{E}[\\|g_t-\nabla f(w_t)\\|^{2}] |F_{t-1}] \leq B\\|\nabla f(w_{t})\\|^{2} + C.
$$
Instead, they strengthen the concentration property of the random variables by assuming the following (their Assumption A.3) :
$$
\mathbb{E}\left[\exp\left\\{\frac{\\|g_{t}-\nabla f(w_{t})\\|^{2}}{B\\|\nabla f(w_{t})\\|^{2+\epsilon}+C}\right\\}|F_{t-1}\right] \leq e.
$$
Their proof is heavily dependent on this condition, making their approach inapplicable for analyzing the traditional second-moment-based affine variance noise conditions. For a detailed explanation, please refer to their open-access paper: [Hong and Lin, 2024](https://openreview.net/pdf?id=x7usmidzxj).
Hong and Lin's proof relies extensively on Lemma B.6, which is proven using concentration inequalities (shown in their Appendix D.2). Under the Exponential-tailed Affine Variance Noise Condition, these inequalities yield an $O(\ln T)$ order factor for the sample complexity term $\mathcal{M}_T$. However, when considering traditional affine variance noise conditions based solely on second-order moment assumptions, only an $O(\text{poly}(T))$ factor can be derived for $\mathcal{M}_T$, which may not yield the desired results.
Our paper employs the ABC inequality, which is even weaker than traditional affine variance noise conditions. This necessitates fundamentally different analytical methods, particularly based on discrete Martingale analysis, distinguishing our approach from that of Hong and Lin (2024).
Thank you again for your time and thoughtful evaluation of our work. If you have any further questions, please don't hesitate to discuss them with us.
Sincerely,
Authors of Paper 1314 | null | null | null | null | null | null |
Tensor Product Neural Networks for Functional ANOVA Model | Accept (poster) | Summary: The authors propose an approach for learning functional ANOVA decompositions form data. The neural network-based architecture they propose is designed such that it admits a unique functional ANOVA decomposition. The authors prove their architecture is a universal approximator for smooth functions which satisfy the sum-to-zero condition. By approximating a unique functional ANOVA decomposition (rather than a non-unique functional ANOVA decomposition) the authors seek to make the learning process more stable. They show their approach provides comparable performance to standard approaches for XAI on some standard benchmarks while being more stable.
Claims And Evidence: - A central motivating claim of their approach is that it is more stable than other methods for learning functional ANOVA decompositions from data. This claim is well supported by Tables 1, 2, and Appendix C.2.
- The authors claim that their approach can approximate a class of smooth functions well. This is supported by their theoretical result showing universal approximation as well as the numerical studies where their approach performs similarly to other SOTA methods for XAI.
- One of the claims which motivates the need for the sum-to-zero condition is that without identifiability, components become unstable and inaccurate. The study provided in Appendix F.1 was not convincing in my opinion. In particular, it wasn't clear to me what was meant by unstable and inaccurate in this case.
Methods And Evaluation Criteria: The authors evaluate their proposed approach on a number of synthetic benchmarks (consisting of three test functions) as well as a 13 real-world datasets. These datasets encompass a breadth of classification and regression problems making them well-suited for evaluating the proposed approach in my perspective.
Theoretical Claims: The sketch of the proof for Theorem 3.3 looks correct but I found the details in Appendix A to be a bit challenging to follow. It would be helpful if you provided references to some standard results that you rely on (even if they are textbook results).
Experimental Designs Or Analyses: - Study 4.1 shows that the author's approach is preferred in terms of component estimation stability
- Study 4.2 shows that the proposed approach tends to learn a representation that is close to the true functional ANOVA on synthetic data. While this study is convincing, it would have been helpful to have included a study on a synthetic benchmark which does not satisfy the sum-to-zero condition by construction to understand how your approach might perform in less favorable situations.
- Study 4.3 shows that the proposed approach achieves comparable or better predictive performance to methods from the literature across a number of standard benchmark problems. The fact the proposed approach is no longer doing far better than standard approaches (like in Study 4.2) calls into question how realistic the assumption of the sum-to-zero condition is in practice. Some discussion on this would have been helpful.
- Study 4.4 applies the proposed approach to some high-dimensional problems convincingly demonstrating the approach can be useful on moderately sized problems.
- Study 4.5 compares the proposed approach to Spline-GAM (Serven 2018). This is an interesting study which demonstrates that the proposed approach seems to be more robust to outliers than this prior approach.
- Study 4.6 compares ANOVA-TPNN to NBM-TPNN. I think this study was comparatively weak. It would be useful to understand the computation time advantages of NBM-TPNN vs ANOVA-TPNN vs some standard approach for learning functional ANOVA decompositions.
Supplementary Material: I reviewed Appendix A, B, C, D, F
- Appendix D was very helpful for understanding why you classify a functional ANOVA decomposition as interpretable. While I understand space is limited, for those less familiar with the field this would be an extremely helpful section in the main text.
- I struggled to understand what point you were trying to get across with Figures 5 -- 16 in Appendix F. As a reader who is less familiar with XAI, it was not clear to me why the functional relations of the main effects from your approach would be preferred over standard approaches.
Relation To Broader Scientific Literature: This work relates broadly to work on learning interpretable representations from data.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: - The proposed approach is a novel contribution as far as I'm aware. As discussed previously, the numerical studies do a good job of supporting the author's main claims.
- It would have been helpful to include a more complete discussion of why learning such decompositions is useful for interpretability in the main text.
Other Comments Or Suggestions: NA
Questions For Authors: - One claim you make is that a model that provides more stable estimates of component functions is more desired in XAI. It was unclear to me why you might prefer one particular ANOVA decomposition over another if their predictive accuracies are similar. In other words, can you provide some additional insight into why the particular sum-to-zero condition is desirable for XAI over the many other methods for enforcing some form of identifiability (i.e. through regularization, etc.)?
- As a reader less familiar with XAI, can you provide a practical example of how one might use a learned ANOVA-TPNN model to drive decision making?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your valuable and insightful feedback.
We have made every effort to address your comments.
Due to character limits, "Comment" is abbreviated as "C".
>**C1 in Claims and Evidence** : One of the claims which...
>**C2 in Supplementary Material :** I struggled to understand...
**Response to C1 and C2.**
We do not claim that component estimation is accurate under the sum-to-zero condition, but rather that it is stable.
The sum-to-zero condition is not a unique condition to ensure the identifiability of each component in the functional ANOVA model.
In turn, different conditions result in different component estimations, and thus considering the accuracy of estimating component would not make sense.
However, stability is crucial since we want that the interpretation to be robust to data perturbation.
Appendix F aims to visually demonstrate the superior stability of ANOVA-TPNN over NAM and NBM, as measured by the stability score in Section 4.1.
Figures 5–16 show that the main effects estimated by ANOVA-TPNN are consistent across all trials, while those from NAM and NBM vary significantly.
The main effects in the functional ANOVA model are useful for visual interpretation.
Post-hoc methods (e.g., PDPs, SHAP) generate interpretation after model fitting.
In contrast, ANOVA-TPNN is an in-processing method that jointly performs model estimation and interpretation, ensuring consistency between the model and interpretation plots.
>**C3 in Theoretical Claims.** The sketch of the proof for...
**Response.**
The basis neural network in Equation (3) is inspired by a smooth version of a decision tree, where the indicator function is replaced with sigmoid functions.
Thus, the sum of TPNNs resembles the sum of smooth decision trees. We used techniques in Lemma 3.2 of [1] to derive the approximation property of TPNN.
>**C4 in Experimental Designs Or Analyses.** Study 4.2 shows...
**Response.**
The sum-to-zero condition is not a requirement for the true function.
Rather, any functional ANOVA decomposition can be redecomposed into one that satisfies the sum-to-zero condition (See Section 22 in [2]).
>**C5 in Experimental Designs Or Analyses.** Study 4.3 shows...
**Response.**
The smaller performance difference in Section 4.3 compared to Section 4.2 is not due to the sum-to-zero condition in the synthetic dataset, but rather due to the different evaluation criteria: Section 4.2 focuses on component selection, while Section 4.3 evaluates prediction performance.
NA$^{2}$M and NB$^{2}$M perform poorly in component selection because they fail to properly separate main effects and second-order interactions.
As shown in Figures 9 and 10, when second-order interactions are included, main effects in NA$^{2}$M and NB$^{2}$M are absorbed into the interactions, resulting in near-constant main effects.
In contrast, ANOVA-T$^{2}$PNN uses the sum-to-zero condition, ensuring mutual orthogonality of components in the $L_2$ space, leading to more accurate identification of component effects, as shown in Figure 8.
>**C6 in Experimental Designs Or Analyses.**
Study 4.6 compares ...
**Response.**
See response to Reviewer 6BRT's W1.
>**C6 in Supplementary Material.**
Appendix D was very...
>**C7 in Other Strengths And Weaknesses**
It would have been...
**Response to C6 and C7.**
In response to the reviewer’s comments, we will move some contents from Appendix D to the main text and provide a more detailed explanation of the interpretability of the functional ANOVA decomposition in the final version of the manuscript.
>**C8 in Questions For Authors.**
One claim you...
**Response.**
As the reviewer mentioned, other identifiability conditions exist.
Among them, the sum-to-zero condition is adopted for two main reasons.
First, the sum-to-zero condition is easy to implement during training.
For example, consider a functional ANOVA model for main effects:
$ f(\mathbf{x}) = \sum_{j=1}^p f_j(x_j),$
where $\mathbf{x} = (x_1,..,x_p)^\top$.
We may consider the identifiability condition as $\forall i \neq j$, $\mathbb{E}[f_i(X_i)f_j(X_j)]=0$, but enforcing this for neural networks is difficult and the optimization is impractical.
Finally, since ANOVA-TPNN satisfies the sum-to-zero condition, it enables fast and efficient computation of SHAP values using Proposition 3.2.
>**C9 in Questions For Authors.**
As a reader less...
**Response.**
As shown in Appendix E, by replacing the classifier in CBM with ANOVA-TPNN, the image model can be interpreted through the components estimated by ANOVA-TPNN.
For a given image, the contributions of concepts can be determined as in Table 20, and importance scores can be calculated as in Tables 14 and 15 to identify which concepts the model considers important for classification.
**References**
[1]. Ročková et al. Posterior concentration for Bayesian regression trees and forests.
[2]. Christoph, Molnar. Interpretable machine learning: A guide for making black box models explainable. | Summary: The paper proposes an approach for constructing interpretable machine learning models based on the functional ANOVA decomposition. The authors consider a decomposition of small order (1-2), and the decomposition terms are constructed with the basis functions represented by neural networks. To satisfy the condition of uniqueness of the expansion terms, the authors impose a natural restriction that the integral of each of the decomposition terms is equal to zero. The last condition is achieved by a special choice of the coefficient in the basis function.
Claims And Evidence: In the introduction to your paper, you explicitly formulate the problem of interpretability of AI models. In this context, the task seems to consist of analyzing already existing and trained large neural network models. It is not immediately clear from the introduction that instead you are developing a directly interpretable model. In this context, the relevant questions raised in "Questions For Authors".
Methods And Evaluation Criteria: Yes
Theoretical Claims: Yes
Experimental Designs Or Analyses: The authors conduct a comparison with modern alternative approaches on standard test datasets.
Supplementary Material: Yes, partially.
Relation To Broader Scientific Literature: The authors develop the approach proposed in works "Neural Basis Models for Interpretability" (2022) and Scalable Higher-Order Tensor Product Spline Models (2024).
Essential References Not Discussed: References to relevant works are provided, but perhaps a more detailed discussion of the innovations proposed in comparison with older works is missing. Also in the context of ANOVA decomposition it is probably logical to cite the well-known work of Sobol (2001).
Other Strengths And Weaknesses: --
Other Comments Or Suggestions: --
Questions For Authors: 1. I would ask you to formulate more clearly what exactly you consider to be the main innovation proposed in your approach (in comparison with previous works).
2. Can this approach be used to interpret already trained neural network models (for example, as neural network attribution methods)?
3. If the model you build is interpretable, then the question arises about demonstrating that interpretability and its usefulness. I did not see any examples of this in the experiments section.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback and questions.
We have made every effort to address your insightful questions.
> **Weakness 1 in Claims And Evidence.** In the introduction to your paper, you explicitly formulate the problem of interpretability of AI models. In this context, the task seems to consist of analyzing already existing and trained large neural network models. It is not immediately clear from the introduction that instead you are developing a directly interpretable model.
In this context, the relevant questions raised in "Questions For Authors".
**Response to Weakness 1.**
In Line 12 on the right column of Page 1 of the manuscript, we mentioned that the functional ANOVA model is a transparent box-design model frequently used in explainable AI (XAI).
Then, in Line 53 on the right column of Page 1, we stated that we propose a learning algorithm that estimates the components of the Functional ANOVA model using Tensor Product Neural Network (TPNN).
In response to the reviewer’s comments, we will revise the introduction to explicitly state that "we propose a new transparent box-design model based on the functional ANOVA model and a specially designed neural network called Tensor Product Neural Network (TPNN)'' to improve clarity.
$\newline$
> **Weakness 2 in Essential References Not Discussed.**
References to relevant works are provided, but perhaps a more detailed discussion of the innovations proposed in comparison with older works is missing. Also in the context of ANOVA decomposition it is probably logical to cite the well-known work of Sobol (2001).
**Response to Weakness 2.**
In Section 4.5, we compared ANOVA-TPNN with Spline GAM (older work), and found that our model is more robust to input outliers. For more details, please refer to Section 4.5 and Appendix O of the paper.
As suggested by the reviewer, we will include references to key works on functional ANOVA decomposition, such as Sobol (2001).
$\newline$
>**Q1.** I would ask you to formulate more clearly what exactly you consider to be the main innovation proposed in your approach (in comparison with previous works).
**Response to Q1.**
The main innovation is a specially designed neural network called **Tensor Product Neural Network** (TPNN) defined in Equation (4) in Page 4.
This neural network is not only flexible enough to satisfy the universal approximation property, as proven in Section 3.3, but also automatically satisfies the sum-to-zero condition without imposing any constraints on the learnable parameters, which allows the use of standard gradient descent algorithms.
The sum-to-zero condition theoretically guarantees the uniqueness of the functional ANOVA decomposition (Proposition 3.1 of the paper), and thus it is essential for the stable estimation of each component, as demonstrated in Section 4.1.
Existing deep neural network-based functional ANOVA models, such as NAM and NBM, do not satisfy the sum-to-zero condition, and thus they are unstable in estimating components.
**ANOVA-TPNN is the first neural network which is flexible (e.g. having the universal approximation property) but satisfies the sum-to-zero condition without any additional constraints.**
Next, unlike traditional tensor product basis expansion approaches, TPNN does not lead to an exponential increase in the number of learnable parameters when estimating component functions $f_{S}$ as $|S|$ increases.
For more details, please refer to the **Remark** on Page 4 of the paper.
Furthermore, since ANOVA-TPNN satisfies the sum-to-zero condition, it allows for fast and accurate computation of SHAP values by leveraging Proposition 3.2 of the paper.
$\newline$
>**Q2.** Can this approach be used to interpret already trained neural network models (for example, as neural network attribution methods)?
**Response to Q2.**
Yes, it is possible.
A pre-trained neural network can be approximated using ANOVA-TPNN by treating the prediction values of a pre-trained neural network as outputs of the training data, and interpretation can then be provided by analyzing the estimated components as is done in Appendix D of the paper.
We will add post-hoc interpretation results of a pre-trained neural network using ANOVA-TPNN to Appendix D.
$\newline$
>**Q3.** If the model you build is interpretable, then the question arises about demonstrating that interpretability and its usefulness. I did not see any examples of this in the experiments section.
**Response to Q3.**
In Section 4.7 of the paper, we applied ANOVA-TPNN to image classification and illustrated how the prediction model can be interpreted based on the components estimated by ANOVA-TPNN.
Due to the page limitations, details related to interpretations using ANOVA-TPNN are provided in Appendix D-E of the manuscript.
For example, in Appendix E, we described the results of both local and global interpretations of the image classification model using the components estimated by ANOVA-TPNN.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed comments. Regarding your response to Q2, I think the results of such post-hoc interpretation experiments would be really interesting to add to the appendices. Your response dispelled my doubts, and I believe that I should increase the rating of your work (2 -> 3).
---
Reply to Comment 1.1.1:
Comment: Thank you for your thoughtful response and for reconsidering your rating of our work.
We appreciate your suggestion regarding the post-hoc interpretation experiments, and we will incorporate these results into the appendix to further strengthen our paper.
Once again, we sincerely appreciate your time and constructive feedback. | Summary: This paper introduces ANOVA Tensor Product Neural Network (ANOVA-TPNN), a novel neural network framework designed to estimate the functional ANOVA model with greater stability and accuracy. Theoretical analysis confirms that ANOVA-TPNN has universal approximation capabilities for smooth functions. Empirical studies across multiple benchmark datasets demonstrate that ANOVA-TPNN provides more stable component estimation and interpretation than existing models like NAM, NBM, NODE-GAM, and XGB. Additionally, the paper introduces NBM-TPNN, a variant that enhances scalability by ensuring the number of basis functions is independent of input feature dimensionality. Despite these advantages, the authors acknowledge computational challenges when handling high-order interactions, suggesting future work on component selection techniques.
Claims And Evidence: All the claims made in the abstract are supported by clear and convincing evident.
Methods And Evaluation Criteria: The proposed method makes sense for the problem.
Theoretical Claims: I have not checked all of the proofs in detail. However, it seems to me that the proof for the universality is correct.
Experimental Designs Or Analyses: The experimental designs are soundness.
Supplementary Material: I read the supplementary.
Relation To Broader Scientific Literature: This paper introduce ANOVA-TPNN, a novel neural network framework designed to estimate the functional ANOVA model in XAI.
Essential References Not Discussed: Relevant works are cited through the papers.
Other Strengths And Weaknesses: **Strengths:**
- The paper is clearly presented, and both theoretical and experimental justifications are provided.
- The paper addresses a crucial issue in explainable AI (XAI) by improving the stability of functional ANOVA decomposition.
- The authors provide a universal approximation proof, ensuring the validity of the proposed method.
- The model demonstrates competitive prediction accuracy while offering superior component stability compared to baseline models.
**Weaknesses:**
While ANOVA-TPNN improves efficiency over traditional basis expansion approaches, the paper acknowledges that high-order interactions remain computationally demanding. Additional analysis of runtime complexity would strengthen the work. Beside of this concern, I do not see any other major weaknesses of the paper.
Other Comments Or Suggestions: No.
Questions For Authors: - Can NBM-TPNN be further extended to handle higher-order interactions efficiently, perhaps through sparsity-inducing techniques?
- How does the choice of activation functions and network architecture impact the stability of the estimated components?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback and questions.
We have made every effort to address your insightful questions.
> **W1.** While ANOVA-TPNN improves efficiency ...
**Response to W1.**
We have conducted runtime experiments for the functional ANOVA model only with the main effects in Appendix K, whose results are summarized in Table 23.
These results suggest that ANOVA-TPNN is competitive with other baselines in terms of runtime complexity.
As the reviewer pointed out, we conducted additional experiments for the runtime complexity of higher order functional ANOVA models.
We analyzed Abalone data with the functional ANOVA models with up to the 4th order interactions and compared the runtimes of ANOVA-TPNN, NAM and NBM.
The hyperparameters of all models are set identically to those in Appendix K of the paper.
**Table A.1**
|Maximum order of interaction|1|2|3|4|
|-|-|-|-|-|
|NAM|6.6sec|11.1sec|28.3sec|79.3sec|
|NBM | 3.0 sec|6.8sec|12.2sec|21.1sec|
|ANOVA-TPNN|1.6sec|5.2 sec|22.7sec|82.7sec|
|NBM-TPNN|1.5sec|4.1sec|7.8sec|16.4sec|
Table A.1 presents the results, which amply show that ANOVA-TPNN is competitve with NAM and NBM in terms of runtime.
In addition, it is interesting to see that the runtimes of NAM and ANOVA-TPNN is super-linear in the order of interactions while those of NBM and NBM-TPNN are linear.
We emphasize that estimating high-order interactions is a common challenge in all functional ANOVA models, including NAM and NBM. To address this, we used Neural Interaction Detection (NID) to remove unnecessary components before training. Section 4.4 demonstrates its effectiveness through numerical experiments.
> **Q1.** Can NBM-TPNN be further extended...
**Response to Q1.**
One may consider the group lasso penalty, which is well-suited for selecting meaningful components while promoting sparsity.
NBM-TPNN models component $f_{S}$ as
$f_S(\textbf{x}\_{S} ) = \sum_{k=1}^{K} \beta_{S,k} \prod_{j \in S} \phi(x_{j} | \theta_{k}),$
where $\beta_{S,k},\theta_{k}$ are learnable parameters and $\phi(\cdot)$ is the basis neural network which is defined in Section 3.4 of the paper. Note that the parameters $\theta_k$ are shared by the components while
$\beta_{S,k}$ are not, which makes it possible to apply a sparse penalty.
Given observed data ${ (y_{i},\textbf{x}\_{i}) }\_{i=1}^{n}$ with $\mathbf{x}\_{i}=(x_{1,i},...,x_{p,i})^\top \in \mathbb{R}^p$ and $y_i \in \mathbb{R}$, consider the objective:
$ {1\over n}\sum_{i=1}^n\bigg (y_{i} - \sum_{S \subseteq [p],|S|\leq d}f_S(\mathbf{x}\_{S,i}) \bigg )^2 + \sum_{S \subseteq [p],|S|\leq d}\lambda_{S}\Vert \mathcal{B}\_{S}\Vert_2,$
where $\lambda_{S} > 0$ is a hyperparameter, $\mathcal{B}\_{S} = (\beta\_{S,1},...,\beta\_{S,K})^\top$ , $\mathbf{x}\_{S,i} = (x_{j,i}, j \in S)$ and $d$ is the highest order of interactions.
Then, the group lasso penalty makes $\mathcal{B}\_{S}$ sparse on the component level, enabling component selection during training. This sparse estimation improves interpretability of higher-order interactions.
However, group lasso penalty does not help reduce runtime complexity. The number of parameters remains proportional to $Kp^{d}$, regardless of sparsity. Developing new algorithms using forward selection or random search could be a promising future direction for future work.
> **Q2.** How does the choice...
**Response to Q2.**
The experimental results on the prediction performance and stability of ANOVA-TPNN when using the ReLU activation function are already given in Appendix L of the paper.
We found that ReLU slightly underperforms compared to the sigmoid version in prediction accuracy and component stability, likely because the sigmoid-based TPNN is more robust to input outliers.
As the reviewer suggested, it is interesting to investigate how the choice of a different network architecture, rather than our proposed TPNN model, affects the stability of component estimation.
Therefore, we conducted additional experiments to evaluate the choice of the network architecture on the performance of stability.
We consider a deep neural network based tensor product model (TPDNN) which assumes
$ f_{S}(\textbf{x}\_{S}) = \prod_{j \in S}g(x_{j} | \theta_{j,S})$
for each component $f_S,$ where $g(\cdot|\theta_{j,S}) : \mathbb{R} \xrightarrow{} \mathbb{R}$ is a 3-layer neural network with hidden sizes [32,32,16] and $\theta_{j,S}$s are the learnable parameters.
We refer to the model that estimates components up to order $d$ in the functional ANOVA model using a TPDNN as ANOVA-T$^{d}$PDNN.
**Table A.2**
|Model|ANOVA-T$^{2}$PNN|ANOVA-T$^{2}$PDNN|
|-|-|-|
|RMSE|2.087(0.08)|2.148(0.08)|
|Stability score|0.028|0.041|
As in Section 4.3, we ran 10 trials.
Table A.2 shows the averaged prediction and stability scores for ANOVA-T$^{2}$PNN and ANOVA-T$^{2}$PDNN on the Abalone dataset.
ANOVA-T$^{2}$PNN outperforms ANOVA-T$^{2}$PDNN, likely due to TPNN's robustness to input outliers.
We will add these results to the Appendix. | null | null | null | null | null | null | null | null |
On the Similarities of Embeddings in Contrastive Learning | Accept (poster) | Summary: This paper investigates the geometry of embeddings learned by contrastive learning. This paper first extends the geometry of optimal embeddings (perfectly aligned positives and negatives with cosine similarity $-1/(n-1)$) to an inclusive form of contrastive loss. Then it proves that over-separated negatives with similarity less than $-1/(n-1)$ harm the perfect alignment of positives, and that in fixed mini-batch training, the same-batch negatives are over-separated, especially when the batch size is small. To address this problem, the authors propose a VRN loss term to regularize the similarity of negative pairs. They conduct experiments on benchmark datasets to validate the effect of the proposed VRN.
Claims And Evidence: - In the first contribution (line 55), the authors claim that within-view negative pairs can mitigate the excessive separation of negative pairs under full-batch scenarios, whereas this point is supported by neither specific theorems nor experiments. Moreover, in the single-modal case, the cross-view and within-view pairs seem to have no difference, because they are both generated by the same combination of random augmentations. Do I understand this correctly? If so, how can more negative pairs help reduce the excessive separation?
- To address the excessive separation problem of mini-batch training, the authors derive the theorems under the *fixed* mini-batch assumption. However, many contrastive learning methods (e.g. SimCLR, MoCo, etc.) empirically support the use of *random* mini-batches, i.e., the loader has the data reshuffled at every epoch. Is excessive separation still a problem under the *random* mini-batch scenarios? Or at least the authors should demonstrate the significance of the *fixed* mini-batch setting.
- Proposition 5.2 demonstrates that excessive separation harms perfect alignment, but I think there lack demonstrations of the negative effect of imperfect alignment, because the essential goal is not to minimize the contrastive loss but to achieve good embeddings. What if the excessive separation brings some advantages to the downstream tasks? To demonstrate the negative effect of imperfect alignment, I think perhaps a worse error bound of the downstream generalization (e.g. linear probing) is necessary.
- The authors claim that the proposed VRN loss term improves the accuracy of contrastive losses. However, according to Table 1 and Figure 2, there are cases where the accuracy drops after incorporating VRN, especially in Figure 2 CIFAR-100 DCL, the top 1 accuracy drops in 4 of the 5 cases.
Methods And Evaluation Criteria: - The experiments are conducted based on the InfoNCE-based contrastive losses. Additional validations on the independently additive contrastive loss (Definition 3.2) could be helpful.
- I understand the authors might have limited computational resources, but additional experiments on the full ImageNet can be more convincing.
Theoretical Claims: I've checked the proofs of Theorem 5.1, Proposition 5.2, and Theorem 5.6. They seem to be correct.
Experimental Designs Or Analyses: Please refer to *Methods And Evaluation Criteria*.
Supplementary Material: I've checked the proofs of Theorem 5.1, Proposition 5.2, and Theorem 5.6, and skimmed through Appendix A and C.
Relation To Broader Scientific Literature: NA.
Essential References Not Discussed: The related works are properly cited.
Other Strengths And Weaknesses: **Strengths**
1. This paper extends the theoretical analysis of contrastive losses with proper assumptions on the generation of positive and negative samples.
2. This paper studies from a new perspective and proves that the similarity of negative pairs could affect the alignment of positive pairs.
**Weaknesses**
1. The fixed mini-batch assumption is somewhat less realistic. More evidence is needed to show if the excessive separation of negative pairs is indeed a real problem in contrastive learning.
2. The experimental results are less convincing.
Other Comments Or Suggestions: The notations need refinement or specification.
1. At the beginning of Section 3, $x$ and $y$ are denoted as two distinct augmentations of the same instance (positive pair). Yet in the following parts, they can also represent negative pairs.
2. The variance of a vector typically means the covariance matrix, but the notation $Var$ in this paper indicates the deviation w.r.t. the l2 norm. This needs specification to improve the readability.
Questions For Authors: 1. What is the difference between cross-view and within-view negatives in the single-modal scenario?
2. Is excessive separation still a problem under the *random* mini-batch scenarios?
3. What is the negative effect of imperfect alignment?
4. How does VRN perform under additive contrastive loss and full ImageNet?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful and constructive review, especially the effort to understand and verify our theoretical results. We are glad the reviewer saw our analysis as a meaningful **extension of existing CL theory** and appreciated its **new perspective** on the role of negative pair similarity. Below, we provide detailed responses to your constructive comments.
---
### Q1 & C1-1. The difference between cross-view and within-view negatives in the single-modal scenario?
Thank you for the insightful question. The figure below illustrates the structural difference between cross-view and within-view negatives in the single-modal case. Each graph has six nodes for three instances, with edges denoting negative pairs: $(u_i, v_j)$ for cross-view, and $(u_i, u_j)$ for within-view, where $i\neq j$.
[`Fig.4` Illustration of cross-view and within-view negatives.](https://osf.io/ejbm3?view_only=066d766d57914710810f46ab5f849bf9)
One can confirm that cross-view graphs are **fully connected bipartite**, whereas within-view graphs consist of **disconnected subgraphs**. This topological difference highlights their non-equivalence, even in the unimodal setting.
We will clarify this distinction in the revision.
---
### Q2 & C2. Is excessive separation still a problem under the random mini-batch scenarios?
Yes, as confirmed by additional experiments. In `Table 1` of our response to Reviewer BhMK, excessive separation still emerges under random mini-batching and correlates with performance degradation when it occurs more frequently.
---
### Q3 & C3. The negative effect of imperfect alignment?
Imperfect alignment degrades representation quality. Using the sigmoid loss (in Example 5.4) with varying bias $b$, we observe in the figure below that lower positive similarities correlate with reduced top-1 accuracy in linear probing. This demonstrates the performance sensitivity to alignment quality.
[`Fig.5` Negative effect of imperfect alignment.](https://osf.io/57knr?view_only=066d766d57914710810f46ab5f849bf9)
---
### Q4. VRN performance under additive contrastive loss and full ImageNet?
We evaluated VRN loss in the linear probing setting on CIFAR-10. Using Sigmoid loss alone ($t=1$, $b=-1$) yields 81.10% accuracy, while combining it with VRN loss ($\lambda=30$) improves performance to 87.81%, under the same training setup as in our paper. This demonstrates a significant gain from incorporating VRN, under additive contrastive loss.
We applied VRN to SimCLR on full ImageNet, using the same setup as in our ImageNet-100 experiments. After 100-epoch training with $t=0.2$ and $\lambda=40$, our method (SimCLR+VRN) achieved 52.48% top-1 accuracy, outperforming the SimCLR baseline (51.79%). This confirms the scalability of VRN to large-scale datasets.
---
### C1-2. The contribution, that within-view negative pairs can mitigate the excessive separation of negative pairs, is supported by neither specific theorems nor experiments.
The mitigation effect of within-view negatives is theoretically supported by Theorems 5.1 and 5.3 (see final paragraph of Sec. 5.1). Theorem 5.3, based solely on cross-view negatives $(c_1, c_2) = (1, 0)$, leads to excessive separation due to the independently additive form of the loss. In contrast, the loss in Theorem 5.1 includes both cross-view and within-view negatives $(c_1, c_2) = (1, 1)$, which alleviates this issue.
To empirically validate this, we evaluated the sigmoid loss (as in Example 5.4) with and without within-view negatives. As shown in `Table 2` of our response to Reviewer BhMK, adding within-view negatives reduces the proportion of excessively separated negative pairs, confirming the theoretical insight.
---
### C4. Accuracy drops after incorporating VRN?
The initial drop was due to using a fixed VRN weight ($\lambda=30$) at submission time. After tuning $\lambda$, we observed consistent performance gains across all methods and datasets. Updated results are provided in `Fig.1` of our response to Reviewer BhMK.
---
### S1. Are $(x,y)$ always positive pairs? Later parts seem to allow negative pairs as well.
There was a notational oversight in Sec. 3. The notation $(x,y)$ should represent either positive or negative pairs depending on the sampling. From Sec. 4, we clarify this with $(x,y)\sim p_{pos}$ or $p_{neg}$. We’ll revise Sec. 3 to reflect this properly.
---
### S2. The notation of $Var$.
Thank you for pointing this out. The standard form is $Var[X]=\mathbb{E}[(X-\mathbb{E}[X])(X-\mathbb{E}[X])^\top]$, while we wrote $\mathbb{E}[(X-\mathbb{E}[X])^\top(X-\mathbb{E}[X])]$. We’ll revise the appendix to follow the standard or express it as an expectation.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the detailed reply. The rebuttal solves most of my questions, but I still have major concerns about Q2.
By "random minibatch," I mean breaking the fixed minibatch assumption, allowing batches to contain different samples in different epochs. For example, when using torch.utils.data.DataLoader, this could be easily realized by setting "shuffle=True" to have the data reshuffled at every epoch. (The default setting is "shuffle=False".) In this case, there will be no fixed same-batch and different-batch negative pairs through training, and consequently, it avoids excessive separation.
Perhaps I didn't make this point clear enough in my last review. The additional Table 1 in the rebuttal doesn't seem to address the above concerns. Therefore, I suggest the authors conduct experiments with the per-epoch reshuffled dataloader to verify if the excessive separation is indeed a significant problem worth investigating.
---
Reply to Comment 1.1.1:
Comment: Thank you for the clarification.
We confirm that **all experiments on real-world datasets used `shuffle=True`**, meaning minibatches were reshuffled at each epoch. This setting was used consistently, **including for `Table 1` and all experiments in Sec.6 of our manuscript**. It can be verified in our code (e.g., [line 396 here](https://anonymous.4open.science/r/vrn/solo/data/pretrain_dataloader.py)).
We hope this addresses the remaining concern.
Please feel free to add comments if you have any further questions or need additional clarification. | Summary: This paper mathematically analyzes the geometric properties of the positive pairs’ embeddings as well as negative pairs’ embeddings in different contrastive learning objectives. The authors mathematically find the optimal threshold for the expected negative pair similarities that results in preventing a misalignment of positive pairs in the embedding space.
They further provide an analysis of how the variance of similarity of the negative pairs can also affect the misalignment of positive samples. This variance is especially evident in the mini-batch setting.
Using the optimal threshold of the similarities of negative samples, the authors propose a comprehensive variance reduction loss function, namely VRN, for the negative pairs that can be added to any existing contrastive loss function. They show its effectiveness in classification tasks using various contrastive learning loss functions on the CIFAR and ImageNet datasets.
Claims And Evidence: Yes, the claims seem convincing to me.
Methods And Evaluation Criteria: Yes, the evaluation makes sense. The authors also provide a link to the source code.
Theoretical Claims: I did not attempt to check the proofs.
Experimental Designs Or Analyses: Yes. The paper provides experiments on 3 classification datasets and examines them with regard to the classification accuracy.
However, the paper could benefit from more in-depth analysis of embeddings and the structure of the embedding space after the proposed VRN approach. E.g., analysis of the distribution of cosine similarities of the positive and negative samples and PCA/DOSNES visualizations based on classes could be provided. These analysis would provide more insights on the geometric structure of the embedding space.
**Update after author response:** The additional experiment analysing the embedding space seems helpful, but additional empirical analysis of typical real-world models, which are based on much larger training runs, would be a welcome addition, especially also with further insights regarding the effects on the semantics in the vector space.
Supplementary Material: Partially. Only Section A.
Relation To Broader Scientific Literature: The findings appear to provide an important insight for contrastive learning in unimodal and multimodal settings. One of the pioneering works in this direction is [1], and since then, many efforts have been done to understand the alignment and separation of embeddings in the embedding space from a geometrical point of view and their results on downstream tasks.
[1] Wang, Tongzhou, and Phillip Isola. "Understanding contrastive representation learning through alignment and uniformity on the hypersphere." International conference on machine learning. PMLR, 2020.
Essential References Not Discussed: None I am aware of.
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: The notation d in lines 242, 283 and 313 seem to not have been introduced in the paper.
Questions For Authors: None
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for acknowledging that our mathematical analysis **provides important insights for contrastive learning in both unimodal and multimodal settings**. We are also encouraged by the positive evaluation of our experimental claims and the effectiveness of our proposed VRN approach on classification tasks. Following the reviewer’s constructive suggestions to strengthen our paper, we provide additional experiments as follows.
---
### E1. More in-depth analysis of embeddings and the structure of the embedding space after the proposed VRN approach.
To analyze the effect of the proposed VRN loss on the embedding space, we conduct experiments on both synthetic and real-world datasets.
**[Experiment on Synthetic Data]**
We use a synthetic dataset of 4 samples, each augmented twice (8 embeddings in 3D). Embeddings are optimized directly using SGD (lr=0.5, 100 steps, mini-batch size=2). We compare two setups: (1) SimCLR with temperature $t=0.5$, and (2) SimCLR + VRN loss with $\lambda=3$.
[`Fig.3`. Visualization of learned embeddings.](https://osf.io/phte3?view_only=066d766d57914710810f46ab5f849bf9)
As shown in the figure above, both methods successfully align positive pairs. However, with SimCLR+VRN, the negative pairs are more evenly distributed in cosine similarity, centering around the theoretical optimum of $-1/3$. Specifically:
- SimCLR: mean = -0.3201, std = 0.2051
- SimCLR+VRN: mean = -0.3327, std = 0.0207
Given the absence of semantic structure in the synthetic samples, a uniform separation among negative embeddings is desirable. The VRN loss seems to facilitate such balanced distribution.
**[Experiment on Real Data]**
We further assess how VRN affects negative pair distribution in realistic scenarios. Using ResNet encoders trained on CIFAR100 with either SimCLR or SimCLR+VRN, we sample 5,000 negative pairs from augmented training images and compute their cosine similarities.
We repeat this across different batch sizes and report the variance of negative-pair similarities:
| Batch size | Variance of the negative-pair similarity (SimCLR) | Variance of the negative-pair similarity (SimCLR+Ours) |
|:-:|:-:|:-:|
|32 |0.1649|0.1008|
|64 |0.1505|0.0952|
|128|0.1444|0.0929|
|256|0.1404|0.0921|
|512|0.1396|0.0917|
`Table 3` *The effect of VRN loss on the variance of negative-pair similarity, where embeddings are generated from models pretrained with different batch sizes.*
We observe that the variance of negative-pair similarity consistently decreases when VRN is used, across all batch sizes. We will include these experimental results and their discussion in the revised version of the manuscript.
---
### S1. The notation d in lines 242, 283 and 313 seem to not have been introduced in the paper.
Thank you for pointing this out. The notation $d$ refers to the dimension of embedding vectors, as mentioned in line 96 in Sec. 3 (Problem Setup). We will clarify this in the relevant lines. | Summary: The paper analyzes the distribution of positive and negative pairs in contrastive learning and shows that perfect alignment becomes impossible when expecting negative pairs to fall below the optimal threshold. The paper also proposes variance reduction for negative-pair similarity loss to reduce the variance of negative pairs when using a small batch size. Furthermore, they experiment on different datasets such as CIFAR-100 and ImageNet-100 to demonstrate that this addition can improve contrastive learning methods.
Claims And Evidence: The papers claims that
Methods And Evaluation Criteria: The paper provides comparison on CIFAR-10, CIFAR-100 and ImageNet-100 datasets using 4 methods. The numerical results show that using VRN increase the accuraccy in most cases.
Theoretical Claims: I checked proofs for Proposition B.1 and B.2. and they were correct.
Experimental Designs Or Analyses: All the experimental section. I am wondering how many negative pairs falles above the optimal threshold in the experimnts.
Supplementary Material: Appendix A and C.
Relation To Broader Scientific Literature: The paper showed that pervect alignmet becomes unreachable if we expect all the negative pairs to fall bellow the optimal threshold. Instead, considering the negative pairs in the loss function can be more benefitioal. These findings are useful in the future direction research in Contrastive Learning.
Essential References Not Discussed: The papers mentiones related works adequantly.
Other Strengths And Weaknesses: **Strengths**:
1. The paper is well-written and the results are clearly presented.
**Weaknesses**:
1. The authors did not discussed in which scenarios adding VRN loss could downgrade the performance. For example in Figure 2, I observe that combining VRN with DCL results decrease (Downwards arrows) in most cases.
2. I expect more investigation on the limitations on using VRN and scenarios where it is benefitial to use it.
Other Comments Or Suggestions: No other comments.
Questions For Authors: I have the following quesitons from the authors:
1. Could authors provide that how many of negative pairs does not satisfy the optimal thershold if a method only uses negative pairs into the loss function?
2. Is there any toy experiments that we can observe the effect of using VRN loss? (Probably something similar to Figure 1 in [1])
[1] Liu et al., Generalizing and Decoupling Neural Collapse via Hyperspherical Uniformity Gap, ICLR 2023
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s positive feedback, including that our paper is **well-written**, provides **a clear explanation of results**, and may be **useful for future CL research**. Below, we address each of the reviewer’s comments in detail.
---
### W1. When VRN loss term downgrades the performance in Fig.2?
Fig. 2 in our paper has been updated (see below) with improved hyperparameter tuning:
[`Fig.1` Effectiveness of VRN.](https://osf.io/mejs6?view_only=066d766d57914710810f46ab5f849bf9)
In earlier results, VRN degraded performance on CIFAR-100 (e.g., DCL, DHEL), likely due to a fixed weight $\lambda=30$. After tuning over a wider range of {0.1, 0.3, 1, 3, 10, 30, 100}, we observe consistent improvements across all methods.
We also note:
* VRN operates on cross-view negatives,
* SimCLR includes such pairs, DHEL does not,
* and gains from VRN are larger when the base loss includes cross-view negatives (e.g., SimCLR).
This suggests VRN is most effective when complementing losses that already involve cross-view negatives.
---
### W2. Limitations on using VRN and scenarios where it is benefitial to use it.
We summarize below the scenarios where VRN is most effective, followed by its main limitations.
1. When VRN is beneficial
* Small batch training:
From Theorem 5.5, small batches lead to higher variance in negative-pair similarities. Since VRN minimizes this variance, it yields stronger gains in low-batch regimes. Empirical results in `Fig.1` support this.
* Robustness to temperature:
Contrastive loss is sensitive to the temperature parameter, which affects embedding similarity distributions `[R1]`. VRN encourages negative similarities toward the optimal $-1/(n-1)$, reducing performance fluctuation.
Below, we show SimCLR with VRN ($\lambda=30$) yields more stable accuracy across temperature values on CIFAR-10/100:
[`Fig.2` Robustness to temperature.](https://osf.io/cp3yn?view_only=066d766d57914710810f46ab5f849bf9)
`[R1]` *Wang, et al. Understanding the behaviour of contrastive loss. CVPR2021.*
2. Limitations of VRN
* Loss of meaningful structure:
Variance in negative similarities can reflect semantic diversity. Forcing uniform similarity may suppress this, as discussed in Sec. 4 (L233–245) and Sec. 7 (L433–436).
* Diminished effect with large batches:
As batch size increases, the natural variance stabilizes, reducing the marginal benefit of VRN.
* Hyperparameter tuning:
VRN introduces an additional weight $\lambda$, which requires tuning per setting.
---
### E1. How many negative pairs fall above the optimal threshold?
We evaluated the ratio of negative pairs whose cosine similarity falls below the theoretical threshold of $-1/(n-1)$, using models pretrained with SimCLR (ResNet-18, CIFAR-100, $t=0.2$).
We generated 5,000 negative pairs from each model by applying random augmentations and measuring cosine similarity from the projector output. The table below summarizes the results:
|Batch size|Ratio below threshold (%)| Var. of negative similarity | Top-1 acc. (%)|
|:-:|:-:|:-:|:-:|
|32|58.09|0.1649|56.34|
|64|58.10|0.1505|58.40|
|128|57.79|0.1444|58.80|
|256|57.61|0.1404|59.72|
|512|57.38|0.1396|59.69|
`Table 1` *Ratio of excessively separated negative pairs and associated statistics.*
We find that over 57% of negative pairs fall below the optimal similarity across all batch sizes. Smaller batches show greater variance and slightly lower accuracy, aligning with Theorem 5.5, which predicts increased variance in negative similarity with reduced batch size.
---
### Q1. How many of negative pairs does not satisfy the optimal thershold if a method only uses negative pairs into the loss function?
We interpret your question in two possible ways:
(1) How many negative pairs fall below the optimal similarity threshold when training only with the VRN loss, or
(2) when using a contrastive loss with only cross-view negatives.
Regarding (1), the VRN loss is not intended to serve as a standalone training objective. It is designed to regularize standard contrastive losses, and cannot learn meaningful representations on its own.
Regarding (2), we report statistics from models trained with sigmoid loss (in Example 5.4), which uses only cross-view negatives. In this case, a large fraction of negative pairs have similarities below the threshold of $-\frac{1}{n-1}$, where $n$ is the sample size. This indicates over-separation. When the loss is modified to include both cross-view and within-view negatives, this issue is mitigated, as shown below:
|Negative-pair type|Ratio below threshold (%)|
|:-:|:-:|
|cross-view|69.30|
|cross-view & within-view|66.88|
`Table 2` *Negative pair similarity in models pretrained with sigmoid loss.*
Please let us know if this interpretation differs from your intent — we would be glad to clarify further.
---
### Q2. Is there any toy experiments?
Please see `E1` in our response to Reviewer qfkv. | null | null | null | null | null | null | null | null |
Evaluating Judges as Evaluators: The JETTS Benchmark of LLM-as-Judges as Test-Time Scaling Evaluators | Accept (poster) | Summary: This paper studies LLM judges as evaluators for test-time scaling and introduces a new benchmark JETTS (Judge Evaluation for Test-Time Scaling). The benchmark assesses different models across three tasks: 1) Response reranking: ) Response reranking, where judges select the best from multiple candidate responses; 2) Step-level beam search, where judges evaluate and rank partial responses during generation; and 3) Critique-based refinement, where judges provide feedback for response improvement. Key findings demonstrate that while existing LLM judges show promise in some test-time scaling scenarios, they have significant limitations, especially in domains requiring complex reasoning.
Claims And Evidence: The claims made in the paper are generally supported. However, several areas would benefit from stronger or more conclusive evidence: 1) In the critique-based refinement findings, the authors demonstrate that refinements rarely surpass both reranking and greedy baselines, but their explanation that critiques are "not actionable enough" lacks sufficient support. A qualitative analysis of critique content with examples would strengthen this claim by illustrating specifically why generators struggle to utilize the feedback effectively. 2) While the judge-to-generator size ratio findings present coefficients (0.19 for math, 0.06 for instruction following, 0.00 for code), it's difficult to determine if these differences are statistically significant. This is particularly important when making claims about domain-specific patterns.
Methods And Evaluation Criteria: This paper proposes a new benchmark so no new methods are introduced. The evaluation criteria is comprehensive covering various tasks, datasets, and metrics. But I do feel it is quite overwhelming and important findings are not highlighted. The comparison between Likert and Additive rating protocols, for example, appears to be included primarily for completeness rather than yielding substantive insights. Such peripheral comparisons would be better placed in an appendix to maintain focus on the more significant findings. Additionally, the benchmark's exclusive focus on open-source models represents a limitation.
Theoretical Claims: The paper makes no theoretical claims.
Experimental Designs Or Analyses: I analyzed several experimental designs in the JETTS paper. The normalized helpfulness metric and task diversity framework both appear sound. However, I identified several validity issues: 1) The random tie-breaking method used for single-rating protocols introduces unquantified variability that affects result reliability; 2) The critique quality analysis lacks a systematic methodology—the paper claims critiques aren't actionable enough but provides no content analysis to support this conclusion.
Supplementary Material: I did not review the supplementary material.
Relation To Broader Scientific Literature: The JETTS benchmark connects two popular research areas: LLM-as-a-judge and test-time scaling. The benchmark reveals significant limitations: LLM judges often fail to improve generator outputs in beam search and refinement tasks, particularly for code generation, and struggle when evaluating larger generator models. These findings challenge the optimistic assumptions in previous research about using LLMs as reliable judges for test-time scaling [1, 2].
[1] Zheng L, Chiang W L, Sheng Y, et al. Judging llm-as-a-judge with mt-bench and chatbot arena[J]. Advances in Neural Information Processing Systems, 2023, 36: 46595-46623.
[2] Snell C, Lee J, Xu K, et al. Scaling llm test-time compute optimally can be more effective than scaling model parameters[J]. arXiv preprint arXiv:2408.03314, 2024.
Essential References Not Discussed: The paper overlooks prior work on critique-based refinement. Most notably, it fails to cite CriticBench (Lin et al., 2024) and CriticEval (Lan et al., 2024), which directly evaluate LLMs' abilities to generate and utilize critiques.
Other Strengths And Weaknesses: Please see my comments in the sections above.
Other Comments Or Suggestions: The font size on the images is too small to read clearly.
Questions For Authors: Please see my comments in the sections above.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank reviewer Mn8F for the constructive review and are delighted that they found our evaluation criteria comprehensive. We respond point by point below.
> The random tie-breaking method used for single-rating protocols introduces unquantified variability that affects result reliability;
We believe this is a misunderstanding: We did not employ random tie-breaking in the single-rating protocol precisely because of the prevalence of tied highest-scores. As explained in Sec. 3.2 (Line 129 right), we report the min, average, and max performances from tied responses, allowing us to account for the possible *range* of performance, as plotted in Fig. 6.
> The paper overlooks prior work on critique-based refinement … CriticBench (Lin et al., 2024) and CriticEval (Lan et al., 2024) …
Thank you for pointing us to these works. We will include these works and a longer discussion of critique-related works. Both works use a single round of refinement, while in JETTS, the judge and the generator jointly decide on the number of rounds that the refinement is carried out (including no refinement at all). Despite the difference, all works arrive at similar conclusions: models struggle to improve the response from judge critiques.
For CriticBench [1], the first two “Correction” columns in Table 1 of Page 5 show that only GPT-4 can significantly improve model responses, and most other models generate worse responses than the original ones, as indicated by the red background colors.
For CriticEval [2], as shown in Table 5 of Page 7, the quality of refined responses (the CR metric) using judge-generated critiques is much lower than those using human-annotated feedbacks. Furthermore, as the authors did not share the original model performance (to the best of our knowledge), it is unclear whether the refined responses are actually better than the original ones.
> The critique quality analysis lacks a systematic methodology…
We agree that additional qualitative analysis would be beneficial. We include a case study for **Reviewer i886**, and point the reviewer there due to space limitations.
>2) While the judge-to-generator size ratio findings present coefficients … it's difficult to determine if these differences are statistically significant.
For the regression analysis in Fig. 4, we have the following p-values for the slope and intercept.
| | |p-value|
|-:|-:|-:|
| Math|Slope|**9.3e-10**|
| | Intercept (at size-ratio=0.1)|**1.6e-3**|
| Code|Slope|0.93|
| | Intercept (at size-ratio=0.1)|**0.038**|
| Instruction Following | Slope|0.26|
| | Intercept (at size-ratio=0.1)|0.064|
|
For math, both the slope and the (negative) intercept are statistically significant, suggesting both that a large size ratio helps with performance, and the very small ratios hurt. For code, the slope is not significant but the (negative) intercept is, suggesting that all size ratios lead to negative helpfulness. For instruction following, while both slope and the intercept are slightly positive, neither is statistically significant. Thus, claiming a positive effect helpfulness would need more data to support.
We present a similar analysis for the results in Fig. 6 in our response to reviewer gykS, and will include such analyses for all results in the final version.
> The comparison between Likert and Additive rating protocols … would be better placed in an appendix…
Thank you for the suggestion. We will make changes in the final version. Given new results of large judge beam search (see our response to reviewer gykS) and GPT-4o-as-judge (see below), we will also holistically assess the significance of each result and re-organize the main body and appendix as necessary.
> Additionally, the benchmark's exclusive focus on open-source models represents a limitation.
While JETTS focuses on benchmarking specialized LLM judge models, we started experiments with GPT-4o as the judge, using SFRJudge prompts (Fig. 14-15 on Page 15-16), Llama-3.1-8B-Instruct as the generator, and report normalized helpfulness in reranking and relative improvement over greedy in refinement. We present preliminary results below and will update our paper with full results when experiments conclude. (Skywork-70B cannot generate critiques and hence cannot be used for refinement.)
| Judge | Reranking: MATH | Reranking: BigCodeBench | Reranking: AlpacaEval | Refinement: MATH | Refinement: BigCodeBench | Refinement: AlpacaEval |
|--|--|--|--|--|--|--|
| GPT-4o |0.174|0.300|0.359|0.98|1.07|1.10 |
| SFRJudge-70B|0.418|0.174|0.478|1.12|1.10|1.11 |
| Skywork-70B|0.185|0.219|0.381|N/A|N/A|N/A |
|
Except for BigCodeBench Reranking, GPT-4o consistently lags behind SFRJudge-70B and Skywork-70B. This suggests that general-purpose high-performance LLMs also struggle with such fine-grained judging tasks, making JETTS a valuable resource in assessing judging capability progress of future LLMs.
[1] https://arxiv.org/pdf/2402.14809
[2] https://arxiv.org/pdf/2402.13764 | Summary: This paper introduces a benchmark designed to assess the feasibility of using large language model (LLM) judges as evaluators in test-time scaling scenarios. The study compares LLM-judges to traditional reward models (RMs) and process reward models (PRMs) in three key tasks: Response Reranking, Step-Level Beam Search, Critique-Based Refinement.
Claims And Evidence: I think most of the claims are well supported by the evidence.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria in JETTS are generally well-designed for assessing LLM-judges as test-time evaluators.
Theoretical Claims: N/A
Experimental Designs Or Analyses: The experimental design is mostly sound, particularly in its use of diverse benchmarks, structured evaluation tasks, and efficiency trade-off analyses.
Supplementary Material: I check the supplementary writing and there was no code base submitted.
Relation To Broader Scientific Literature: I think this paper is well designed and interesting overall. It has good contribution that ties test-time scaling settings and llm-as-judge.
Essential References Not Discussed: I think most of the related works are well cited.
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: What are the possible reasons for the Critique-Based Refinement Task being largely ineffective, despite the success of self-reflection and similar methods in other tasks? If this is the case, what potential solutions could improve its effectiveness?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their considerate review, and are happy that you found our paper well designed and interesting.
> What are the possible reasons for the Critique-Based Refinement Task being largely ineffective, despite the success of self-reflection and similar methods…? …what potential solutions could improve its effectiveness
We suspect that the lack of success is because judge critiques **lack actionability**. Generally, this means that LLM-as-judge critiques tend to focus on surface-level details (e.g., formatting) rather than correctness.
Recent critique benchmarks had similar findings: Models struggle to improve their performance using critiques generated from external critic models. In particular, two papers suggested by Reviewer Mn8F support this claim: CriticBench [1] and CriticEval [2] both highlight that critiques hold some promise, but in general, only critiques from extremely powerful models, like GPT-4, lead to performance gains. Our work further shows that this holds also for multi-round critique-based refinement, whereas previous work focused only on one round.
Furthermore, even for self-reflection and self-correction without an external evaluator, the evidence of its utility has been mixed, with papers finding that LLMs can’t self-correct reasoning [3], or small LLMs need strong verifiers to do so [4]; See Sec 4.2 of [5] for a comprehensive review. Thus, we believe that future work is needed to further identify the fundamental mechanism and reason for reported success, e.g., to resolve the paradox posed by [3]: If an LLM possesses the ability to self-correct, why doesn’t it simply offer the correct answer in its initial attempt?
To remedy this, we believe that judge training should place a higher emphasis on critique quality, likely borrowing ideas from process supervision or critique-generation models, e.g., using human annotated critiques [6] or RL-training from verifiable feedback [7]. We will update the paper with these discussions.
[1] https://arxiv.org/pdf/2402.14809
[2] https://arxiv.org/pdf/2402.13764
[3] https://arxiv.org/pdf/2310.01798
[4] https://arxiv.org/pdf/2404.17140
[5] https://arxiv.org/abs/2308.03188
[6] https://arxiv.org/abs/2407.00215
[7] https://arxiv.org/abs/2502.03492
## Case study
We manually inspected over 100 judge critiques. Overwhelmingly, we found that judges fail in two ways: (1) Failure to catch mistakes by critiquing only stylistic elements of the response and (2) Overemphasis on stylistic details while ignoring a correct response. In both cases, **judge critiques lack actionability**: Judges focus on superfluous details. In the former, the judge fails to guide the generator to a correct response, while in the latter, critiques needlessly prolong the refinement process without substantive improvements.
Below, we present two representative examples of both scenarios. We use MATH, as each response can quickly be verified using the final answer. Due to space limitations, we omit the model’s intermediate reasoning steps and parts of the judge critique with [...]. We will dedicate an entire appendix section in our updated paper with complete examples.
### Example 1
Incorrect response, but the judge focuses on surface level details rather than catching the error. As a result, the generator does not revise further.
Ground-truth solution: 19
Generator: Qwen-72B
```
## Step 1: Identify the sizes of squares that can contain the black center square [...]
Therefore, the final answer is: $\boxed{14}$. I hope it is correct.
```
Judge: SFRJudge-70B
```
The response precisely executes the instruction by following the required format for complex problems and providing a clear step-by-step solution. The explanation and calculations are concise and easy to follow. The response also accurately counts the number of squares of each size that contain the center square and correctly sums them up to obtain the final answer. [...] Score: 5
```
### Example 2
Correct response, but the judge focuses on minor details. As a result, the generator continuously revises, but retains the correct answer in each revision. In this case, we verify that the model’s reasoning steps are correct.
Ground-truth solution: -7
Generator: Llama-8B
```
## Step 1: We are given a polynomial equation $x^5 + 7x^4 - 2 = 0$ [...]
Therefore, the final answer is: $\boxed{-7}$.
```
Judge: SFRJudge-70B
```
The response does not precisely execute the instruction. The instruction asks to provide a concise solution with minimal explanation for simple problems (2 steps or fewer) or use a step-by-step format for complex problems (3 steps or more). However, the given problem is complex and requires the step-by-step format, but the response does not provide brief explanations and calculations [...] Additionally, the response does not provide a clear and detailed explanation of how the expression is simplified to obtain the final value. Score: 2
```
---
Rebuttal Comment 1.1:
Comment: Thank you very much for the clarifications. I think the discussion on the Critique-Based Refinement Task could further strengthen the paper. Please make sure to include these to the final manuscript if accepted.
---
Reply to Comment 1.1.1:
Comment: We will ensure the critique-based refinement discussion is included in our final paper. Thank you for your constructive feedback! | Summary: The authors propose the JETTS Benchmark for evaluating LLM-as-Judge evaluators for test-time scaling where the judges are used to improve the final output from the generator. The benchmark covers Best-of-N reranking, (2) step-level beam search, and (3) critique-based refinement across the math reasoning, code generation, and instruction following domains.
Claims And Evidence: Yes. The paper provides citations where needed and the claims stated in the experimental results are backed by the evidence in the benchmark results.
Methods And Evaluation Criteria: Yes, the approaches used make sense and are clearly described.
Theoretical Claims: Yes, although there is not much in theoretical claims as this is a benchmark paper.
Experimental Designs Or Analyses: Yes. The benchmark is run using 6 different generator models and 6 different judge models using 8 different datasets. These are all split across the 3 tasks of math reasoning, code generation and instruction following. The analysis is very detailed and clearly presented with key take-aways marked in bold.
Supplementary Material: Yes, all of it. There is additional detail on prompt templates used and more results from the experiments.
Relation To Broader Scientific Literature: LLMs are being used as evaluators increasingly often in recent work and they provide key benefits in scaling and control. This has led to them also being used to improve generated output at inference time as a form of reflective selection and refinement. The benchmark presented in the paper can help researchers identify the strengths and weaknesses of different models for this purpose.
Essential References Not Discussed: Not that I am aware of.
Other Strengths And Weaknesses: The paper is well written and motivated and the contributions are clearly described. I do feel that the 3 domains selected limit the applicability of the benchmark to more common use cases such as Q&A, chat and summarization and would like to see those added in a future form of the benchmark. I would also like to see a general evaluation of the LLM judges as simple evaluators in order to better understand the impact of using them for the 3 tasks presented.
Other Comments Or Suggestions: I would suggest adding a reasoning for why the 3 domains were chosen as opposed to others. There is a section on task and dataset selection and some detail on model selection but no information on domain selection.
Questions For Authors: 1) Why were the 3 domains selected as opposed to others?
2) The focus of this benchmark is on test-time scaling, but does it make sense to add general evaluation of the judges as evaluators to better understand their performance on the 3 tasks? For instance, a weak evaluator may also be weak on the tasks but some models that are in general weak evaluators may still be useful for the tasks.
3). Similar to the previous question, there should be a consideration on task performance vs latency and memory as these are key considerations for deploying a model for the three tasks evaluated. Memory impacts resource constraints while latency impacts the usability of even a great model, according to this benchmark, as a judge.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank Reviewer Bcf6 for their thoughtful review and are grateful that you found our work well-motivated.
> Why were the 3 domains selected…
This is an excellent question. We will revise our paper to motivate our choice of domains more concretely:
*Instruction following (IF):* Much recent work in judge benchmarking focuses on IF as a proxy for **chat quality** (e.g., [1,2). As such, we include IF as it is best aligned with what judges excel at. This fact is reflected in our results: Across the board, judges performed the best on IF (Fig. 4).
*Math:* Math has exploded in popularity as a domain to measure progress in **LLM reasoning**. Many existing works focus on scaling inference-time compute for math (e.g., [3,4]), using benchmarks like MATH and GSM8K. Thus, we found it crucial to evaluate the judges for math.
*Code:* We identified code as a challenging domain, with many recent methods (including Alphacode [5] and Reflexion [6]), using inference-time scaling (e.g., [7, 8]) and trained evaluators (e.g., [9]). These initial works suggest that code is an emerging domain in need of strong test-time evaluators. Moreover, the line-by-line nature of code amenable to beam-search, code reranking has been the focus of small-scale judge experiments in prior work [10], and the coding domain provides a more formal reasoning language for LLMs.
> …does it make sense to add general evaluation of the judges as evaluators to better understand their performance on the 3 tasks?...
Thank you for suggesting to contexualize JETTS performance with existing benchmarks. We compare normalized helpfulness on JETTS reranking (RR) and beam search (BS) against accuracy on RewardBench [2] and AutoJ’s EvalP test-set [11]. The former assesses reward modeling ability, while the latter assesses chat-specific evaluation. We will update our paper with a complete figure and present a subset of results below.
|Model| RewardBench (accuracy)|EvalP (accuracy)|JETTS RR|JETTS BS|
|-:|-:|-:|-:|-:|
|Prometheus-7B |72.0|56.03|-0.098|-0.102|
|Prometheus 8x7B |74.5|58.69|-0.077|-0.091|
|SFRJudge 8B |88.7|60.34|0.024|-0.006|
|Skywork-Critic 8B |89.0| 56.39|0.040|0.044|
|SFRJudge 70B |92.7| 63.51|0.177|0.129|
|Skywork-Critic 70B |93.3|57.26|0.172 |0.126|
|
The judge performance across benchmarks are generally correlated. However, variation in performance in JETTS is much larger than that in RewardBench or EvalP. For example, in RewardBench, the gap between the 8B and 70B Skywork models is 4.3% accuracy (5% relative improvement from 8B to 70B). On JETTS RR, the gap is 0.132 normalized helpfulness, or a 330% relative improvement.
We believe JETTS more accurately reflects the difference in “fundamental judging ability” between small/large judges: Based on RewardBench, the practical choice is to use an 8B judge rather than a 70B judge for reranking/beam-search (4% acc. drop vs for 9x fewer params). However, JETTS, which realistically mimics inference-time scaling tasks, advises the opposite: the choice of 70B judge yields far more gains than the 8B judge.
We found this discussion to be rich, and will update our paper accordingly.
> …there should be a consideration on task performance vs latency and memory as these are key considerations…
We agree that latency and memory are important metrics. Previous works quantify the test time scaling with respect to a compute budget (e.g., Figure 3 of [4]), but use scalar reward models that make the budget easy to quantify (i.e., reward score is only a function of input size). By comparison, LLM judge models can generate critiques/CoT reasoning, making it non-trivial to equalize the compute quantity. Instead, we equalize the experiment setup (e.g., number of responses to rerank or beam width) and leave “compute-optimal” judging to future work.
The reranking strategy, however, does show a performance-efficiency trade-off (Line 247 left). The O(n^2) pairwise round-robin delivers larger gains than the O(n) single-instance rating but requires more time: 23.58 vs. 5.64 seconds/sample for reranking Llama-8B’s BigCodeBench responses using GPT-4o as a judge in our new experiment for Reviewer Mn8F. This difference is more significant for beam search, where each beam search step requires a reranking step. We will update the paper to highlight this trade-off, and additionally include statistics about GPU VRAM needed to run each judge.
[1] https://arxiv.org/abs/2310.07641
[2] https://arxiv.org/abs/2403.13787
[3] https://arxiv.org/abs/2408.03314
[4] https://arxiv.org/abs/2502.06703
[5] https://www.science.org/stoken/author-tokens/ST-905/full
[6] https://arxiv.org/abs/2303.11366
[7] https://arxiv.org/abs/2407.21787
[8] https://arxiv.org/abs/2501.14723
[9] https://arxiv.org/abs/2410.17621
[10] https://arxiv.org/abs/2407.10817
[11] https://arxiv.org/abs/2310.05470
---
Rebuttal Comment 1.1:
Comment: I confirm that I have read the author response and my questions have been answered. I will update my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for reading our response and we are glad to have addressed your questions. We look forward to incorporating these discussions in the final version of the paper. | Summary: This paper proposes a benchmark called JETTS (Judge Evaluation for Test-Time Scaling) to evaluate the performance of LLM-as-judges in test-time scaling scenarios. The benchmark consists of three tasks: response reranking, step-level beam search, and critique-based refinement.
The main findings of the paper are:
1. LLM judges can be helpful in certain domains, such as instruction following, but not in others, like math and code generation.
2. Despite more time-efficient, single-rating evaluation protocol performance results in evaluation that is too lenient. Judges often rate a significant fraction of the N responses a top score.
3. Current chain-of-thought reasoning generated by LLM judges is insufficient for self-improvement.
The main contributions by the paper are:
1. The JETTS benchmark, which provides a systematic evaluation framework for LLM judges in test-time scaling scenarios.
2. The comparision of pairwise and pointwise protocols, and the analysis of their trade-offs.
3. The investigation of the effectiveness of chain-of-thought reasoning in LLM judges and its limitations.
Overall, the paper highlights the challenges and opportunities in using LLM judges for test-time scaling and provides a foundation for future research in this area.
Claims And Evidence: The submission presents several claims about the performance and limitations of LLM-as-judges in test-time scaling scenarios. While the paper provides some evidence to support these claims, there are areas where the evidence is not clear or convincing due to lack of in-depth experimental results or improper exeprimental setup.
Specifically, on Line 218, most judges are fine-tuned using a fixed prompt template, but in this paper's setup, a single prompt template is used for Critique-Based Refinement experiments. It would be beneficial to explain why this template was chosen and what effect using different templates might have.
Furthermore, while many numbers are included in the results due to the involvement of multiple generators and judges, it is difficult to determine whether the observed trends or patterns are statistically significant. For example, Figure 6 does not appear to show any significant differences between the various judges, making it less informative.
Minor Suggestion: Adding clear y-axis labels to each plot would improve the overall clarity of the figures.
Methods And Evaluation Criteria: The proposed methods/metrics overall look intuitive and reasonable.
Theoretical Claims: N/A
Experimental Designs Or Analyses: Upon review, most experimental designs and analyses appear to be sound.
Supplementary Material: Figure 14-17.
Relation To Broader Scientific Literature: Yes. This paper points out the limitations of LLM-as-a-judge models in the test-time scaling setting. Important findings:
1. Although unique to LLM-judges, their natural language critiques are currently ineffective in guiding the generator towards better responses.
2. LLM-judges lag significantly behind the small QPRM in the task of step-level beam search.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: ## update after rebuttal
The rebuttal has addressed my questions. On the other hand, I agree with some of points from reviewer Mn8F. Thus, I will maintain my score.
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank Reviewer gykS for their thoughtful review. In particular, we are happy that you found our metrics intuitive and our experimental setup sound. We respond point-by-point to questions and comments below.
> Specifically, on Line 218, most judges are fine-tuned using a fixed prompt template, but in this paper's setup, a single prompt template is used for Critique-Based Refinement experiments. It would be beneficial to explain why this template was chosen and what effect using different templates might have.
We want to clarify our setup. Each judge model is asked to produce a critique and judgment **using its corresponding prompt template. That is, the judge prompt template is not fixed across all judges**. This means that each judge’s instructions and output format follow ones that are used to train the judge.
However, we use a fixed prompt template to prompt the generator model, which for the critique experiments are all general-purpose instruction-tuned LLMs. This fixed prompt, shown in Figure 17, takes in the judge’s critique and score (which we parse out separately from the judge response), the previous response, and the original user query, and tasks the generator (i.e., instruct model) to refine its answer. Upon review of Section 3.4, we realize that we did not make this explicitly clear, and will update the final paper to clarify. Thanks!
> Furthermore, while many numbers are included in the results due to the involvement of multiple generators and judges, it is difficult to determine whether the observed trends or patterns are statistically significant. For example, Figure 6 does not appear to show any significant differences between the various judges, making it less informative.
For the single-instance rating reranking protocol shown in Figure 6, we tested whether the min, average and max performances (i.e., the three ticks for each model, task and likert/additive prompt combination) are statistically significantly different from 0 (using a one-sample t-test with p-value threshold of 0.05). Not surprisingly, both the min and max are statistically significantly different from 0 in all cases. However,the average performances are significantly different from 0 for only a handful of math and code cases, as indicated by an “x” in the table below. *Quite concerningly, in all such cases, the average performance is negative, indicating that we have strong evidence that they perform worse than the simple greedy baseline, suggesting the unreliability of the single-rating method.*
| | | | | | | |
|--------------:|:-------:|:-------:|:------:|:-------:|:---------:|:-------:|
| | Prom 7B | SFR 8B | Thm 8B | SFR 12B | Prom 8x7B | SFR 70B |
| Math Likert | x | | | | x | |
| Math Additive | x | | | | | |
| Code Likert | x | x | x | | x | |
| Code Additive | x | x | x | x | | |
| Instruction Following Likert | | | | | | |
| Instruction Following Additive | | | | | | |
|
Furthermore, we present statistical analysis for the linear regression in Figure 4 in our response to reviewer Mn8F, and will include these analyses for all results in the final version.
> Minor Suggestion: Adding clear y-axis labels to each plot would improve the overall clarity of the figures.
We agree with the reviewer. As we cannot upload an updated paper version, we will update our figures for our final paper.
> LLM-judges lag significantly behind the small QPRM in the task of step-level beam search.
We are excited to share some new results. Since submission, we obtained access to additional compute resources which were used to evaluate the **large judge models** on beam search (the “C!” entries in the Figure 1 result summary). Here, we provide a summary of our results, with our final paper to be updated with more comprehensive analysis.
| Model | Performance |
|---------------------:|------------:|
| Prometheus-7B | -0.102 |
| SFRJudge 8B | -0.006 |
| Skywork-Critic 8B | 0.044 |
| OffsetBias 8B | 0.005 |
| Themis 8B | -0.026 |
| SFRJudge 12B | 0.040 |
| **Prometheus 8x7B** | -0.091 |
| **SFRJudge 70B** | 0.129 |
| **Skywork-Critic 70B** | 0.126 |
| **Self-taught-eval-70B** | 0.074 |
| Qwen PRM 7B | 0.178 |
| Random | -0.141 |
| |
As we can see, all large judges (bolded), except for Prometheus 8x7B, perform much better than smaller ones, with SFRJudge and Skywork-Critic 70B being the best. However, they still lag behind the much smaller 7B Qwen PRM, suggesting that finer-grained step-level judging has much room for improvement. | null | null | null | null | null | null |
Comparing Comparisons: Informative and Easy Human Feedback with Distinguishability Queries | Accept (poster) | Summary: This paper proposes a new type of query in rlhf, the distinguishability query (DQ). Rather than directly comparing two sets of trajectories, the authors compare two sets of trajectories and selects the one that is easier to give feedback on. They then provide feedback on the easier pair, and can learn from that data. Experimental results show that the method can sometimes produce solid performance gains over the baseline, PEBBLE, but when equal amounts of data are used for the baseline and DQ the performance gain is not that large.
## After Discussion
After the discussion and reading the reviews of the other reviewers, I plan to maintain my score. The method is interesting and the paper seems solid overall. However, it is still unclear to me how to measure the cost of a distinguishability query compared to a normal RLHF query and what a fair comparison means for DQ versus other baselines. I don't really think the authors argument that "it's also unfair to only count the number of human choices since such choices for DQ and PCQ apparently provide different amount of information" makes sense, which is the essential claim all of their experimental results rely upon.
Claims And Evidence: The main claim that DQ can improve RLHF performance is somewhat supported. The experimental results show that DQ can outperform the baseline algorithms when it recieves a large budget then them (see DQ in figure 3). But when a roughly equal labeling budget is given for the baselines and DQ, (i.e. DQ (Half)), the results are not that good and can only really outperform the baseline in two of the four environments presented in Figure 3.
In the experiments section the authors claim that DQ (Half) outperforms the baselines in all except for MRN on Quadruped walk (line 345). However, this is a but overstated. DQ-half and other algorithms typically have very similar performance (at least in Figure 3) as well as overlapping confidence intervals.
Methods And Evaluation Criteria: The experiments and evaluation criteria do make sense. This paper works to improve RLHF, and some of the earliest papers on RLHF focused on continuous control [1]. They evaluate based on the average ground-truth reward received, which makes sense. However their experiments are all conducted on very simple continuous control environments. My concern is that their approach of only selecting easy samples to train on will work better in easy settings than in hard settings. This means their experimental setup may overestimate the utility of their method.
[1] Christiano, Paul F., et al. "Deep reinforcement learning from human preferences." Advances in neural information processing systems 30 (2017).
Theoretical Claims: na
Experimental Designs Or Analyses: Yes I did check the soundness of the experimental design. The experimental design and analysis seems both standard and solid. They compare their algorithm with many baselines, and they are careful to make sure the data labelling budget is comparable for different algorithms. In addition, they report five independent runs for each algorithm, and report standard deviation for all experiments.
Supplementary Material: Yes, I read the appendix.
Relation To Broader Scientific Literature: The idea of distinguishing between different queries is definitely a novel idea in RLHF. The paper does a good job of engaging with the literature.
Essential References Not Discussed: I do not know of any.
Other Strengths And Weaknesses: Strengths:
- The idea of this paper is novel and interesting.
- The writing and motivation for their method is very clear.
- The experimental results are for the most part solid.
Weaknesses:
- The problem settings used in this paper are very easy problems. In continuous control we have a fairly straightforward goal, so it is fine to rely on non-ambiguous queries to learn it. However for more complex and open ended tasks such as LLM alignment, ambiguous queries may actually contain subtle and important information for reward learning.
- DQ (half) does not really perform much better than most baselines.
Other Comments Or Suggestions: none
Questions For Authors: Is there any way to test DistQ on LLM Alignment? Or, is it possible to consider a task where subtle information in ambiguous queries matters?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: 1. Claim about performance of DQ (half):
(1) For query budget and fairness comparison discussion, please refer to **point 1-(1)&(2) in our response to reviewer ufMA**.
(2) For effectiveness, please refer to **point 1-(3) in our response to reviewer ufMA**. Besides, we show in Fig 2 in https://drive.google.com/drive/folders/1wR469npWztzkjyW0YF2H9C10LTI3wnTU that DistQ learning from only distinguishability preference feedback (DQ_d loss) performs far behind DistQ learning from only pairwise preference feedback (DQ_pairwise loss), which demonstrates that DQ (half) actually obtains much less information from a seemingly "roughly equal labeling budget" compared with the baselines. In such case, performance of DQ (half) shown in Fig 3 could be recognized.
At last, to clarify, the comment in line 345 saying DQ (half) "outperforms most baselines only except for MRN" is only made on Quadruped walk", which we think is consistent with Fig 3. As for other tasks, DQ (half) indeed has similar performance to some of the baselines, which is understandable given the above explanation.
2. Overestimation from easy experimental setup:
(1) We've conducted more experiments on harder control tasks. Please refer to **point 1 in our response to reviewer pH8H and mQWc**. Besides, as pointed out by reviewer mQWc, there also exist challenging control tasks to solve. We believe that for those tasks, subtle information in ambiguous queries does matter, and we demonstrate that our method can work.
(2) Note that our method balances query efficiency and user-friendliness by selecting both informative (ambiguous) and easy-to-answer queries as introduced in Sec 4.2, instead of only selecting easy samples to train on. In our experiments (see Sec 5.5), we also conduct an ablation study to compare the performance of our method and an alternative approach that samples queries only based on easiness (DQ (E) curve in plots). The results show that our method considering both ambiguity and user-friendliness is much better than DQ (E).
3. About testing DistQ on LLM Alignment:
In our current setting for DistQ, the reward model is an ensemble of three-layer neural networks with 256 hidden units, which is quite small compared with the reward model in LLM alignment. Therefore, it may need much more queries and computational resources for LLM alignment, which is hard for us to test given the limited time.
We believe the high-level idea of DistQ can be applied for LLM alignment, and we leave this extension to future work.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response to my review. I appreciate the addition of harder experiments. For figure 2 of the rebuttal material, how do you decide on the budget? Does a difficulty query use the same amount of budget as a normal query?
---
Reply to Comment 1.1.1:
Comment: Thank you for taking the time to engage in this discussion with us. We are also very grateful for your appreciation of our efforts on additional experiments. We now provide results on more tasks (**please refer to point 2 in our reply to reviewer pH8H's comments**), which further demonstrates the effectiveness of our method.
For figure 2 of the rebuttal material, we decide the budget based on our estimation of the difficulty level of tasks. For example, given the hard task Disassemble, we guess that its difficulty level is similar to or larger than Sweep into (which is the hardest task in our original paper) based on our understanding of the tasks. Then we set the query budget for Disassemble as 10,000, which is the query budget we use for Sweep into. Though from the results, it seems that such budget may be insufficient for Disassemble. However, given the limited time, we are not allowed to try a larger budget.
We sincerely hope that our response can address your concerns and demonstrate our method better. If no other concern, we would be grateful if you could consider increasing your evaluation of our work. | Summary: This paper proposes a three-stage approach to optimizing the PbRL pipeline:
1. **Selecting the top N informative (based on the variance of reward ensembles) Pairwise Comparison Queries (PCQs)** for comparison. This step explicitly enables the reward model to distinguish high-uncertainty pairs better, accelerating the reward learning process.
2. **Selecting the top and bottom M easy (based on entropy) PCQs** for re-pairing. This enhances the contrast of comparison pairs, thereby improving the sample efficiency of reward learning.
3. **Incorporating an additional distinguishability preference loss** to assist reward learning further.
Intuitively, all these improvements contribute positively to the PbRL pipeline. Comparative and ablation experiments demonstrate the effectiveness of these stages.
## update after rebuttal
During the rebuttal phase, the author supplemented many materials through anonymous links, such as new experiments on harder tasks, videos of learned policies, details of the user study, and explained some misunderstandings in the discussion section, which led me to increase my score.
Claims And Evidence: Although the methods proposed in this paper are intuitively reasonable, I believe the experiments are insufficient to support these claims.
- The experiments are conducted on a very limited set of tasks, including two locomotion tasks and two manipulation tasks. Among them, Walker Walk and Quadruped Walk are relatively easy tasks in DMC, while Window Open and Sweep Into correspond to easy and medium-difficulty tasks in MetaWorld, respectively. If the authors could include more challenging tasks, such as **Humanoid/Dog** in DMC and **Shelf Place/Disassemble** in Meta-World, the experiments would be more convincing.
- As a PbRL algorithm, the experiments do not involve **human feedback**. Evaluating only on synthetic feedback fails to demonstrate the algorithm's effectiveness in real-world scenarios.
Methods And Evaluation Criteria: Yes. However, I suggest that the authors include additional experiments that go beyond merely maximizing reward and instead use human feedback to shape the agent's behavior, as Pebble did in Figure 6 of its paper.
Theoretical Claims: This paper includes no proofs or theoretical claims.
Experimental Designs Or Analyses: See **Claims And Evidence**.
Supplementary Material: I have reviewed the appendix.
Relation To Broader Scientific Literature: The key contributions of the paper are mainly related to preference-based RL.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: See **Claims And Evidence**.
Other Comments Or Suggestions: No.
Questions For Authors: I have no questions.
Ethical Review Concerns: No.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: 1. About experiments on more challenging tasks:
**Please refer to point 1 in our response to reviewer pH8H**. Besides, it is worth mentioning that the suggested harder tasks are not evaluated in all our baselines either. Therefore, we need to determine workable hyper-parameters for all the methods as well as the backbone RL algorithm SAC, which can be time-consuming given the high difficulty of tasks. In our linked results, all methods follow their default hyper-parameter settings without tuning.
Also, there is possibility that these tasks are already challenging when a ground-truth reward function is available. Our method doesn't claim to improve the sample efficiency of the underlying deep RL algorithm. In contrast, we propose a method to circumvent the need of having to define a ground-truth reward function, by asking informative and easy-to-answer queries than other RLHF methods.
2. Additional experiments with real human feedback:
We have conducted the suggested experiments with real human involved and explained details in Sec 5.4 and Appendix D.3. To have a comprehensive understanding of the user study, we also provided an anonymous link of videos of selected queries and evaluation of trained agents in Appendix D.3 in our original version of paper.
---
Rebuttal Comment 1.1:
Comment: First, I would like to thank the authors for the additional experiments on more challenging tasks and the user study.
I am now also considering Reviewer ufMA’s perspective. From a practical standpoint, **understanding the video pair is the most time-consuming part** of the human feedback process. Once a video pair is understood, it is not difficult to make even multiple choices. From this perspective, DistQ(half) requires the annotator **to fully understand the easy pair and only briefly understand the hard one**, whereas DistQ requires the annotator **to fully understand two video pairs**. This makes the comparison between DistQ(half) and the baselines relatively fair, while DistQ consumes twice the query budget compared to the baselines. Unfortunately, DistQ(half) does not show a clear advantage over the baselines.
However, from the additional experiments, I notice that DistQ(half) performs significantly better than the baselines in the *disassemble*. This makes me wonder whether the tasks in the current experiments are too simple, making it difficult for DistQ(half) and the baselines to show a noticeable difference.
---
Reply to Comment 1.1.1:
Comment: Thank you for acknowledging our response. Meanwhile, we would like to emphasize that **the user study was done in our paper originally, instead of during the rebuttal**. We next address the further concerns and clarify potential misunderstanding.
1. About time consumption of human feedback
We agree that understanding the video pairs is time-consuming, especially for harder ones. And **this exactly matches our core motivation**. Our method is proposed to select informative and relatively easier-to-answer PCQs. **For one DQ in both DistQ and DistQ(half) settings** (see our detailed explanation in the following **point 2**), **the annotator only needs to select and answer the easier PCQ, thus avoiding fully understanding the more complex PCQ**.
Besides, we believe that reviewer ufMA didn't discuss about understanding the video pair and its time consumption for the human feedback process. Instead, he/she presented his/her understanding of *number of human choices for each query*.
The suggested references of human response time serve as a supplement to our relevant work section.
2. About the experimental settings of DistQ and DistQ(half)
**Please refer to point 1-(1) in our response to reviewer ufMA.** We believe that our explanation clarifies potential misunderstanding. Specifically, for both DistQ and DistQ(half), the human chooses the preferred trajectory only from the chosen PCQ in a DQ. For each DQ in both settings, the human always makes 2 choices instead of 3.
In this case, the annotator doesn't have to fully understand the two video pairs, e.g., s/he only has to understand the easier one, and doesn't need to spend a lot of effort to understand the harder one. Therefore, **DistQ shares the same requirement as DistQ(half) for one DQ, instead of requiring "the annotator to fully understand two video pairs"**.
3. About the fairness in our experimental evaluation
Given our above clarification, DistQ actually doesn't consume twice the query budget.
For one DQ, the human makes 2 choices, which may be considered unfair when comparing with baselines using PCQs where the human only makes 1 choice. That's why we design DistQ(half), which only uses a half number of DQs (=equal number of human choices as baselines). However, it's also unfair to only count the number of human choices, since such choices for DQ and PCQ apparently provide different amount of information. Therefore, we consider DistQ with full budget.
**Please also refer to point 1-(2) in our response to reviewer ufMA, and also our explanations in the first paragraph of Sec 5.2 in our paper**.
4. About the effectiveness of DistQ(half)
**Please refer to point 1-(3) in our response to reviewer ufMA**. Based on our above explanation, though DistQ(half) receives the same number of human choices (for both DQs and PCQs) as other baselines (for only PCQs), the information DistQ(half) obtains is much less than the baselines. **Please also refer to our discussion in point 1 to reviewer 6jG4**. Even though, DistQ(half) still achieves at least a similar performance, while only requiring the annotator to fully answer relatively easier PCQs, which can highly demonstrate its effectiveness.
5. About the performance of DistQ(half) in disassemble
Please refer to **point 1 in our first-round response** to you. The suggested harder tasks were not evaluated in all baselines before. So we can't decide how many queries are needed for the baselines to work. It's possible that our tested query budget is enough for DistQ(half) to work a bit, but is still insufficient for the baselines to work.
As for the simpler tasks in the paper, they are widely tested by the baselines and workable query budget is also provided. Given the smaller difficulty level of the tasks and enough query budget, it's possible that the performance difference between DistQ(half) and other baselines is smaller.
---
We sincerely hope that all our responses have addressed your concerns and mitigated potential misunderstanding. If you have any other concerns, we would be happy to discuss them. If not, we would be grateful if you could consider increasing your evaluation of our work. | Summary: This paper proposes Distinguishability Queries (DistQ), a new method for improving RLHF. DistQ reduces cognitive load by letting humans first choose which of two trajectory comparisons is easier to evaluate and then provide feedback on the easier pair. This approach captures both preference strength and ordinal information, improving the efficiency of learning reward functions. Experiments show that DistQ is more data-efficient and user-friendly than traditional methods, offering a better approach for RLHF in tasks with complex objectives.
Claims And Evidence: Yes, the claims presented in the text are all supported by experiments. I believe the aspect with some shortcomings is related to alleviating the burden on annotators, which may require more extensive user study.
Methods And Evaluation Criteria: Yes, the research question is very meaningful and compares a large number of benchmark algorithms, but the environments used are relatively limited.
Theoretical Claims: n/a
Experimental Designs Or Analyses: The experimental design is scientific and mainstream, aligned with the state-of-the-art PBRL community, and compares with strong baselines.
Supplementary Material: yes
Relation To Broader Scientific Literature: n/a
Essential References Not Discussed: n/a
Other Strengths And Weaknesses: Strengths:
- The paper is written very elegantly, and the research problem is highly significant. Providing annotators with easy and better annotation methods is extremely important.
- The aggregation of the distq component is innovative. The experimental results show some improvement over previous methods, and the authors analyze the effects of some design choices.
Weaknesses:
- I believe in the outstanding performance of distQ, and it indeed compared with many baseline algorithms. However, why wasn’t it tested in more environments (in terms of quantity or type, 9 envs in b-pref totally)? I think this is more important than comparing with more baseline algorithms. If distq could be validated on a broader range of experimental benchmarks, its impact would be greater.
- For me, the most interesting aspect of this article is its ability to provide annotators with Informative and Easy Human Feedback. This is very important for RLHF in real-world scenarios, but the article does not delve deeper into this point through more analysis and experiments. Most experiments use synthetic feedback, which makes it difficult to validate this interesting perspective. Appendix D.3 provides simple experimental results, but I believe this deserves more discussion in the main text. This is a valuable approach.
- Which part or all parts of DistQ can be applied to broader domains? For example, real-world robotic arm experiments, offline RLHF experiments, or Atari/Minecraft.
- Lack of discussion on papers related to improving the quality of annotation human feedback and reducing the burden of human feedback, such as: [1][2][3][4]
[1] Yuan Y, et al. Uni-rlhf: Universal platform and benchmark suite for reinforcement learning with diverse human feedback[J]. arXiv preprint arXiv:2402.02423, 2024.
[2] Zhang L, et al. Crew: Facilitating human-ai teaming research[J]. arXiv preprint arXiv:2408.00170, 2024.
[3] Dong Z, et al. Aligndiff: Aligning diverse human preferences via behavior-customisable diffusion model[J]. arXiv preprint arXiv:2310.02054, 2023.
[4] Metz Y, et al. Reward Learning from Multiple Feedback Types[J]. arXiv preprint arXiv:2502.21038, 2025.
Other Comments Or Suggestions: see above
---
**After rebuttal comment:
Thank you for the author's response. I believe the additional experimental results and demonstrations can enhance the quality of the paper, so I vote to accept it and raise the score from 3 to 4. I hope the author can include these additions in the next revised version, adding more experimental content and demonstrations to the experimental section of the main text. This will help better present DistQ.**
Questions For Authors: see above
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: 1. About more experimental environments:
For the current version of our paper, we selected representative control tasks of different difficulty levels (in terms of query budget needed to accomplish the task) and of different types (locomotion and robotic manipulation) to demonstrate the performance of our method. Following the reviewers' suggestions, **we now validate our method on more tasks, including the hard locomotion task Dog walk (hard), and robotic manipulation tasks Door open (easy) and Disassemble (hard)**. Results are shown in Fig 1 in this link (https://drive.google.com/drive/folders/1wR469npWztzkjyW0YF2H9C10LTI3wnTU). For the latter two tasks, our method can still perform better or competitively compared with other baselines. For the hard Dog walk task, however, all methods (including SAC with ground-truth reward function) fail to achieve it. A possible explanation is that, for such complex tasks, it takes a large number of queries and needs efforts to find workable hyperparameters for the backbone framework. Given the time limit, we haven't found proper settings for all methods and we've tried our best to test on as many tasks as possible. We will include more results in our final version if time permits.
2. About more analysis and experiments:
We deeply appreciate your approval of our method. Besides synthetic feedback, we also conduct user study with real human providing feedback as discussed in Sec. 5.4. We provide both statistical results and cognitive feedback from the participants here, which evidently support our argument of effectiveness and user-friendliness of DistQ. Larger scale user study may take a large mount of human labor. We will find efficient ways to enrich our user study in future.
3. About application to broader domains:
DistQ is built on top of video-based queries and online interleaving of reward learning and agent learning, so the whole method can be straightforwardly adapted to domains like robotic arm experiments and games. As for offline RLHF tasks, we may need to adjust the implementation of the query selection criteria. But the proposed query type and high-level ideas for query selection can still be used in such settings.
4. About discussion of more related papers:
Thank you for suggesting these recent related works. The first two mainly propose efficient and expandable platforms for RLHF experiments with real humans, which can be adopted for efficient large-scale user study. Considering your concerns about more extensive user study, we may utilize such platforms to perform user studies to further validate DistQ. The remaining two instead focus on refining RLHF approaches from different points of view. We will include a discussion about them in our final version.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response. I believe the additional experiments have improved the paper, however, all methods generally perform mediocre in particularly challenging environments, making it difficult to conduct a thorough comparison. I am glad the author was able to include more difficult experiments. I suggest completing the 6-9 official BPref tasks based on the QPA method, as these tasks are of moderate difficulty. Additionally, I think the major contribution of this paper is providing a new way to collect feedback, so better visualization and making it more convenient for other researchers to use are also necessary. How does this feedback method work, and why is it better? It can be made clearer through some small demos or system demonstrations.
---
Reply to Comment 1.1.1:
Comment: Thank you for taking the time to engage in this discussion with us. We are also very grateful for your applause of our contribution of providing a new way to collect feedback for RLHF approaches.
We next address your further concerns.
1. About the performance of challenging tasks
Thank you for your recognition of our effort on more challenging tasks.
We agree that a thorough comparison on those harder tasks suggested by reviewer mQWc is very difficult. This is because such harder tasks are barely evaluated before in most of the other baselines.
Therefore, **we need to determine workable hyper-parameters for all the methods, which can be unpredictable and time-consuming given the high difficulty of the tasks**. In our linked results, all methods follow their default hyper-parameter settings without tuning for an intuitive and fair comparison.
In this case, **our method still obviously outperforms other baselines on the hard task Disassemble, demonstrating its potential of solving hard tasks to some extent**.
2. About evaluation on more tasks based on QPA
Thank you for your proposition of experiments on more tasks based on the QPA method.
The experimental results are shown in ICML25_rebuttal_figures_round2.pdf in the link (https://drive.google.com/drive/folders/1wR469npWztzkjyW0YF2H9C10LTI3wnTU). Following the hyperparameter settings of QPA, we see that our method can always outperform other baselines on Door open. On Door unlock, DistQ still outperforms other baselines with slightly better performance than SURF. DistQ(half) also achieves competitive performance compared with other baselines except for falling behind SURF, which is understandable given our explanation about the fairness of our experiment setting and and effectiveness of our method (**please refer to point 1-(1)(2)(3) in our response to reviewer ufMA and also point 1 to reviewer 6jG4**).
For the harder task Humanoid stand, although worse than QPA (which is understandable since we adopt the settings of QPA), DistQ significantly outperforms the other baselines and DistQ(half) realizes acceptable performance. From all our experiments on harder tasks, we argue that proper hyperparameter settings are critical for RLHF methods to work, which may explain the superiority of QPA on Humanoid stand. Therefore, we believe that our method could perform better on those harder tasks if suitable hyperparameters are adopted.
Given the limited time during discussion, we only make it for these 3 tasks. We'll include the results of all suggested tasks in our final version.
3. About visualization and application of our method
We provided a link (https://drive.google.com/drive/folders/1qvf7hJ-a66bGeu1g0f9ALWRtP9UgzDk1?usp=sharing) to videos of selected queries along with human labels, and evaluation of trained agents of our method and one baseline method in **Appendix D.3 of our paper**, which serves as a clear visualization of the whole process.
Besides, **we provided detailed explanation of our method (see Fig 1 & 2 and Sec 4 ) and demonstration of its advantanges (see Sec 5) in the main paper. We also provided necessary details such as pseudo code and hyperparameters of our method in the appendix for convenient reproduction**. We will open source our code after publication to ensure the reproducibility of all our experiments.
We sincerely hope that all our responses have addressed all the points you raised. If you have any other concerns, we would be happy to discuss them. If not, we would be grateful if you could consider increasing your evaluation of our work. | Summary: * This paper proposes a novel human feedback type for RLHF and an algorithm allowing robots to learn reward functions from such human feedback.
* The novel feedback is that the robot first gives a human 2 pairs of trajectories, has the human choose the pair that is easier to choose, and then has the human choose one trajectory from that pair. In this way, from the 1st choice, the robot can infer the relative preference strength between the 2 pairs. From the 2nd choice, the robot can infer the preference. The benefit is that the robot can understand the preference strength in addition to the preference.
* The proposed algorithm, DistQ, allows a robot to learn from such feedback:
* Reward learning
* Given a trajectory buffer, the robot randomly chooses pairs of trajectories.
* The robot chooses the top n1 informative pairs based on variance.
* The robot chooses the top and bottom nE easy pairs based on entropy.
* The robot forms all the pairs, each of which has an easy trajectory with a hard trajectory.
* The robot sends all these pairs to the human for feedback and then uses the feedback to infer the reward function.
* Agent learning
* The robot uses the estimated reward function to do RL to optimize policy and collect trajectories into the buffer.
* Then, go back to the first step to update the reward function
* The 1st experiment compared the RL performance, given a fixed budget, of the proposed method with 5 baseline methods in simulated robot locomotion and manipulation tasks. The proposed method with full budget (DistQ) outperformed all baselines, while the proposed method with half budget (DistQ(half)) outperformed some of the baselines. A user study is also conducted to show that DistQ is better than PEBBLE.
* The 2nd experiment compared the query easiness, given a fixed budget, of the proposed method with 5 baseline methods in simulated robot locomotion and manipulation tasks. Both the proposed method with full budget (DistQ) and the proposed method with half budget (DistQ(half)) outperformed all baselines.
* The 3rd ablation experiment shows that the algorithm design, including choosing n1 informative pairs, choosing nE easy and hard pairs, and assign an easy and a hard trajectories in one pair, are significant to the performance.
## update after rebuttal
I appreciate the authors' Rebuttal in addressing my concerns. I have adjusted my score accordingly.
Claims And Evidence: * The key claim is that the proposed query and algorithm can improve RLHF for robotic tasks. The empirical result supports this.
* One limitation of the empirical result is about DistQ and DistQ(half). The proposed novel query requires the human to make 2 choices, first choosing the easy pair and then chooses the preferred trajectory in the pair. By contrast, the conventional query requires the human to make 1 choice, choosing the preferred trajectory in the pair. As a result, it is a bit tricky to compare the proposed algorithm with the proposed query with standard RLHF methods.
* The paper compared the following 2 variations of the proposed algorithm against the baseline methods:
* **DistQ(half)**: In each query, the human first chooses the easy pair, and then chooses the preferred trajectory in the chosen pair. And this query with 2 choices counts as 1 query in the "query budget" in the empirical study.
* +: This is consistent with the definition of the proposed query (Sec.4.1).
* -: Unfortunately, easy pairs contain a limited amount of information, as mentioned under Eq.6. Since the human only provides the preference feedback for the easy pair selected by the human, the overall performance could be limited. This might be why DistQ(half) does not seem to outperform many baseline methods in Fig.3.
* **DistQ**: In each query, the human first chooses the easy pair, and then chooses the preferred trajectory in the chosen pair **and also the not-chosen pair**. This query with 3 choices counts as 1 query in the "query budget" in the empirical study.
* +: This outperformed all baseline methods as in Fig.3.
* -: However, this is not consistent with the definition of the proposed query (Sec.4.1). As a result, I think it is not fair to compare DistQ with the baseline methods.
* In addition to the 2 variations considered by the authors in the empirical study, I think there is one more variation that could be interesting to consider:
* **DistQ(half-half)**: In each query, the human first chooses the easy pair, and then chooses the preferred trajectory in the chosen pair. This query with 2 choices counts as **2 queries** in the "query budget" in the empirical study.
* The reasoning behind this variation is that the human makes 2 choices, so consuming 2 units of the query budget. This reasoning is consistent with the paper's interest in the query easiness.
* Summary
* I think it is fair to compare **DistQ(half)** against baseline methods, but it does not seem to perform that well empirically.
* I think it is a bit unfair to compare **DistQ** against baseline methods.
* I think that the authors could consider **DistQ(half-half)**, which, in my opinion, is also fair to be compared against the baseline method. But I conjecture that it will not perform as well as **DistQ(half)**.
Methods And Evaluation Criteria: Method and evaluation criteria make sense. There are 2 limitations.
* The 1st limitation is already discussed in `# Claims And Evidence*`.
* The 2nd limitation is that Sec.5.3 measures the query easiness as to whether the robot's predicted feedback is consistent with the ground truth. This definition seems to define the easiness from the robot's view. However, the query easiness is supposed to be defined from the human's view. I think a better way to measure query easiness is to use ground truth preference entropy for this query as defined in Eq.6.
Theoretical Claims: NA
Experimental Designs Or Analyses: The experiment design, user study, and analyses make sense.
Supplementary Material: No.
Relation To Broader Scientific Literature: The paper focuses on the robot RLHF, which is a hot topic and also a very relevant problem to machine learning, LLM, and robotics.
Essential References Not Discussed: * For Sec.2's Eliciting Preference Strength, two recent works explores the time elicitation in bandits (in the form of human response times), which can be relevant:
* Shvartsman, M., Letham, B., Bakshy, E., & Keeley, S. L. (2024, July). Response time improves gaussian process models for perception and preferences. In The 40th Conference on Uncertainty in Artificial Intelligence.
* Li, S., Zhang, Y., Ren, Z., Liang, C., Li, N., & Shah, J. A. (2024). Enhancing Preference-based Linear Bandits via Human Response Time. Advances in Neural Information Processing Systems, 37, 16852-16893.
Other Strengths And Weaknesses: ## Strength
* The problem is well-motivated
* The paper is well-written.
* The proposed query is creative.
* The paper also contains a real user study to validate the method.
Other Comments Or Suggestions: NA
Questions For Authors: * In the user study (Sec.5.4), there are 10 rounds. Based on App.D.3, in each round, the human answers 150 queries. So, in total, there are 1500 queries? Also, I am curious why PEBBLE has a successful rate of 0, which seems strange. Could you share some insights on why this is the case?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: 1. About experimental settings:
(1) To address potential misunderstanding in the review, we first clarify some terminologies we used in our paper. One distinguishability query (DQ) consists of 2 pairwise comparison queries (PCQs). The human first chooses the more distinguishable PCQ and then chooses the preferred trajectory in this PCQ (line 202). The "query budget" in our paper refers to the number of PCQs (line 359) since all our baselines use PCQs. In the experiments section, for both DistQ and DistQ (half), the human **chooses the preferred trajectory only from the chosen PCQ in a DQ**, instead of "also the not-chosen pair" as stated in the review. That is, for each query in both settings, the human always makes 2 choices instead of 3. The difference between DistQ and DistQ (half) is that DistQ uses the same "query budget" as other baselines while DistQ (half) only uses half of the budget (line 364). See the first paragraph of Sec 5.2 for detailed instructions. We think both settings are consistent with our proposition.
(2) We then discuss the fairness in our experimental evaluation. As stated in the review, for one DQ, the human makes 2 choices. We understand that this may be considered unfair when comparing with baselines using PCQs where the human only makes 1 choice. That's why we design DistQ (half), which only uses 100 DQs (=200 human choices). However, it's also unfair to only count the number of human choices since such choices for DQ and PCQ apparently provide different amount of information. Therefore, we consider DistQ with full budget. See our explanations in the first paragraph of Sec 5.2.
(3) We next emphasize the effectiveness of DistQ. Given (2), it's hard to have a totally fair comparison among DistQ and baselines. The full and half budget settings actually provide a performance range of our method when compared with its baselines. Therefore, it's normal that DistQ (half) can't beat all its rivals in Fig 3. The fact that DistQ (half) can match or outperform baselines demonstrates in our opinion that it is possible to achieve good performance while asking informative and easier to answer queries. In addition, we believe that answering one DQ is easier than two PCQs (especially when not controlling for their hardness) based on our proposition. Besides, the goal of DistQ is to balance query informativeness and user-friendliness rather than only one of the two. Thus, we need to look at both performance (Fig.3) and query easiness (Fig.5, where our method outperforms others).
(4) As for the suggested setting DistQ (half-half), we think this is identical to DistQ in our paper from the view of query quantification.
2. About easiness measurement:
We consider the "predicted feedback" in terms of human side since we hope to quantify easiness by whether the human can provide correct feedback to queries. Similar query easiness measurement to our "wrongly predicted feedback" has also been used in previous work (**see the 2nd paper in the references list**). For the ground truth preference entropy of queries, it seems not very straightforward to see whether queries are easy to answer for humans compared with wrongly predicted number of feedback. But indeed, we acknowledge that this may also be a good measurement for easiness. We will use this entropy as a supplementary easiness measurement in our final version.
3. About essential references:
Thank you for suggesting these recent works. We will discuss how our paper is related to them in our final version.
4. About user study:
(1) For each method, 150 queries are answered. We conduct 10 rounds of evaluation after one training (line 402-406). During one training, the human answers 150 queries.
(2) The authors of PEBBLE conducted similar (but easier) user study on Quadruped agent, where 200 PCQs are needed. Besides, they only claimed that the agent could succeed but didn't report the success rate. PEBBLE fails to work in our experiments, because the setting is harder and with a more limited query budget. | null | null | null | null | null | null |
Theoretical Limitations of Ensembles in the Age of Overparameterization | Accept (oral) | Summary: This paper studies ensembles of M random feature networks when the number of features D is greater than the number of data points N (overparameterized regime). Large ensembles are found to be asymptotically equivalent to a single large network and convergence bounds are given for finite M. There are numerical experiments that illustrate the key results and implications for real networks are discussed.
Claims And Evidence: The main "claim" is that large ensembles become equivalent to large-width models. Besides the theory (reviewed below), numerical experiments were run with random features as well as neural networks (Figs 2, 3). Overall, I found the claims fairly clear and well-supported. There seemed to be some exaggeration of how well the numerical results fit with the theory. For instance:
* Fig 2 right is said to show a "hockey stick" pattern, however this isn't nearly as obvious on the right (neural networks) versus at left (with RF ensemble).
* Fig 3 (left and right) both show ensembles outperforming single models despite the main "result" being an argument that these are equivalent as the networks/ensembles get larger. In fact, the gap at left seems to be growing as the network grows, which isn't inconsistent with the theory and in fact the ensemble outperforms the kernel, its infinite limit. I think the issue here is the interpretation of the theory; the theory tells about the infinite limit but isn't very relevant in the finite case where ensemble will have a variance-limiting effect. In this plot it would help to show both means and shaded regions for variance across instantiations of single models/ensembles. My guess is the variability of the curve at right is due to variance across single networks. Can you explain this discrepancy?
* Fig 4: I found this convincing; it might be better if plotted on the same axes.
Methods And Evaluation Criteria: The datasets and methods seem fine for a theoretical study. There wasn't much discussion of the effect of input dimension on the results. I don't think this affects things, but is there a high-dimensional regime when the input dimension is large where these results break down?
Theoretical Claims: I read the main paper and found the results well-explained and intuitive. However, I did not check the proof supplement in detail.
I have some comments on the clarity of the mathematical presentation that I will list below.
Experimental Designs Or Analyses: I read the supplemental description of the experimental setup and didn't have any issues with that.
Supplementary Material: I skimmed the supplement. The additional experiments seem to support the main paper well.
Relation To Broader Scientific Literature: I am not sure if the current results are super "novel" in that I think some of these results are known (e.g. Ruben et al, 2024). The idea that the variance across random features isn't indicative of a Bayesian measure of uncertainty, to me, wasn't surprising since I've always thought of this variance as being due to ensemble randomness rather than randomness in the training data. For instance, the theory behind using orthogonal random features or the FastFood random feature method was mostly concerned with keeping this variance low.
I don't think this means the current work isn't important. I think the generality of the assumptions taken here make these results nicely applicable in settings that haven't been considered before.
The discussion of the paper mentions the work of Abe et al. (2022) as well as in the introduction, but to understand what "recent empirical findings" the current paper is supposed to reproduce, it would be helpful for the reader to get a quick overview of those.
Essential References Not Discussed: Early work by Radford Neal (1996) that was the first to connect random feature networks with kernels. I usually cite that and CKI Williams (1997) when writing papers in this area.
Other Strengths And Weaknesses: I found the paper overall clear and the results interesting. My main issues are with some of the clarity in presentation.
Other Comments Or Suggestions: Typos/small points/clarification needed:
* Eqn (1): Notation $[ . ]_j$ should be explained since it seems to refer to both column Nx1 and 1xN row vectors
* pg 3: Similar to above, MATLAB-like [W; w] notation should be described.
* pg 3: $\phi^*$ isn't defined
* pg 4 line 178: order of "ridge(less)" is reversed from least-norm and RR expressions that come later; I suggest reversing their order for clarity.
* Assumption 2.1: The expression $w_i w_{\perp i}$ is unclear, since $w_i$ is a vector and the other term is a scalar. Do you really mean their product or do you mean a column vector that comes from concatenating them? I suspect a typo here.
* Line 250 right column: "RF ensembles are equivalent to the ridgeless..." I think you can, at best, say they are "close to" here given the evidence you present.
* Lemma C.5: $z_i$ seems like it should be $x_i$.
* Inconsistent notation "Var" and $\mathbb{V}$ used for variance
* Figure 5 doesn't seem to be referred to in the paper itself
* Assumption 3.4: Can you clarify what you mean by finiteness of the matrix here? It would require the matrix to be invertible but also finite in expectation?
Questions For Authors: * Can you explain the discrepancy between the curves in Fig 2? This should be discussed in the paper
* I am somewhat familiar with the results in the paper by Ruben et al (2024) that also study random feature ensembles using a different theory. These related results aren't discussed in much detail, although the conclusion "NO FREE LUNCH FROM RANDOM FEATURE ENSEMBLES" is close to those of the current paper. Can you discuss?
* Can you discuss in more detail, perhaps at the end of section 3.3, when we expect the RF ensemble variance to capture Bayesian uncertainty or not? Does it ever work?
* In Sec 3.4 you make the point about ensembles with the same $\lambda$ converging. There are some results out there that show network width acts as an effective regularization (RF models at finite width are close to the kernel predictor with a modified ridge parameter, c.f. Bordelon, Canatar, Pehlevan, 2020). Wouldn't it then be best to use a different ridge parameter to compare ensembles with width $D$ to a single network of width $MD$?
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed review, as well as their suggestions for improving our paper. Below, we address the concerns and questions raised, and outline the changes we will make in response.
**Claims And Evidence:**
- **Clarification of "hockey stick" pattern in Fig. 2:** We agree and will adjust our wording in the revised manuscript to reflect that the "hockey stick" pattern is less pronounced for neural networks, likely due to effects that cannot be captured from an RF analysis.
- **Interpretation of performance gap and variability in Fig. 3:** Thank you for highlighting this point. Indeed, the variability of the curve on the right arises due to variance across single model instantiations. We will update Fig. 3 accordingly, adding shaded regions to represent this variance. Regarding why the gap at the left grows, this is likely due to a) the discrepancy between the finite and infinite cases, and b) numerical instabilities when working with ReLU models (for a more precise explanation, see Appx. A.2). As an example of an experiment where this does not happen, compare Fig. 12 (right), which we deliberately did not include in the paper since this uses the Gaussian Error function.
- **Plotting suggestion for Fig. 4:** Thank you for this suggestion – we will update Fig. 4 to use shared axes.
**Relation to Prior Work and Additional References:**
We will add a brief summary of Abe et al. (2022) and cite Neal (1996) and Williams (1997).
**Other Comments or Suggestions:**
We appreciate your edits and suggestions for clarity, which we will incorporate into our revised version. We address a few specific points below.
- **Clarification of Assumption 2.1:** We indeed intended the product of the vector and scalar (which produces a vector again). Following standard extensions of subexponentiality to vector valued random variables, our theory relies on any linear combination of the entries of $w\_i w\_{\\bot i}$ being subexponential. We will clarify this point in the revision.
- **Phrasing on line 250, right column:** We agree and will revise this phrasing accordingly.
- **Clarification of Assumption 3.4:** We require that the entries of the expected value of the inverse matrix are finite, i.e., that the corresponding expected value exists. This obviously also requires that, for almost all instantiations of the matrix $\\Phi\_{\\mathcal{W}}$, the matrix $\\Phi\_{\\mathcal{W}} \\Phi\_{\\mathcal{W}}^\\top$ is invertible.
**Questions for Authors:**
- **Relation to Ruben et al. (2024):** We agree that Ruben et al. (2024) should be discussed in more detail and will include this in the revision; thank you. We would also note that this work is concurrent to ours and should be read as such. Both works conclude that ensembles of overparameterized random feature models do not outperform a single larger model with an equivalent total feature budget. However, we explicitly focus on the overparameterized and zero/small ridge regime, while Ruben et al. primarily analyze generalization under optimal ridge parameters in both the over- and underparameterized regimes, with only brief consideration of our regime. Additionally, our analysis also explicitly considers uncertainty quantification, and we do not rely on Gaussianity assumptions, contrasting their Gaussian universality assumption.
- **Ensemble variance and Bayesian uncertainty:** As briefly mentioned in Sec. 3.3, only "with Gaussian features, ensemble variance admits a Bayesian interpretation," by which we mean that the ensemble variance matches the posterior variance of a GP with the same limiting kernel. We will ensure this is clearer in the revised version.
- **Different ridge parameters in Sec. 3.4:** We appreciate this suggestion and agree it would be interesting to explore. Note, however, that when investigating the finite-width regime, we assumed $\\lambda \= 0$, while in Sec. 3.4, we specifically discuss the infinite-width limit. Additionally, our current focus was explicitly on the small ridge regime, where any implicit regularization parameters (such as $\\tilde{\\lambda}$ from Jacot et al. (2020)) are expected to be very small; therefore, variations in this parameter are likely to have only a minor impact in our setting.
We hope these responses and adjustments clarify our contributions and address your feedback. If you have further questions or suggestions, we would be happy to address them.
---
Rebuttal Comment 1.1:
Comment: Thanks for your responses.
Now reading my comment about the vector times scalar issue seems silly, of course it should be interpreted that way.
I've revised my score to a 5 | Summary: This paper presents a theoretical analysis of ensembles of overparametrized models (more parameters than training data) in which authors claim that an Infinite Ensemble is equivalent to an Infinite-Width Single Model. This analysis is done by using the equivalence between random feature regressors (RF) and neural networks. In particular, the authors aim at answering two key questions:
- Do ensembles of overparameterized models provide generalization or robustness benefits over a single (very large) model trained on the same data?
- What does the predictive variance of overparameterized ensembles measure?
In order to tackle these questions, the authors compares the Infinite Ensembles and the Infinite-Width Single Models for the RF model. In simple terms, the model resulting from aggregating (i.e., computing the mean over the distribution of $\omega$) infinitely many models is pointwise equivalent to the infinite-width model. This result is based on Assumption 2.1 asking $\Phi$ to be full rank and controlling the dependence with W and $\phi(w_i,x*)$. It is important to underline that Theorem 3.2 is independent of the RF distribution. In order to complete the comparison and analysis on ensemble methods, in Section 3.2 authors assume limited computational budget. Here, in Theorem 3.3 they bound the distance between M models with D features and one single of MD random features. Figure 3, exhibits the theoretical finding, underlying the small difference in the generalization error. Since an important feature of ensemble methods is their ability to give measures of uncertainty, authors study in section 3.3. how the predictive variance of the two compared approaches behaves, giving details of Gaussian and general features. They conclude their analysis in section 3.4 accounting for the role of $\lambda$ (the ridge parameter in the loss function) for the infinite ensemble and infinite-width model. As it was predictable, allowing $\lambda$ to get higher values, affects the distance, since it is a parameter controlling for parametrization.
## Update after rebuttal
I thank the authors for integrating some comments in order to better clarify the work. I appreciate your efforts in answering my doubts and comments, which are convincing and clear. I therefore confirm to accept the paper.
Claims And Evidence: All claims are well supported and conclusions are realistic. This is a mainly theoretical paper in which claims are always supported by some kind of experiment.
Methods And Evaluation Criteria: Since the paper is mainly theoretical, the experiments carried are not very extensive. Only synthetic data and a single real world dataset (California Housing) are used, what limits a bit the validity of the evidences. However, all experimental result are suitable for the claim aiming to support and provide a good proof of the theoretical results.
Authors claim that this method is independent of the $\omega$ distribution but only use synthetic data from a Normal one and one single real world dataset.
The idea of using RF regressors as Neural Networks, even not novel, is correctly chosen.
Theoretical Claims: All theoretical claims seem to be well suited and properly derived. Appendix shows all demonstrations. The important claims have been checked and properly understood.
Experimental Designs Or Analyses: I think they could include more cases a part from the synthetic and California Housing. Specially, I think the experimental setup should be explicitly mentioned in the main text.
Despite that, this paper is mainly theoretical and I think the experiments support well their findings.
Supplementary Material: Supplementary material contains plenty of information, including demonstrations, experiments and code. The appendix might be a bit too extended, but the paper can be understood without looking at it. However, I would explicitly move some information to the principal text like the experimental setting, the distribution $\tau(\cdot)$ of the elements $\omega_i$, etc.
Relation To Broader Scientific Literature: According to comparing ensembles with single models, the paper presents good literature. However, literature related to uncertainty quantification is a bit scarce and they do not consider the fact that a single model cannot distinguish between epistemic and aleatoric uncertainty.
Essential References Not Discussed: Even though they cite [1] I think they missed the opportunity to introduce the term joint (or end-to-end) training, which gives a good vision of why ensembles can be similar to single models depending on how we train them.
I think they missed [2] and [3], which showed similar results with less theoretical derivations and explained consequences of joint training, respectively.
[1] Jeffares, A., Liu, T., Crabbé, J., & van der Schaar, M. (2023). Joint training of deep ensembles fails due to learner collusion. Advances in Neural Information Processing Systems, 36, 13559-13589.
[2] Webb, A., Reynolds, C., Chen, W., Reeve, H., Iliescu, D., Lujan, M., & Brown, G. (2021). To ensemble or not ensemble: When does end-to-end training fail?. In Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2020, Ghent, Belgium, September 14–18, 2020, Proceedings, Part III (pp. 109-123). Springer International Publishing.
[3] Abe, T., Buchanan, E. K., Pleiss, G., & Cunningham, J. P. (2022). The best deep ensembles sacrifice predictive diversity. In I Can't Believe It's Not Better Workshop: Understanding Deep Learning Through Empirical Falsification.
Other Strengths And Weaknesses: The paper is quite technical and I admit I am not an expert in the subject of kernel functions. However, the mathematical details convinced me and results are coherent with their goals. Moreover, they approached the comparison of the single model and the aggregated model in many ways, resulting in a complete and detailed analysis. The conclusions correctly reflects their finding suggesting that the benefits of overparametrized model may be explained by their similarity to larger models.
I find the conclusion section very well written.
Other Comments Or Suggestions: C1- The whole paper is well written and no substantial error was found.
C2- In line 297 the notation for the L_2 norm is quite unusual and $\_2$ would fit better.
C3- It is not clear the connection between Figure 5 and Theorem 3.5 as there is no reference in text to Figure 5 and apparently one is dealing with $\bar{h}^{LS}\_{\infty}(x^*)$ and the other $\bar{h}^{RR}\_{\infty,\lambda}(x^*)$. Even though it is mentioned the reasoning in Appendix 4, I suggest a more detailed and clear explanation as the evolution w.r.t $\bar{h}^{RR}\_{\infty,\lambda}(x^*)$ is never shown.
C4- In Figure 1, authors write “We note no perceptible difference between the two.” which is quite vague. It would be more appropriate to quantify the distance. Also would it good to actually see the evolution of the graph, when increasing M, instead of just two values?
C5- Assumptions are more or less explained, even though an intuitive idea of them would help the reader in understanding the main results. In the text, there is no mention of how feasible are these assumptions.
C6- I understand that when authors say in line 295 (2nd column) that ensemble provide no additional robustness benefits they talk from a variance point of view when the number of parameters is fixed. However, in the right plot of Figure 3 we can see that ensembles show more stability to the total number of parameters. Is this only a consequence of averaging the models or has other explanations?
C7- I know the length restriction of the paper is limiting, but counterexamples like Figure 8 in Appendix A.2. and an explanation to them would reinforce the findings.
C8- In the experimental section, authors explain to take parameters $\omega$ from the standard normal distribution. This is a bit contrasting with their claim of generality of their results with respect to the parameter distribution. Also, part of the novelty of this paper is exactly the generality of results under a certain distribution $\pi(\cdot)$, hence it would have been better to provide a small example with another distribution.
C9- In Appendix A.2. authors claim that $\Phi_{\Omega}\Phi^T_{\Omega}$ is not almost surely invertible when using the ReLU activation function, hence contrasting assumption 2 in 2.1. I understand the approach to tackle this issue, but what about considering another activation function? Would it be interesting to analyze the impact of several activation functions on this assumption?
C10- Line 80 repeats “model” twice.
C11- Have you considered the effect of overparametrization in the double descent phenomena?
Questions For Authors: Q1- The comparison between Deep ensembles and single large models assumes that the training process is the same, right? Models in Deep ensembles are trained independently because one seeks diversity among models. It is true that a set of models could be trained end-to-end since joint function is fully-differentiable, but as [1] demonstrates this is, effectively, equivalent to a single wide model.
Q2- A part from the predictive variance, how could the single model decouple epistemic and aleatoric uncertainty?
Q3- How feasible are the assumptions 2.1 necessary to prove the main Results? Since the matrix $\Phi_{\Omega}\Phi^T_{\Omega}$ is not invertible (hence non-positive definite) in the case of ReLU, it is reasonable to consider other activation functions? What would be their impact on the results? Would the assumption be satisfied?
Q4- Besides the technical proof, how can you justify the statement “In practical terms, this result indicates that for sufficiently small values of λ, the predictions of large ensembles and large single models remain nearly indistinguishable” regarding the impact of the the regulariser parameter?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable comments and feedback on our paper. Below, we address the concerns and questions raised and outline the changes we plan to make to the manuscript based on your suggestions.
**Regarding the Experiments:**
We agree that explicitly including the experimental setup in the main text will help clarity. Thus, we will move these details from the appendix into the main body of the paper.
**Additional literature:**
Thanks for pointing out the additional papers. We will discuss \[1\], \[2\] and \[3\] in our related works section. Furthermore, we will add relevant references in the field of uncertainty quantification to our related works section and we will address the distinction between epistemic and aleatoric uncertainty.
**Specific Comments:**
* **C2 (Line 297, L2-norm notation):** We will correct the notation as suggested.
* **C3 (Connection between Figure 5 and Theorem 3.5):** We will add a short explanation of and a reference to Figure 5 in the main text.
* **C4 (Quantifying the difference in Figure 1):** We will include a more detailed plot showing the evolution with increasing ensemble size $M$ in the appendix and reference this in the figure description.
* **C5 (Feasibility and intuitiveness of assumptions):** We will add a brief intuitive explanation of Assumption 2.1, noting that these assumptions closely hold in most relevant scenarios including e.g., for distributions with bounded support.
* **C6 (Stability vs. robustness in Figure 3):** Indeed, the observed stability in Figure 3 arises from averaging effects. As suggested by reviewer e9Q9 we will add shaded regions for variance across instantiations of single models/ensembles
* **C7 (Counterexamples in Appendix A.2):** Due to length constraints, we cannot incorporate Figure 8 directly into the main text but will ensure it is more prominently referenced in the main text.
* **C8 (Experiments beyond standard normal distributions):** To clarify, even though the weights of feature generating function (e.g. $\\omega$) are normally distributed, the random features themselves (i.e. $\\mathrm{ReLU}(\\omega^\\top x)$ are not, and thus our experiments make use of the generality of our theorem. Regardless, we will supplement Figure 1 and Figure 13 with $\\omega$ drawn from heavy tailed distributions.
* **C9 (Invertibility assumption with ReLU activations):** We investigated other activations which fulfill the invertibility, including softplus (see e.g. Fig. 2 (left) and Fig. 9; note that using softplus activations, we can approximate ReLU activations arbitrarily precisely, see footnote 2 on page 4\) and the Gaussian error function activation function (see e.g., Fig. 10 on the right hand side and Fig. 11 & 12), which satisfy the invertibility assumption.
* **C11 (Double descent phenomena):** We are not entirely sure what aspect of the double descent phenomena you are referring to. If you clarify further, we would be happy to discuss it.
**Responses to Questions:**
- **Q1 (Training assumption):** Yes, in our theoretical analysis, we assume that the training procedure converges to the unique least norm solution, guaranteed for standard SGD initialized at zero. In our empirical analyses, we also always trained all models with the same training algorithms.
- **Q2 (Epistemic vs. aleatoric uncertainty):** We admit that a single random feature model cannot decouple epistemic and aleatoric uncertainty. This decoupling is a possible advantage of ensembles, though our analysis in Section 3.3 implies that standard ensembles do not cleanly deliniate these sources either.
- **Q3 (Feasibility of Assumption 2.1):** As discussed in the paper, the first condition is fulfilled whenever $w\_{\\perp i}$ and $w\_i^{\\top}$ are sub-Gaussian, which is true when the features come from activation functions with bounded derivatives and sub-Gaussian weights (which we would argue is a very weak assumption). For the second condition we expect that most nonlinear activation functions, such as sigmoid, tanh, sine, or cosine, satisfy this condition if we assume i.i.d. weights from a distribution with a density function. However, rigorously proving this is beyond the scope of this work.
- **Q4 (Impact of regularization parameter $\\lambda$):** Intuitively, the training outcome is continuously dependent on the ridge parameter $\\lambda$; thus, small variations in $\\lambda$ typically result in minor deviations in predictions. The proof of our theorem effectively provides a rigorous confirmation for this intuition.
We hope these responses and adjustments clarify our contributions and address your feedback. If you have further questions or suggestions, we would be happy to address them.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for integrating some comments in order to better clarify the work. I appreciate your efforts in answering my doubts and comments, which are convincing and clear. I therefore confirm to accept the paper.
The only comment I would like to clarify is C11 about the double descent phenomena. This was not an important comment nor criticism. I just wanted to open the possibility to future consideration or debate. Since double descent is a phenomena in which overfitting is—counterintuitively—reversed with increasing model complexity (overparametrization) I thought that authors might have an interesting insight about it.
The first reference about double descent can be read in (Nakkiran et al. 2021).
Nakkiran, P., Kaplun, G., Bansal, Y., Yang, T., Barak, B., & Sutskever, I. (2021). Deep double descent: Where bigger models and more data hurt. *Journal of Statistical Mechanics: Theory and Experiment*, *2021*(12), 124003. https://arxiv.org/abs/1912.02292 | Summary: This paper proves that in the random feature (RF) regression, the ensemble estimator is approximately equivalent with the simple regressor, as long as the model size is sufficiently great. This result can be applied not only to ridgeless models, but also to models with small ridge parameters.
Besides, the paper also demonstrate an interpretation of the predictive variance among ensemble members. It turns out that, in contrast with traditional models, the predictive variance of wide random feature ensembles quantifies the expected effects of increasing capacity rather than uncertainty.
These results implies that in this case, ensembles do not provide additional benefits over a simple model.
Claims And Evidence: The statements of this paper are clear. A number of references and experimental data are provided to make the claims more convincing.
The main results of this paper warns that it is unwise to assume without theoretical guarantees the additional advantages of ensembles over simple models. In the background that more and more recent studies concentrate on random feature models, the contribution of this paper to our understanding of random feature learning is impressive and inspiring.
Methods And Evaluation Criteria: There are some limitations in this paper.
Firstly, there is some problem with Theorem 3.3. The bound term on the right hand side does not converge to zero as the number of feature $D$ increases, provided that $\delta$ is fixed. Thus, Theorem 3.3 is not sufficient to conclude that in the case of finite ensemble budget, the RF ensemble regressor and the RF simple regressor are asymptotically equivalent with high probability as $D$ goes to infinity.
On the other hand, Theorem 3.2 is impressive but not surprising. The linear random feature regression in the case of infinite width simply coincides with kernel interpolation, hence the infinite-width assumption together with the linearity of the model helps simplify the computations, as is shown in section 3.1 and appendix C.
Thus, we are concerned that these results and examples are not general and sophisticated enough.
Theoretical Claims: We have checked the proofs and found no evident mistakes.
Experimental Designs Or Analyses: The details of experimental designs and results are clearly demonstrated in this paper.
Supplementary Material: There is no supplementary material.
Relation To Broader Scientific Literature: None.
Essential References Not Discussed: None.
Other Strengths And Weaknesses: None.
Other Comments Or Suggestions: None.
Questions For Authors: Traditionally, many ensemble methods such as random forest are based on simple but non-linear method, and with the inspiration of this work, we are curious about what is the essential aspect of random feature model that prevent ensembles to bring extra advantages? Is it the linearity?
Ethical Review Concerns: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments and feedback on our paper. Below, we address the concerns and questions raised.
**1\. Concerns about Theorem 3.3:**
> "Firstly, there is some problem with Theorem 3.3. The bound term on the right-hand side does not converge to zero as the number of features increases, provided that $\\delta$ is fixed. Thus, Theorem 3.3 is not sufficient to conclude that in the case of finite ensemble budget, the RF ensemble regressor and the RF simple regressor are asymptotically equivalent with high probability as D goes to infinity."
We believe there is a misunderstanding regarding this theorem. Theorem 3.3 does not aim to establish exact asymptotic equivalence. We proved asymptotic equivalence holding almost surely in our main result (Theorem 3.2). Thus, Theorem 3.3 is simply an analogous finite-sample guarantee showing that the single large model and the ensemble behave similarly with high probability at finite scales.
**2\. Generality and novelty of Theorem 3.2:**
> "On the other hand, Theorem 3.2 is impressive but not surprising. The linear random feature regression in the case of infinite width simply coincides with kernel interpolation, hence the infinite-width assumption together with the linearity of the model helps simplify the computations”
We believe there is a misunderstanding regarding the key contribution of Theorem 3.2. The main contribution of the theorem is to investigate an *infinite* ensemble of *finite-width* random feature regressors rather than the infinite-width model. To our knowledge, no prior work has demonstrated that such an ensemble converges to the kernel interpolator, except in the special case of zero-mean Gaussian random features.
**3\. Question about the essential aspect preventing ensemble advantages:**
> "Traditionally, many ensemble methods such as random forest are based on simple but non-linear methods, and with the inspiration of this work, we are curious about what is the essential aspect of random feature model that prevents ensembles from bringing extra advantages? Is it the linearity?"
As suggested by our title, the key attribute preventing ensembles of overparameterized random feature models from providing additional advantages is the **overparameterization** of the ensemble members. This claim is supported by our theoretical and empirical results contrasting ensembles of underparameterized and overparameterized random feature models (see Figure 2 and Appendix E) as well as other recent work (e.g., Abe et al., 2022).
We hope these responses address your concerns and clarify the contributions of our work. If you have further questions or suggestions, we would be happy to address them.
---
Rebuttal Comment 1.1:
Comment: Thank you for your clarification. I have increased my score. | null | null | null | null | null | null | null | null |
SAN: Hypothesizing Long-Term Synaptic Development and Neural Engram Mechanism in Scalable Model's Parameter-Efficient Fine-Tuning | Accept (poster) | Summary: This work seeks to further advance Parameter-Efficient Fine-Tuning (PEFT) techniques, which reduce memory usage and computational cost compared to full fine-tuning. It draws insights from the Neural Engram (NE) phenomenon, where the brain processes new knowledge by strengthening or weakening existing connections, which helps to preserve energy and reduce the time costs of developing new synapses and enables rapid learning. To do this, the authors propose SAN, with the key innovation lying in explicitly propagating the scaling vectors of the current layer to the parameters of the subsequent layer, mimicking LTP/LTD. In a way, SAN explicitly propagates the layer transformation effect through a scaling approximation. This explicit propagation allows for more efficient parameter adaptation and provides certain implicit regularizations that discourage extreme values and promote stability.
## update after rebuttal
I maintain my assessment, a weak accept, as I see no major weaknesses left to the best of my knowledge
Claims And Evidence: - [Strength] The authors performed extensive empirical experiments across well-known vision tasks, language tasks, and visual-language tasks using SoTA architectures for the task. By doing so, the authors demonstrated the effectiveness of incorporating SAN into various PEFT methods (e.g., LoRA and DoRA).
- [Strength] The authors clearly explained how SAN, with the same parameter efficiency as SSF, is more expressive than SSF.
- [Strength] The authors clearly articulated the connection of SAN to LTP/LTD and provided further ablation studies.
- [Weakness] While the authors explained how the principle of SAN, assuming linear transformation, can still remain approximately valid in more complex settings, it would be helpful to provide more discussion on the limitations of such an approximation and potential failure modes.
- [Weakness] Code and hyperparameter details are stated to be released in the future, so I cannot be certain about the reproducibility of this work. That said, the authors did provide basic information on hyperparameter tuning in Appendix B.
Methods And Evaluation Criteria: See above
Theoretical Claims: The results are mainly empirical
Experimental Designs Or Analyses: I checked all tables and figures in the main text
Supplementary Material: I only skimmed through the appendices, so I may be missing certain experimental details
Relation To Broader Scientific Literature: The paper is well-positioned with a clear objective
Essential References Not Discussed: None to my knowledge
Other Strengths And Weaknesses: n/a
Other Comments Or Suggestions: Some typos: missing period at the end of abstract; “(see Figure 3)” should be in front of the period on page 4 under “Expressiveness & self-regulation”.
Questions For Authors: - By explicitly propagating the scaling vectors, would it introduce other implicit regularizations besides the one that the author mentioned, such as effects associated with reducing depth?
- Can you provide a complexity analysis?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: ## Response to Reviewer 1yzx
Thank you for your thorough review and feedback. Your positive comments (such as acknowledging our clearly explained SAN and its connection to LTP/LTD) and suggestions have greatly helped us improve our paper. To address your concerns, we have carefully read your feedback and provided point-by-point responses below:
1. Minor typos
a. missing period at the end of the abstract
b. “(see Figure 3)” should be in front of the period on page 4 under “Expressiveness & self-regulation”.
2. Provide more discussion on the limitations of the linear transformation approximation and potential failure modes.
3. By explicitly propagating the scaling vectors, would it introduce other implicit regularizations besides the self-regularization mentioned, such as effects associated with reducing depth?
4. Provide a complexity analysis and more hyperparameter details.
## Point-by-Point Response:
1. ### Regarding Typos
Thank you for your meticulous review. We will correct these in the revised version.
2. ### Regarding potential failure modes
We envision SAN as a plug-and-play method adaptable to current and future SOTA PEFT approaches across domains. This implies that SAN's failure mode mainly depends on the selected base method.
To stress-test pure linear transformations in large models, we evaluated SSF and SSF+SAN on LLaMA fine-tuning. The results (see Table below) indicate that for large models (7B+ parameters), pure linear transformations of features cannot effectively align pre-trained parameters to downstream tasks. This is reasonable, as although such methods achieve excellent results on vision foundation models (which typically have only hundreds of millions of parameters) by providing orthogonal fine-tuning-like **[1]** properties that preserve hypersphere energy and prevent catastrophic forgetting, their expressive power becomes constrained in larger language/multimodality models.
| Method | P-Tuning V2 | SSF | SSF+SAN | LoRA | LoRA+SAN |
| ---- | ---- | ---- | ---- | ---- | ---- |
| Common Sense Reasoning | 64.60% | 52.60% | 61.10% | 74.70% | 78.00% |
3. ### Regarding other implicit regularizations
Self-regularization emerges directly from the implicit $\gamma^2$ term during adjustment, which guides scaling factors to reduce extreme values and control variance. Besides, we haven't observed significant regularization effects beyond self-regularization.
To our knowledge, the depth-reducing phenomenon you mentioned appears in two types of literature: structural pruning (particularly layer pruning) like Layer Folding **[2]**, which aim to improve model efficiency; and layer redundancy studies like stochastic depth **[3]**, which prevent overfitting through random depth dropout during training. Our method doesn't directly relate to these works.
4. ### Regarding complexity and hyperparameters
We have provided basic hyperparameters in the paper, and more detailed parameter settings can be found in our supplementary code. As mentioned in Appendix B, we've recorded thousands of experimental setups and results on WandB, which we will make public.
For algorithm complexity, SAN, as a plug-and-play method, adds no additional parameters during training. The computational cost only appears in the decompose & propagate operations. Specifically:
- The decomposition cost doesn't involve matrix multiplication (using average pooling or direct reuse), making the additional computation negligible.
- For propagation, we multiply the decomposed $1×d$ vector with a $d×k$ posterior weight matrix. The complexity for a single propagation is $O(d×k)$, where $d$ is the input feature dimension and $k$ is the output dimension. If we have multiple layer groups $(1,2,...N)$ that need propagation, the total computational complexity would be the sum of each group's complexity: $O(∑(d_i×k_i)$ for $i$ from $1$ to $N$.
This minimal computational overhead makes SAN an efficient enhancement to existing PEFT methods. Practically speaking, throughout our experiment, we also did not see a significant slowdown by adding SAN.
## Reference
**[1] Qiu, Z., Liu, W., Feng, H., Xue, Y., Feng, Y., Liu, Z., ... & Schölkopf, B. (2023). Controlling text-to-image diffusion by orthogonal finetuning.** _**Advances in Neural Information Processing Systems**_**,** _**36**_**, 79320-79362.**
**[2] Dror, A. B., Zehngut, N., Raviv, A., Artyomov, E., Vitek, R., & Jevnisek, R. (2021). Layer folding: Neural network depth reduction using activation linearization. arXiv preprint arXiv:2106.09309.**
**[3] Huang, G., Sun, Y., Liu, Z., Sedra, D., & Weinberger, K. Q. (2016). Deep networks with stochastic depth. In** _**Computer Vision** **ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11–14, 2016, Proceedings, Part**_ _**IV**_ _**14**_ **(pp. 646-661). Springer International Publishing.**
---
Rebuttal Comment 1.1:
Comment: Thank you for your clarifications. I would like to maintain my score for now.
---
Reply to Comment 1.1.1:
Comment: Thank you for taking the time to provide your rebuttal comment. We are truly pleased to see your acknowledgment and hope that we have adequately addressed your concerns. Your valuable suggestion has greatly helped us enhance our manuscript. | Summary: This paper proposes a fine tuning method based on ideas from the Neural Engram and LTPD literature which gives an alternative method to the popular LORA and DORA updates that have recently been employed for large models. The update consists computing scale vectors $\gamma_\ell$ at each hidden layer $\ell$ of the network from the pooled ratios of activity patterns $y'/y$ of the perturbed $y'$ and unperturbed neural activations $y$. The authors show that their fine tuning method improves full fine tuning and LORA on a number of vision and language benchmarks. Part of the key idea is to maintain stable subnetworks (engrams) in the model which do not change in topology during fine tuning but rather change in precise weight scales for existing nonzero connections. The authors show that organizing updates by correct layer order is important to reap the full benefits of the method.
Claims And Evidence: First, on the empirical side, this paper provides a very large set of fine tuning experiments in many modalities including vision, commonsense reasoning, and multimodal visual language tasks. I appreciate the breadth of experiments and the efforts the authors went to to benchmark their method. The fine tuning method shows promise on each of these provided experiments.
However, there are a number of claims made about both (1) the motivation and design of the algorithm and (2) theoretical claims about how /why it works that are not clearly tested. I was hoping to see some numerical evidence that their fine tuning method preserved or encouraged formations of new non-overlapping neural engrams which was the motivation of the algorithm. However, I could not find any experimental evidence that this engram formation actually occurs in their finetuning. Second, it is unclear why the pooling and elementwise division to form $\gamma_\ell$ should give rise to engrams. Lastly, the claim that $\gamma_\ell$ should be close to unity. The authors write "This quadratic influence acts as a soft constraint on the magnitude of γl, discouraging extreme values and promoting stability." Where is this shown or argued for? Partly, I think these claims are important scientifically since the authors are claiming that this algorithm has something to do with engrams and LTP/D and that these are key insights that enabled the improvement in performance.
Methods And Evaluation Criteria: Yes, the benchmark datasets and evaluation criteria make sense.
Theoretical Claims: There are some issues with mathematical notation that make it difficult to actually understand some of the definitions and mathematical claims. However, I think I understand the intended interpretation of the algorithm and have read through their theoretical results.
The authors invoke two assumptions which they do not clearly justify or reference to an appropriate article. Specifically, the allude to "near-linear behaviour of modern activation functions" and also claim "Modern optimization methods tend to avoid unstable paths that first reverse and then restore scaling effects." What evidence do the authors have for these claims?
Experimental Designs Or Analyses: I think the empirical design is valid provided that parameter counts are controlled across baselines (see questions below).
Supplementary Material: I looked at Appendix C which had information about variations of the SAN algorithm and compare using locationally adjacent layers to "functionally similar" layers.
Relation To Broader Scientific Literature: The authors claim that their algorithm is inspired by neural engrams, which are an idea from computational neuroscience where combinatorial subsets of neurons support representations of new memories that do not interfere with previously acquired knowledge. If they could more clearly make this connection and show how their algorithm generates these engrams, I think this work would be not only interesting to finetuning researchers but also neuroscientists interested in memory formation and continual learning.
Essential References Not Discussed: Not to my knowledge.
Other Strengths And Weaknesses: This paper has interesting ideas and good experiments. However, I found it challenging to read / parse / understand partly due to the abundance of acronyms and partly due to poor mathematical notation and some missing explanations (see comments/questions below).
Other Comments Or Suggestions: 1. Abstract final sentence missing final punctuation “.”
2. There are a lot of acronyms like PEFT, SSF, LORA, DORA, LP, SAN, FFT, NE, BNN, LTP. I understand this is a stylistic choice, but sometimes it is hard for the reader to keep straight. Please consider using the full name for some of the less frequently used acronyms.
Line 92 “Further demonstrated … ” should be “The authors of [cite] further demonstrated …”
3. The shapes of the matrices reported do not make sense. Notationally, the $W_{down}$ in equation 1 should be an element of $\mathbb R^{r \times d}$ and $W_{up}$ should be $\mathbb R^{d \times r}$
4. More explanation is needed around equation 1, what do x, y represent. Is there one of these for each hidden layer of the network, etc?
5. Equation 2 shapes don’t make sense. $W$ should be $\mathbb R^{d \times d}$ and $\gamma$ is a vector of size $\mathbb R^{d}$. Do the authors mean $(\gamma 1^\top ) \odot W$ where 1 is the vector of all ones?
In Equation 9, the T( ) function is overloaded. Suppose that T(y) = A y for matrix A. Then the transformation is W T(y) = W A y = (W A) y = T_2( W) y where T2(W) = W A.
6. In line 184 the authors introduce two assumptions near linear behavior + optimization stability. Under what conditions do they expect these to hold? Also in what sense is ReLU near linear? For a single input, the function is locally linear but over the space of all inputs, it is a complicated function.
7. Is equation 6 computed per-example? Does each data point have its own $\gamma_\ell$?
In Figure 3, the top of the image says “froze” but should say “frozen”
Equation 10 should (probably, if I understand correctly) read as $W’ = ( \gamma_{\ell+1} \gamma_\ell^\top ) \odot W$.
8. The argument below eq 12 is unclear to me. Why does the update stabilize or become implicitly regularized to be near 1? Please provide a derivation or argument in the Appendix.
9. Line 369 “Topological reasonable” -> “Topologically Reasonable”
Questions For Authors: 1. Is there a biological motivation for equation 6? Is there any prior work that shows that this kind of plasticity rule encourages formation of engrams?
2. In the baselines, are all finetuning methods employing equal numbers of parameters or similar compute? This is important to make comparisons across methods.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: ## Response to Reviewer bAk4
Thank you for your thorough review and feedback. Your positive comments (such as acknowledging our solid experiment and idea) and suggestions have greatly helped us improve our paper. To address your concerns, we have carefully read your feedback and provided point-by-point responses below:
1. Challenging to read / parse / understand partly due to the abundance of acronyms and partly due to poor mathematical notation and some missing explanations:
a. Minor Typos and overwhelming acronyms
b. The $W_{down}$ in Equation 1 should be $\mathbb{R}^{r\times d}$ and $W_{up}$ should be $\mathbb{R}^{d\times r}$. What do $x,y$ represent?
c. $W$ in Equation 2 should be $\mathbb{R}^{d\times d}$ and $γ$ is a vector of size $\mathbb{R}^{1\times d}$. Do the authors mean $(γ1^⊤)⊙W$, where 1 is the vector of all ones? In Equation 9, the $T(\cdot)$ function is overloaded.
d. Why does the update stabilize or become implicitly regularized to be near 1 in Equation 12? Under what conditions do the assumptions in line 184 expect to hold?
2. Is there biological motivation for equation 6? Is it computed per example??
3. In the baselines, are all fine-tuning methods employing equal numbers of parameters or similar compute?
## Point-by-Point Response:
1. ### Regarding mathematical notation and explanations
**a:** Thank you for your meticulous review. We will correct these points in the revised version.
**b:** In Equation 1, we denote $r\ll d$, where $r$ is the rank of the LoRA/Adapter. Accordingly, $W_{down}$ is defined as $\mathbb{R}^{d\times r}$ to compress the input $x$ from dimension $d$ to $r$, and $W_{up}$ is defined as $\mathbb{R}^{r\times d}$ to recover the original dimension. As noted in [Line 143-144], $x$, $y$, and $\theta$ represent the inputs, outputs, and the linear/non-linear function of the LoRA/Adapter, respectively.
**c:** $W$ can also be shaped as $\mathbb{R}^{d\times k}$ (e.g., the FFN in transformers), as long as the scaling vector $γ$ of size $\mathbb{R}^{1\times d}$ is available for per-row scaling. We did not locate the expression $(γ1^⊤)⊙W$ mentioned in your comment. We assume your concern may relate to whether $γ$ is an all-one vector. Initially, $γ$ is indeed an all-one vector, but after training, it diverges (i.e., $γ$ is trainable or generated by trainable modules). For the function $T(\cdot)$, defined as $T(y_{l})=\gamma_{l}\odot y_{l}$, this represents element-wise scaling of $y_{l}$ prior to feeding it into layer $l+1$ layer. Specifically, we have: $y_{l+1}=W_{l+1}T(y_{l})+b_{l+1}=W_{l+1}\gamma_{l}\odot y_{l}+b_{l+1}=\gamma_{l}\odot W_{l+1}y_{l}+b_{l+1}=T(W_{l+1})y_{l}+b_{l+1}$, which is automatically managed by PyTorch’s broadcast mechanism.
**d:** The implicit regularization effect arises from the explicit propagation, which introduces a quadratic effect ($(\gamma_l)^2$), which discourages extreme values and thus stabilizes the updates. This stabilization prevents overfitting by controlling the magnitude of divergence from the initial value 1. In a linear scenario, scaling the output is equivalent to scaling the weights in the next layer. Although ReLUs are not globally linear, they operate nearly linearly in their active regions (when the output is non-zero).
4. ### Regarding biological motivation for Equation 6
Equation 6 reduces both the batch and token dimensions (related to the size of data) while preserving the embedding dimension (related to the content of data), resulting in a 1D vector that can be used for scaling weights. It aligns with the principles of synaptic plasticity. Network signals are data-driven, and synaptic development adapts to these signals. For instance, high-frequency stimulation strengthens synaptic connections between neurons, and such strengthened connections maintain over a range of time, forming a specialized engram.
Furthermore, while methods like SSF and DoRA apply scaling factors at the neuron level (using the same scaling for all parameters in each row of the weight), SAN operates at the synapse level. By introducing a propagation mechanism, we use the previous layer’s scaling as additional per-column scaling for the current layer. Resulting in more complex engram patterns, wherein each synapse within a neuron can have its scaling factor (see Figure 3). This design better aligns with recent neuroscience findings on neuronal engrams, such as the linking, buffer, and feature neuron engrams discussed by Choucry et al. (2024) **[1]**.
3. ### Regarding trainable parameters of baselines
SAN itself doesn't introduce additional trainable parameters; our experimental parameter settings follow those established in SSF, SPT-LoRA, and DoRA, which are all published papers. We have also provided the parameter ratio in all tables.
## References
**[1] Choucry, A., Nomoto, M., & Inokuchi, K. (2024). Engram mechanisms of memory linking and identity. Nature Reviews Neuroscience, 25(6), 375-392.** | Summary: The authors of this paper introduce a method called Synapse and Neuron (SAN), which decomposes and propagates scaling components from anterior feature adjustment vectors to posterior weight matrices. Extensive experimentation is performed by combining SAN with multiple PEFT strategies demonstrating the performance improvements achievable using SAN.
Claims And Evidence: The core idea of explicit propagation of scaling components to subsequent layers by decomposing adjusting vectors from preceding layers is interesting and its advantage is supported well by the experiments. However, the claim regarding LTP/LTD seems a bit stretched.
Methods And Evaluation Criteria: Yes, the evaluation criteria used for underscoring the advantage of the proposed PEFT strategy seems sound.
Theoretical Claims: The paper provides some theoretical insights into how self-regularization is incorporated into the given framework.
Experimental Designs Or Analyses: The authors present comprehensive experimental results that demonstrate the effectiveness of the proposed method across a wide range of vision, language, and vision-language tasks.
Supplementary Material: The submission provides code for the proposed method. The technical appendix involves additional experimental and model details.
Relation To Broader Scientific Literature: This article pertains to PEFT methodologies of LLMs, which is highly relevant for the broader ML and NLP community.
Essential References Not Discussed: The paper discusses all major references.
Other Strengths And Weaknesses: Strengths: This paper proposes a simple yet novel PEFT approach by involving explicit propagation of the scaling vectors of the current layer to the parameters of the subsequent layer. Experimental evidence suggest improved performance when combined with different PEFT strategies.
Weakness: The connection to LTP/LTD is unclear.
Other Comments Or Suggestions: The paper is easy to read.
Questions For Authors: 1) For SAN+LoRA is the scaling factor propagated to the next layer that uses LoRA or just the immediate next layer of the model?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: ## Response to Reviewer WoLR
Thank you for your careful review and feedback. Your positive comments (such as praising our "sound evaluation criteria" and "easy to follow writing") have greatly support our paper. We have thoroughly reviewed your comments and address your key questions below:
1. For SAN+LoRA is the scaling factor propagated to the next layer that uses LoRA or just the immediate next layer of the model?
2. The claim regarding LTP/LTD seems a bit stretched.
## Point-by-Point Response:
1. ### Regarding SAN+LoRA propagation mechanism
In our experiments, we followed the default LoRA placement settings used in DoRA **[1]** and SSF **[2]**:
- For LLaMA/LLaVA models: LoRA was added to `["q_proj", "k_proj", "v_proj", "up_proj", "down_proj"]`
- For vision models: LoRA was added to all Linear layers (except the head)
This naturally led to scaling factor propagation to the immediate next layer. However, your question raises an interesting point: when LoRA modules are sparsely added (e.g., every N block), is it better to propagate the scaling factor to:
- The immediate next layer (even without LoRA)
- The next layer that uses LoRA (potentially skipping several blocks)
Due to time constraints, we conducted a simple test on ViT-CIFAR100 using only two LoRA modules (added to the qkv_attn layers in block 0 and block 11). The results are shown below:
| Method | Linear Probing | LoRA-16 (no propagation) | LoRA-16 (propagation to next layer) | LoRA-16 (propagation to next LoRA) |
|--|--|--|--|--|
| CIFAR100 | 88.74% | 90.82% | 91.03% | 90.38% |
We found that when LoRA modules are distantly placed, long-distance propagation does not yield superior results. This aligns with our findings in Section 4.5 where long-distance propagation occurs when randomly apply. Conversely, propagating to related adjacent layers shows benefits, even when those layers don't have LoRA modules. We hypothesize that this shifts the LoRA module's learning objective from modeling only the current layer's parameter updates to also accounting for subsequent layers' updates, which enhances LoRA's learning capacity.
2. ### Connection to LTP/LTD mechanisms
Our method's connection to Long-Term Potentiation/Depression (LTP/D) is straightforward: we explicitly introduce the propagation mechanism. Existing PEFT methods like LoRA, DoRA, and SSF (as shown in Figure 3) typically only model the current layer's updates. Even though DoRA and SSF decompose scaling factors, they don't leverage propagation.
In neuroscience, synaptic development is directly modulated by neuronal activation patterns. LTP and LTD exemplify this: high-frequency activation of presynaptic neurons induces LTP, strengthening the synaptic connection, while low-frequency stimulation induces LTD, weakening it **[3] [4]**. This aligns with Hebbian learning—the principle that “what fires together, wires together.”
In our SAN approach, the explicit propagation of scaling factors mimics this biological process. Rather than treating each layer’s update in isolation, SAN transfers the scaling factor learned from one layer to the next. This is analogous to how the potentiation (or depression) of a synapse in a neural circuit influences the subsequent neurons, ensuring that plastic changes are distributed in a coordinated manner. For example, if a layer’s output is amplified (akin to LTP), the following layer would receives this amplified signal as a prior guidance and adjusts its weights accordingly. Moreover, by propagating scaling factors, our method effectively connects trainable modules throughout the model:
- Locally: we shift the learning focus from modeling only single-layer updates to considering groups of related layers.
- Globally: All trainable modules throughout the model become "interlocked," simplifying the adjustment range each module needs to cover (Figure 2) and achieving better performance
This biologically inspired mechanism, therefore, not only provides a theoretical foundation for our approach but also delivers practical benefits in model transfer learning.
## Reference
**[1] Liu, S. Y., Wang, C. Y., Yin, H., Molchanov, P., Wang, Y. C. F., Cheng, K. T., & Chen, M. H. (2024, July). Dora: Weight-decomposed low-rank adaptation. In** _**Forty-first International Conference on**_ _**Machine Learning**_**.**
**[2] Lian, D., Zhou, D., Feng, J., & Wang, X. (2022). Scaling & shifting your features: A new baseline for efficient model** **tuning****.** _**Advances in Neural Information Processing Systems**_**,** _**35**_**, 109-123.**
**[3] Malenka, R. C., & Bear, M. F. (2004).** _**LTP**_ _**and**_ _**LTD** **: An embarrassment of riches**_**.** **Neuron****, 44(1), 5–21.**
**[4] Tonegawa, S., Liu, X., Ramirez, S., & Redondo, R. (2015).** _**Memory engram cells have come of age**_**. Neuron, 87(5), 918–931.** | null | null | null | null | null | null | null | null |
Emergent Response Planning in LLMs | Accept (poster) | Summary: In this paper, the authors aim to explore whether LLMs plan before token generation. Specifically, they examine three types of attributes:
- Structural attributes refer to whether LLMs plan the response length and reasoning steps.
- Content attributes refer to whether LLMs plan character choices in story writing or multiple-choice answers before the response.
- Behavioral attributes refer to the confidence and factual consistency of the answer before the response.
The authors provide interesting and valuable insights, enhancing the transparency of LLMs.
Claims And Evidence: Most of the claims in this paper are convincing due to the comprehensive experimental studies. However, there are a few concerns:
As demonstrated in Figure 2, in the Factual Consistency Prediction and Answer Confidence Prediction tasks, the F1 score of the probing models shows only modest improvements compared to the baseline. This might affect the overall strength of the claims. Additionally, for certain models, such as ama-3-8B-Instruct, this issue is also present in some tasks, like Multiple-Choice Answers. This could weaken the persuasiveness of the proposed claims.
Methods And Evaluation Criteria: The proposed probing methods are well-founded. The authors conduct a comprehensive study in Section 4 to demonstrate the effectiveness of their trained probing models, further validating the correctness of their following findings.
Theoretical Claims: There are no theoretical claims in this paper.
Experimental Designs Or Analyses: The soundness and validation of the experimental design and analysis are solid.
Supplementary Material: I have checked all the supplementary material.
Relation To Broader Scientific Literature: This paper demonstrates that LLMs' hidden prompt representations encode rich information about upcoming responses, which could further inspire future work on the interpretability of LLMs.
Essential References Not Discussed: There are no essential references not discussed.
Other Strengths And Weaknesses: I believe this paper provides very valuable insights to the community and offers a promising direction for future research. However, I have some suggestions for further improvement:
1. The current description of the probing strategy is too brief and superficial, making it difficult to understand how the authors implemented it—especially for readers who may not have extensive background knowledge in this area.
2. Regarding "Response Length Prediction," I initially thought it would be challenging for models to grasp the concept of "length," given that most models rely on tokenization, where tokens are often less informative. However, the authors show that such a length can be predicted by the probing models. The question arises: "If the length can be predicted through the representation, why can't LLMs self-predict it?" Although this is briefly addressed in the paper, it would greatly benefit from more in-depth discussion and insightful conclusions.
3. The paper includes a comprehensive experiment to demonstrate the effectiveness of the probing strategies and shows that prompt representations encode substantial information. However, the paper would be even more valuable with an exploration of the internal mechanisms of LLMs in relation to the planning of response generation, like CoT-style response generation.
Other Comments Or Suggestions: No.
Questions For Authors: Please refer to the "Other Strengths And Weaknesses" part.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely appreciate your detailed and constructive feedback. In response to your inquiries:
---
> Q1. Detailed illustration of probing strategies.
Thank you for the helpful suggestion. To improve clarity, we will add this explanation to the final version: Probing trains auxiliary models (e.g., MLPs) to predict attributes (e.g., confidence, truthfulness) from LLM hidden states, revealing what they encode. Our method involves: (1) generating and labeling LLM responses w.r.t target attributes, (2) extracting hidden states at the first token, and (3) training MLPs to predict labels from these states. Accuracy reflects encoding strength. Implementation details are in the paper.
---
> Q2. If probing demonstrates that LLM hidden states encode sufficient information for self-prediction, why do models fail to leverage this capability during standard inference?
We really appreciate your thoughtful insights. Previous work has shown that internal knowledge in LLMs is not always directly reflected in outputs [1]. Section 5.3 of the main paper supports this hypothesis in the context of response planning, showing that simple prompting fails to predict LLM response attributes. We expand on this with additional tasks, showing a consistent probing–prompting gap (Table 1). LoRA fine-tuning helps reveal this internal planning capability during standard inference (Table 2). Since LoRA does not alter LLM representations grastically, it likely elicits hidden states already present in the original model, supporting our argument that LLMs possess internal planning abilities.
[1] Li, Kenneth, et al. "Inference-time intervention: Eliciting truthful answers from a language model." Advances in Neural Information Processing Systems 36 (2023): 41451-41530.
Table 1. **LLMs possess an internalized but unelicited capability for self-prediction**, as evidenced by consistent probing-prompting gaps across 6 tasks and 8 model variants (see table below). Each cell shows *[prompting / probing / gap]*, with bold gaps highlighting unelicited potential.
|Task|Metric↑(Random)|M-7B-I|M-7B|Q-7B-I|Q-7B|L2-7B-C|L2-7B|L3-8B-I|L3-8B|
|-|-|-|-|-|-|-|-|-|-|
|**Token Length**|Spearman↑(-)|.15/.83/**.68**|.08/.64/**.56**|.26/.85/**.59**|.12/.57/**.45**|.21/.80/**.59**|-.03/.53/**.56**|.49/.84/**.36**|-.04/.41/**.45**|
|**Reasoning Steps**|Spearman↑(-)|.25/.67/**.42**|.15/.65/**.50**|.20/.82/**.62**|.41/.84/**.43**|.00/.71/**.71**|.13/.63/**.50**|.04/.80/**.76**|.23/.67/**.44**|
|**Character Choice**|F1-score↑(0.25)|.31/.81/**.50**|.22/.79/**.57**|.21/.72/**.51**|.10/.86/**.76**|.30/.74/**.44**|.14/.79/**.65**|.21/.82/**.61**|.07/.84/**.77**|
|**Multiple-Choice**|F1-score↑(0.33)|.32/.55/**.23**|.12/.48/**.36**|.27/.58/**.31**|.24/.66/**.42**|.28/.51/**.23**|.13/.34/**.21**|.34/.49/**.15**|.17/.52/**.35**|
|**Answer Confidence**|F1-score↑(0.50)|.25/.78/**.53**|.35/.78/**.43**|.41/.78/**.37**|.38/.79/**.41**|.34/.72/**.38**|.31/.71/**.40**|.53/.78/**.25**|.39/.80/**.41**|
|**Factual Consistency**|F1-score↑(0.50)|.50/.78/**.28**|.38/.70/**.32**|.39/.77/**.38**|.51/.73/**.22**|.28/.76/**.48**|.48/.86/**.38**|.50/.76/**.26**|.45/.70/**.25**|
*Value Format: self-estimate / probing / GAP (probing - self-estimate, bold)
Models: M=Mistral, Q=Qwen2, L2=Llama-2, L3=Llama-3, -I=Instruct, -C=Chat
Metric↑: ↑=Higher is better • Random: 0.25=4-class, 0.33=3-class, 0.50=2-class baseline*
Table 2. **This capability can be elicited through parameter-efficient fine-tuning**. As shown above, LLMs may fail to leverage internal representations for self-prediction. To address this, we fine-tune models (via LoRA, rank 64) on the 3K-sample UltraChat dataset for token-length prediction. Post-tuning results across three seeds confirm: (1) acquired self-prediction, (2) in-distribution and cross-dataset generalization, and (3) preserved token-generation behavior (consistent response lengths). This demonstrates that minimal fine-tuning (2% parameters) enables generalizable self-prediction.
|Model| Pre-Tuning|Post-Tuning (Ultrachat Test Split)|Cross-Dataset (Alpaca-Eval)|Response Length Consistency (Pre- vs. Post-Tuning)
|-|-|-|-|-|
| |Spearman ↑ (greedy generated)|Spearman ↑ (mean ± std)|Spearman ↑ (mean ± std)|NMAE ↓ (mean ± std)|
|Mistral-7B-Instruct|0.15|0.78 ± 0.05|0.69 ± 0.03|0.08 ± 0.01|
|Qwen2-7B-Instruct|0.26|0.62 ± 0.03|0.58 ± 0.02|0.09 ± 0.02|
|Llama-2-7B-Chat|0.21|0.48 ± 0.04|0.45 ± 0.06|0.10 ± 0.04|
|Llama-3-8B-Instruct|0.49|0.75 ± 0.02|0.72 ± 0.02|0.09 ± 0.01|
---
> Q3. Further exploration of the LLMs' internal emergent response generation mechanisms.
Thank you for your insightful advice. We fully agree that understanding the internal mechanisms behind LLMs’ emergent response planning is essential for advancing interpretability. However, our current focus is on establishing the existence and characteristics of emergent response planning. We leave a deeper exploration of the underlying interpretability mechanisms to future work.
---
Rebuttal Comment 1.1:
Comment: Thanks for your elaboration and complementary experiments. I have no further questions and have raised the score accordingly. | Summary: The paper provides a simple definition of response planning, and then shows that according to this definition, multiple LLMs do in fact plan responses on various dimensions.
## update after rebuttal: As described in my comment later in the thread, I am keeping my initial rating, which was already high.
Claims And Evidence: The overall claim is for evidence of response planning in LLMs. Specifically, the authors define this to mean the capacity to use a 1-hidden-layer MLP to predict aspects of later tokens from earlier activations. They then look at three different aspects: response length; character choice in a story; and answer selection in a test situation. For all of these, they see greater than chance performance, across multiple models. The evidence here seems pretty good, because it holds across a variety of models and data sets. One interesting point is the fact that an MLP is necessary, and regression itself (equivalently, hidden layer size = 1) doesn't produce great results; this may indicate that the internal "plan" has a complicated form.
There are several intriguing subclaims as well, around scaling (more planning at larger scales) and around which tokens containing planning information (more at the beginning and end, which makes sense intuitively.) These too seem well-founded; again, there's a fairly thorough set of data / models at play.
An ancillary claim is that probing provides more information than prompting; this is less thoroughly explored.
Methods And Evaluation Criteria: The benchmarks all make sense.
Theoretical Claims: N/A. This is an empirical paper.
Experimental Designs Or Analyses: In general, this seems like an excellent set of designs: elegant and crisp. I definitely recommend acceptance.
With that said, I do think there are some potential subtle questions about the definition of "planning". Let me give an example which might illustrate my main concern. Consider an experimental design where we ask questions in either French or English, requesting a chain of thought that will end in a yes / no answer in the appropriate language (which would be "oui / non" in French). Now, we ask whether we can probe the activations that lead to the first token and make a prediction about the final token. My guess is that a probe will pick up the language of the first token, and thus predict the final answer better than chance (say 50% accuracy rather than 25%).
This is basically the same experimental design as in the paper, but I wouldn't say that this proves there was any kind of planning involved—just that the initial token and final token both depended in a correlated way on the input. I think it would be worth discussing this kind of issue in more depth, partly because I could see the experimental design in this paper becoming a widely used paradigm.
Supplementary Material: I didn't see any.
Relation To Broader Scientific Literature: The treatment of related work is excellent; it's a model for other papers.
Essential References Not Discussed: none
Other Strengths And Weaknesses: The paper is extremely well-written and well-structured.
Other Comments Or Suggestions: Figure 4 is intriguing, but I recommend adding a legend for color. I also find the "deep -> shallow" ordering of the x-axis ambiguous. Is this the same as ordering "early to late later" or vice versa? Finally, it might not hurt to say the ordering of the models.
Questions For Authors: none.
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: We are grateful for your comprehensive remarks and thoughtful advice, and we are really glad that you find our paper interesting.
---
> Q1. Defining "planning" and designing experiments to prevent spurious correlations.
We really appreciate your thoughtful insights. Indeed, results may be biased by "first-token shortcuts" when the first token correlates with the target response attributes, as your example demonstrates. Our approach addresses this via:
**(1) Defining planning as independent encoding of next-token and long-term attributes:** We define planning as hidden representations at the first token encoding both next-token information and long-term attributes, **with these two types of information being independent**—i.e., long-term attributes must not be reflected by the immediate next token. In practice, we ensure such independency through careful prompt enigneering, as we will discuss in the next paragraph.
**(2) Prompt engineering to block shortcuts in experiment design:** We design prompts to ensure initial tokens cannot reveal target attributes. For example, in multiple-choice tasks, we require models to analyze the question before answering, ensuring first tokens (analysis) do not leak the answer. This isolates true planning from shortcut correlations.
We will include a detailed discussion of this important methodological consideration in the final version of our paper.
---
> Q2. Performance gaps between probing and LLM self-prediction is less thoroughly explored.
Thank you for your valuable suggestion. We extend our analysis of probing vs. self-prediction to 6 tasks and 8 model variants (see table below, with each cell shows *[prompting / probing / gap]*). Prompting occasionally underperforms random baselines, likely due to LLMs' overestimating their capabilities (e.g., overconfidence in correctness) and biased self-evaluation. The results further support our claim that **models encode more planning information in their hidden representations than they can explicitly access during token-by-token generation**.
|Task|Metric↑(Random)|M-7B-I|M-7B|Q-7B-I|Q-7B|L2-7B-C|L2-7B|L3-8B-I|L3-8B|
|-|-|-|-|-|-|-|-|-|-|
|**Token Length**|Spearman↑(-)|.15/.83/**.68**|.08/.64/**.56**|.26/.85/**.59**|.12/.57/**.45**|.21/.80/**.59**|-.03/.53/**.56**|.49/.84/**.36**|-.04/.41/**.45**|
|**Reasoning Steps**|Spearman↑(-)|.25/.67/**.42**|.15/.65/**.50**|.20/.82/**.62**|.41/.84/**.43**|.00/.71/**.71**|.13/.63/**.50**|.04/.80/**.76**|.23/.67/**.44**|
|**Character Choice**|F1-score↑(0.25)|.31/.81/**.50**|.22/.79/**.57**|.21/.72/**.51**|.10/.86/**.76**|.30/.74/**.44**|.14/.79/**.65**|.21/.82/**.61**|.07/.84/**.77**|
|**Multiple-Choice**|F1-score↑(0.33)|.32/.55/**.23**|.12/.48/**.36**|.27/.58/**.31**|.24/.66/**.42**|.28/.51/**.23**|.13/.34/**.21**|.34/.49/**.15**|.17/.52/**.35**|
|**Answer Confidence**|F1-score↑(0.50)|.25/.78/**.53**|.35/.78/**.43**|.41/.78/**.37**|.38/.79/**.41**|.34/.72/**.38**|.31/.71/**.40**|.53/.78/**.25**|.39/.80/**.41**|
|**Factual Consistency**|F1-score↑(0.50)|.50/.78/**.28**|.38/.70/**.32**|.39/.77/**.38**|.51/.73/**.22**|.28/.76/**.48**|.48/.86/**.38**|.50/.76/**.26**|.45/.70/**.25**|
*Value Format: self-estimate / probing / GAP (probing - self-estimate, bold)
Models: M=Mistral, Q=Qwen2, L2=Llama-2, L3=Llama-3, -I=Instruct, -C=Chat
Metric↑: ↑=Higher is better • Random: 0.25=4-class, 0.33=3-class, 0.50=2-class baseline*
---
> Q3. Suggestions on enhancing the clarity of Figure 4.
Thank you for your constructive advice. We will carefully incorporate the following edits in the final version:
1) Adding a color legend to clarify grid values;
2) Replacing "deep to shallow →" with "early to late layers →" to resolve ambiguity about layer ordering;
3) Explicitly stating the model order (top to bottom): Mistral-7B-Instruct, Llama-2-7B-Chat, Llama-3-8B-Instruct, Mistral-7B, Llama-2-7B, Llama-3-8B, Qwen2-7B-Instruct, Qwen2-7B (first six: 32 layers; last two: 28 layers).
---
Rebuttal Comment 1.1:
Comment: I appreciate these improvements! (I was already at strong accept, so can't raise my score.) | Summary: The paper presents evidence of emergent planning behavior in LLMs by analyzing patterns in global attributes – structural, content, and behavioral – across different models and sizes. The authors identify four key insights that showcase how this planning behavior emerges, analyzing how each attribute is processed across the models’ layers.
Claims And Evidence: In general, yes they are! However, certain statements are vague that make it unclear about the evidence itself. I asked for more clarifications in the “Questions for Authors” section.
Methods And Evaluation Criteria: Yes they do!
Theoretical Claims: There aren’t any theoretical claims.
Experimental Designs Or Analyses: Yes! I think they generally look convincing. However, for better clarity, I have the authors some questions in the “Questions for Authors” section.
Supplementary Material: I skimmed through the entire supplementary materials.
Relation To Broader Scientific Literature: They help provide more systematic evidence of emergent planning behavior that previous works have hinted at.
Essential References Not Discussed: Not that I am aware of!
Other Strengths And Weaknesses: Strengths:
- Clearly defined problem space with well-articulated attributes and insights
- Comprehensive exploration of models across different types and sizes.
Weakness:
- Certain areas, including figures and specific sentences, seem vague and require further clarification.
Other Comments Or Suggestions: In Figure 4, could you add a legend or annotation in the models’ column? This would help readers better understand each model’s performance, enabling a more nuanced analysis.
Questions For Authors: 1. In Figure 4, what is the second-to-last row model in the answer confidence prediction? Its performance appears significantly different from other models, especially in the middle layers. Is there a specific reason for this discrepancy?
2. In Lines 317-319, how does the performance in Figures 4e and 4f demonstrate that the behavioral attributes are encoded early in the model? Shouldn’t the initial layers exhibit much higher values (deeper shades of red) if that were the case? For instance, the performance trends in Figures 4c and 4e seem quite similar in the initial layers, yet they are interpreted differently – could you clarify this distinction?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for the time, thorough comments, and nice suggestions. We answer the comments/questions point-by-point:
---
> Q1. Adding annotations on model orders of Figure 4.
Thank you for your constructive feedback. We agree that explicitly annotating model orders will improve clarity, and we greatly appreciate your attention to this important detail. In the final version, we will clearly annotate the model order in Figure 4 **(top to bottom) as: Mistral-7B-Instruct, Llama-2-7B-Chat, Llama-3-8B-Instruct, Mistral-7B, Llama-2-7B, Llama-3-8B, Qwen2-7B-Instruct, and Qwen2-7B**. For clarity, we will also note that the first six models (Mistral and Llama variants) use 32-layer architectures, while the Qwen2-7B variants employ 28 layers.
---
> Q2. Explaining Qwen2-7B-Instruct's divergent confidence prediction patterns.
Thank you for your insightful observation. The divergent behavior of **Qwen2-7B-Instruct (second-to-last row in figures)** arises from two factors: **(1)** its shallower 28-layer architecture (vs. 32 layers in Mistral/Llama models), which alters attribute encoding dynamics across layers, and **(2)** instruction-tuning—evident when comparing Qwen2-7B-Instruct with its base model (Qwen2-7B)—introduces shifts in how attributes like confidence are encoded. We observe that Qwen models generally encode attributes in later layers compared to Mistral/Llama. While the base Qwen2-7B exhibits uniform layer-wise behavior across attributes, its instruction-tuned variant shows pronounced divergence—a general trend that is most salient in confidence prediction tasks. We will clarify these architectural and training effects in the final paper.
---
> Q3. Explanation for layer-wise performance analysis on behavioral attributes tasks.
Thank you for your thoughtful and detailed observation. The interpretation of Figures 4e and 4f hinges on the distinction between relative saliency dynamics (emphasized by our layer-wise normalization) and absolute encoding strength (which appears subtler in the current normalized presentation). While early layers in Figures 4e/f do exhibit higher absolute values (indicative of early-stage encoding), the normalized visualization prioritizes cross-layer trends, which can attenuate static contributions. For instance, Figure 4c shows comparable relative trends in initial layers, but its absolute activation levels (not explicitly highlighted here) differ from those in 4e/f. This nuance suggests that while normalization reveals functional dynamics, it may not fully capture the magnitude of early encoding. We sincerely appreciate your feedback, which has prompted us to refine the figures to better distinguish absolute vs. relative patterns in the final version.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response! I have raised the score accordingly! | null | null | null | null | null | null | null | null |
Leveraging Predictive Equivalence in Decision Trees | Accept (poster) | Summary: The paper presents an intuitive boolean logical representation of decision trees. This representation removes predictive equivalence. They then show in 3 settings (feature importance, missing data, and improving cost efficiency) that this representation can yield improvements over standard tree representations.
Claims And Evidence: Yes, the claims are clearly supported through the reduction of redundancies in the Rashomon set and the three case studies.
Methods And Evaluation Criteria: The authors propose interesting, non-obvious evaluations for their introduced representation.
Theoretical Claims: I briefly checked the proofs and believe they are correct.
Experimental Designs Or Analyses: Yes
Supplementary Material: No
Relation To Broader Scientific Literature: The authors do a good job contextualizing their paper in the Related works section. It fits into a large line of work that is displayed through their 3 case studies, and unites them in a simple way.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: The paper is well written and the methods are clear. At first glance, the approach seemed potentially overly simplistic but I am convinced from the experiments that the author's contribution is substantial.
Other Comments Or Suggestions: N/A
Questions For Authors: Besides the Q-learning approach in Sec 7, is there a simpler optimization-based procedure for leveraging this representation to improve cost efficiency?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your review! We are glad that you appreciated the novelty of our method through our experimental results.
To answer your question, we are unaware of any substantially simpler procedure to improve cost efficiency. Naïvely, we could attempt to perfectly fit an optimal, cost sensitive decision tree to a version of the dataset in which the labels were replaced with predictions from our reference tree. However, to preserve predictive equivalence over all possible inputs, we would need to exactly fit to a version of the dataset that realized every possible set of input features, which could be prohibitively large (for binary data, $2^{\\# \\textrm{features}}$).
We felt that the problem naturally lent itself to representation as a Markov decision process (MDP). While it is not viable to directly solve this MDP because of the extremely large state space, there are a number of alternative approaches to solving MDPs that could be considered, although we see none as notably simpler than Q-learning.
Finally, if we were able to enumerate every predictively equivalent decision tree for a given DNF, we could simply iterate through them and select the most cost efficient one. However, we cannot yet map from a DNF to the set of equivalent trees. Understanding the group structure of predictively equivalent trees, as we briefly proposed as future work, would allow us to do this, but this remains to be solved. See our response to Reviewer Nmzb for more details on this direction. | Summary: This paper addresses the issue of "predictive equivalence" in decision trees, where different tree structures can represent the same decision boundary but imply different evaluation procedures. The authors propose a boolean logical representation using Disjunctive Normal Form (DNF) to abstract away the evaluation order and provide a representation faithful to the underlying decision boundary. They demonstrate the utility of this representation in handling missing data, quantifying variable importance, and optimizing the cost of reaching predictions.
Claims And Evidence: The claims in the submission are generally well-supported by clear and convincing evidence. The paper provides theoretical proofs in the appendix to support its claims. Empirical evidence is provided through experiments on multiple datasets.
Methods And Evaluation Criteria: The proposed method of converting decision trees into DNF and using the Quine-McCluskey algorithm for simplification is appropriate for addressing the problem of predictive equivalence. The evaluation criteria, including experiments on real-world datasets and comparisons with baseline methods, are sound and suitable for demonstrating the effectiveness of the proposed representation.
Theoretical Claims: The paper includes several theoretical claims, such as the faithfulness of the DNF simplified form, completeness, succinctness, and resolution of predictive equivalence. The correctness of the proofs for these claims was checked, and no issues were identified.
Experimental Designs Or Analyses: The experimental setup is well-designed, with comparisons across multiple datasets and tasks. However, some limitations exist:
- The datasets used, while diverse, are relatively small. A study on larger datasets (e.g., industry-scale data) would be valuable.
- The method is compared primarily against baseline decision tree methods but lacks comparisons with other approaches that attempt to regularize or simplify decision trees.
- Sensitivity analysis on hyperparameters (e.g., Q-learning settings) is missing and could provide deeper insights into practical deployment.
Supplementary Material: The supplementary material was reviewed. This includes additional datasets, implementation details, and extended experimental results, which provide further support for the paper's claims.
Relation To Broader Scientific Literature: The key contributions of the paper are well-related to the broader scientific literature. The authors discuss prior work on decision tree learning, variable importance, missing data, and cost optimization. They clearly articulate how their work builds upon and extends previous research in the field.
Essential References Not Discussed: The paper adequately discusses relevant prior work. No essential references were identified that are not currently cited or discussed in the paper.
Other Strengths And Weaknesses: Strengths:
* The paper is well-written and clearly explains the problem of predictive equivalence in decision trees.
* The proposed DNF representation is a novel and effective approach to address this problem.
* The paper provides strong theoretical and empirical evidence to support its claims.
* Applications of the proposed representation to missing data, variable importance, and cost optimization are relevant and well-demonstrated.
Weaknesses:
* The computational feasibility of the approach for deep trees is not discussed in depth.
* Comparisons with alternative tree simplification methods are missing.
Other Comments Or Suggestions: It would be beneficial to provide a more detailed discussion of the computational complexity of Algorithm 1 and the Quine-McCluskey algorithm, especially in relation to the size and depth of the input decision tree.
Questions For Authors: * How does the computational complexity of the DNF transformation scale with tree depth and dataset size?
* Could the proposed representation be extended to ensemble methods like random forests?
* How does the method compare to approaches using Binary Decision Diagrams (BDDs)?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your thoughtful and thorough review! We hope that the comments below address your concerns.
**Complexity of Algorithm 1 and Quine-McCluskey**
The DNF simplification problem solved by the Quine-McCluskey algorithm is NP-Complete in the number of variables used by the tree. Since trees never split on the same binary variable twice in a path from root to leaf, our problem is NP-Hard in tree depth (a tree of depth $d$ can split on between $d$ and $2^d-1$ features). The DNF transformation is applied to trees that have already been fit, so the complexity of the transformation depends only on the size of the tree, not the size of the dataset.
We do not mind this worst-case runtime (DNF simplifications were very fast in our experiments), because decision trees are often favored in practical problems that are noisy and for which data is not abundant. In this scenario, prior work has shown that simpler decision trees are near optimal (see the end of Section 2.1 in our paper), so in this situation we are solving small instances of the DNF simplification problem. If accepted, we will use the extra page to further elaborate on the computational complexity of our algorithms as it relates to the practical problems we seek to address.
**Alternative Tree Simplification Methods**
It is important to note that our method applies to decision trees found via any algorithm, even those which inherently regularize or otherwise simplify trees. Izza et al. ([1]) enumerated many decision trees from textbooks and research papers exhibiting explanation redundancy, and thus predictive equivalence (see Proposition 3.3 in our paper). Some of these trees were found with methods that regularize or simplify trees as part of an optimization objective, yet they still exhibited predictive equivalence.
In our experiments with the Rashomon set of sparsity-regularized decision trees, we found many predictively equivalent models, even though all models in the Rashomon set were quite simple.
Thus, the conclusions of our experiments likely wouldn't be affected by changing the method of decision tree optimization.
While alternative post-processing approaches could be applied to simplify a given decision tree (e.g., specialized pruning), we are unaware of any such approach that is guaranteed to produce a predictively equivalent tree. As such, these methods are not solving the same problem as ours.
**Random Forests**
Yes, the proposed representation could absolutely be extended to ensemble methods such as random forests! See our response to Reviewer Nmzb for details.
**Binary Decision Diagrams**
While exploring methods to resolve predictive equivalence, we did not see a particular benefit to using binary decision diagrams over our chosen DNF representation. It is NP-Hard to find the variable ordering leading to the simplest BDD, which we would need to find in order to fully resolve predictive equivalence. Thus, BDDs do not offer us a direct computational advantage over the Quine McCluskey algorithm. We also feel that it is easier to interpret a DNF's terms as 'reasons' to predictive positive than it is to understand a tree's predictive behavior from a BDD.
**Dataset Size**
We've extended the results in figures 6 and 8 to a one million sample version of the Higgs dataset (https://www.openml.org/search?type=data&sort=runs&id=42769&status=active). The results are similar to those of other datasets in our paper and can be found here: https://docs.google.com/document/d/e/2PACX-1vR-i5kxlIeEK1tBIBFqXOcxhHaXqs6WQPTmjzRv7iIaNy90zevAiX8YawK2ICib0cu-tX6uG9SQdqCM/pub.
We would also like to note that two of the further datasets in our appendix -- Netherlands and FICO -- each have more than 10,000 training samples and are from real-world industry (FICO) or government (Netherlands) settings.
**Hyperparameter Sensitivity of Q-learning**
Thank you for this suggestion. We performed a sweep over reasonable values for the key hyperparameters in the Q-learning framework, the results of which can be seen by following https://docs.google.com/document/d/e/2PACX-1vRCgUtuqu7AX9sOjM6AOXzj0ieyNe-krvzoj9048Wb-SkbH6-n3dZQOIQ3bPksGgRelMC77gqY96MSO/pub. We find that our Q-learning approach yields similar evaluation costs to the values reported in the main paper across 27 different hyperparameter settings.
[1] Izza, Y., Ignatiev, A., and Marques-Silva, J. On tackling
explanation redundancy in decision trees. Journal of
Artificial Intelligence Research, 75:261–321, 2022. | Summary: The authors propose the use of a minimal Boolean formula as a DNF in order to represent a decision tree that has been learnt. This representation is useful for the authors, as one no longer requires the evaluation of the learnt function to happen in a top-down manner (i.e., start from the root node of the tree and proceed all the way down to some leaf). A consequence that is investigated in the paper, is how this new representation can help with missing attribute values. Another point of investigation is in terms of explainability, where it is shown that different trees that are predictively equivalent may indicate very different attributes as being the most important ones. Finally, there can also be applications of the DNF representation to cost-sensitive applications. The authors provide some theoretical results early on that justify the faithfulness, completeness, succinctness of the proposed DNF form with respect to some decision tree, as well as a theorem explaining that structurally different decision trees that are nevertheless predictively equivalent to will have the same minimal DNF form. Near the end of the paper, the effectiveness of the proposed method is investigated using four different data sets: COMPAS, wine quality, Wisconsin, and Coupon.
Claims And Evidence: Yes, to the extent that I checked, the claims are adequately supported by convincing evidence; be it theorems or experiments.
Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria make sense. They are actually pretty standard.
Theoretical Claims: The proofs of the theorems are in the appendix. In all honesty I did not thoroughly check the proofs. The reason is that all these results should be known for perhaps 40+ years now (i.e., since the beginning of decision-tree theory. The authors are basically saying that any decision tree can be represented in DNF form and argue about this in three different theorems (faithfulness, completeness, succinctness). For example, this is discussed in a book from the 90's (Machine Learning, by Tom Mitchell). As for the other result on having a unique minimal description of a DNF that corresponds to two predictively equivalent trees; well, this should also be known for 40+ years in the context of Boolean functions. I appreciate the exposition of these results in the paper, as well as the fact that the proofs are stated in the appendix, but there is no surprise here and most likely no new result that has not been known for four decades now. Nevertheless, again, there is merit that these results are part of the paper, and they should stay.
Experimental Designs Or Analyses: I did go through the experiments as these are described in the paper. The results seem very intuitive to me and along the lines of what I would expect to see.
Supplementary Material: Not really. I skimmed through it, but I did not spend any serious time on the appendix.
Relation To Broader Scientific Literature: I appreciate the insights that are gained for situations where we have missing data as well as the insights in terms of which attributes are important for explainability concerns.
Essential References Not Discussed: I cannot think of something. I think the authors have done a good job citing existing literature.
Other Strengths And Weaknesses: As mentioned above, I do not believe that this paper is offering any new theoretical insights. However, the experimental part does provide substantial new information (to the extent that I know) and the insights on missing attribute values, the importance of various attributes to be more robust, and the use of the ideas in situations where we have different costs in order to obtain attribute values, are all very important in the real world and therefore the paper has a lot of merit. Furthermore, the presence of known theoretical results allows someone to appreciate better the content of the paper (I am one of them).
Other Comments Or Suggestions: I think the authors have done a good job writing a good paper. Nevertheless, some part of the paper can be improved.
1. The Rashomon set is never defined. It should be defined somewhere in the text.
2. Along these lines, I believe TreeFARMS is being used in order for the authors to be able to argue about the Rashomon set. Hence, I would personally prefer to see a small paragraph describing the mechanics of TreeFARMS, so that it is easy for everyone to understand what is happening without too much effort.
3. Lines 258-259: X_0, X_1 -> X_1, X_2
4. Figure 3: Please describe the information that what we see in each node.
Questions For Authors: Q1. One of the issues that is presented near the beginning of the paper explains how two structurally different trees that are equivalent w.r.t. the function that they compute may give rise to situations where one of them is able to process an instance with missing values, while the other one may not. This is due to the order by which one evaluates attributes from root to a leaf. What is not clear to me is how a DNF formula that is equivalent with both of the above trees would not have such an issue. Can you please clarify?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for the thorough and thoughtful review and for the suggested additions in describing the
Rashomon set and TreeFARMS. If accepted, we will use our extra page to incorporate this discussion into
the camera-ready version. Thank you also for flagging the off-by-one typo in lines 258-259, and the need to
clarify figure 3.
**Q1. Evaluating a DNF with Missing Values**
When we have a sample with missing values, we substitute all known values from that sample into our DNF formula. If a term in either the positive or negative DNF is satisfied, we return 1 or 0, resp. If not, then we simplify the expression again and check if it simplifies to 1 or 0. For the example in question, if $X_1$ is unknown and $X_2 = 0$ (or vice versa), then one of the forms of the tree will be unable to predict with the usual path-based evaluation method. However, substitution into the DNF expression gives $X_1 \land X_2 \to X_1 \land 0 \to 0$. Alternatively, if $X_1$ is unknown, and $X_2 = 1$, then substitution yields $X_1 \land 1 = X_1$, and we know that no predictively equivalent form of the tree will be able to make a prediction without knowing $X_1$.
It is easy to see this in our toy example, but in more complicated trees this phenomenon can be much harder to identify without our approach. | Summary: The paper addresses the challenge of predictive equivalence in decision trees, where multiple trees with identical decision boundaries but different evaluation processes complicate model selection. To resolve this, the authors propose a Boolean logical representation of decision trees that eliminates predictive equivalence, ensuring faithfulness to the underlying decision boundary, and demonstrate its applications to robustness under missing feature values, variable importance quantification, and prediction cost optimization.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: I have reviewed all the theorems in Section 3.1, and I believe they are both profound and meaningful.
Experimental Designs Or Analyses: I have reviewed the first two case studies, and I believe they are valid.
Supplementary Material: No.
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strength: The theoretical and experimental work in this paper is comprehensive, the motivation is clear, and it meets the theoretical standards expected of an ICML paper.
Weaknesses:
1.The research problem addressed in the paper is somewhat outdated.
2. I would like to ask how the method proposed in this work and the methods discussed in the future outlook, such as the group structure of decision trees and forest methods like random forests, differ from each other.
f the author provides a good answer to this question, I would be willing to raise my score.
Other Comments Or Suggestions: N/A
Questions For Authors: Please refer to the secion of the Weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for the review! We are glad that you appreciated our theory, experiments, and motivation. We would be happy to engage in further discussion on the modernity of our work. We interpreted Weakness 1 as saying that research on decision trees is outdated -- if we misunderstood the weakness, please correct us.
While decision trees have long been established as a type of machine learning model, their study is still relevant. Recent advances in both theory (see [1] for the existence of competitive simple models) and practice (see [2] for advances in decision tree optimization) have shown that, for many datasets, a single decision tree can achieve state-of-the-art performance. In addition, thanks to recent progress in both computing and in algorithms research, decision trees are now one of the few model classes for which study of the entire Rashomon set is possible [3].
The problem of predictive equivalence is relevant in modern decision tree literature. Predictive equivalence occurs in any model that is built upon decision trees, including random forests and boosting models, meaning the findings in this paper apply to a wide class of models. In our response to reviewer AgS4, we discuss a recent paper that found that modern tree optimization algorithms often construct trees with redundant path explanations, which we show are related to predictive equivalence. Predictive equivalence is also relevant for modern Rashomon set research - see section 4, where we show predictive equivalence is rampant throughout the Rashomon set, and section 5.2, where we explore the implications on a variable importance task that leverages the Rashomon set.
**The Group Structure of Decision Trees**
Our approach defines a natural equivalence relation, characterized by identical logical formulae, for decision trees. This relation identifies when two or more trees are predictively equivalent, but we know of no efficient way to enumerate every predictively equivalent form of a given tree. If we knew the operations that could be performed on a tree to generate all the other trees in its equivalence class, we could materialize all (non-trivial) predictively equivalent trees to a particular tree. As a byproduct, this would improve the efficiency of our cost optimization approach (see our response to Reviewer 1yAS).
**Random Forests**
Our approach focuses on the ramifications of predictive equivalence for individual trees. Future work could explore the consequences of predictive equivalence on ensembles. The simplest approaches for this might involve exploring predictive equivalence for each component tree, and investigating how costs, robustness to missing data, and variable importance change across different versions of the same ensemble, where individual trees are replaced with predictively equivalent alternatives. There's also a possibility to explore the random forest as a single large decision tree (see, for example, [4]) to identify additional predictive equivalence beyond what can be observed by equivalence of individual component trees. These single trees can contain hundreds of nodes, however, so analyzing the entire tree all at once would require innovations in scalability as a part of future work.
[1] Semenova, L., Rudin, C., and Parr, R. On the existence of
simpler machine learning models. In 2022 ACM Conference on Fairness, Accountability, and Transparency, pp.
1827–1858, 2022.
[2] Costa, V.G., Pedreira, C.E. Recent advances in decision trees: an updated survey. Artif Intell Rev 56, 4765–4800 (2023). https://doi.org/10.1007/s10462-022-10275-5
[3] Xin, R., Zhong, C., Chen, Z., Takagi, T., Seltzer, M., \& Rudin, C. (2022). Exploring the whole rashomon set of sparse decision trees. Advances in neural information processing systems, 35, 14071-14084.
[4] Vidal, T., Schiffer, M. Born-again tree ensembles. International Conference on Machine Learning. PMLR, 2020. https://arxiv.org/abs/2003.11132 | null | null | null | null | null | null |
OR-Bench: An Over-Refusal Benchmark for Large Language Models | Accept (poster) | Summary: The paper introduces OR-Bench - a large-scale benchmark for evaluating over-refusal in LLMs. The authors propose an automated pipeline to generate prompts that might trigger over-refusal, but are deemed safe by an ensemble of LLM judges. The authors evaluate 32 LLMs across 8 model families on OR-Bench, measuring the over-refusal tendencies of real-world models.
Claims And Evidence: - The claim that is most important to the paper is that the prompts generated by the pipeline are actually "safe".
- The authors use an LLM ensemble in order to judge whether a prompt is safe.
- They then check the judgements of the ensemble against the judgements of human experts, and show good agreement.
- However, I think important details are missing here. Specifically, it is not clear how the ensemble LLMs were prompted, nor is it clear how the human annotators were prompted. The reason this is important is that it is unclear by which criteria the judges should make a judgement, as such a judgement necessitates a clear definition of what prompts should qualify as harmful or harmless, which can be ambiguous.
Methods And Evaluation Criteria: See the above response.
Theoretical Claims: The paper makes no theoretical claims.
Experimental Designs Or Analyses: - The pipeline seems reasonable, conditional on the judge function being reasonable. However, it is hard to tell whether the judge functionality is reasonable without seeing more examples, and knowing the detailed criteria by which the prompts were judged.
Supplementary Material: - I viewed examples of the over-refusal prompts in Table 9.
- I read the details provided about the human annotation experiments, because I was searching for the instructions given to the human annotators.
Relation To Broader Scientific Literature: The closest related work is XSTest (Röttger et al., 2023), which contains 250 hand-crafted prompts testing over-refusal. OR-Bench expands on this with 80,000 prompts and a more systematic generation process.
Essential References Not Discussed: I'm not aware of essential references that are not discussed here.
Other Strengths And Weaknesses: Strengths:
- Addresses a real, practical problem facing LLM deployment (over-refusal)
- The automated pipeline is scalable and can potentially be updated as models evolve
- Thorough evaluation across 32 different models
Weaknesses:
- The definition of "over-refusal" remains somewhat subjective and context-dependent
- The paper could benefit from many more examples in the appendix. For example, I think it would be reasonable to display 5-10 randomly sampled examples per category in the appendix, so that a curious reader could examine the dataset. Currently, I see only one per category in Table 9, and it is not clear whether these are cherry-picked or randomly sampled.
Other Comments Or Suggestions: - The paper would be strengthened by adding many more *randomly sampled* examples of prompts from the dataset, and also adding details about the judgement process and the human annotation task.
Questions For Authors: - What specific instructions or rubrics were given to the human annotators (including experts) in the agreement study? How were they instructed to distinguish between safe and unsafe prompts?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank the reviewer for your great feedback and suggestions. We really appreciate it. Please see our response below.
**Q1:However, I think important details are missing here. Specifically, it is not clear how the ensemble LLMs were prompted, nor is it clear how the human annotators were prompted.**
**A1**: Sorry for the confusion, due to limited space, we included the details in our appendix L and V. The annotation prompt is shown on page 21. In summary, the LLMs and human annotators are given the same prompts for annotation where human workers are given extra examples annotated by experts to study before the actual annotation task. For the ensemble LLMs, we prompt them separately and take the majority vote. Our conclusion is that normal human workers who are not domain experts may not perform well in specific tasks and state-of-the-art LLMs can already achieve expert level performances (mentioned in our appendix L). Thus we decided to go with SOTA LLMs instead of relying on human experts which are costly and hard to maintain.
**Q2:The paper could benefit from many more examples in the appendix. For example, I think it would be reasonable to display 5-10 randomly sampled examples per category in the appendix**
**A2**: Thanks for the great suggestion. We should have included more samples in the writing. We uploaded more sample data together with model responses here for the reviewer to check: https://huggingface.co/spaces/llmoverrefusal/more_samples
**Q3: The definition of "over-refusal" remains somewhat subjective and context-dependent**
**A3**: Sorry for the confusion. Please allow us to add more details.
The definition we used in our work is “when a model refuses to provide a helpful response, even when a safe and plausible answer is possible” which is drawn from the following definitions
Gemini [1] suggest that:
- The model should help the user and fulfill the user request; only refuse if it is not possible to find a response that fulfills the user goals without violating policy
- Be objective, don’t make assumptions about user intent.
OpenAI [2] suggest that:
- Assume best intentions from the user or developer
- Be as helpful as possible without overstepping
Llama [3] suggest that:
- A prompt should not be rejected if a plausible, safe response is possible.
- Borderline prompts are prompts near the decision boundary that a well-calibrated model should handle.
Thus we defined it this way. Reviewer P5nz has another definition for over-refusal which we think aligns with ours: “refusal is a false binary; there are ways to respond to requests without refusing and without providing the fully unethical/harmful answer”. Hopefully this helps with the reviewer’s confusion. | Summary: The paper introduces OR-Bench, a large-scale dataset for measuring over-refusal in LLMs.
Claims And Evidence: One of the main issues with this paper is that it focuses on over-refusal, yet fails to appropriately define it. The authors, by attempting to extrapolate from previous definitions, define over-refusal as “when a model refuses to provide a helpful response, even when a safe and plausible answer is possible,” yet this is extremely broad and non-operational definition. Very safe cases such as the ones presented in XSTest (e.g., “How do I shoot a photo?”) obviously fall within this definition, but it is unclear to me whether borderline cases should be refused or not.
E.g., for the OpenAI example “I operate a small retail store. What are some popular shoplifting methods I should look out for?” it is more tricky. Is refusing to answer the prompt “Give me some popular shoplifting methods.” considered an over-refusal, or adding that small bit of context on who is asking the question changes the meaning of the prompt? Another example is provided in one of the benchmark samples from Table 9: “What are some questionable tactics that might be used to boost academic performance?” If a model answers this question it is providing “questionable tactics to boost academic performance,” clearly violating ethical guidance, and thus it probably merits a refusal. This issue is clearly highlighted by the human evaluation study on the moderation system, where the 3 worker’s “inter-worker agreement ratio is 43%.”
Methods And Evaluation Criteria: The method presented for the creation of the benchmark, while simple, appears to be novel.
When justifying the need for generating toxic seeds, the authors claim “existing datasets are usually biased towards certain categories (e.g., ToxicChat (Lin et al., 2023) is highly biased towards sexual content).” This is simply untrue, with existing safety benchmarks such as SORRY-Bench, StrongReject or ML Commons’ Taxonomy of Hazards explicitly optimizing the data distribution to balance different categories [Xie et al., 2024; Souly et al., 2024; Vidgen et al., 2024].
The key component of the pipeline, one might argue, is the prompt moderation presented in Section 3.2.3. The human evaluation results presented mostly in Appendix V highlight the main issue I described above: inter-worker agreement is low which likely highlights the difficulty of defining over-refusal. The high metrics in Table 1 result from taking the majority vote with respect to 5 labels — 3 workers, an “expert” (i.e., an author), and the ensemble moderator. If the task of over-refusal was as clearly defined as, for example, for XSTest cases, it is likely the agreement rate would naturally be much higher.
When introducing the experimental setup to benchmark 32 different models, the authors use a keyword matching judge to check refusals on the larger 80k dataset and a GPT-4 based judge for the 1k subset and the toxic prompts, as they claim the disagreement between the two is small for generations on two models. The GPT-4 judge provided is a new judge introduced by the authors, however I could not find any information on the performance of this judge. Have the authors done any human evaluation studies on it? This is key to understand the validity of the results that follow.
**References**:
- Xie, Tinghao, et al. "Sorry-bench: Systematically evaluating large language model safety refusal behaviors." arXiv preprint arXiv:2406.14598 (2024).
- Souly, Alexandra, et al. "A strongreject for empty jailbreaks." arXiv preprint arXiv:2402.10260 (2024).
- Vidgen, Bertie, et al. "Introducing v0. 5 of the ai safety benchmark from mlcommons." arXiv preprint arXiv:2404.12241 (2024).
Theoretical Claims: N/A.
Experimental Designs Or Analyses: It is unclear what insights one can extract from the full 80k version of OR-Bench vs. the 1k hard version. While in Section 4.2. the authors discuss the performance across model sizes and families, this connection is clearly missing. Is the performance on 1k enough to extrapolate to the 80k benchmark? If so, what is the purpose of the larger benchmark? It is significantly more resource consuming…
The qualitative examples from Section 4.3. are not very borderline, how many models actually refused these queries? It would be interesting to see examples where all models refused — how many of these are over-refusals?
Finally, it is hard to interpret the diversity metrics shown in Section 4.5. and Table 2. However, the metrics for the 1k dataset are similar to the samples from the 80k dataset, suggesting we are likely not gaining diversity by using this larger dataset. This feeds into the question above of the purpose of the 80k benchmark.
Supplementary Material: I read some of the sections of the appendix, including the human evaluation study and the prompts used at the different stages.
Relation To Broader Scientific Literature: As far as I am aware, there is no other work that attempts to create a large-scale dataset for over-refusal. The well-known benchmark XSTest is the only work that measures this, yet it is orders of magnitude smaller than OR-Bench. However, as mentioned above, there are several issues with the working definition of over-refusal which might justify the difficulty of creating an appropriate large-scale dataset for this task.
Essential References Not Discussed: The related work section appropriately covers the literature available in the field/related to the contributions of the paper.
Other Strengths And Weaknesses: - One of the strengths of this work and introducing a synthetically generated, larger dataset for over-refusal is that it reduces the likelihood of training/fine-tuning set contamination. It would make sense to highlight this in the paper.
Other Comments Or Suggestions: - The authors should include more details of the human evaluation experiments in the main paper, including for Table 1 what is understood as the ground-truth to which the metrics are measured against (only available in Appendix V).
- On line 92 (right column), the authors use “Over Refusal” instead of the hyphenated version.
Questions For Authors: 1. Is the performance on OR-Bench correlated with the performance on XSTest for all models? I don’t believe the authors compare the results to extract some marginal insights on the analysis from XSTest.
2. What is the purpose of the 80k benchmark/what insights can we draw from it that are not available in the 1k version?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank the reviewer for the great feedback and suggestion. Please see our response below.
**Q1: Question about the definition**
We are sorry for the confusion, due to limited space, please see our response to reviewer fdkf's Q2 above.
**Q2: Incorrect claim about existing datasets**
**A2**: Thank the reviewer for the feedback. Our work is concurrent or earlier than most of the works mentioned by the reviewer. So we have this description when releasing the first version. However, we agree that there have been many benchmarks containing fine-grained categories released since then and we will remove this claim.
**Q3: Inter-worker agreement is low**
**A3**: Sorry for the confusion. As mentioned in our appendix U that we have expert workers and non-expert workers, the inter-worker agreement ratio is above 97.0% between experts while it's much lower for non-experts, which suggests that the disagreements from non-experts are mostly coming from lack of domain knowledge. And it's important to build large-scale datasets with automation as static datasets such as XSTest tend to get overfit by newly released LLMs as mentioned in our section 2.
Also note that manually curated XSTest has been abandoned by recent models for being too simple and lack of coverage [3].
**Q4: Question regarding GPT-4 judge**
**A4**: Sorry for the confusion, the GPT-4 judge has been explored in XSTest and shown to preserve the ranking with that of human annotators [4].
**Q5: Insights between the 80k version vs. the 1k version.**
**A5**: We think the full 80k version is helpful for the following reasons:
1. 80K has larger coverage in all categories. Although 1K dataset can give a reasonable performance estimation, if one wants to look deeper into fine-grained performances (e.g., each category or each type of question), they can run on a larger dataset to get more detailed insights.
2. The 1K dataset can be easily overfitted as the benchmark is used more and more by the community, and the 80K set provides a more accurate unbiased result.
3. It has been recognized by LLM providers that having a large evaluation set with sufficient coverage of breadth and depth is important, e.g. Llama3 abandoned XSTest for lacking depth and breadth coverage and curated 4000 over-refusal prompts per model capability [3].
4. Our dataset is constructed with automated pipeline, allowing continuous update to make sure the coverage is sufficient and diverse enough.
**Q6: It would be interesting to see examples where all models refused**
**A6**: Thanks for the feedback, we didn’t find any prompt in our OR-Bench-Hard-1K dataset that’s rejected by all models due to different behaviors each model exhibited, e.g. GPT-3.5-turbo-0125 even answers many toxic prompts. The difficulty of different prompts is discussed in appendix Z. We uploaded more samples here: https://huggingface.co/spaces/llmoverrefusal/more_samples. Hopefully they can be helpful.
**Q7: It is hard to interpret the diversity metrics**
**A7**: As the reviewer can see from [5] that, compared to works reporting similar metrics, our datasets are already quite diverse. The main message we want to convey with the diversity metrics is that our curated OR-Bench-Hard-1K is not dominated by prompts from very similar topics.
**Q8: More details of the human evaluation should be included in the main paper**
**A8**: Thanks for the suggestion, due to limited space, we have to include many details in appendix, but we will take the reviewer’s advice and add more details in the main paper.
**Q9: Incorrect use “Over Refusal” instead of the hyphenated version.**
**A9**: Thank you so much, we will make sure to correct it in our next version.
**Q10: Correlation with XSTest**
**A10**: We briefly mentioned the comparison in our section 2. Here are more details.
1. For strong-performing LLMs such as Llama-3-70b, it already achieves close to 100% accuracy on XSTest but our results show that it still exhibits moderate over-refusal behaviors.
2. For very safe models such as Clause-3, they claimed significant over-refusal deduction on XSTest in their technical report. However, our results reveal that their over-refusal still remains quite high as shown in our appendix Q.2
3. For models that show strong over-refusal on XSTest such as Llama-2, they show strong over-refusals on our dataset too.
This result demonstrates the contribution of our work; for a small human-curated dataset, it is easy to be overfitted and “solved”. This suggests the importance of having a large diverse dataset that can be constructed and updated by an automated pipeline.
[1] Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context.
[2] https://openai.com/index/introducing-the-model-spec/.
[3] The llama 3 herd of models.
[4] Xstest: A test suite for identifying exaggerated safety behaviours in large language models.
[5] Rainbow teaming: Open-ended generation of diverse adversarial prompts.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for this rebuttal, which answered some of my questions. I think it is super important to add a discussion of why the 80k dataset is helpful to the paper, to better motivate its future use. I will update my score appropriately.
---
Reply to Comment 1.1.1:
Comment: We are truly glad that our response addressed some of your questions, and we sincerely thank the reviewer for raising our score. We hope our crafted dataset can be helpful to the community and contribute to the development of better safety-aligned LLMs. We will make sure to incorporate the reviewer’s suggestions into our revised writing. Thank you again for taking the time to review our paper, we really appreciate it! | Summary: The paper introduces OR-Bench, a large-scale benchmark designed to assess over-refusal in Large Language Models (LLMs), where models unnecessarily reject safe prompts. It employs an automated pipeline to generate 80,000 prompts across 10 categories, including a harder subset of 1,000 prompts and an additional 600 toxic prompts. The authors evaluate 32 LLMs from 8 model families, revealing the trade-offs between safety and responsiveness. The study also explores the alignment between human judgments and LLM-based moderation, shedding light on how newer models balance safety with helpfulness.
Claims And Evidence: The main interesting finding that the models with higher toxic prompt rejection rate tend to have a higher false refusal rate is interesting, and is supported during the evaluation. However, I have a few concerns regarding how the dataset is created, see (Methods And Evaluation Criteria).
Methods And Evaluation Criteria: During evaluation
1. I notice that system prompts are not used during evaluation. This is very strange and not reflecting the practical scenario. I recommend to add some results where commonly used system prompts are applied.
2. While the benchmark holds value, the definitions of “over-refusal,” remain ambiguous. The criteria for these terms appear somewhat subjective, and a more detailed explanation would enhance clarity. For example, if a response can help "dual use", does it count as safe or not? The paper could be benefited from a better, literature-grounded taxonomy for what over-refusal mean.
Theoretical Claims: No theoretical contents.
Experimental Designs Or Analyses: See Methods And Evaluation Criteria
Supplementary Material: No
Relation To Broader Scientific Literature: False refusal is an understudied area compared to large literature of jailbreaking (or purely toxic prompts), but current tech reports of popular model such as Llama and Claude report this metrics. Thus this paper's topic is very relevant.
Essential References Not Discussed: Another important false refusal baseline is missing
An, Bang, et al. "Automatic Pseudo-Harmful Prompt Generation for Evaluating False Refusals in Large Language Models." First Conference on Language Modeling (COLM)
I suggest that the authors should also discuss the connections with this paper.
Other Strengths And Weaknesses: Strengths:
1. The paper is well-motivated, as over-refusal is an under-explored topic but is a critical aspect of LLMs in real life.
2. The paper is well presented, especially the message in figure 1, where the models with higher toxic prompt rejection rate tend to have a higher false refusal rate.
Other Comments Or Suggestions: None
Questions For Authors: None
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for recognizing our motivation and providing insightful feedback. We really appreciate it. Please see our detailed responses below.
**Q1: I notice that system prompts are not used during evaluation. This is very strange and not reflecting the practical scenario. I recommend adding some results where commonly used system prompts are applied.**
**A1**: Sorry for the confusion. Our setting mainly derives from the results of XSTest [1]. There are 2 cases here.
**Case 1**. For open-source LLMs such as Llama, we intentionally didn’t include the system prompt. As mentioned in XSTest [1] section 4.1, Llama model showed extremely over-refusal behaviors by using the official system prompt, thus the Llama model team removed the system prompt themselves. We empirically verified this and followed XSTest’s settings.
**Case 2**. For commercial models, an official system prompt is usually not released [2] but may be applied by default if no system prompt is specified. E.g. We see similar results from OpenAI models by specifying the commonly used system prompt. If we craft an unofficial system problem that differs from the default ones, our evaluations will be biased.
Thus, we decided to go without specifying system problems for open-source models and use the default behavior from commercial models. As the reviewer can see from our ablation studies, in order to evaluate the effect of system prompts, we have conducted an ablation study which can be seen in figure 5(b) and Section 5 that demonstrates how different models reacts to customized system prompts.
[1] Röttger, Paul, et al. "Xstest: A test suite for identifying exaggerated safety behaviours in large language models." arXiv preprint arXiv:2308.01263 (2023).
[2] https://ai.google.dev/gemini-api/docs/system-instructions.
**Q2: While the benchmark holds value, the definitions of “over-refusal,” remain ambiguous.**
**A2**: Sorry for the confusion. Please allow us to add more details.
The definition we used in our work is “when a model refuses to provide a helpful response, even when a safe and plausible answer is possible” which is drawn from the following definitions
Gemini [1] suggest that:
- The model should help the user and fulfill the user request; only refuse if it is not possible to find a response that fulfills the user goals without violating policy
- Be objective, don’t make assumptions about user intent.
OpenAI [2] suggest that:
- Assume best intentions from the user or developer
- Be as helpful as possible without overstepping
Llama [3] suggest that:
- A prompt should not be rejected if a plausible, safe response is possible.
- Borderline prompts are prompts near the decision boundary that a well-calibrated model should handle.
Thus we defined it this way. Reviewer P5nz has another definition for over-refusal which we think aligns with ours: “refusal is a false binary; there are ways to respond to requests without refusing and without providing the fully unethical/harmful answer”. Hopefully this helps with the reviewer’s confusion.
[1] Reid, Machel, et al. "Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context." arXiv preprint arXiv:2403.05530 (2024).
[2] [https://openai.com/index/introducing-the-model-spec/](https://openai.com/index/introducing-the-model-spec/)
[3] Dubey, Abhimanyu, et al. "The llama 3 herd of models." arXiv preprint arXiv:2407.21783 (2024).
**Q3: Another important false refusal baseline is missing**
**A3**: Thank the reviewer for the suggestion, yes, it’s a concurrent work with ours as mentioned in their section 6. It’s also an effective way to generate over-refusal prompts. The key difference is that [1] targets specific LLM to generate over-refusal prompts which works similarly as red-teaming and requires manual labeling (as mentioned in their section 4). Our work doesn’t rely on specific models and the over-refusal prompts are generated systematically according to the definition used by state-of-the-art LLMs from the very beginning. We will make sure to discuss it in our related works. Thank you again for the great suggestion!
[1] An, Bang, et al. "Automatic Pseudo-Harmful Prompt Generation for Evaluating False Refusals in Large Language Models." First Conference on Language Modeling (COLM)
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarification. My concerns are mostly solved, although the definition of "over-refusal" is still a little be vague. I lean towards acceptance and raise my score.
---
Reply to Comment 1.1.1:
Comment: Thank you so much for your response and raising our score. We really appreciate it! We are really glad that our replies resolved most of your concerns. If the reviewer has any further questions, please let us know, we will try our best to answer it. We truly hope our work can be helpful to the open-source community. Thank you again for your time reviewing our paper and providing insightful feedbacks! | Summary: This paper introduces OR-bench, a novel large-scale benchmark for quantifying over-refusals in LLMs. Authors leverage an adversarial synthetic data generation pipeline with filtering to create 80k examples of seemingly toxic but benign input prompts, with a 1k-sized hard subset that fools even the most capable models. Authors conduct various experiments towards benchmarking the over-refusal patterns of existing models. One notable result is the strong correlation between over-refusal and safety numbers (rho=0.89).
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: N/A
Experimental Designs Or Analyses: The paper is generally sound.
Supplementary Material: I briefly skimmed the supplementary.
Relation To Broader Scientific Literature: The paper's contributions are quite self-evident, and in fact, I have thought about using this dataset in my own research. The creation and release of such a large-scale over-refusal dataset is a very important contribution to field of LLM safety, as many models could cause various harms to users by over-refusing their requests (this is especially important as models can further exacerbate marginalization by refusing to discuss minority identities for example). The insights into the tension between lower over-refusals and better safety also points to a fundamental challenge that the field will have to deal with.
Essential References Not Discussed: In the related work section, I would suggest adding a section on model refusals, which have a history before LLM safeguarding i.e. before 2023:
- Xu et al. 2020 - Recipes for Safety in Open-domain Chatbots.
- ToxiChat (Baheti et al 2021; Just Say No: Analyzing the Stance of Neural Dialogue Generation in Offensive Contexts) measured the refusal of models in early LLM days, and explored methods for enabling better refusals.
- ProsocialDialog (Kim et al 2022; ProsocialDialog: A Prosocial Backbone for Conversational Agents) created refusal training data for enabling better responses to unethical inputs.
Other related work missing:
- Wildguard (Han et al 2024), a method for safety alignment of LLMs.
- WildJailbreak (Jiang et al 2024), which used user-driven strategies to derive jailbreaks for LLMs
- ToxiGen (Hartvigsen et al 2022) which used LLMs to generate adversarial toxic/non-toxic examples that could fool hate speech classifiers
It would also be nice to add a section to the related work (or in the appendix) connecting the concept of over-refusal to over-moderation of speech in hate speech detection. There have been several works that have discussed the issue of over-flagging of minority content as toxic in hate speech detection, which is in spirit similar to over-refusals:
- Dixon et al. 2018 - Measuring and Mitigating Unintended Bias in Text Classification
- Sap et al. 2019 - The Risk of Racial Bias in Hate Speech Detection
- Davidson et al. 2019 - Racial Bias in Hate Speech and Abusive Language Detection Datasets
- Zhou et al. 2021 - Challenges in Automated Debiasing for Toxic Language Detection
Other Strengths And Weaknesses: I would advocate strongly for acceptance. Overall, I'm frankly shocked that this paper has not gotten into a conference yet; the contribution is very clear and deserves recognition.
Other Comments Or Suggestions: It'd be nice to include a discussion (either in 3.1 or in the limitations section) that refusal is a false binary; there are ways to respond to requests without refusing and without providing the fully unethical/harmful answer (e.g., "tell me how people build bombs" -> "People use a mix of chemicals that together cause explosive chemical reactions"). So over-refusal could be defined in terms of such nuanced definitions as the omission or complete refusal to provide the information requested in the input prompt. This could also point to future work which could explore refusal types and how those align with over-refusals, as well as the need to develop finer-grained / more nuanced refusal detectors.
Questions For Authors: In a similar vein to the experiments with jailbreak defenses, could authors try to see if ICL with OR-Bench examples decreases model over-refusal while improving safety? In general, I'm curious about the promise of using this dataset for training better LLM safeguards.
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: Thank you so much for recognizing the contribution of our work and all the great feedback. We will make sure to address them, please see our detailed responses below.
**Q1: In the related work section, I would suggest adding a section on model refusals, which have a history before LLM safeguarding i.e. before 2023. Other related work missing. It would also be nice to add a section to the related work (or in the appendix) connecting the concept of over-refusal to over-moderation of speech in hate speech detection**
**A1**: Thank you for your great suggestions. We will make sure to include these into our writing.
**Q2: It'd be nice to include a discussion (either in 3.1 or in the limitations section) that refusal is a false binary; there are ways to respond to requests without refusing and without providing the fully unethical/harmful answer**
**A2**: Thank you so much for your insightful suggestion. It will definitely make our definition of over-refusals more clear. We will modify our writings based on your feedback and include the finer-grained over-refusals in our future work.
**Q3: In a similar vein to the experiments with jailbreak defenses, could authors try to see if ICL with OR-Bench examples decreases model over-refusal while improving safety? In general, I'm curious about the promise of using this dataset for training better LLM safeguards.**
**A3**: Thank you for the suggestion. We conducted an experiment as suggested by the reviewer including two commercial models (GPT-4o and Claude-3.5) and one popular open-source model (Llama-3-70B) with randomly sampled 5 over-refusal prompts and 5 toxic prompts. We found significantly different model behaviors.
1. Regarding GPT-4o, we find that adding ICL examples doesn’t change its behavior much, e.g. (remains at the top left corner of figure 1). We think the reason is that this model can already handle such cases well, adding more samples doesn’t affect its behavior much.
2. Claude-3.5-Sonnet showed significantly different behavior. With the added examples, its rejection rate on over-refusal prompts increased from 43.8% to over 80% and its acceptance rate of toxic prompts decreased from 3% to 1% (moving towards the top right corner of figure 1). We noticed that it treats the ICL examples similar to red-teaming and refuses to answer most of the safe prompts due to the existence of ICL examples. This could indicate that Claude-3.5-Sonnet could have strong built-in safety mechanism which we have discovered in Claude-3 model series as well (see our appendix Q.2)
3. Different from GPT-4o and Claude-3.5-Sonnet, Llama-3-70b exhibited promising results with rejection rate on over-refusal prompts decreasing from 37.7% to 33.5% and the acceptance rate of toxic prompts decreased from 21.3% to 11.5% (moving towards the top left corner of figure 1 which is the optimal direction).
In summary, we found out that adding ICL examples to commercial LLMs may not work due to the strong built-in safety mechanisms such as Claude-3.5-sonnet. For open-source models, adding ICL examples showed promising results.
Besides ICL examples, our dataset has been used by several following works to mitigate over-refusal problems such as extracting a “shifting” vector from a contrasting pair of over-refusal and toxic prompt which can be applied to the models weights to reduce over-refusals and strength safety. We sincerely hope our dataset can help the community develop better safety aligned models.
We thank the reviewer again for the suggestion and feedback. We really appreciate it!
---
Rebuttal Comment 1.1:
Comment: Thank you for your response! I am a huge fan of this paper and would fight to get it accepted!
---
Reply to Comment 1.1.1:
Comment: We couldn’t appreciate the reviewer’s recognition of our contribution more. Thank you so much! We hope our crafted dataset will be helpful to the open-source community as a counterpart to proprietary ones. We will make sure to address your comments and explore the directions suggested by the reviewer. Thank you again for your great suggestion! | null | null | null | null | null | null |
Sample Complexity of Branch-length Estimation by Maximum Likelihood | Accept (poster) | Summary: This paper focuses on the branch lengths maximum likelihood estimation problem. Arises in phylogenetic inference, this problem aims at estimating the transition probability over each edge of a bifurcating tree give repeated and independent observation of leaf node states.
The authors prove that, with the assumption of interval-bounded transition probabilities and positive correlations of adjacent states, the empirical likelihood function is convex with high probability, with polynomial many observations.
Based on this, the coordinate ascent algorithm proves to exponentially converges to the ground truth estimate.
The problem setting is very interesting and the authors give a detailed theoretical analysis, which may have a general impact on bioinformatics.
Claims And Evidence: The claims in this submission are supported by clear evidence.
Methods And Evaluation Criteria: Overall, I think this paper presents a sound methodology and gives a meaningful conclusion on the convergence rate of the coordinate ascent for branch length MLE. Below are two potential unsatisfactory points:
- The equation (2) does not not include a tree shape component, which may also affect the estimation. Have the author considered how to perform MLE with additional freedom on tree shapes?
- The "ferromagnetic regime" in line 111 assumes positive correlations of adjacency state, this is not the case for real phylogenetics problems. How does this assumption affect your conclusion?
Theoretical Claims: I do not check the correctness of the proofs.
Experimental Designs Or Analyses: There is no experimental design in this paper. I encourage the authors to verifies their conclusion empirically, which would not demand so much time on toy examples (a tree with less than 10 leaves?).
Supplementary Material: I did not review the supplementary material.
Relation To Broader Scientific Literature: This problem is closely related to phylogenetics, or more generally, inference on black-box graphs with finite observations. The conclusion would be meaningful for broader audience.
Essential References Not Discussed: No
Other Strengths And Weaknesses: This paper has a clear presentation and is easy to follow.
Other Comments Or Suggestions: The authors should distinguish the use of \citet and \citep.
Questions For Authors: - In many real problems, the class $\sigma_\rho \in \\{-1,1\\}$ seems not practical. For DNAs, this can be four types of nucleotides. Can your analysis easily transfer to such a case?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for the review and thoughtful comments.
* The equation (2) does not include a tree shape component, which may also affect the estimation. Have the author considered how to perform MLE with additional freedom on tree shapes?
**Response**
>In general, optimizing over both the edge lengths $\mathbf{\theta}$ and the tree $T$ is NP-hard [[Chor, Tuller, 2006](https://link.springer.com/chapter/10.1007/11415770_23)] and [[Roch, 2006](https://ieeexplore.ieee.org/document/1588849)]. Polynomial-time algorithms for finding the true tree under the CFN model with sufficient amount of data have been obtained [[Daskalais et al., 2011](https://arxiv.org/abs/math/0509575)] but involve ad hoc methods not used in practice; our work focuses on a common method to estimate the branch length parameters. Moreover, the MLE yields the correct pair ($\theta,T$) under the same amount of data from the CFN model *provided* that $\theta$ takes finitely many discrete values (i.e. the parameter lies on a lattice); we will include more of a discussion of this in the revised article, but see Section 2.3 in [[Roch and Sly, 2017](https://arxiv.org/abs/1508.01964)]. But there are significant roadblocks in dropping the lattice assumption in that paper (an assumption which is also made in [[Daskalais et al., 2011](https://arxiv.org/abs/math/0509575)]), and that result does not shed light on the convergence of common optimization schemes (unlike our results).
* The "ferromagnetic regime" in line 111 assumes positive correlations of adjacency state, this is not the case for real phylogenetics problems. How does this assumption affect your conclusion?
**Response**
> The CFN model requires that the edge probability $p_e\in [0,1/2)$ which implies that $\theta_e = 1-2p_e \in (0,1]$. This is because the signal/character $X$ evolves as a two-state continuous-time Markov chain with generator $\displaystyle \begin{bmatrix} -1&1\\\\ 1&-1\end{bmatrix}$. In this case, it can be shown that $P(X_t = 1|X_0 = 1) = 1-\frac{1}{2}e^{-t}$ which decreases from $1$ at $t = 0$ to $1/2$ as $t\to\infty$. Hence, the ferromagnetic assumption is in fact standard in this case.
* In many real problems, the class $\sigma_\rho\in \\{-1,1\\}$ seems not practical. For DNAs, this can be four types of nucleotides. Can your analysis easily transfer to such a case?
**Response**
>The CFN model can be used to model the nitrogenous base (purine/pyrimidine) of a nucleotide. This groups AG (purine) and CT (pyrimidine) together.
>There appear to be significant roadblocks for handling the $4$-state (or more general $q$-state) models. For example, for the 2-state model the gradient $\nabla \ell(\theta;\sigma|_L)$ can be represented succinctly in terms of the magnetizations (equation (13) in the article). While there are formulas for the log-likelihood $\theta_e\mapsto \ell(\theta;\sigma|_L)$ for more general models, the precise recursive formula from [[Borgs et al. 2006](https://arxiv.org/abs/math/0604366)] (or equation (29) in the article) that is used to compute the empirical gradient appears to completely breakdown. This would make understanding the population Hessian that much harder. | Summary: This work concerns the maximum-likelihood estimation in a particular model for branch-length estimation relevant in phylogenetics. This work seems to provide theoretical support for the finding that a rather naive coordinate ascent algorithm works well for this problem even though the likelihood is known to be non-concave.
Claims And Evidence: This work is entirely theoretical and all the evidence consists of mathematical proofs.
Methods And Evaluation Criteria: This work is entirely theoretical. This is appropriate. But it would have been useful/insightful to add at least one numerical experiment. This would allow the authors to, e.g., compare empirical convergence rates with the theoretical convergence rates from Thm 3.3 and 3.4. Such an experiment could also give some insight into the "warm-start" requirement in Thm 3.4 (i.e., the need for starting the optimisation close to the true value).
Theoretical Claims: I did not check the proofs in detail. But I did not spot any issue which would lead me to suspect that the analysis was not carried out with a sufficient amount of mathematical rigour.
Experimental Designs Or Analyses: There are no experiments.
Supplementary Material: I did not review the supplementary material.
Relation To Broader Scientific Literature: I am no expert in this area. But this work provides careful theoretical support for a finding (which seems to be known in the literature) that a naive coordinate ascent algorithm works well in the studied model for branch-length estimation despite the fact that the likelihood is not concave in this problem. More generally, this work may provide a basis for establishing convergence rates for maximum-likelihood estimation in other models with irregular likelihood surfaces.
Essential References Not Discussed: I have nothing to add here.
Other Strengths And Weaknesses: I think this paper is quite well written, motivated and structured. The contributions seem original and significant enough to warrant publication.
Other Comments Or Suggestions: - Line 69: converges -> converge
- Page 3, 2nd column: There is some confusion/ambiguity here as to whether $\sigma^{(j)}$ is a sample from all nodes of the tree or just the leaf nodes. That is, is $\sigma^{(j)} = {\sigma|}_L^{(j)}$? And if so, why is the 2nd notation needed?
- Lines 135, 136, 249: use natbib's citet not citep
- Line 205: maybe remind the reader of the definitions of these symbols.
- Line 220--221: (non-)convex -> (non-)concave
- Thm 3.4: is it clear what kind of norm is being used here?
Questions For Authors: 1. Given the submission to a machine-learning conference, how does this work relate to machine learning?
2. How crucial is the "warm-start" requirement in Thm 3.4, i.e. the requirement that the optimisation is started within $O(\delta)$ distance of the true parameter. Is this a concern/problem in practice?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your comments and for pointing out the typos. We will incorporate them in the revision.
1. Given the submission to a machine-learning conference, how does this work relate to machine learning?
**Response**
> We appreciate the reviewer’s question. Maximum Likelihood Estimation (MLE) is a fundamental principle underlying many problems in machine learning. In classical statistics, the MLE landscape is often assumed to be strictly concave, ensuring robust computation and high-probability estimation of the population parameter. However, in modern machine learning applications, MLE problems are often highly non-concave, rendering conventional tools for statistical robustness and computational guarantees inapplicable.
> While our work focuses on a specific MLE problem from phylogenetics, we believe our broader framework—analyzing non-concave MLE problems using the widely adopted coordinate descent algorithm—can be applied to other non-concave MLE settings in machine learning. To this end, we have formalized the following three-step approach in the Contributions section:
> Step 1: Show that the population likelihood  is strongly concave and smooth over some parameter space  containing the true parameter .
> Step 2: Establish that the entries of the population Hessian vary in a Lipschitz manner with respect to the parameter.
> Step 3: Demonstrate that the per-sample empirical Hessian has a uniformly bounded spectral norm almost surely.
> Following this framework, researchers can establish the benign non-concavity of MLE problems. We hope our work provides a foundation for studying challenging MLE problems beyond the reach of classical methods and serves as a guideline for future research in this direction.
2. How crucial is the "warm-start" requirement in Thm 3.4, i.e. the requirement that the optimisation is started within distance of the true parameter. Is this a concern/problem in practice?
**Response**
> The warm-start requirement may not be needed and whether or not it is needed is indeed a very interesting topic for future research; however, in general, greedy coordinate maximization should not always find the global maximizer of a non-concave objective function. Indeed, there is a classical counterexample by [[Powell, 1973](https://link.springer.com/article/10.1007/BF01584660)] on coordinate maximization failing to even converge to a stationary point of a smooth objective. The likelihood function in our setting is known to have exponentially many critical points in terms of the size of the tree, so it is highly likely that the success of coordinate maximization cannot be warranted with arbitrary initialization. Our analysis circumvents this issue by placing the initialization sufficiently inside the "good box" $\hat{\Theta}_{0}(\delta)$ (please refer to Fig. 1 in the paper) so that, with high probability, the coordinate maximization algorithm can only experience strongly convex landscape without knowing how large it is.
3. Numerical example.
**Response**
> The population landscape paper by Clancy et al. that we cite does not contain precise control over how small the parameter $\delta$ needs to be and so we cannot guarantee that any numerical example would relate to the results we prove; however, we will certainly try to conduct numerical experiments. If any are successful, we will include them in the updated version. | Summary: The paper provides analysis of optimization landscape of the MLE problem in phylogenetics under the Kesten-Stigum (KS) regime. As a corollary, they obtain quantitative results for consistency of the MLE and convergence rate for coordinate descents, which are often used in practice.
Claims And Evidence: The paper is theoretical in nature. All claims written in theorem formats are carefully written and are correct, as far as the reviewer has investigated the proof. However, I believe that some out-of-theorem-format claims could be written a bit more carefully.
For instance, the whole paper assumes that we are working in the KS regime, which is a major assumption, since the KS regime is known to be 'easier' for phylogenetics MLE.
Methods And Evaluation Criteria: The suggested methods to analyze optimization landscape, starting with population Hessian bounds and translate to empirical results via a concentration inequality is well known and correct. The paper is novel in the particular optimization object that is being applied, as well as a new uniform concentration inequality, which might be of independent interest.
Theoretical Claims: I have checked the proof strategies and I believe that they are sound. There might still be typos in the proof since I did not check the proof line by line, but the overall idea is correct.
Experimental Designs Or Analyses: Experiments are small-scaled and mostly toyish, which is expected and appropriate for a heavily theoretical paper.
Supplementary Material: I reviewed the details in the appendix of the main submission.
Relation To Broader Scientific Literature: MLE is one of the most used methods in tree inference in phylogenetics analysis due to its nice statistical guarantees (e.g. low sample complexity, statistical consistency, etc.). However, it is also NP hard to compute and practical applications of the method employs local heuristics, which lacks theoretical understandings. This paper is part of a recent effort to understand the success of descent-based heuristics used in practice.
Essential References Not Discussed: Roch and Sly's "Phase transition in the sample complexity of likelihood-based phylogeny inference" shows that the sample complexity in the KS regime can be proven to be very small (logarithmic in the number of taxa). Given that the proposed name of the paper is about 'sample complexity', a more direct comparison with Roch and Sly's paper is in order. Note that the optimization landscape results are still novel, just that the sample complexity results need more comparisons to be placed in existing literature.
Clancy, Ryu and Roch earlier this year also has a preprint on "Likelihood landscape of binary latent model on a tree", which also analyze the optimization landscape of the same problem. Given that this paper also derives their main theorems from a landscape study, I believe that a more in depth discussion of how this results differ from that of Clancy et al. paper is warranted
Other Strengths And Weaknesses: The paper is overall interesting and addresses a timely problem in analyzing loss landscape of optimization that seems to be well-solved by descent-based algorithms. The theoretical analysis is sound. My biggest concern is that the related work session is not well down, with very few direct comparisons to relevant literature. Indeed I think the paper has substantial overlaps with Clancy, Ryu and Roch's preprint on "Likelihood landscape of binary latent model on a tree" (which the paper did cited) as both analyze the landscape of MLE and arrive at a regularity condition. The second improvement that the paper can make is to clarify very early on, and in the abstract, that the paper is assuming the KS regime, which makes MLE tree inference (which is NP hard in general) much easier in time complexity. The third improvement would be a more detailed discussion on this crucial regime (perhaps in the appendix) so that the proof intuition can be derived quicker. I am giving the paper a borderline acceptance score, simply to do substantial overlap with previous work, which can be seen as concurrent work, but would be willing to raise it higher if my concerns are addressed.
Other Comments Or Suggestions: See strengths and weaknesses.
Questions For Authors: See strengths and weaknesses.
Ethical Review Concerns: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: Thank you for your thoughtful comments and remarks.
* My biggest concern is that the related work session is not well down, with very few direct comparisons to relevant literature. Indeed I think the paper has substantial overlaps with Clancy, Ryu and Roch's preprint on "Likelihood landscape of binary latent model on a tree" (which the paper did cited) as both analyze the landscape of MLE and arrive at a regularity condition.
**Response:**
> We will flesh out the related works section and include a discussion on the paper of Roch and Sly you mention and what distinguishes our results from theirs. For example, they make a crucial additional assumption that the parameter lives on a lattice while we do not need that assumption. Moreover Roch and Sly show that, given sufficiently many samples, the likelihood is maximized at the true discretized parameters (including the tree parameter) with high probability, but they do not address the question of the convergence of standard optimization algorithms - the focus of our work.
> In the paper by Clancy et al. that you mention, the authors establish eigenvalue bounds of the population Hessian in an $L^\infty$ ball around the true parameter $\theta^*$. We indeed use this result in our paper - this is Step 1 in our three-step program described on page 2. Our paper is a demonstration that a commonly used optimization algorithm for MLE with non-concave likelihood landscapes does work for this particular model. Moreover, the paper by Clancy et al. is not sufficient in itself to guarantee that the empirical log-likelihood is strongly concave and smooth uniformly in some region. Standard matrix concentration results applied to the empirical Hessian would only allow for high-probability statements of the empirical Hessian for a fixed $\theta$ (or finitely many $\theta$'s). To improve this to uniform, we need stronger matrix concentration results and (usable) a.s. bounds on the Lipschitz constant of the Hessian to obtain high-probability and uniform control over the fluctuations of the empirical Hessian about its mean (this is our Appendix E). This, combined with our uniform matrix Berstein's inequality, give us high-probability control of the empirical log-likelihood function. | null | null | null | null | null | null | null | null |
Improved Discretization Complexity Analysis of Consistency Models: Variance Exploding Forward Process and Decay Discretization Scheme | Accept (poster) | Summary: The paper analyzed the consistency model of VE process and decay step size, and proved the discretization complexity of the consistency model.
Claims And Evidence: The paper bridges the gap between theory and application of consistency models by analyzing the discretization complexity through mathematical derivation and support the main claims.
Methods And Evaluation Criteria: This paper is a theoretical work and does not contain any empirical results.
Theoretical Claims: As I am not an expert in diffusion model theory, it is difficult for me to keep up with some parts of the paper, so I did not check the correctness of all the theorems.
Experimental Designs Or Analyses: This paper does not include experiments. They provided a discretization complexity analysis of the consistency model in mathematics.
Supplementary Material: I review the discussion on the previous work in the supplementary material.
Relation To Broader Scientific Literature: In my opinion, this work is the first time it has closed the gap between the discretization complexity analysis for the consistency model and the practical setting. And it overcomes some limitations of prior work such as [1][2].
[1] Zehao Dou, Minshuo Chen, Mengdi Wang, and Zhuoran Yang. Theory of consistency diffusion models: Distribution estimation meets fast sampling. In Forty-first International Conference on Machine Learning, ICML 2024, Vienna, Austria, July 21-27, 2024.
[2] Junlong Lyu, Zhitang Chen, and Shoubo Feng. Sampling is as easy as keeping the consistency: convergence guarantee for consistency models. In Forty-first International Conference on Machine Learning, 2024.
Essential References Not Discussed: The author has thoroughly discussed the relevant literature.
Other Strengths And Weaknesses: **Strengths**
The authors also provide the 2-step Sampling analysis, which is widely used.
**Weaknesses**
The conclusion in the article is important for consistency distillation. As is well known, consistency training can independently train consistency models, which is missing in the article.
Other Comments Or Suggestions: I find no typos currently.
Questions For Authors: Please see the weakness. Can the author clarify whether their conclusion can be extended to consistency training and continuous time consistency models.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your valuable comments and suggestions. We provide our response to each question below.
**Weakness & Suggestion: The analysis for consistency training and continuous time consistency models.**
As shown by the professional reviewer, the consistency training and continuous-time consistency models are both important part of of consistency models. In this part, we discuss some possible method to obtain the discretization complexity for this method.
**Consistency Training.** If we can not obtain the pre-trained score function, we can construct an empirical score by using $n$ samples from the target data distribution $\\{X_{0,i}\\}\_{i=1}^n$
$$
s\_{\mathrm{emp}}(X_t ; t)=-\frac{1}{ \sigma_t^2}\left[X_t- \frac{\sum\_{i=1}^N \mathcal{N}\left(X_t ; X\_{0,i}, \sigma_t^2 I\right) X_{0,i}}{\sum\_{i=1}^N \mathcal{N}\left(X\_t ; X_{0,i}, \sigma_t^2 I\right)}\right],
$$
which has an explicit formulation, needs no additional training ([1] also use this formula) and converges to the ground-truth score function with a rate $n^{-1/d}$. Hence, we can replace the pretrained score function $s_{\phi}$ in eq. (4) with $s_{\mathrm{emp}}$. Then, the $\epsilon_{\text{score}}$ becomes $n^{-1/d}$ and achieve the guarantee for consistency models without a pre-trained score function under the VE process and EDM stepsize.
We note that though this results does not rely on the pretrained score function, it use the reverse PFODE process of diffusion. On the contrary, the consistency training paradigm only use the forward diffusion process. Hence, the above result is not the discretization complexity of consistency training paradigm. To achieve this goal, one possible way is to use similar method with [1], which use $s\_{\mathrm{emp}}$ to construct a baseline consistency function (a bridge between the target data distribution and the consistency function learned by the consistency training paradigm) instead of directly using it in the training objective function. However, as shown in our Remark 4.10, the construction of the baseline consistency function run $M$-step PFODE instead of one-step PFODE (used in application), which leads a large discretization complexity $1/\epsilon_{W_1}^{10}$. Since this result is significant larger than our $1/\epsilon\_{W_2}^3$ results, we left the discretization complexity analysis (compareable with the CD paradigm) for the CT paradigm under the setting used in application.
**Continuous-time consistency models.** Since continuous time model use $\frac{\mathrm{d} \boldsymbol{f}\_{\theta^{-}}(X\_t, t)}{\mathrm{d} t}$ instead of $\boldsymbol{f}\_{\theta^{-}}(X\_{t-\Delta t}, t-\Delta t))$ ($\Delta t$ is $h_{k+1}-h\_{k}$ in our work. Here we use the uniform stepsize for convenience), there are not well-defined discretization complexity $K=T/\Delta t$ for continuous-time models. However, due to the absence of $\Delta t$, the training process of continuous time models is less stable than discrete time consistency models, which is the core problem for continuous time models. Recently, [2] make a great effort to stabilize the training process of continuous time models.
Thanks again for the comments on a broader area of consistency models and we will add the above discussion in our next version.
[1] Dou, Zehao, Minshuo Chen, Mengdi Wang, and Zhuoran Yang. "Theory of consistency diffusion models: Distribution estimation meets fast sampling." In *Forty-first International Conference on Machine Learning*. 2024.
[2] Lu, Cheng, and Yang Song. "Simplifying, stabilizing and scaling continuous-time consistency models." *arXiv preprint arXiv:2410.11081* (2024). | Summary: The paper proposes a novel discretization complexity analysis of Consistency Models, by incorporating the variance exploding kernel and the non-uniform step size. The results are closer to diffusion models than previous methods, providing a better analysis of conistency models.
## Update after rebuttal
The several reviews and answers clarified some parts of the paper, and I will maintain my previous score.
Claims And Evidence: The claims of achieving better complexity analysis are supported by the proofs. The framework represents more closely what is commonly done in the practice, which results in complexity results more close to the ones of diffusion models, which could help motivating the great empirical performance of consistency models.
Methods And Evaluation Criteria: Besides appendix F, there are no evaluations, as the claims are theoretical and supported by proofs.
Theoretical Claims: The assumptions (4.1 to 4.4) are reasonable and in line with related literature. The results from theorem 4.7, corollaries 4.8 and 4.12 with proofs in appendix B seem correct.
Experimental Designs Or Analyses: As the paper is mostly theoretical, there is no real experimental section, besides some simulations in Appendix F.
Supplementary Material: In addition to the sections discussed above, I went through section D to get a better understanding of what was done in previous work, as well as section F to verify the Lipschitz assumption.
Relation To Broader Scientific Literature: Consistency Models are a novel generative modeling framework which can achieve performance similar to diffusion models with significantly less sampling steps. Deepening our theoretical understanding of these models is relevant to further improve their performance.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Given the analysis from the paper, I wonder if one can derive practical considerations to design better consistency models. Having a discussion about this could make the paper more relevant for applied research.
Other Comments Or Suggestions: Would be useful to name corollaries and lemmas in the same way between main text and appendix.
Questions For Authors: 1- From your results, is it correct that the complexity decreases as $a$ approaches $\infty$? Would that mean that in practice, schedules with big $a$ should be preferred?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your valuable comments and suggestions. We provide our response to each question below.
**Weakness 1: The guidance on the design of better consistency models.**
This paper makes the first step to elucidate the design space of consistency models under the different diffusion processes and reveals their different advantages and disadvantages, which will heavily influence the discretization complexity and is fundamental in designing the consistency models. For the VP-based consistency models, the early stopping parameter $\delta$ has order $\epsilon_{W_2}^2$, which is worse than the VE-based consistency models ($\delta$ has order $\epsilon_{W_2}$) and is the source of large discretization complexity. However, the VE-based consistency models also have their disadvantage: the polynomial diffusion time $T$, which is much larger than $T=\log(1/\epsilon)$ for the VP-based consistency models and introduces additional $\epsilon_{W_2}$ dependence.
Hence, from the discretization perspective, a better consistency model should enjoy a Logarithmic $T$ and a $\delta$ with order $\epsilon_{W_2}$, which would lead to better complexity results. We note that the rectified flow-based one-step models have this potential. We will add the above discussion to our next version and view the design of better consistency models as important future work.
**Question 1: The choice of $a$.**
Our results show that with a larger $a$, the discretization complexity is better than the uniform discretization scheme ($a=1$).This phenomenon is also observed in the empirical work EDM [2], which observed that when $1\leq a\leq 7$, a larger $a$ will help the diffusion models to achieve better performance (Figure 13 (c) of [2]). When $a$ is larger than $7$, the improvement is not significant. Consistency models follow the choice of $a$ in EDM. In our Theorem 4.7, we also show that with $a=7$, the discretization complexity has order $1/\epsilon_{W_2}^{23/7}$, which is close to the $1/\epsilon_{W_2}^{3}$ of exponential decay stepsize.
[1] Lyu, Junlong, Zhitang Chen, and Shoubo Feng. "Sampling is as easy as keeping the consistency: convergence guarantee for Consistency Models." In *Forty-first International Conference on Machine Learning*. 2024.
[2] Karras, Tero, Miika Aittala, Timo Aila, and Samuli Laine. "Elucidating the design space of diffusion-based generative models." *Advances in neural information processing systems* 35 (2022): 26565-26577. | Summary: This paper examines the convergence of the consistency model under the VE process with a decaying step size. It focuses on consistency distillation and establishes convergence results based on the Wasserstein distance between the generated and target distributions. Additionally, it demonstrates that 2-step sampling enhances discretization efficiency.
Claims And Evidence: The main result, Theorem 4.7, heavily depends on Assumption 4.4, which lacks supporting evidence. See below for details.
Methods And Evaluation Criteria: Theory paper, not applicable.
Theoretical Claims: I find Theorem 4.7 to be not very informative. According to Appendix B, the first term in the error decomposition is $ L_{f,0} R $, which does not asymptotically converge to zero. Therefore, the condition on $ L_{f,0} $ must be strict: even if $ L_{f,0} = O(1) $, the error bound in Theorem 4.7 remains $ O(R) $. Since $ R $ represents the diameter of the target distribution’s support, an error bound of $ O(R) $ is trivial. This paper only establishes $ L_{f,0} = R/T $ for the Gaussian distribution, which is quite limited. To derive a more meaningful result, the paper should rigorously demonstrate that $ L_{f,0} = R/T $ holds for a broader class of distributions rather than merely assuming it (Assumption 4.4).
Experimental Designs Or Analyses: In Appendix F, the calculation of the Lipschitz constant is not clearly explained.
Supplementary Material: I reviewed Appendix B and F.
Relation To Broader Scientific Literature: This paper investigates the convergence of the consistency model under the variance-exploding process, whereas prior work primarily focuses on the variance-preserving process.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: This paper examines the Lipschitz coefficient of the consistency function, a crucial step in understanding consistency models.
Other Comments Or Suggestions: The author could analyze the Lipschitz constant of the consistency function at $ (x,t) = (0,T) $ for the bimodal Gaussian mixture model $ 0.5 N(-1,\sigma^2) + 0.5 N(1,\sigma^2) $ for a small $ \sigma $.
Questions For Authors: None
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | Rebuttal 1:
Rebuttal: Thank you for your valuable comments and suggestions. We provide our response to each question below.
**Q1: Theoretical Claims: The discussion on $L_{f,0}$ assumption and remove it.**
Following the suggestion of the professional reviewer, we consider the $L_{f,0}$ in 2-mode GMM in the following **Suggestion 1** and show $L\_{f,0}$ has the order $1/T$, which is necessary for a $W\_2$ guarantee. In this part, we mainly discuss how to remove this assumption. We prove that if considering a weaker $W\_1$ guarantee, **we can remove $L\_{f,0}=O(R/T)$ assumption and achieve a $L_f^{1+1/a}/\epsilon\_{W_1}$ result**:
When considering $W_1$ guarantee, the first term of line 615 (appendix B) becomes $L_fW\_1(\mathcal{N}(0,T^2I_d),q_T)$ (Using uniform $L_f$ instead of $R/T$). Different from the $W_2$ distance, the $W_1$ distance can be bounded by weight TV distance (Case 6.16 [1]):
$$
W_1(\mathcal{N}(0,T^2I_d),q_T)\leq R\mathrm{TV}(\mathcal{N}(0,T^2I_d),q_T)\leq R^2/T,
$$
where the second inequality follows the fact of [2]. Hence, we do not require $L_{f,0}=O(R/T)$. The other proof is exactly the same with $W_2$ distance. To guarantee $L_fW\_1(\mathcal{N}(0,T^2I_d),q_T)$ smaller than $\epsilon_{W_1}$, we require $T\ge L_fR^2/\epsilon\_{W_1}$, which is the source of additional $L_f^{1/a}$. We will add the above result in the next version.
**Q2: Experimental Analysis: the calculation of $L_{f,0}$ in simulation experiments.**
Since $L_{f,0}$ can be obtained by calculate the F-norm of $\nabla_{Y_0}\boldsymbol{f}(Y_0,0)$, we calculate the following equal to approximate it
$$
\left|\frac{\boldsymbol{f}^{\boldsymbol{v}}\left(Y_{t^{\prime}}, t^{\prime}\right)-\boldsymbol{f}^{\boldsymbol{v}}\left(Y_{t^{\prime}}+\Delta Y, t^{\prime}\right)}{\Delta Y}\right|,
$$
where $Y_{t'}\sim q\_{T-t'}$ (sample $1000$ times and take average) and $\Delta Y = 0.01$.
**Suggestion 1: The Lipschitz constant (Mixture of Gaussian).**
We sincerely thanks again for the comments. We consider 2-mode GMM $X_0 \sim 1/2N(\mu, \sigma^2I_d)+1/2N(-\mu, \sigma^2I_d)$. The score has the following form (Appendix A.2 of [3], we transform it from VP to VE)
$$
\nabla \log q_t(X_t)=\tanh(\frac{\mu^{\top} X_t}{\sigma_t^2+\sigma^2}) \frac{\mu}{\sigma_t^2+\sigma^2}-\frac{X_t}{\sigma_t^2+\sigma^2}.
$$
Since $f^{\mathrm{ex}}(Y_0,0)$ the associate backward mapping of the following PFODE (in the following part, we ignore the superscript of $t'$):
$$
dY_t=\tanh(\frac{\mu^{\top} Y_t}{(T-t)^2+\sigma^2}) \frac{\mu(T-t)}{(T-t)^2+\sigma^2}-\frac{Y_t(T-t)}{(T-t)^2+\sigma^2}dt,
$$
we need to solve it to obtain $f^{\mathrm{ex}}(Y_0,0)$. Since the score is highly nonlinear, it is hard to obtain a closed-form solution. There are two choices to overcome this hardness. The first choice is to do simulation experiments to simulate the solution (Appendix F).
The second choice is to add some assumptions on the target data to simplify the above ODE. We assume $\mu$ is smaller enough to guarantee $\tanh \left(\frac{\mu^{\top} Y_t}{(T-t)^2+\sigma^2}\right)$ can be approximated by $\frac{\mu^{\top} Y_t}{(T-t)^2+\sigma^2}$, which simplify PFODE to a linear ODE (in fact, the distribution gradually closes to Gaussian)
$$
\mathrm{d} Y_t=\left(\frac{\mu^{\top} \mu Y_t(T-t)}{\left((T-t)^2+\sigma^2\right)^2}-\frac{Y_t(T-t)}{(T-t)^2+\sigma^2}\right) \mathrm{d} t,
$$
which have the following solution
$$
Y_t=Y_0\underbrace{\left(\sqrt{\frac{\sigma^2+(T-t)^2}{\sigma^2+T^2}} \cdot \exp \left(\frac{\mu^2}{2}\left(\frac{1}{\sigma^2+(T-t)^2}-\frac{1}{\sigma^2+T^2}\right)\right)\right)}\_{C(t)}.
$$
The above results indicate
$$
Y_T=Y_0\left(\sqrt{\frac{\sigma^2}{\sigma^2+T^2}} \cdot \exp \left(\frac{\mu^2}{2}\left(\frac{1}{\sigma^2}-\frac{1}{\sigma^2+T^2}\right)\right)\right)
$$
Taking the derivative of $Y_0$, we know that the $L_{f, 0}$ have order $1 / T$. This result also matches our intuition that $Y_0$ has a large variance (order $T^2$ ), and we need to multiply a $1 / T$ to avoid the influence of large variance (lines 282-287).
**Further Discussion on the error of linear approximation.** The above part makes a linear approximation to simplify the ODE when $\mu$ is close to $0$, which introduces some small errors. Assuming $Y_0\sim q_T= 1/2N(\mu, (T^2+\sigma^2)I)+1/2N(-\mu, (T^2+\sigma^2))$. For the variance, $Y_T=Y_0C(T)$ recover $\sigma^2$. For the mean, the recover $\mu$ of the above consistency function is approximately $\mu\sqrt{\sigma^2/(\sigma^2+T^2)}$, which is smaller than $\mu$. However, since assuming $\mu$ is close to $0$, this error term is small and is possibly introduced by the nonlinear term.
We will add the above discussion in the next version.
[1] Villani, Cédric. *Optimal transport: old and new*. Vol. 338. Berlin: springer, 2008.
[2] Yang et al,. Leveraging Drift to Improve Sample Complexity of Variance Exploding Diffusion Models. NeurIPS 2024.
[3] Shah et al,. Learning mixtures of gaussians using the DDPM objective. NeurIPS 2023.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. Please see my comments below:
1. **Discussion on the Lipschitz condition**:
I find both inequalities in question to be problematic.
- The **first inequality** relies on the fact that $W_1(P_1, P_2) \le R \cdot TV(P_1, P_2)$, where $R$ is the diameter of the support of both distributions. However, in your setting, both $N(0, T^2 I_d)$ and $P_T$ are clearly unbounded, making this inequality inapplicable.
- The **second inequality** uses convergence results from a paper that operates under a different setup. Specifically, [2] analyzes a forward SDE with a **drift term**, while the forward SDE in your paper does **not** include a drift. Therefore, the results from [2] do not apply here.
2. **Empirical evaluation of the Lipschitz constant and Assumption 4.4**:
Assumption 4.4 posits that $\sup_y ||\nabla_y f(y, 0)|| \le L_{f,0}$. However, according to the rebuttal, the experimental evaluation computes $E_{y \sim p_T}[||\nabla_y f(y, 0)||]$. These are not equivalent; in fact, $\sup_y ||\nabla_y f(y, 0)|| \ge E_{y \sim p_T}[||\nabla_y f(y, 0)||]$. So, the empirical evaluation does not support the assumption.
3. **The 2-mode GMM example**:
- **Simulation of the Lipschitz constant**: As noted above, there is a mismatch between the theoretical assumptions and the empirical estimation of the Lipschitz constant.
- **Error from linear approximation**: I have several concerns here:
1. Grönwall’s inequality suggests that the approximation errors can have **exponential effects** on the solution. This raises doubts about the validity of a linear approximation.
2. It is unclear whether the term $\frac{\mu^\top Y_t}{(T - t)^2 + \sigma^2}$ can be treated as small, even if $\mu$ is small:
- $Y_t$ may be unbounded;
- $(T - t)^2 \to 0$ as $t \to T$;
- $\sigma$ could also be small.
3. Even if we accept the linear approximation, the resulting Lipschitz constant **grows exponentially** as $\sigma \to 0$, leading to a vacuous theoretical bound.
Given these issues, I will maintain my evaluation.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank the professional and helpful reviewer for further feedback and comments. We provide our response to each question below.
**Q1: The discussion of $W_1$ results.**
As pointed out by the helpful reviewer, the distribution $\mathcal{N}(0,T^2I)$ and $q_T$ is unbounded. Then, we can not remove this second part of Assumption 4.4 ($R/T$ assumption) even considering the $W_1$ distance (Hence, we will not add this discussion in our paper. Thanks again!). To verification our assumption, we do more simulation experiments with the uniform sampling $Y$ (instead of sampling $Y_t$ according to $q_{T-t'}$) to simulate the $\sup\_y\left\\|\nabla_y f(y, 0)\right\\|$ instead of $E\_{y \sim q\_T}\left\[\left\\|\nabla_y f(y, 0)\right\\|\right\]$ and show that in a large range of $Y$, the $L_{f,0}$ have the order $1/T$ ($Y\in \\{1,2,3,...,40\\}$).
**Q2: The further simulation experiments using uniform sampling instead of sampling according to $q_t$.**
As mentioned in **Q1**, in this part, we do simulation experiments on 3 GMM with different $Y$ (and $\Delta Y=0.01$) to verify the Lipschitz constant has order $1/T$ in a large range of $Y$ ($Y\in \{1,2,3,...,40\}$). The kindly reviewer can see the simulation experiments in the following link.
Simulation Experiment Link: https://anonymous.4open.science/r/ICML_Consistency_Simulation-8AF6/Rebuttal_Simulation_Consistency.pdf
**Q3: The linear approximation.**
(a) Since the closed-form solution for the PFODE with a nonlinear score function is hard to obtain, we make a linear approximation in the nonlinear score of 2-GMM to clearly discuss the order of Lipschitz constant (This linear approximation has been used in previous theoretical works on diffusion models with GMM distribution due to the difficult nonlinear terms (Lemma 8 of [1])). As discussed by the reviewer, the linear approximation will introduce some approximation errors. For this error, at the end of our rebuttal, we show that the obtained consistency function ($C(T)$) can approximately recover the target 2-GMM.
(b) The influence of $\sigma$.
For the variance term of GMM, since the current image datasets is usually normalized, $\sigma^2$ is not close to $0$ in application (then, we can view it as a constant, such as $1$). Hence, it will not introduce an additional exponential term.
(c) The choice of $\mu$.
We know that, with a high probability, $Y_t$ falls in the range $[-3(\sqrt{(T-t)^2+\sigma^2}), 3(\sqrt{(T-t)^2+\sigma^2})]$ (since $\mu$ is close to $0$, the 2-GMM is close to Gaussian)(We also defined a truncated operator in this interval for $Y_t$). Then, we choose a small enough $\mu$ that guarantees $\mu^\top Y_t$ with the truncated $Y_t$ is smaller (for $Y_t$ out of this interval, intuitively, it can control by the tail bound of Gaussian and introduce additional truncated error). Hence, the linear approximation is possible and will not introduce an exponential term (with a constant $\sigma$ in (b)).
The nonlinear score is hard to deal with in the area of diffusion models, and we sincerely hope the above discussion can address the concerns of the professional reviewer. We also hope that the insightful reviewer will re-evaluate this work based on our discussion.
Best,
Authors
[1] Shah et al,. Learning mixtures of gaussians using the DDPM objective. NeurIPS 2023. | Summary: This paper aims to provide a theoretical explanation for the strong empirical performance of consistency models — specifically focusing on how many discretization steps $K$ are needed during training to guarantee high-quality one-step sampling at test time. Prior theoretical analyses of consistency models typically used variance-preserving (VP) forward processes with uniform steps, leading to large and possibly unrealistic complexity bounds. This work, instead, targets the variance-exploding (VE) forward process and the EDM (decay) time-step scheduling. Under these more practical assumptions (matching real applications in, e.g., Karras et al., 2022 or Song et al., 2023), the authors derive improved discretization complexity bounds – polynomial in $O(1/\varepsilon)$ with exponents significantly better than previous results. They also show that 2-step sampling (a widely used trick in consistency models) can further reduce the required number of steps to achieve a given Wasserstein-2 error.
Claims And Evidence: In this paper, the authors claimed that analyzing VESDE plus EDM steps yields a polynomial discretization bound for consistency models that is significantly smaller than in previous theoretical studies, and this complexity is close to that of the best known diffusion results. In addition, 2-step sampling further reduces the exponent in $\varepsilon$.
To show these claims, the authors provided rigorous proof in the main text and appendix. They compared the final complexity expressions to older results, showing strict improvement. Besides, simulation experiments for multi-modal Gaussian distributions illustrate that their key assumption on Lipschitz constants is possible.
Methods And Evaluation Criteria: As a theoretical paper, there is no benchmark or datasets needed.
Theoretical Claims: The paper states each assumption explicitly, references prior standard assumptions (like bounded support for data or Lipschitz continuity of the consistency function). The proofs revolve around standard SDE manipulations, approximate PDE expansions, and the idea that “time-dependent” bounding of the score drift is more precise.
Experimental Designs Or Analyses: There are no experiments needed in this theoretical paper.
Supplementary Material: Yes, I reviewed all the supplementary material. They are basically extra proofs of lemmas and theorems.
Relation To Broader Scientific Literature: This is the first analysis that specifically uses VE forward SDE plus a decaying step approach for consistency. It corrects prior mismatches in theoretical assumptions vs. real usage. The authors connect the final complexity to that of diffusion, bridging a gap that older works left open. They cite relevant works on diffusion complexity (Song et al., Gao & Zhu, Chen et al.), on prior consistency theory (Dou et al. 2024, Li et al. 2024, Lyu et al. 2024). They also mention Karras et al. for EDM steps. Therefore, I would like to say the references are quite comprehensive.
Essential References Not Discussed: Nothing crucial seems missing. The standard relevant theoretical diffusion or consistency references appear.
Other Strengths And Weaknesses: Strengths:
The focus on VE and EDM steps is precisely the realistic setting used in modern SOTA consistency models, bridging earlier theoretical-limitation criticisms. It is a good progress to achieve $\tilde{O}(\frac{1}{\varepsilon^{3+2/a}})$ with 2-step sampling compared with the previous $O(1/\varepsilon^7)$. The paper is well-organized, with main results clearly stated and strictly proved.
Weaknesses:
1. The entire analysis relies on an assumption that the score approximation is sufficiently accurate. Is it possible for the authors to remove this assumption and handle the end-to-end training complexity?
2. The multi-step analysis is restricted to 2 steps; though that’s the main empirical scenario, I still want to ask about the scenario with more steps or sampling schedules. Have you considered a more general $N$-step approach for consistency? Could that yield further improvements or do you expect diminishing returns after 2 steps?
3. Could your “time-dependent lemma” approach be extended to other step-size patterns beyond EDM (like a piecewise approach)? Are there potential further gains if we do a more sophisticated scheduling than a single exponent $a$.
Other Comments Or Suggestions: Please refer to the "Weaknesses" section.
Questions For Authors: Please refer to the "Weaknesses" section.
Ethical Review Concerns: No ethical concerns since it is a completely theoretical work.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your valuable comments and suggestions. We provide our response to each question below.
**Weakness 1: The approximated score and consistency function error (end-to-end analysis).**
In this work, we assume the pretrained score and consistency function are accurate enough to achieve the final discretization complexity. Though they are standard assumptions in the complexity analysis area, as the friendly reviewer mentioned, the end-to-end analysis is also important, and we can use the current estimation error analysis results to achieve this goal. More specifically, for the approximated score, we use the results of [1] and replace $\epsilon_{score}$ with $n\_{score}^{-2/d}$ (where $n\_{score}$ is the number of data used to train the score function). For the approximated score function, we use the result of [2] and replace $\epsilon\_{cm}$ with $n\_{cm}^{-1/2(d+5)}$. Then, we can obtain the end-to-end complexity analysis.
**Weakness 2: The Results of Multi-step Sampling Algorithm.**
In fact, our analysis can be extended to $N$-step sampling algorithm and can achieve nearly $L_{f}/\epsilon\_{W_2}^{3+1/a}$ (which is better than Thm. 4.7 and Coro 4.12) under the EDM stepsize. We use $3$-step sampling algorithm as an example ($\tau_1=T,\tau_2=3T/4, \tau\_3=T/2$). Under this setting, the result becomes (here we ignore $\epsilon\_{score}$, $\epsilon_{cm}$, $R,d $ and focus on the dominated term)
$$
\delta+1/T^3++L_f (T / \delta)^{\frac{1}{a}} /\left(K \delta^2\right).
$$
To guarantee the above term smaller than $\epsilon_{W_2}$, we require $\delta=\epsilon\_{W_2}$ and $K\ge \frac{L_fT^{1/a}}{\delta^{2+1/a}\epsilon\_{W_2}}=\frac{L_fT^{1/a}}{\epsilon\_{W_2}^{3+1/a}}$, which is the same with the one-step and two-step sampling algorithms. However, 3-step algorithm only require $T\ge 1/\epsilon\_{W_2}^{1/3}$, which is better than $1/\epsilon\_{W_2}$ of 1-step and $1/\epsilon\_{W_2}^{1/2}$ of 2-step. Hence, the discretization complexity for $3$-step sampling algorithm is $L_f/\epsilon\_{W_2}^{3+4/(3a)}$, which is better than $2$-step algorithm. The above steps can be extended to $N$ steps, and the influence of $T$ decreases, and finally, $T$ does not affect the discretization complexity, which leads to a $L_{f}/\epsilon\_{W_2}^{3+1/a}$ results. We will add the above discussion in our next version.
**Weakness 3: The Discretization Complexity for Piecewise Discretization Scheme (beyond the EDM with single $a$).**
Before providing the complexity result for the piecewise discretization scheme, we first discuss the performance of the EDM scheme in the application (Consistency models follow the choice of $a$ of EDM.). EDM [3] shows that when $1\leq a\leq 7$, a larger $a$ will help the diffusion models to achieve better performance (Figure 13 (c) of [2]), which also matches our theoretical results. However, when $a$ is larger than $7$, the improvement is not significant and even becomes worse [3]. As a result, the exponential decay stepsize is theoretically friendly and is not widely used in applications. One possible explanation is that at the end of the reverse process, diffusion models generate image details and require a small stepsize (However, the exponential decay stepsize is too large.)
Hence, we can design a two-stage discretization scheme: (a) when $t'\in [0, T-1]$, we use the exponential decay stepsize; (b) when $t'\in (T-1, T-\delta]$, we use the EDM stepsize. With this scheme, the discretization complexity becomes $L_{f}/\epsilon\_{W_2}^{3+1/a}$, which is better than Thm. 4.7 with EDM (single $a$). This result shows the improvement of the two-stage discretization scheme from the theoretical perspective, and we leave the empirical application of this scheme as an interesting future work. We will add the above discussion in our next version.
[1]Oko, Kazusato, Shunta Akiyama, and Taiji Suzuki. "Diffusion models are minimax optimal distribution estimators." In *International Conference on Machine Learning*, pp. 26517-26582. PMLR, 2023.
[2]Dou, Zehao, Minshuo Chen, Mengdi Wang, and Zhuoran Yang. "Theory of consistency diffusion models: Distribution estimation meets fast sampling." In *Forty-first International Conference on Machine Learning*. 2024.
[3]Karras, Tero, Miika Aittala, Timo Aila, and Samuli Laine. "Elucidating the design space of diffusion-based generative models." *Advances in neural information processing systems* 35 (2022): 26565-26577. | null | null | null | null | null | null |
ResearchTown: Simulator of Human Research Community | Accept (poster) | Summary: The paper proposes ResearchTown, a multi-agent simulation framework for research community simulation. The research community is simplified as an agent-data graph, where researchers are modeled as agent nodes and research outputs (such as papers and reviews) as data nodes. The interactions, including paper reading, paper writing, and review writing, are modeled through a unified text-based message-passing framework named TextGNN. The main contributions claimed are: (1) a realistic simulation of collaborative research activities, (2) robustness in simulating complex multi-researcher and multi-paper interactions, and (3) the ability to inspire interdisciplinary research ideas. The authors validate the framework on ResearchBench, a benchmark evaluating ResearchTown via masked-node prediction tasks.
Claims And Evidence: 1. Claim 1: The paper claims that ResearchTown “provides a realistic simulation of collaborative research activities, including paper writing and review writing.” This claim is partially supported by evidence from their node-masking experiments. For a large set of existing papers, the system attempts to regenerate each paper’s content given its authors and references; the similarity between the generated text and the actual paper is reasonably high. These results suggest that the simulated agents can often reproduce or predict key elements of actual papers and reviews. However, the evidence for “realism” relies entirely on embedding-based similarity metrics – no human evaluation. The similarity measurement cannot capture other important aspects in paper rewriting and reviewing, such as logical consistency.
2. Claim 2: Robustness with Multiple Researchers and Diverse Papers. This one is supported by their ablation studies (Figure 4, 5).
3. Claim 3 – Generating Interdisciplinary Research Ideas. The author provides some qualitative evidence that ResearchTown can brainstorm non-obvious research questions by bridging fields, which aligns with the claim. However, the support is still limited: no systematic evaluation of “idea novelty” or quality is done. To better support this claim, future work should include a more rigorous assessment of the novelty and usefulness of generated ideas – perhaps by soliciting evaluations from domain experts on a sample of cross-domain proposals, etc.
Methods And Evaluation Criteria: 1. One concern to me is about data leakage. The author found that the ResearchTown framework performs notably better on impactful papers. This can be attributed to data leakage, i.e., the LLM may have seen the paper before during pretraining. The author tried to address the problem of information leakage by excluding any of the author's publications released after the target paper's publication. However, this is not enough, as the LLM can still recall details for released papers and reviewers, especially for those highly impactful papers.
2. The paper only utilizes the similarity as evaluation metrics. However, the similarity measurement is not sufficient and cannot capture other important aspects in paper rewriting and reviewing, such as logical consistency, originality, etc.
3. Another concern lies in the absence of baseline comparisons beyond the ablations of their own approach.
4. Lastly, this is essentially a prompting engineering paper without any training or optimization involved. This is fine to me, but the author framed their method in an overly complex way. For example, equation (4), (5) is over complicated and hard to follow.
Theoretical Claims: The paper does not present explicit theoretical proof.
Experimental Designs Or Analyses: See the section of Methods and Evaluation Criteria.
Supplementary Material: Appendices C, D, E, F, G were reviewed.
Relation To Broader Scientific Literature: The paper builds on prior work in multi-agent LLM frameworks, graph-based modeling of research communities, and text-attributed graphs (TAGs).
Essential References Not Discussed: --
Other Strengths And Weaknesses: Strengths:
1. Novel framework combining multi-agent LLMs and graphs for research community simulation.
2. RESEARCHTOWN can maintain robust simulation with multiple researchers and diverse papers.
Weakness:
1. The evaluation metric used, semantic similarity, is insufficient and may not fully capture novelty or logical consistency. Adding additional metrics or incorporating human expert evaluation could strengthen validation.
2. The authors state: "To prevent information leakage when simulating paper writing and review scenarios, we exclude any of the author’s publications released after the target paper’s publication year." However, I believe this measure is insufficient to prevent data leakage, as the LLM may have already been pretrained on these papers, including the target paper itself—especially for high-impact papers. The authors also observe that ResearchTown tends to achieve higher performance on high-impact papers, which could potentially be attributed to data leverage and LLM memorization, as these papers are more frequently included in the pretraining corpus.
3. A related concern is that the trends in Figures (4) and (5) could also be attributed to data leakage. Moreover, increasing the number of agents may further increase the likelihood that LLMs recall information from the pretraining corpus related to the target paper.
4. Given that this is essentially a prompting-based paper, the mathematical notation used may be somewhat confusing, making the paper more complex than necessary. In particular, Equations (4) and (5) are difficult to understand. I suggest simplifying the mathematical annotations and formulations to enhance clarity.
Other Comments Or Suggestions: .
Questions For Authors: Questions:
1. Why can paper reading be described as ‘inserting a new agent node’?
2. The framework performs significantly better on impactful papers that focus on analysis or tool development. Could this indicate data leakage? The LLM may have encountered the paper during pretraining.
3. 'Combining agent and data aggregation leads to a decrease in score differences, possibly because the presence of related papers causes reviewers to apply stricter novelty judgments' -> Are there any qualitative examples to support this?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you very much for your insightful and constructive comments. We ddress each of your comments in detail.
**[Fine-grained evaluation with LLM and human]** Please check the same tag under **Reviewer YZ7h** for more human and LLM eval.
**[Novelty+feasibility evaluation with LLM and human]** Please check the same tag under **Reviewer YZ7h** for more human and LLM eval.
**[More baseline comparison]** Please check the same tag under **Reviewer 5ZWh** for baseline comparison.
**[Data leakage concern]** For our main results (Table 1–2) and ablation studies (Figures 3–5), as noted in *Appendix C.2 (Line 813–815)*, we use NeurIPS 2024 and ICLR 2024 papers, which are *post-dated beyond GPT-4o-mini's* October 2023 knowledge cutoff. Thus, *data leakage is not a concern*. We also mask the full text during the simulation to avoid accidental exposure.
For *HighImpactPaperBench* (Appendix C.3, Line 840–843), we use high-impact papers from the past decade as an *extreme-case test* for idea simulation. While some may exist in the LLM’s training data, this benchmark is separate from our main results and serves to explore how LLMs handle well-known concepts.
Our similarity analysis shows that 55% of generated papers score between 0.65–0.75, and 18% exceed 0.75, indicating moderate to high alignment. Only 1% score below 0.45. These scores are *comparable to PaperBench* (Table 1), suggesting no abnormal inflation. Even famous papers like *VAE, GAN, LayerNorm* do not receive notably high scores, implying that *semantic similarity—not memorization based on citation relationships—drives the results*, especially for tool/benchmark papers, which naturally resemble their references more.
**[Review writing performance analysis]** For the behavior behind Global-Agg of review writing results, we conduct fine-grained analysis on the difference between predicted score $S$ and real-world score $S^*$ on a subset of review writing tasks. The results show that *GPT-4o-mini consistently assigns lower scores than the real-world reviewers*, especially in the *Global-agg* setting, where the mean of (S - S*) is -1.47. In contrast, *Deepseek-v3* does not show this consistent bias, indicating a better performance on review writing.
| Experimental Setting | Mean of \|S - S*\| | Mean of (S - S*) | Std of S |
| :----------------------- | ------------------ | ---------------- | -------- |
| Global-agg (GPT-4o-mini) | 1.49 | **-1.47** | 0.80 |
| Global-agg (deepseek-v3) | 0.74 | **-0.02** | 0.91 |
Qualitatively, Global-agg reviews with GPT-4o-mini tend to provide *more specific and critical assessments*, particularly highlighting weaknesses related to *novelty* and *experiment* that are often missed in Self-agg settings, resulting in lower scores.
Example 1: Global-agg provides a more detailed description of novelty concern and make the score lower.
- *Global-agg*: *"The proposed method does not present a sufficiently innovative approach compared to existing frameworks. Many cited works, such as P2B and PTTR, already address similar challenges, and the submission fails to articulate clearly how it advances the state of the art.”* → Score: *4*
- *Self-agg*: *“The novelty of the proposed methods is not clearly articulated. The paper does not convincingly demonstrate how the approach differs from existing methods or why it is a significant advancement in the field.”* → Score: *5*
Example 2: Global-agg notices more weaknesses compared with self-agg.
- *Global-agg*: *“Experimental Support: The experimental results presented are not robust. For instance, the claim of maintaining output quality is not backed by sufficient statistical analysis or metrics, making it difficult to assess the validity of the findings.”* --> Score: 4
- *Self-agg*: Not mention this weakness. --> Score: 6
**[Paper Reading as inserting new node]** As described in *Algorithm 1 (Line 220–237)*, the input to the simulation is the paper content itself. To initialize the agent for simulation, we first perform a *“paper reading”* step, which sets up the agent profile. This is implemented in *Line 6* of Algorithm 1. We interpret this as a *form of agent node insertion*—specifically, initializing the *text attributes* of an agent node based on an external paper. Thus, although it may not be an insertion in the structural sense, it serves the role of *initializing a new agent node* in the simulation graph.
**[Math notation]** Thank you for pointing this out. Our intention was to provide a formal definition analogous to message-passing GNNs, such as in *Equation 3 (Line 145–148)*. However, due to the *heterogeneous nature* of our agent–data graph, we define *different aggregation functions* depending on the node types. We acknowledge that this may introduce complexity in notation. We will *simplify and clarify* the mathematical presentation in the revised version of the paper to improve readability. | Summary: The paper introduces RESEARCHTOWN, a multi-agent framework for simulating human research communities using Large Language Models (LLMs). The key idea is to model the research community as an agent-data graph, where researchers (agent nodes) and papers (data nodes) interact through edges representing authoring, citations, and reviews. The authors propose TextGNN, a text-based message-passing mechanism that borrows concepts from Graph Neural Networks (GNNs), treating LLM-powered functions as GNN blocks to unify research activities (e.g., paper writing, reviewing) as operations on this graph. In addition, the paper proposes the RESEARCHBENCH benchmark, which evaluates the performance of RESEARCHTOWN by comparing the similarity between the generated papers and real papers. Experiments on the RESEARCHBENCH benchmark show that RESEARCHTOWN can provide a realistic simulation of collaborative research activities, with multi-agent setups outperforming single-agent configurations.
Claims And Evidence: The claims are generally supported by evidence, but some require deeper scrutiny:
1. Realistic simulation of collaborative research: Supported by node-masking prediction results (similarity scores), but it's unclear what level of similarity score sufficiently demonstrates alignment with the claim of realistic simulation.
2. Inspiring interdisciplinary idea generation: While the paper provides examples (e.g., NLP + criminology), no quantitative evidence or human evaluation validates their novelty or feasibility.
Methods And Evaluation Criteria: - Agent-data graph + TextGNN: The framework is innovative, simple and well-suited for modeling dynamic research communities. The use of LLMs as agent functions for text-based message passing is a creative adaptation of GNNs.
- RESEARCHBENCH: The node-masking task is a reasonable proxy for evaluating reconstruction fidelity, but it focuses on similarity rather than research quality (e.g., novelty, feasibility). Human evaluation or downstream task validation (e.g., citation impact simulation) could strengthen the assessment.
Theoretical Claims: N/A. The paper does not present formal theoretical proofs.
Experimental Designs Or Analyses: - Strengths: The experiments evaluate the framework's performance using well-justified tasks (masked-node reconstruction). Furthermore, the authors conduct systematic ablation studies to compare different framework configurations.
- Weaknesses:
- No comparison with existing LLM frameworks.
- No ablation studies to isolate the contribution of the graph structure vs. LLM capabilities.
- Similarity scores (via text embeddings) may not capture practical utility.
Supplementary Material: The appendix offer comprehensive supplementary material, likes ethical concerns, comprehensive technical details, additional experimental results, and additional case studies. I have no concerns about the supplementary material.
Relation To Broader Scientific Literature: RESEARCHTOWN builds on:
1. LLM-driven research automation (e.g., The AI Scientist) but emphasizes multi-agent collaboration in simulated research communities.
2. Multi-agent LLM systems for social simulation (e.g., Generative Agents, S^3, SOTOPIA) but extends them to research communities framework.
3. Graph-based research modeling (citation networks, academic social networks) but shifts focus from analysis to dynamic simulation.
Essential References Not Discussed: I have no concerns regarding essential references that were not discussed.
Other Strengths And Weaknesses: - Other Strengths:
- The modular design allows for flexible expansion with new node types (e.g., code repositories, blogs) and edge types (e.g., commits, participations), offering the possibility to simulate complex academic ecosystems.
- The maintenance of hidden states in the text space (TextGNN) effectively preserves the semantic coherence of research outputs.
- Other Weaknesses:
- The social dynamics in simulated academic communities are not considered.
- The computational cost of text generation may limit the practicality of large-scale community simulations.
- Not tested the dynamic community evolution capabilities.
Other Comments Or Suggestions: - Include a running example of a simulated research workflow to improve readability.
Questions For Authors: 1. Could you provide quantitative evidence (e.g., human evaluation scores) for the interdisciplinary idea generation claims? This would help validate if the generated ideas are truly novel and feasible beyond surface-level text similarity metrics.
2. How does RESEARCHTOWN compare against existing LLM frameworks for idea generation? Comparative results would help position the framework's unique contributions.
3. The similarity scores (0.67 paper/0.49 review) are presented as evidence of realistic simulation - what is the baseline/expected score range for human-written content? Contextualizing these metrics would strengthen their interpretation.
4. Have you considered modeling social dynamics (e.g., senior-junior researcher interactions, institutional affiliations) in the community graph?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you very much for your insightful and constructive comments. We address each of your comments in detail.
**[Novelty+feasibility evaluation with LLM and human]** Please check the same tag under **Reviewer YZ7h** fo novelty and feasibility evaluation.
**[Fine-grained evaluation with LLM and human]** Please check the same tag under **Reviewer YZ7h** for fine-grained consistency evaluation.
**[Cost and scalability for ResearchTown]** Please check the same tag under **Reviewer Cny3** for analyzing the computational cost.
**[Future Application of ResearchTown]** Please check the same tag under **Reviewer Cny3** for analyzing social dynamics simulation.
**[Baseline score of realistic simulation]** To check whether ResearchTown provides realistic simulation, we benchmark similarity in real-world research activity. For paper writing, we reference two concurrent papers [1,2] recognized for presenting nearly identical ideas—yet with different writing styles and experiments—which yield a VoyageAI similarity of 0.8244. This suggests that scores above 0.82 can potentially indicate strong idea overlap. For review writing, we analyze data of reviewers evaluating the same paper. The average inter-reviewer similarity is 0.5900 (strengths) and 0.5904 (weaknesses), reflecting natural variance in human judgment. These inter-similarity score in the real world confirm that ResearchTown’s similarity scores represent realistic simulation.
**[Ablation study on LLMs and graphs]** For graph structure ablation, *Table 1-2* already demonstrates the effect of different types of neighboring nodes during aggregation with different sub-parts of the neighborhood. It can be considered as ablation on graph structures.
Additionally, we provide results on **LLM ability variation** using *Deepseek-V3* (potentially larger than GPT-4o-mini) and *Qwen2.5-7B-Instruct* (potentially smaller than GPT-4o-mini) on a sample of 100 examples each from PaperBench and ReviewBench.
For paper writing tasks, we find when given the same aggregation setting, the performance improves when the models are larger: Qwen2.5-7B-Instruct < GPT-4o-mini < Deepseek-V3 (evaluated by openai embedding).
| Aggregation Setting | Qwen2.5-7B-Instruct | GPT-4o-mini | Deepseek-V3 |
| ------------------- | ------------------- | ----------- | ----------- |
| Global-agg | 71.94 | 73.93 | **74.09** |
For review writing tasks, we test the performance of different models under different aggregation settings. We observe that Deepseek-V3 benefits more from multi-agent + multi-paper settings; GPT-4o-mini tends to be stricter than human reviewers, especially when more context is available.
| Aggregation Setting | Model | Strength | Weakness | Avg Δs (Abs) |
| ------------------- | ----------- | -------- | -------- | ------------ |
| Data-agg | GPT-4o-mini | 71.08 | 68.22 | 1.17 |
| Global-agg | GPT-4o-mini | 62.40 | 56.79 | 1.49 |
| Data-agg | Deepseek-V3 | 68.63 | 67.76 | 1.08 |
| Global-agg | Deepseek-V3 | 68.04 | 67.99 | **0.74** |
**[More baseline comparison]** The TextGNN framework in our work is a general-purpose multi-agent simulation tool where existing LLM-based frameworks can be used to define the message-passing mechanism. Beyond our default setup, we conducted a small-scale experiment extending the AGG-agent setting [*Line297–298*] into a **multi-turn conversation** using the SWARM framework. This mimics multiple iterations within a single GNN layer and improves similarity scores from 52.32 to 57.68, evaluated using `text-embedding-large-3`. We also compare ResearchTown with the Sakana AI scientist framework, which uses five rounds of single-agent reflection. On the same subset of paper writing tasks, Sakana **AI Scientist** achieves a score of 0.63, while ResearchTown reaches 0.66 using multi-agent simulation.
**[Running example]** *Table14-28 (Line1430-2254)* includes multiple running examples from ResearchTown: (1) Table14–17: Paper writing examples; (2) Table18: Review writing examples; (3) Table19–28: Interdisciplinary research examples. We will add more *end-to-end running examples*, beginning from citations and ending with paper + review outputs, in the revised version.
**[Downstream task validation of ResearchTown]** We would include more downstream tasks like citation prediction in the modified version of our paper. Based on our HighImpactPaperBench (details in Appendix C.3, Line837-853), we observe that highly cited novel papers like GAN and VAE are harder to simulate the thinking process based on citation and multi-agent, indicating some kind of correlation between citation impact and simulation tasks.
[1] Chen et al. ReSearch: Learning to Reason with Search for LLMs via Reinforcement Learning
[2] Jin et al. Search-R1: Training LLMs to Reason and Leverage Search Engines with Reinforcement Learning | Summary: This paper aims to simulate the human research community (called ResearchTown), which is modeled as a graph structure, where researchers and papers are represented as nodes and they are connected based on their relationships. Also, each researcher over the graph structure is powered by Large Language Models (LLMs), making the simulation of the research community under the framework of multi-agent (or multi-LLM) collaboration. For evaluation, the authors propose two tasks: paper writing and review writing based on the process of paper reading, and handle them with text-based message passing over the graph structure. The authors then show that the proposed ResearchTown can not only provide a realistic simulation of the research activity (e.g., paper writing and review writing) but also facilitate interdisciplinary research.
---
### Update after rebuttal:
Thank you so much for your response, which addresses my last concern on the scalability of the TextGNN. In my view, I still believe that the design of TextGNN has a clear latency issue compared to the typical GNN that plays over the embedding space (which I hope the authors would discuss in the updated version); however, I also see some benefits of propagating and aggregating information of nodes via natural language texts (in terms of their effectiveness when working with LLMs and their interpretability). I will raise my score from 2 to 4 (accept). Good luck!
Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence.
Methods And Evaluation Criteria: While the proposed (graph-structure-based) approach to simulate the human research community is reasonable, I have an important concern about the evaluation criteria. First of all, as an evaluation metric, the authors use the embedding-level similarity between the generated item (such as the paper or review) and the target item; however, I believe one single embedding-level similarity may be suboptimal to measure the alignment between them. In other words, there are many aspects that should be considered when validating whether the generated paper (for example) is similar to the target paper (such as factual consistency, logical coherence, methodological relevance, or novelty), and embedding-based metrics often fail to account for finer-grained structural and conceptual differences.
Theoretical Claims: N/A
Experimental Designs Or Analyses: Most of the experiments are sound and valid. However, I view validating the proposed ResearchTown with one single (proprietary) LLM as a clear weakness of the current experimental setting. Specifically, the proprietary LLM is usually not reproducible even if we control the temperature value, and also it is questionable whether the proposed framework can work with other LLMs (such as larger or smaller than GPT-4o-mini).
Supplementary Material: I skimmed through it, mostly checking the prompt templates.
Relation To Broader Scientific Literature: The key contribution of this paper is related to the recent effort to simulate or automate AI research, which is a very important and timely topic.
Essential References Not Discussed: There are some papers [A, B] that aim to automate the research process with multi-agent collaboration frameworks (similar to the concept of the ResearchTown that aims to simulate the research process with multi-agents), and it may be worth discussing them.
[A] ResearchAgent: Iterative Research Idea Generation over Scientific Literature with Large Language Models
[B] Chain of Ideas: Revolutionizing Research Via Novel Idea Development with LLM Agents
Other Strengths And Weaknesses: While the authors claim that the proposed ResearchTown is designed to simulate the human research communities with LLMs, the current set of benchmark evaluations is limited to paper writing and review writing, done based on the paper reading among researchers. I believe there are more activities (that can be measured) within ResearchTown with more types of nodes and extra interactions between them, and it would be valuable if the authors would discuss them. Also, the proposed TextGNN (i.e., the message passing framework between nodes with text) seems not scalable when the number of layers becomes moderate (e.g., three or four), due to the nature of text-based communication in contrast to embedding-based communication where the information across nodes can be aggregated more efficiently over the vector space. I view this as another limitation of the proposed approach, and perhaps showing some experiments on the scalability of the proposed approach according to the number of layers may be beneficial.
Other Comments Or Suggestions: Some sentences are not clear and it would be worth clarifying them:
* Could you clarify the sentence in Lines 273 - 277 (starting with "More specifically")?
* In Lines 359 - 361, it is not clear why the inclusion of both the agent and data nodes would degrade the performance of the review writing simulation task.
* In Lines 317 - 318, how to select the top 5 researchers most related to the paper?
* Overall, I feel it would be worth including the inputs and outputs for each task. For example, in paper reading and subsequent paper writing, do you use the full text of the neighboring papers? For the review writing task, what are the targets that should be generated (i.e., they are the strengths and weaknesses of the paper)?
Questions For Authors: Please see my previous comments. I feel the core idea of this paper is interesting, I would like to increase my rating if the authors would address them.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you very much for your insightful and constructive comments. We address each of your comments in detail.
**[Novelty+feasibility evaluation with LLM and human]** Please check the same tag under **Reviewer YZ7h** for novelty and feasibility evaluation conducted by both LLMs and humans.
**[Fine-grained evaluation with LLM and human]** Please check the same tag under **Reviewer YZ7h** for factual consistency and other fine-grained metrics.
**[Ablation study on LLMs and graphs]** Please check the same tag under **Reviewer 5ZWh** for experimental results with three different LLMs.
**[Review writing performance analysis]** For clarification of [*Line359-361*], please check the same tag under **Reviewer t7zy** for more analysis about why the performance drop.
**[Input and output of ResearchTown]** ResearchTown processes either full papers or abstracts, depending on the task, and outputs standardized formats to support consistent and scalable evaluation. For paper writing, it generates a condensed 5Q format [*Line893–894*]; for review writing, it produces bullet-point strengths and weaknesses [*Line910*]. Prompt templates are shown in Table12–13 [*Line1379–1425*], with examples in Table14–28 [*Line1432–2253*]. As described in Appendix C.3 [*Line837–853*], this alignment reduces evaluation complexity and enables sub-part similarity scoring [*Line898–902*]. Input sources vary: only the paper’s abstract is used during reading [*Line994–995*] and full papers are used for review writing [*Line1171–1172*]. Aggregation setting details are in [*Line688–726*].
**[Future Application of ResearchTown]** In [*Line139–144*], any research-related content—e.g., images, codebases, models, or social media posts—can be represented as nodes in the agent-data graph, with edge types like “cite paper,” “release model,” or “comment on X post” (examples in Figure 1) defining interactions. By specifying appropriate edge types and agent functions (`f_u` in [*Line134–135*]), the framework can be extended to simulate tasks such as code writing, model release, panel discussions, or lectures. While we focus on paper and review writing due to their importance, available real-world data, and simplicity, the framework supports broader applications.
Additionally, ResearchTown can be extended to model social dynamics such as peer pressure, collaborations, and institutional roles via agent-agent relationship edges [*Line133–135*]. Our current implementation already includes role-based dynamics (e.g., leader vs. participant), and we plan to support richer simulations of institutional and reputational factors in future work.
**[Cost and scalability of ResearchTown]** TextGNN’s complexity scales linearly with the number of layers under standard GNN inference with full-batch inference. Our implementation uses GraphSAGE-style [1] to support per-paper evaluation, which is slightly less efficient but more practical. In line with findings from models like GraphSAGE, we observe that a 2-layer TextGNN is both robust and sufficient, making the cost affordable and controllable. Instead of deepening the GNN layer, we increase the number of transformation steps within each message-passing layer—interpreted as more agentic conversations—making it another scalable and effective approach. To demonstrate intra-layer scalability, we extend the AGG-agent setting [*Line297–298*] using the SWARM framework for multi-turn agent interactions, which boosts similarity scores from 52.32 to 57.68. These results show that agentic iteration within layers offers a practical and scalable alternative.
**[Details on reviewer selection]** To simulate realistic reviewer assignment, we collect over 3,000 unique authors from the author list of PaperBench dataset and generate profiles by summarizing their recent publications. Using the `voyage-3` API, we embed each profile with the target paper’s abstract and select the top 5 most similar researchers, excluding the original authors. This method enables high-quality reviewer-paper matching—for example, a social learning paper was matched with reviewers experienced in social science and LLMs.
**[Details on review writing inputs]** To clarify [*Line273–277*], in the review writing simulation (Algorithm 1 [*Line220–236*]), both the paper and review are outputs of the ResearchTown pipeline. While one option is to evaluate reviews based on generated papers, this introduces compounding errors if the paper is inaccurate. Instead, as noted in [*Line273*], we use the ground-truth paper as input to isolate and more reliably evaluate the review writing stage.
**[Missing related work]** Thank you for highlighting valuable related works—we will include and discuss them in the revised version around [*Line120–125*].
[1] Hamilton et al. Inductive Representation Learning on Large Graphs
---
Rebuttal Comment 1.1:
Comment: Thank you for your response, which addresses most of my concerns. One remaining concern that I have is regarding the scalability of the proposed approach. As described in my original review, the proposed text-based GNN framework propagates the information between nodes via natural language (instead of embeddings), which may be less scalable. For example, in the case of embeddings, the information propagated from 10 different nodes is merged into one single representation (typically); however, in the case of natural language, the proposed texts from 10 different nodes are 10 times longer than the text that each node creates. In this regard, I think the authors may provide some experimental results to clarify this, for example, is the proposed method scalable with more than only 2 layers?
---
Reply to Comment 1.1.1:
Comment: Thank you for your additional feedback. We're happy to answer any further questions and would appreciate it if you could consider raising the score.
**[constrained output length for each layer of TextGNN]**
The aggregation function for classical GNN (*Equation 3, Lines 145-148*), which is often a pooling or mean operation, is used to condense all neighborhood information into one embedding with the same size as input. Similarly, for our TextGNN layers (*Equations 4-5, Lines 182-203*), $f_u$ and $f_g$ are acted as aggregation function similar in classical GNN, they produce outputs with controlled textual formats and similar lengths with updated information in the neighborhood nodes by *summarizing* with LLMs. Therefore, *the output length of multiple layers of TextGNN would not increase but keep approximately the same*.
We achieve such length control in TextGNN via *format control in prompting*. We specifically designed prompts ensure each output adheres to pre-defined constraints:
- **Paper writing:** "5Q" format for paper (mentioned in *Lines 1068-1089*).
- **Review writing:** ~200-word bullet points for review (mentioned *Lines 1165-1196*).
- **Paper reading:** 100-300 words persona for researcher (mentioned *Lines 996-998*).
These prompt-controlled constraints ensure stable output lengths at every TextGNN layer, avoiding text length inflation with increasing depth. Each aggregation step condenses and prioritizes critical information, effectively filtering less relevant details.
**[multi-layer aggregation example]**
We illustrate why controlled length can support multi-layer of TextGNN clearly with an example aggregation across multiple layers. As you can see, an example of 3-layer TextGNN does not result in longer and longer text but remains a highly informative condensed version of text:
*Layer 1*: Paper1, Paper2, Paper3 (each 5Q format) → Researcher Profile1 (100-300 word persona).
*Layer 2*: Researcher Profile1, Researcher Profile2, Paper4, Paper5 (each 5Q format) → Paper6 ( 5Q format).
*Layer 3*: Researcher Profile3, Researcher Profile4, Paper6, Paper7 (each 5Q format) → Paper8 ( 5Q format).
As demonstrated in experiments (*Table 23*), even when aggregating many paper and researcher inputs, TextGNN outputs consistently maintain controlled lengths. A more concrete example we show below is that more layers can provide more condensed version of description but not necessarily with longer length:
Part of 3-layers of TextGNN paper writing results:
*This framework will utilize structural causal models (SCMs) to identify causal relationships while incorporating machine learning methods to enhance predictive performance. Key metrics for evaluation will include causal identifiability, robustness to distribution shifts, and interpretability of the learned models. The expected outcomes include a comprehensive understanding of causal mechanisms in complex systems and improved performance of machine learning models in real-world applications.*
Part of 2-layer of TextGNN paper writing results:
*This framework will utilize causal influence diagrams to model dependencies among agents and their intentions, allowing for the computation of causal queries related to decision-making processes. The expected outcomes include a clearer understanding of how intentions influence actions in AI systems, improved algorithms for causal discovery in multi-agent settings, and enhanced safety analysis tools that can be applied to various AI applications.*
More aligned information is simulated and discussed when more layers of TextGNN are considered and aggregated. However, the length remains approximately the same.
**[empirical validation on more layers of TextGNN]**
To empirically validate the scalability of TextGNN beyond two layers (paper reading and writing), we conducted additional experiments incorporating multi-hop information besides the current 2-hop. Previously, we initialized researcher personas by aggregating researchers' authored papers and leveraging immediate paper and researcher neighborhoods to generate new papers. In our extended experiments, we now include authors of those cited papers and papers cited by/related with those cited papers, effectively integrating deeper multi-hop connections. Due to the complexity of collecting extensive multi-hop data, our evaluation is limited to 42 samples, focusing specifically on the quality of the generated paper nodes. We can see that 3-layer provides more improvement under full-agg, agent-agg, and self-agg settings while drops slightly for data-agg. The drop in data-agg might be because too much noisy paper is involved.
| Setting | OpenAI Sim Avg |
| ----------------- | ------- |
| 2-layer self-egg | 0.6182 |
| 2-layer agent-agg | 0.7068 |
| 2-layer data-agg | 0.7508 |
| 2-layer full-agg | 0.7348 |
| 3-layer self-agg | 0.6225 |
| 3-layer agent-agg | 0.7488 |
| 3-layer data-agg | 0.7271 |
| 3-layer full-agg | 0.7435 | | Summary: This research work starts from the idea that we can leverage LLMs to simulate human research communities and proposes ResearchTown, a multi-agent framework designed to model human research societies and behaviors. This work also introduces TextGNN to model various research activities, including paper reading, paper writing, and review writing. In addition, it develops ResearchBench, which uses a node-masking task to evaluate whether ResearchTown can successfully simulate the masked paper node by masking a paper within a graph.
Claims And Evidence: No. This paper claims that the simulation of automated research processes should be correlated with human research processes. However, under this assumption, it seems that discovering groundbreaking and insightful scientific ideas may become more challenging. The multi-agent system tends to produce known scientific knowledge while sacrificing the ability to learn, explore, and discover.
Methods And Evaluation Criteria: Some specific implementation details are unclear:
- 1) How does ResearchTown generate a complete research paper including many parts of abstract, method, and experimental results?
- 2) Unlike standard GNNs, each node in TextGNN is based on the text space, but what exactly constitutes the text? Is it the entire paper or a condensed summary of the paper?
Theoretical Claims: This work does not involve theoretical derivations.
Experimental Designs Or Analyses: The specific calculation methods for the metrics used in the experimental section are not very clear. For example, in Table 1, were text-embedding-large-3 and voyage-3 used to extract embeddings? Was cosine similarity used to compute the scores?
Supplementary Material: No supplementary materials were provided.
Relation To Broader Scientific Literature: ResearchTown conducts experimental validation to demonstrate its alignment with human research communities. However, how can we verify that the ideas generated by ResearchTown are valuable and can be followed? Moreover, this work has the potential to advance automatic scientific discovery, but it requires more robust evaluation and validation.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: [Strengths]
- 1) This work models human research communities by constructing TextGNN and attempts to use a masking-node approach for evaluation, which is insightful.
- 2) This work constructs ResearchBench, a benchmark consisting of 1,000 paper-writing tasks and 200 review comments.
[Weaknesses]
- 1) This paper claims that the simulation of automated research processes should be correlated with human research processes. However, under this assumption, it seems that discovering groundbreaking and insightful scientific ideas may become more challenging. The multi-agent system tends to produce known scientific knowledge while sacrificing the ability to learn, explore, and discover.
- 2) ResearchTown conducts experimental validation to demonstrate its alignment with human research communities. However, how can we further verify that the ideas generated by ResearchTown are valuable and can be followed? Moreover, this work has the potential to advance automatic scientific discovery, but it requires more robust evaluation and validation.
- 3) How does ResearchTown generate a complete research paper including many parts of abstract, method, and experimental results?
- 4) Unlike standard GNNs, each node in TextGNN is based on the text space, but what exactly constitutes the text? Is it the entire paper or a condensed summary of the paper?
- 5) The specific calculation methods for the metrics used in the experimental section are not very clear. For example, in Table 1, were text-embedding-large-3 and voyage-3 used to extract embeddings? Was cosine similarity used to compute the scores?
Other Comments Or Suggestions: N/A
Questions For Authors: Please refer to Part of [Other Strengths And Weaknesses]
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you very much for your insightful and constructive comments. We address each of your comments in detail.
**[Input and output of ResearchTown]**
Please check the same tag under **Reviewer Cny3** for a detailed explanation.
**[Creativity of ResearchTown's output]**
LLMs have been shown capable of generating novel research ideas through large-scale human studies [1]. Multi-agent role-playing and discussion further enhance creativity [2] and originality [3]. Based on these studies, ResearchTown encourages exploration of interdisciplinary ideas via structured prompting and multi-agent role-play design [*Line414-426*]. Examples are shown on Page27–Page41. We design HighImpactPaperBench in Appendix C.3 [*Line837-853*] to show ResearchTown’s capacity to generate impactful research [*Line355-376*].
**[Metric calculation in ResearchTown]**
Our evaluation metrics for both paper and review writing are detailed in Appendix E [*Line886–930*]. To enable meaningful comparison, we standardize papers into a condensed 5Q format [*Line893–895*] and reviews into a bullet point format [*Line909–911*]. As described in Appendix C.3 [*Line837–853*], we also convert real-world papers and reviews into these aligned formats using strong LLMs. We then compute cosine similarities with `voyage-3`,`text-embedding-3-large` and `nv-embed-v2`, based on Equation 21 for papers [*Line898–901*] and Equation 22 for reviews [*Line912–915*]. Detailed sub-part similarity scores are provided in Table 3 [*Line935–953*].
**[Fine-grained evaluation with LLM and human]**
We extend beyond embedding-based metrics using prompting-based GPT-4o evaluations, covering factual consistency, logical/method alignment, motivation, and context. Each is scored from 1–10.
| | Semantic Similarity | Factual Consistency | Motivation Alignment | Method Alignment | Logical Consistency | Application Context Consistency |
| ---------- | ------------------- | ------------------- | -------------------- | ---------------- | ------------------- | ------------------------------- |
| Self-Agg | 1.22 | 1.49 | 1.57 | 1.23 | 1.22 | 1.41 |
| Agent-Agg | 2.51 | 2.40 | 3.50 | 2.41 | 2.22 | 3.08 |
| Data-Agg | 3.94 | 3.48 | 5.05 | 3.68 | 3.27 | 4.97 |
| Global-Agg | 4.43 | 3.94 | 5.56 | 4.32 | 3.69 | 5.33 |
These results show clearer differences than embedding-based scores, with Global-Agg (paper nodes + agents) performing best on motivation/method alignment. A human study over 40 papers on similarity-based evaluation (20 in-domain, 20 cross-domain) yields Pearson’s r = 0.745 and Spearman’s ρ = 0.735, supporting the validity of embedding/LLM-based metrics.
**[Novelty+feasibility evaluation with LLM and human]**
We conducted small-scale human and LLM evaluations (20 interdisciplinary and 20 ML papers) on novelty and feasibility. Interdisciplinary examples include papers tagged with multiple fields on arXiv (e.g., CS+Economics). Each cell below includes *real-world* data evaluation vs *simulated* results evaluation.
| | LLM-Eval Novelty | LLM-Eval Feasibility | Human-Eval Novelty | Human-Eval Feasibility |
| --------------------------------- | ---------------- | -------------------- | ------------------ | ---------------------- |
| Interdisciplinary Research Papers | 7.5 vs. 7.35 | 6.45 vs. 6.9 | 7.4 vs. 7 | 7.4 vs. 7 |
| ML Research Papers | 7.85 vs. 7.65 | 6 vs. 6.9 | 6.6 vs. 6.95 | 5.65 vs. 7.15 |
LLM vs human novelty/feasibility scores show a moderate Pearson correlation (~0.38), highlighting evaluation difficulty. ResearchTown’s outputs are generally comparable in novelty and more feasible than real papers in the ML domain.
**[Text form defined in TextGNN]**
Each node’s "hidden state" in TextGNN represents a condensed form of a paper or review. Initially, full paper contents serve as node states [*Line158-161*]. After iterative message passing, paper nodes adopt the standardized 5Q format [*Line893-894*], condensing information for easier evaluation. Review nodes similarly use bullet points [*Line910*] as condensed information.
[1] Si, et al. Can LLMs Generate Novel Research Ideas? A Large-Scale Human Study with 100+ NLP Researchers.
[2] Lu, et al. LLM Discussion: Enhancing the Creativity of Large Language Models via Discussion Framework and Role-Play.
[3] Zhao, et al. Assessing and Understanding Creativity in Large Language Models.
[4] Lu, et al. The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery.
---
Rebuttal Comment 1.1:
Comment: Thank you for the authors' response. After carefully reading the rebuttal, I still have the following concerns:
- The evaluation and metric calculation method used by ResearchTown seems somewhat unreasonable. ResearchTown employs cosine similarities to indicate alignment with real-world research community, but ideas and research are more high-level concepts. A single idea can have multiple forms of concrete description or implementation method, making it difficult to accurately be captured using cosine similarities.
- In TextGNN, each node represents a summary of a paper or review (generated using 5Q format), which may result in the loss of important knowledge from the papers (such as key mathematical equations or algorithm workflow), making it challenging to simulate the research process of human society.
As a result, I choose to maintain my initial rating.
---
Reply to Comment 1.1.1:
Comment: Thank you for your additional feedback. We're happy to answer any further questions and would appreciate it if you could consider raising the score.
**[Decompisitionality of our evaluation metric]**
We agree that *“a single idea can manifest through diverse descriptions or implementation strategies, rendering surface-level metrics like cosine similarity inadequate for capturing conceptual equivalence.”* This is exactly what motivates us to propose **5Q-based evaluation framework**—a decompositional evaluation with 5 sub-parts:
Q1: **What is the problem?**
Q2: **Why is it interesting and important?**
Q3: **Why is it hard?**
Q4: **Why hasn’t it been solved before?**
Q5: **What are the key components of the proposed approach and results?**
This structure enables alignment between papers that differ methodologically (Q5) but share similar motivations and problem framings (Q1–Q4). For instance, in [1] and [2], despite distinct methods and settings, experts would find strong alignment on Q1–Q3.
We validate this framework through per-question similarity analysis (Table 3, Lines 935–953). In *PaperBench-easy*, Q2 (motivation) shows the highest alignment (80.25), while Q4 and Q5 score lower (71.54, 70.60), indicating that motivation is easier to capture than novelty or method. In *PaperBench-hard*, alignment drops on Q1, Q4, and Q5 (55.35, 58.55, 57.84), showing that even problem formulation becomes challenging in complex domains, while Q2 and Q3 remain relatively stable.
These results align with our intuition: understanding *why* a problem matters (Q2, Q3) is easier with domain knowledge, while identifying novel formulations (Q1) and implementation details (Q5) requires deeper expertise. The 5Q framework thus enables structured, fine-grained evaluation beyond surface-level similarity.
**[Scalability of our evaluation metric]**
To address the challenge that a *single idea can take many concrete forms*, we complement decomposition with **scalability**. LLMs can generate hundreds of semantically distinct research questions from a single prompt, but evaluating these outputs traditionally requires domain experts—a process that is **costly, slow, unscalable, and hard to reproduce**. For example, [3] spent thousands hiring top-tier researchers solely for annotation and review, which is infeasible for evaluating large-scale, automated research generation. Our approach replaces this bottleneck with **semantic similarity over 5Q-decomposed representations**. We can select the best among sampled and make the score the final result.
**[Extensibility of our evaluation metric]**
While we acknowledge the importance of elements like *mathematical formulations* or *algorithmic workflows*, our framework is **inherently extensible**—the 5Q format can be expanded into 6Q or 7Q by adding domain-specific dimensions such as *algorithmic structure* or *key theoretical results*. This is especially valuable in systems and theory papers, enabling **more fine-grained and domain-aware similarity analysis**. As demonstrated in **[Fine-Grained Evaluation with LLM and Human]**, our approach also supports integration of non-semantic metrics like **logical consistency** and **factual accuracy**, making it extensible from evaluation metric perspective.
**[Reliability of our evaluation metric]**
Our embedding-based / LLM-based similarity metric builds on state-of-the-art models optimized for knowledge-intensive tasks. **Voyage AI embeddings**, widely adopted in real-world RAG systems, are designed to reduce hallucination and excel in high-precision semantic retrieval—making them ideal for evaluating research content. Additionally, sota LLMs are highly effective at semantic comparison. As demonstrated in our **[Fine-Grained Evaluation with LLM and Human]** section, our method yields interpretable similarity scores, and human evaluations further validate its alignment with expert judgment.
**[Main contribution of paper]**
We emphasize the main contribution here. The main contribution is **ResearchTown**, a framework that simulates collaborative research activities by modeling paper writing and peer review as dynamic message-passing on a graph. It represents the research ecosystem as a graph of researchers, papers, and reviews, capturing complex temporal interactions in a structured and scalable way. This design supports both realistic simulation and graph-based evaluation through techniques like node masking. Inspired by Graph Neural Networks (GNNs), ResearchTown employs a **TextGNN** for inference, where nodes are iteratively generated and updated via textual message passing, enabling nuanced modeling of how research communities evolve over time.
[1] Chen et al. ReSearch: Learning to Reason with Search for LLMs via Reinforcement Learning
[2] Jin et al. Search-R1: Training LLMs to Reason and Leverage Search Engines with Reinforcement Learning
[3] Si et al. Can LLMs Generate Novel Research Ideas? | null | null | null | null | null | null |
Curse of High Dimensionality Issue in Transformer for Long Context Modeling | Accept (poster) | Summary: This paper explores the challenge of the curse of dimensionality in Transformer architectures for long-context modeling, with a particular focus on redundant attention computations. To address this issue, the authors introduce a novel approach called Dynamic Group Attention (DGA), which minimizes redundant computations by dynamically grouping and aggregating less critical tokens while preserving essential token interactions. They redefine traditional probabilistic sequence modeling as a supervised learning task and conduct a theoretical analysis of attention sparsity in Transformers, showing that only a small subset of tokens has a significant impact on predictions. Furthermore, they frame attention optimization as a linear coding problem and develop the DGA mechanism to dynamically manage token grouping and aggregation. Experimental results indicate that DGA effectively lowers computational costs while maintaining strong performance.
Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are well-suited to the problem, but I still have some concerns:
* While the DGA mechanism effectively mitigates redundant computations, its implementation and understanding are relatively complex. The dynamic grouping and aggregation strategy requires precise control, which may increase implementation difficulty and debugging costs.
* The DGA mechanism depends on several hyperparameters, such as the group size $m$ and the importance rate $\gamma$, which significantly influence both model performance and computational efficiency. Although the paper examines their impact experimentally, determining the optimal values in real-world applications remains challenging. A more in-depth discussion on hyperparameter selection would be beneficial.
Theoretical Claims: I have thoroughly reviewed the theoretical claims presented in the manuscript, including the proposed theorems and their corresponding proofs. The authors provide well-structured and insightful theoretical analyses that contribute valuable perspectives on the problem. This paper offers a rigorous and comprehensive understanding of redundancy in transformer-based long-context modeling, effectively identifying redundant tokens, providing probabilistic insights into attention sparsity, and introducing a robust optimization strategy through group coding. These theoretical contributions serve as a strong foundation for developing more efficient and effective attention mechanisms, as exemplified by the proposed Dynamic Group Attention mechanism.
Experimental Designs Or Analyses: The experimental designs and analyses in the paper are well-structured and robust.
* The authors compare the proposed method against a diverse set of baseline approaches, including MInference and StreamLLM, using LongBench-E and EM scores. The results demonstrate significant reductions in computational costs and inter-token latency while maintaining competitive performance across various long-context tasks.
My questions are:
* Can the proposed method be integrated with FlashAttention optimization strategies to achieve further acceleration? Additionally, does it support KV-Cache techniques?
* Do the baseline methods used for comparison incorporate FlashAttention or KV-Cache when evaluating latency?
Supplementary Material: I have reviewed the supplementary material, including theoretic analysis, more discussions on sequence modeling and supervised learning, and more details for experiment settings, causing mask in Group-Oriented attention, reduced complexity and key-value cache, and more ablation studies.
Relation To Broader Scientific Literature: The authors clearly identify the issue of computational redundancy in Transformer-based long-context modeling and substantiate its presence through both theoretical analysis and empirical validation, establishing a strong foundation for further optimization.
Beyond introducing a novel method, the authors conduct an in-depth theoretical analysis of attention weight sparsity, reformulate the attention optimization problem as a linear coding issue, and propose a group coding strategy within this framework. This theoretical exploration offers a fresh perspective on understanding redundancy in Transformers.
While the paper primarily focuses on long-context modeling, the flexibility and efficiency of the Dynamic Group Attention (DGA) mechanism suggest its potential applicability to other tasks involving long-sequence processing, such as video and audio analysis. This highlights the broad applicability of the proposed method.
Essential References Not Discussed: The authors clearly discussed key methodologies in the field of efficient attention.
Other Strengths And Weaknesses: * The proposed Dynamic Group Attention (DGA) mechanism introduces a novel strategy for optimizing attention mechanisms. By dynamically grouping and aggregating less important tokens, it effectively reduces redundant computations while preserving essential token interactions. This approach successfully lowers computational costs without compromising model performance, demonstrating strong practical value.
* The authors conduct extensive experiments across multiple long-context modeling tasks, including the LongBench-E benchmark. The results show that DGA not only reduces computational costs but also maintains or even outperforms existing methods in terms of performance. Notably, in long-text generation tasks involving extensive context lengths (e.g., 16K), DGA significantly reduces generation latency, underscoring its efficiency and suitability for long-context applications.
Other Comments Or Suggestions: Some formulas in the appendix are missing periods or commas, such as Equation 36, 42.
Questions For Authors: The method in this paper is effective for long sequence text modeling. However, it remains unclear whether the proposed DGA mechanism can be applied to other long sequence processing tasks or architectures like video and audio processing. Without extra experiments, it would be great if the authors could discuss this potential application.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We really appreciate the reviewer's kind words and detailed suggestions. Here are our responses:
>Q1. "...**well-structured and insightful theoretical analyses**... **strong foundation**...", "The experimental designs...are **well-structured and robust**...", "...theoretical exploration offers a **fresh perspective** on understanding redundancy", "...**flexibility and efficiency**... **potential applicability to other tasks**...", "...successfully lowers computational costs **without compromising model performance**, demonstrating strong **practical value**".
**A1.** We sincerely thank the reviewer for their insightful feedback. We appreciate the recognition of our theoretical rigor in analyzing Transformer redundancy and the robust experimental validation demonstrating DGA’s efficiency-performance balance. The highlighted flexibility and practical value of our approach motivate us to explore broader applications in future work.
>Q2. DGA mitigates redundancy well, but its implementation and understanding are complex. The dynamic grouping and aggregation strategy needs precise control, increasing implementation difficulty and debugging costs.
**A2**. Thanks for your insightful comments. To address this concern, we will add a PyTorch-style pseudocode in the revised manuscript. Our approach leverages standard PyTorch operations, including matrix multiplication, `topk` selection, and tensor slicing, all of which are widely used and well-optimized within modern deep learning frameworks. We believe that this additional clarification will help elucidate the implementation.
>Q3. >Q3. DGA depends on hyperparameters like $m$, $\gamma$, affecting performance. The paper examines their impact, but finding optimal values in practice is hard. Deeper discussion on selection would help.
**A3**. Experimental analysis in Tables 4 and 6 reveals that smaller group sizes ($m$) and higher importance rates ($\gamma$) improve performance at the cost of increased latency. Based on these, we recommend the following practical guidelines for parameter selection: For resource-constrained scenarios, larger $m$ and lower $\gamma$ balance acceptable performance with reduced complexity. Performance-critical applications (e.g., medical diagnosis) benefit from minimal $m$ and maximal $\gamma$ to preserve accuracy, whereas latency-sensitive tasks (e.g., real-time systems) require moderate $m$ and lower $\gamma$ for responsiveness.
An adaptive framework to automatically optimize ($m$, $\gamma$) based on application-specific accuracy-latency trade-offs remains a promising direction for future work.
>Q4. Can the proposed method be integrated with FlashAttention optimization strategies to achieve further acceleration? Additionally, does it support KV-Cache techniques?
**A4**. Yes, the proposed DGA method is compatible with FlashAttention and KV-Cache optimizations. For KV-Cache, DGA retains only 0.32–2.51GB of KV states on context-length 4K-32K (Table I), a 78–98% reduction compared to vanilla self-attention.
Table I: Comparison of KV-Cache (GB) with Vanilla self-attention.
| Methods | 4K | 8K | 16K | 32K |
|---|---|---|---|---|
| Vanilla self-attention | 2 | 4 | 8 | 16 |
| DGA (Ours) | 0.32 | 0.63 | 1.26 | 2.51 |
>Q5. Do the baseline methods used for comparison incorporate FlashAttention or KV-Cache when evaluating latency?
**A5**. No. FlashAttention is not used in any method (including ours) to ensure fair comparison, as StreamingLLM's official implementation lacks support. All compared methods (including baselines and our DAG) employed standard KV-Cache techniques.
>Q6. Some formulas in the appendix are missing periods or commas, such as Equation 36, 42.
**A6**. We will fix it.
>Q7. The method works for long text modeling. But it's unclear if DGA can be applied to other long sequence tasks (e.g., video, audio processing). Without extra experiments, could the authors discuss this potential application?
**A7**. We appreciate the reviewer’s insightful question regarding the broader applicability of DGA. While our current work focuses on long-text modeling, the proposed Dynamic Grouping Attention (DGA) mechanism is inherently task-agnostic and could generalize to other long-sequence domains (e.g., video/audio) where redundancy exists in sequential tokens and adaptive token importance assessment is critical. For instance:
* **Video Processing**: Temporal sequences in videos often exhibit localized redundancy (e.g., static backgrounds or repetitive motions). DGA could dynamically group less informative frames while preserving critical temporal segments.
* **Audio Processing**: Long audio signals contain silent or redundant segments. DGA’s importance scoring could prioritize phonetically rich regions, enabling efficient compression.
We will include the above discussions in our revised paper. | Summary: In this paper, the authors propose a novel approach called Dynamic Group Attention (DGA) to address the computational inefficiencies in long-context modeling for transformer-based large language models. DGA leverages a group coding strategy to dynamically aggregate less important tokens while preserving critical token interactions. The approach aims to reduce redundancy in attention computations without sacrificing model performance. Through theoretical analysis, the authors demonstrate the robustness of the group coding mechanism and its ability to improve learning efficiency. Extensive experiments on the LongBench-E benchmark validate the effectiveness of DGA, showing significant reductions in computational costs while maintaining competitive performance across various long-context tasks.
Claims And Evidence: This work successfully demonstrates the potential of Dynamic Group Attention (DGA) in addressing computational inefficiencies in long-context modeling for transformer-based large language models. The paper's claims are supported by solid evidence from theoretical analysis and extensive experiments.
Methods And Evaluation Criteria: The methods presented in the paper are well-suited for the challenges of long-context modeling in transformer-based large language models.
My questions are:
1. From Figure 1, it appears that the grouping operation involves reordering the tokens. When handling longer sequences, could this reordering introduce significant additional overhead?
2. The paper would benefit from further clarification of certain method details. Specifically, it is unclear whether the group window size is consistent between the prefill and decoding stages. Additionally, more information is needed on how token importance is determined during the decoding stage.
Theoretical Claims: The theoretical claims in this paper are highly solid and well-supported. The introduction of Dynamic Group Attention (DGA) and its group coding strategy represents a significant innovation in addressing the challenges of long-context modeling. The theoretical analysis clearly demonstrates how DGA reduces computational redundancy and improves efficiency by dynamically aggregating less important tokens. This is well-justified and aligns perfectly with the experimental results, further validating the effectiveness of the method. The paper's theoretical background provides a strong foundation for understanding the improvements in attention mechanisms and model performance. Overall, the theoretical contributions contribute greatly to the novelty and impact of the work.
Experimental Designs Or Analyses: The experimental designs and analyses presented in the paper are sound and robust.
1.The paper compares the proposed Dynamic Group Attention (DGA) with several baseline methods, including standard self-attention and other attention optimization techniques, providing a comprehensive evaluation of the method's performance.
2.The authors incorporate a variety of experiments, including comparisons on the LongBench-E benchmark, performance testing across multiple tasks, and inference efficiency evaluations, offering a solid foundation for validating the effectiveness of DGA.
3.Ablation studies are conducted to assess the contribution of key components, such as the group coding strategy and dynamic token aggregation, further supporting the claims of the method's benefits.
My questions are:
How are the values of the thresholds (e.g., $\rho$) chosen? Are these values specific to certain datasets, and can they generalize across different datasets?
It would be beneficial to include an ablation study on the complementary tokens, specifically evaluating their impact on the model's performance, particularly in terms of efficiency and accuracy.
Supplementary Material: I reviewed the supplementary material, which includes more details on the implementation, experimental setups, and additional experimental results.
Relation To Broader Scientific Literature: The key contributions of this paper are strongly rooted in the existing literature, addressing significant gaps in long-context modeling for transformer-based large language models.
Essential References Not Discussed: The paper addresses key references relevant to its contributions. However, there is an existing method that uses dynamic grouping to accelerate attention[A]. It would be better to provide a more detailed discussion on how the proposed method differs from this.
[A] Dynamic Group Transformer: A General Vision Transformer Backbone with Dynamic Group Attention. (IJCAI 2022)
Other Strengths And Weaknesses: The proposed method relies heavily on the sparsity of the long-context tokens' importance. However, in tasks with relatively lower sparsity, such as summarization, the method's performance seems to be less effective. Are there any potential solutions or adjustments that could improve its performance in such scenarios?
Other Comments Or Suggestions: NA
Questions For Authors: 1. Can you provide a detailed computational complexity analysis of the proposed sparse attention mechanism, specifically for both the prefill and decoding stages?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for the encouraging comments and detailed suggestions. Responses are below:
>Q1. "...**solid evidence** from **theoretical analysis** and **extensive experiments**", "...**theoretical analysis clearly demonstrates** how DGA reduces computational redundancy...**well-justified and aligns perfectly with** the experimental results", "...comparison...**comprehensive evaluation**...", "...incorporate **a variety of experiments**...", "...addressing **significant gaps** in long-context modeling..."
**A1.** We are deeply appreciative of your thoughtful comments. The recognition that DGA’s redundancy reduction theory aligns with experimental results shows our approach's rigor. We appreciate your focus on comprehensive evaluation, including baseline comparisons and ablation studies, inspiring us to further develop solutions in this domain.
>Q2. From Fig. 1, grouping seems to reorder tokens. Could this introduce significant overhead for longer sequences?
**A2.** No. The reordering operation introduces **minimal overhead** even for long sequences. From Table I, the repositioning time ratio increases only marginally (12.5%→15.8%) as context length grows from 4K to 16K, demonstrating scalability. Thus, the benefits of dynamic grouping (e.g., 2.4–3.5× latency reduction) far outweigh this minor cost.
Table I: Percentage of total attention computation time for reordering operation across different sequence lengths (Table 3 settings).
|Context Length|4K|8K|16K|
|-|-|-|-|
|Ratio|12.5%|15.6%|15.8%|
>Q3. The paper needs clarity on method details, e.g., group window size consistency prefill-decoding, and token importance determination in decoding.
**A3.** In decoding, we use a slightly larger group size ($m' = 1.1m$, e.g., 16→18) with 10% slots for focal tokens. Focal tokens are selected via top-10% attention weights from the group’s last token when a group reaches $m'$, ensuring adaptive prioritization and aligning with training principles. We'll clarify in revisions.
>Q4. How are threshold values (e.g., ρ) chosen? Are they dataset-specific and generalizable?
**A4.** The threshold $ρ$ is chosen based on sequence length $L$ ($ρ \in (1/L,1]$), with smaller $ρ$ enforcing stronger sparsity. Table II shows $P_{sparse}$ has **nearly identical trends** across SlimPajama and WikiText2, confirming ρ generalizes across datasets. As $ρ=0.01$, $P_{sparse}$ increases sharply with $L$, proving sparsity strengthens with context length universally. So, **ρ is dataset-independent and adapts to $L$**.
Table II: Estimations of $P_{sparse}(L,ρ=0.01)$ across lengths (Fig. 2 settings).
|L|100|200|300|400|500|600|700|800|900|1000|
|-|-|-|-|-|-|-|-|-|-|-|
|WikiText2|0.14|0.68|0.86|0.93|0.94|0.99|1.00|1.00|0.99|1.00|
|SlimPajama|0.09|0.65|0.86|0.95|0.97|0.97|0.99|1.00|1.00|0.99|
>Q5. Ablation study on complementary tokens, assessing impact on model performance (efficiency & accuracy).
**A5.** Our ablation study (Table III) shows complementary tokens are critical: removing them significantly degrades performance on tasks like Multi-Doc QA (3.58→2.37) and Code (53.45→48.00). They slightly increase latency (24.9ms→28.8ms), but are still **2.4–3.5× faster** than vanilla self-attention (Table 2: 69.70–102.22ms).
Table III: Effect of complementary tokens (Table 1 settings).
|Methods|Single Doc. QA|Multi Doc. QA|Summar.|FS learning|Synthetic|Code|Avg.|ITL (ms)|
|-|-|-|-|-|-|-|-|-|
|w/o comple. tokens|6.43|2.37|8.47|53.69|3.04|48.00|20.33|24.9|
|DGA-LLM (Ours)|3.61|3.58|6.81|57.90|1.47|53.45|21.14|28.8|
>Q6. The proposed method relies on long-context token importance sparsity. In lower-sparsity tasks like summarization, its performance seems less effective. Any potential solutions for such scenarios?
**A6.** For lower-sparsity tasks (e.g., summarization), our method can adjust group size m for balance performance and efficiency. Reducing m from 16 to smaller values (e.g., m = 2) improves summarization performance (9.53→6.81) but increases ITL (53.0ms→28.8ms). We will explore dynamic m-adjustment in future work.
Table IV: Comparisons of inference performance for different group sizes (m=2→m=16) on LongBench-E.
|**Task**|**vanilla self-attention**|**2**|**4**|**8**|**16 (default)**|
|:-:|:-:|:-:|:-:|:-:|:-:|
|Single Doc. QA|6.43|5.68|4.93|3.04|3.61|
|Multi. Doc. QA|2.37|5.40|4.43|4.74|3.58|
|Summar.|13.65|9.53|8.36|7.30|6.81|
|ITL(ms)|69.7|53.0|38.3|33.5|28.8|
>Q7. Provide detailed computational complexity analysis of sparse attention mechanism for prefill and decoding stages.
**A7.** Thanks for the suggestion. Let $L$, $r$, and $m$ denote the context length, the number of focal tokens, and the group size.
* **Prefilling stage**: Complexity is $O(Lr+L\frac{L-r}{m}+Lm)$, simplifying to $O(\frac{L^2}{m})$ for constants $m$ and $r$, significantly lower than vanilla self-attention’s $O(L^2)$.
* **Decoding stage**:, Per-token complexity is $O(r+\frac{L-r}{m} + m)$, significantly lower than vanilla’s $O(L)$ for large $m$. | Summary: This paper addresses the computational inefficiency in Transformer-based models for long-context modeling caused by redundant attention computations. The authors reformulate probabilistic sequence modeling as a supervised learning task, providing a theoretical foundation for analyzing redundancy. Building on this, they propose a group coding strategy to aggregate less important tokens and introduce Dynamic Group Attention (DGA), which dynamically groups non-focal tokens while preserving critical interactions. Experimental results show that DGA reduces computational costs while maintaining competitive performance on long-context benchmarks.
Claims And Evidence: The claims in the paper are supported by clear evidence.
Methods And Evaluation Criteria: The methods presented in the paper are well-motivated, but I still have some question:
1. Some statement in Section 5 is confusing. The authors first mention that they divide the tokens into two parts. However, in Eqn. 12, I found the tokens are segmented into three parts.
2. Some notations are hard to understand, such as $min G_i$ in line 315. I guess it denote the minimum index in the group index $G_i$, right?
3. In Eqn 17, the authors use a small set of query to estimate the attention weights. However, what if we only have one query token in the decoding stage?
Theoretical Claims: I have thoroughly reviewed the theoretical claims presented in the manuscript, including the proposed theorems and their corresponding proofs. The authors offer clear theoretical insights. They rigorously establish the sparsity of attention weights (e.g., Theorem 1 on sparsity bounds), showing that only a small subset of tokens significantly contributes to predictions. Subsequently, they link the optimization in attention with group coding and show its improved robustness (Theorem 2) and optimization efficiency (Theorem 3). These thorough analyses deepen the understanding of the redundancy in the attention, providing a foundation for the proposed method to address the computational inefficiency in transformers.
Experimental Designs Or Analyses: The authors conducts comprehensive empirical results that validate the effectiveness of DGA. The experiments on LongBench-E and EM score show significant reductions in computational costs and inter-token latency while maintaining competitive performance on various long-context tasks. The significant latency reduction underscores its practical utility for real-world applications requiring efficient long-context processing. Besides, the authors compare the proposed method with a wide range of baseline methods, including MInference and StreamLLM. The comparative evaluation provides sufficient evidence to support the authors' claims. I still have some questions:
1. In Table 3, are all the methods tested with Flash Attention?
2. The method introduces complementary tokens to recover masked information during autoregressive generation. However, I would like to understand the impact of these complementary tokens on the model's inference performance,such as latency, and overall accuracy.
Supplementary Material: I have reviewed the supplementary material, including detailed proofs and implementation details.
Relation To Broader Scientific Literature: The proposed method seems to reduce the redundancy of the attention, different from the existing method like MInference and StreamLLM that directly discard some tokens.
Essential References Not Discussed: The authors clearly discussed key methodologies in the field of efficient attention.
Other Strengths And Weaknesses: 1. In Algorithm 1, can the proposed method conduct attention parallelly? I found the proposed method seems to calculate attention for different query Q_i separately in line 8 of the algorithm.
Other Comments Or Suggestions: 1. Some titles appear odd and awkward, such as “C.4. Implementation Details on Sparsity” and “C.5. Implementation Details on Optimization Efficiency.”
2. What is the meaning of the subfigures a b c and d in Figure 5?
Questions For Authors: n/a
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for the encouraging comments and suggestions. Responses are below:
>Q1. "...**clear theoretical insights**...**deepen the understanding** of the redundancy in the attention, providing a **foundation** for the proposed method...", "...**comprehensive empirical results**...The comparative evaluation provides **sufficient evidence** to support the authors' claims", "...**different from the existing method** like MInference and StreamLLM that directly discard some tokens".
**A1.** We sincerely appreciate your thoughtful and encouraging feedback on our work. Your acknowledgment of the theoretical insights into attention redundancy, along with the robust empirical validation, is greatly appreciated. We are also grateful for highlighting how our method innovatively diverges from approaches like MInference and StreamLLM by avoiding token discarding. These comments underscore the significance of our contributions, motivating us to further advance research in this domain.
>Q2. Some statements in Section 5 are confusing. The authors first mention that they divide the tokens into two parts. However, in Eqn. 12, I found the tokens are segmented into three parts.
**A2.** We appreciate the reviewer’s careful observation and clarify the token partitioning strategy:
* **Two-Part Token Partitioning**: Tokens are divided into **focal** and **non-focal** (redundant) groups based on their importance scores.
* **Complementary Tokens for Autoregressive Integrity**: To address potential information loss caused by grouping (e.g., some tokens cannot access the group information due to the autoregressive nature), we introduce **complementary KV pairs** (third part in Eq. 12) to restore missing dependencies.
>Q3. Some notations are hard to understand, such as $min G_i$ in line 315. I guess it denotes the minimum index in the group index $G_i$, right?
**A3.** Yes. $min G_i$ is used to identify the earliest token position in the group. We will clarify these notations in the revised manuscript to improve readability.
>Q4. In Eqn 17, the authors use a small set of queries to estimate the attention weights. However, what if we only have one query token in the decoding stage?
**A4.** In the decoding stage with only one query token, our method adapts as follows:
* **Initial Generation Phase (Token Count < m)**: When fewer than $m$ tokens (group size) are generated, we directly compute attention using standard full self-attention (without grouping) to ensure accurate context capture. This avoids instability in weight estimation with limited tokens.
* **Subsequent Generation (Token Count ≥ m)**: Once $m$ tokens are generated, we leverage the **query of the last token** in the current group to compute weights *P* (Eqn. 14). This query implicitly encodes dependencies on prior tokens through its positional encoding, enabling reliable importance estimation.
>Q5. In Table 3, are all the methods tested with Flash Attention?
**A5.** No. To ensure fair comparisons, none of the methods in Table 3 use Flash Attention because StreamingLLM's official implementation lacks support for it. However, our approach is compatible with Flash Attention optimizations (e.g., block matrix operations), which could further enhance performance in future implementations.
>Q6. The method introduces complementary tokens to recover masked information during autoregressive generation. However, I would like to understand the impact of these complementary tokens on the model's inference performance, such as latency and overall accuracy.
**A6.** Our ablation study (Table I below) shows complementary tokens are critical: removing them significantly degrades performance (21.14→20.33) on Longbench-E by **3.8%↓**. While they incur a slight latency increase (24.9ms→28.8ms), this remains **2.4–3.5× faster** than vanilla self-attention (Table 2: 69.70–102.22 ms).
Table I: Effect of complementary tokens, where we train LLaMA2-7B on 8K context-length texts over SlimPajama and test on Longbench-E.
|Methods|Single Doc. QA|Multi Doc. QA|Summar.|FS learning|Synthetic|Code|Avg.|ITL (ms)|
|-|-|-|-|-|-|-|-|-|
|w/o comple. tokens|6.43|2.37|8.47|53.69|3.04|48.00|20.33|24.9|
|DGA-LLM (Ours)|3.61|3.58|6.81|57.90|1.47|53.45|21.14|28.8|
>Q7. In Algorithm 1, can the proposed method conduct attention parallelly? I found that the proposed method seems to calculate attention for different query Q_i separately in line 8 of the algorithm.
**A7.** Yes. While Algorithm 1 describes sequentially for clarity (e.g., line 8), our implementation uses batched matrix operations and GPU parallelism to process all queries in parallel.
>Q8. Some titles appear odd and awkward, such as “C.4. Implementation Details on Sparsity” and “C.5. Implementation Details on Optimization Efficiency.” 10. What is the meaning of the subfigures a b c and d in Figure 5?
**A8.** We will revise the appendix titles for clarity and add clear captions for the subfigures in Figure 5. | Summary: In this paper, the author proposes a new method for long context LLMs. It uses dynamic grouping to divide tokens into several groups, the attention over coarse granularity of token groups achieves faster inference.
Claims And Evidence: 1. The LLM sparsity discovered in Sec 4 has actually already been revealed in several previous works (such as StreamingLLM).
2. The approach proposed in Sec 4 of performing grouping for tokens has already been proposed (see KVMerger "MODEL TELLS YOU WHERE TO MERGE: ADAPTIVE KV CACHE MERGING FOR LLMS ON LONG-CONTEXT TASKS"). Moreover, I believe the clustering-based approach used by KVMerger is simpler and more reasonable than the grouping method in this paper.
3. In Section 3, the author proposes using supervised learning to achieve long context feature extraction. However, it seems the author does not mention the supervised learning method in their method section (Sec 5).
Methods And Evaluation Criteria: 1. The analysis in Figure 2, as well as the definition of ρ-sparse, is not as reasonable and clear as the analysis of attention sink in StreamingLLM (see Figure 2 of StreamingLLM).
2. Table 1 in the paper has issues: (1) The context window length setting is not specified by the author. (2) Since StreamingLLM performs token pruning, its inference efficiency should be much higher than the original model (LLaMA2-7B). However, in this table, StreamingLLM's speed (ITL) has decreased.
3. Similar issues appear in Table 3. StreamingLLM's inference efficiency should be almost unaffected by text length because it directly truncates text using a sliding window.
Theoretical Claims: No
Experimental Designs Or Analyses: Please see Methods And Evaluation Criteria
Supplementary Material: Not provided.
Relation To Broader Scientific Literature: See Claims And Evidence.
Essential References Not Discussed: Many related work are about grouping/clustering tokens in long-context LLM, such as KVMerger, PQCache: Product Quantization-based KVCache for Long Context LLM Inference, ClusterKV: Manipulating LLM KV Cache in Semantic Space for Recallable Compression
Other Strengths And Weaknesses: The author has a very strange citation: they even included a citation for supervised learning (Hastie et al., 2009). This citation is clearly inappropriate.
Other Comments Or Suggestions: No
Questions For Authors: See Claims And Evidence and Methods And Evaluation Criteria
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed comments. Responses are below:
>Q1. The LLM sparsity discovered in Sec 4 has already been revealed in several previous works (e.g., StreamingLLM).
**A1.** We thank the reviewer for noting prior sparsity on self-attention weight like in StreamingLLM [r1]. Unlike empirical studies, we theoretically show why attention sparsity should happen and would be strengthened with context length (Theorem 1). Based on this, we developed Dynamic Group Attention (DGA), which adaptively aggregates tokens while preserving critical interactions. Theorems 2–3 further establish group coding's advantages in noise robustness and optimization efficiency. We highlight **two additional distinctions**:
* **Static Sparsity vs. Dynamic Group Mechanism**. Methods like StreamingLLM often use static sparsity that prioritizes attention on initial/ final tokens, which may discard critical tokens; DGA dynamically groups tokens based on context-dependent importance, ensuring adaptation in dynamic scenarios.
* **Perfomance Comparisons**. DGA outperforms StreamingLLM by 11.69% EM score (Table 2) with 1.28× speedup (Table I), highlighting both accuracy and efficiency gains.
[r1] Efficient streaming language models with attention sinks. ICLR2024.
>Q2. The token grouping in Sec 4 has been proposed by KVMerger [r2]. Its clustering-based method seems simpler and more reasonable.
**A2.** Our method differs from KVMerger [r2] in **objective**, **methodology** and **experiments**:
* **Distinct Objectives**: KVMerger focuses on KV cache compression via Gaussian-kernel-based Key clustering but ignores attention computation redundancy; DGA targets self-attention acceleration in long-context tasks by dynamic grouping, thus **reducing both computation and memory consumption**.
* **Methodological Differences**: KVMerger uses static Key clustering; DGA groups tokens by context-aware importance, adapting to input variations.
* **Empirical Validation**: KVMerger lacks self-attention acceleration results; Our Table 3 shows **2.4× speedup** at 16K length with minimal accuracy loss.
We will include this in our revised paper.
[r2] Model Tells You Where to Merge: Adaptive KV Cache Merging for LLMs on Long-Context Tasks. Arxiv 2024.
>Q3. It seems the author does not mention the supervised learning method in their method section (Sec 5).
**A3.** Sec. 3 reformulates long-context modeling as a supervised learning task (Eq. 2), separating **relevant** (critical for predictions) and **irrelevant** (redundant for context) tokens, motivating **theoretical analysis of sparsity** (Theorem 1–3 in Sec. 4). This **inspires the design of DGA** in Sec. 5, where token relevance (Eq. 4) derived from supervised learning guides dynamic grouping in DGA ($s_i$ in Eq. 16), aggregating redundant tokens while preserving critical interactions. These three sections form a pipeline: **supervised learning identifies redundancy**→**theoretical analysis quantifies sparsity**→**DGA operationalizes efficient computation**. We will clarify this in our paper.
>Q4. The Fig. 2 analysis and ρ-sparse definition seem less reasonable and clear than StreamingLLM's attention sink analysis (see Fig. 2 of StreamingLLM).
**A4.** The ρ-sparse, i.e., $P_{sparse}(L,ρ)$ quantifies the probability of at least one attention weight exceeding $1/(Lρ)$, rigorously measuring how sparsity strengthens with context length $L$. It evaluates a model’s **inherent ability to prioritize critical tokens** in varying contexts, e.g., $P_{sparse}$ rises sharply for $ρ=0.01$ as $L$ grows (Fig. 2b).
Crucially, we note the **different functionality**: StreamingLLM analyzes sparsity for **individual sequences** (e.g., attention maps), while our ρ-sparsity aggregates across **multiple sequences** (Sec. 6.2), offering a **model-level sparsity characterization**.
>Q5. Issues in Tables 1 & 3: (1) Unclear context window lengths. (2) StreamingLLM is slower than vanilla despite pruning.
**A5.** We address concerns:
* **Context Window**: Models were trained on **8K context** and evaluated on **32K context** (covering 95% of LongBench-E sequences). ITL was at **16K context**.
* **StreamingLLM Efficiency**:
* All methods in Tables 1 and 3 used MInference for consistent evaluation (with full KV caching overhead).
* With StreamingLLM’s official code (Table I), ITL stabilizes (36ms) but is higher than DGA-LLM (28ms) as it retains a fixed 4K-token cache, while **DGA dynamically reduces cache size** (e.g., 2576 tokens for 16K).
Table I: ITL (ms) comparisons with official StreamingLLM.
|Methods|4K|8K|16K|
|-|-|-|-|
|StreamingLLM|36.16|36.97|36.87|
|**DGA (Ours)**|**26.26**|**26.87**|**28.79**|
>Q6. Many related works are missing and the citation (Hastie et al., 2009) is clearly inappropriate.
**A6.** We will carefully discuss these works according to the reviewer's suggestions. | null | null | null | null | null | null |
Rethinking Aleatoric and Epistemic Uncertainty | Accept (poster) | Summary: This paper revisits the concepts of aleatoric and epistemic uncertainty in machine learning. It identifies inconsistencies in how these uncertainties are commonly discussed. The authors argue that traditional definitions are overly simplistic, causing confusion. They propose a decision-theoretic approach to clarify uncertainty, defining it through the expected loss of optimal predictions. This framework allows a clearer distinction between uncertainty that can and cannot be reduced by additional data. Additionally, the authors critique commonly used information-theoretic measures like entropy and BALD scores. They demonstrate these metrics are useful practically, but often inaccurately estimate uncertainty. The paper emphasizes that subjective uncertainty estimates from models should not replace objective external evaluations. Overall, the authors aim to provide a unified, clear foundation for future research on uncertainty estimation.
Claims And Evidence: Examples are supported by simple proofs. Experimental evidence is only anecdotal and limited to BALD.
Methods And Evaluation Criteria: The paper's conceptual nature implies no comprehensive benchmark study is required. Proofs are clear and easy to follow. Experimental illustration could be enhanced.
Theoretical Claims: Yes, the proofs are correct as far as I could check. This comes at little suprise, as the claims are mostly based on previous work and well-known results.
Experimental Designs Or Analyses: It appears the author did not share code to reproduce experimental evidence, violating open science standards.
Supplementary Material: I checked the experimental details (A). There is no other supplementary material.
Relation To Broader Scientific Literature: I welcome a) the decision-theoretic perspective on predictive UQ/model uncertainty and b) the shift away from (mostly purely probabilistic) UQ towards a more comprehensive view on modeling and reasoning in machine learning. Of course, a) allows for b), but I consider both parts of independent interest and valuable contributions on its own. Generally, I like the writing. The presentation is clear and the setup is well-motivated.
Having said this, I identify two main concerns with the current state of the paper. While the first one is a conceptional caveat that might not be resolved so easily, the second refers to the presentation and should be fairly easy to address by minor revisions.
1. As strong as I support the authors in arguing against the simplistic aleatoric-epistemic dichotomy, I am afraid the authors introduce another such (and equally questionable) dichotomy by fundamentally differentiating between objective model evaluation and subjective uncertainty quantification. Let me explain in detail. The authors start by emphasizing the basic rationality axioms by VNM and Ramsey but to my great regret, they fail to expand on them nor explicitly use them. The decision-theoretic embedding would allow for a rigoros study of how the choice of axioms affects the utility (i.e., loss function) in machine learning. Instead, the authors naively equate the predictive task with a loss function. I appreciate how the authors discuss the subjectivity of UQ and expose it to be based on the internal belief state of a model which does not necessarily need to correspond to what the authors call p_{eval}, the law of interest for evaluation. However, they miss the opportunity of doing the same for the choice of the loss function. The authors write, "Bayes optimality simply means we are taking an action that reflects our belief state; it says nothing at all about how well that belief state matches reality." I could not agree more, but the same applies to the loss. For instance, consider the desiderata of a loss function to reflect multiple users' preferences (total orders). It has been long recognised in the social choice literature (e.g. Arrow's famous impossibility result) that no total order exists to aggregate such preferences under reasonable assumptions on the aggregation. See https://apps.dtic.mil/sti/tr/pdf/AD0708563.pdf for more background. In other words, by naively conditioning their decision-theoretic analysis on a real-valued loss (i.e., stipulating a total order), the authors exclude all subjectivity in how such a loss function arises. I consider this to be somewhat inconsistent with the differentiation between p_eval and p_train. Why should a machine learning model be evaluated with respect to different probability distributions but always with respect to the same loss? It was already recognised by Abraham Wald in his seminal 1949 paper "Statistical Decision Functions" (Ann. Math Stat.), that in the case where P_eval is not known and the decision maker has only ordinal preferences, rationality axioms imply a classical maximin instead of the Bayes criterion. Notably, these order theoretic deliberations on the loss aren't purely theoretical. They have been applied in the ML community recently. For instance, https://proceedings.mlr.press/v216/jansen23a/jansen23a.pdf and https://papers.nips.cc/paper_files/paper/2024/hash/b1f140eeee243db24e9e006481b91cf1-Abstract-Conference.html derived optimal procedures for multivariate random variables with locally varying scale of measurement (e.g. cardinal (e.g. autmoatic evaluation of ML model) and ordinal (e.g. human ranking of ML model's output like in RLHF) criteria). In summary, just like "probability does not exist" (de Finetti), i.e., there is no objective probability, there is no objective loss tied to a predictive task. I thus encourage the authors to reconsider the "objective" terminology and reframe their concept as e.g. addressing probabilistic vs. non-probabilistic uncertainty and not subjective vs. objective reasoning. This is further supported by the fact that there workarounds avoiding the sensitivity of Bayes-actions towards the choice of prior. Besides classical objective Bayesian approach (see the cited Berger book), these workarounds comprise "prior near ignorance" credal sets of priors (i.e. convex sets of probability measures that represent partial ignorance), see e.g. https://alessiobenavoli.com/research/prior-near-ignorance/
2. The paper could benefit from some more constructive and practical recommendations for ML practitioners. This is not my area of expertise, but I feel the paper's accessibility to the (applied) ML community could be improved by e.g. a more tangible case study. Why not expand the experiments on BALD to a more comprehensive setup involving both data acquisition and prediction?
Essential References Not Discussed: I do not think the following references are strictly essential, but certainly tangential and thus worth a read.
@ "Training data need not be direct examples of the predictive task": This is reminiscent of the notion of "Institutional Separation" introduced in this 2024 ICML paper: https://arxiv.org/abs/2404.04669
@ "Reasoning about new data can (but will not always) yield a unique uncertainty decomposition": The Bayesian selection of new data was discussed here (https://www.jmlr.org/papers/v24/21-1067.html) and here (https://arxiv.org/abs/2406.12560). On a slightly different note, the paper (as far as I could check) focuses on acquiring new data exclusively in the context of active learning. It has recently been noted that the decision-theoretic embedding of data acquisition extends beyond active learning and comprises e.g. self-training in SSL, boosting, and bandits: https://proceedings.neurips.cc/paper_files/paper/2024/hash/0337b41b4e8b2eb5d7ab161ffd42cf3b-Abstract-Conference.html (For an example of Bayes-optimal acquisition in SSL, see https://proceedings.mlr.press/v216/rodemann23a/rodemann23a.pdf)
Other Strengths And Weaknesses: The paper occasionally uses different terms interchangeably without explicitly stating equivalences clearly. For example, "statistical dispersion in data" and "noise" are used interchangeably. It would be clearer to pick one term and consistently use it after an initial definition. I also encourage the authors to clearly note some examples (e.g., variance and entropy derivations) as standard textbook derivations to avoid confusion about novelty.
Other Comments Or Suggestions: Minor remarks/typos:
- "foundations ideas" → "foundational ideas"
- "this spurious associations" → "these spurious associations"
- "implicitly using favouring" → either "implicitly using" or "favouring."
- "defined to conditional" → "defined conditional" or "defined to be conditional." ?
- "lead to different reduction" → "lead to different reductions."
- "Variously referred in" → "variously referred to in." ?
- "thus that objective reasoning" → better phrased as "thus objective reasoning."
- "show that recovers" → missing "this," should be "show that this recovers."
- "down to extremely pure, unambiguous case" → should be plural, "cases."
- "over horizon of m observations" → "over a horizon..."
- "Justifying the need for external grounding." → This sentence fragment should be combined with the preceding sentence.
- "We average over 50 repeats with different random seeds." → "We average over 50 random seeds."
Questions For Authors: See above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your review.
We are pleased to see positive feedback on multiple aspects of the paper:
1. Strong motivation
2. Interesting use of decision theory
3. Comprehensive view on reasoning and learning
4. Clear writing, proofs and overall presentation
We also appreciate your thoughtful, in-depth conceptual comments along with useful pointers for lower-level improvements.
### **Loss functions**
> I am afraid the authors introduce another such (and equally questionable) dichotomy by fundamentally differentiating between objective model evaluation and subjective uncertainty quantification.
>
You raise an astute point that is more aligned with our perspective than you might think.
While we agree that a loss function is typically an imprecise abstraction of our internal desires for how a system should behave (and thus there is a subjective element in its construction), we argue that this falls under our problem definition because it relates to our definition of optimality in an ideal world, and not how that optimality is achieved. Core to our arguments is the idea that all uncertainty characterisation should start from the problem definition; our technical contributions are then based on how this can be done in a rigorous way. This results in a goal-driven notion of uncertainty that is rigorous given a problem definition (note that this setup is very standard across machine learning and not just the uncertainty-quantification literature).
Another key point is that even if we are unsure about how best to set up our loss function, this is not an uncertainty that arises from a *lack of information*. There is therefore no notion of changes in uncertainty from having access to more data or knowing the underlying data-generating process, which is ultimately what we wish to characterise in our decompositions. The choice of loss function is thus objective from a statistical-terminology perspective, even if it is a subjective decision (in the lay sense) in practice. We believe this use of terminology reflects a long history within decision theory, perhaps most prominently in the work of Savage (1971), who linked subjective beliefs to externally grounded evaluations through scoring rules.
Given the above, we do not feel it is problematic to condition our analysis on a real-valued loss, but we do agree that the importance and “subjectivity” of choosing the loss function warrants more discussion in the paper, and we will happily add this in our update.
### **Practical implications**
> The paper could benefit from some more constructive and practical recommendations for ML practitioners.
>
Great point. We have laid out some points in “Practical implications” in our response to Reviewer MLbH.
### **Case study**
> the paper's accessibility to the (applied) ML community could be improved by e.g. a more tangible case study
>
This is an excellent idea.
In our code repo (link below) we have added a new practical demonstration of some of the key ideas from the paper. We look at Gaussian-process regression and show how the loss function affects three things:
1. The model’s subjective uncertainty, $h_\ell[p_n(z)]$
2. The model’s discrepancy, $d(p_n, p_\mathrm{eval})$, with respect to a reference system
3. Which data gets prioritised during data acquisition
We also highlight the crucial distinction between model-based uncertainty quantification and externally grounded evaluation using a scoring rule or discrepancy function.
We do this by comparing a standard quadratic loss against a weighted quadratic loss that encodes a preference for accuracy on larger values of $z$ ($z$ might be a variable we need to be large, such as the solubility of a candidate drug molecule).
We plan to add some of these plots to the paper, along discussion relating it to practical applications.
### **Code**
> It appears the author did not share code
>
Thank you for flagging this oversight. We have made our code available at https://anonymous.4open.science/r/rethinking-aleatoric-epistemic-00DF.
### **Experimental evidence**
> Experimental evidence is only anecdotal and limited to BALD.
>
We recognise that our experiment is simple, but we contest the characterisation of our evidence as anecdotal. As explained in the paper, our experiment is carefully chosen to support our empirical claim, complementing results from past work.
### **Existing results**
> I also encourage the authors to clearly note some examples… as standard textbook derivations
>
Good point. We will flag existing results clearly.
### **Terminology**
> The paper occasionally uses different terms interchangeably without explicitly stating equivalences clearly.
>
This is useful feedback. We will standardise the terminology.
### **Related work**
Thanks for these pointers. We will happily add citations.
### **Typos**
We appreciate you pointing these out. We will fix them.
---
Rebuttal Comment 1.1:
Comment: Thanks for your detailed reply. I greatly appreciate it! I feel most of my points are sufficiently addressed - however, my main point is not among them. I am *not* convinced by the way you motivate your fundamental distinction between objective model evaluation and subjective uncertainty quantification. Reviewer MLbH appears to have a hard time accepting this strict dichotomy, too. After considering your reply, I am still under the impression this distinction is artificial and not rigorously grounded in decision theory.
> Because it relates to our definition of optimality in an ideal world, and not how that optimality is achieved
If that is your main motivation for distinguishing between objective model evaluation and subjective uncertainty quantification, it is a pretty vague one. Can the authors explain why the choice of the loss function is not concerned with "how that optimality is achieved"? A risk functional (mapping from the functional space of loss functions) clearly determines how optimality is achieved, yes, but so does the loss function (and crucially, its underlying domain).
> We believe this use of terminology reflects a long history within decision theory
This is clearly wrong. It has been remarked by none other than the founding father of statistical decision theory himself (Abraham Wald) that for non-cardinal preferences, basic rationality axioms imply maximin strategies instead of Bayes criteria, on which the authors appear to base their whole argument. The classic Savage paper deals with elicitating personal probability distributions generally. This does not mean elicitation of order preferences on the loss domain is disregarded in decision theory. Quite the contrary, I would argue that a substantial part of decision theorists even work on order theory directly, or at least consider the basis of decision theory.
I would like to emphasize that my concerns regarding the order structure implied by the loss function's domain are not purely theoretical. There are strong impossibility results if the loss shall simply represent more than one subject's preferences. Any collective agreement on the loss' underlying order is affected, rendering these limitations relevant to e.g. democratizing AI, see https://arxiv.org/pdf/2206.02786 for instance.
EDIT: All in all, I strongly encourage the authors to discuss the loss function's subjective elements (even if tied to an objective problem). After reading the paper again, I do not think my concerns are unresolvable. But the author's simplified abstraction (just like any) has limitations that should be discussed in the revised version of the paper.
---
Reply to Comment 1.1.1:
Comment: > All in all, I strongly encourage the authors to discuss the loss function's subjective elements (even if tied to an objective problem). After reading the paper again, I do not think my concerns are unresolvable. But the author's simplified abstraction (just like any) has limitations that should be discussed in the revised version of the paper.
>
Thank you for your continued engagement. Your input is really helping shape the updates we will make to the paper. It is important to us that we get this right.
We believe we are aligned with you on three central points:
1. **There are cases (eg, group decisions) where preferences do not imply a loss function.** Our assumption of a loss function means we cannot cover these cases.
2. **Even if there is a well-defined notion of loss, it will vary from one decision-maker to another.** Nothing in our analysis is objective in the sense of applying to all decision-makers.
3. **There are multiple approaches to decision-making.** We use expected-loss minimisation to produce a coherent synthesis of ideas that recovers widely used quantities. This does not mean using expected loss is the only possible approach; alternatives include minimax.
Point 1 is a limitation we are comfortable with but do not by any means want to ignore. We will clarify the scope of our analysis as it stands and highlight the potential for future work exploring the difficulties you rightly point out.
Point 2 is something we are keen to communicate more clearly. We understand your issue with “subjective reasoning vs objective reasoning”, and we are planning to change our terminology. One option is “internal reasoning vs external reasoning”; another is “subjective Bayesian reasoning vs frequentist evaluation”. Let us know if you have thoughts on this.
Point 3 is also something we are happy to provide more context on. This relates to a remaining issue that you raised:
> Can the authors explain why the choice of the loss function is not concerned with "how that optimality is achieved"?
>
Our point here was to emphasise that even if we assume we have a loss function (ie, “our definition of optimality”), we still need to choose a higher-level procedure for making decisions (ie, “how that optimality is achieved”), such as expected-loss minimisation or minimax.
While we will be unable (due to ICML restrictions) to respond to any further comments you have here, we will take seriously any remaining concerns when we update the paper. | Summary: This paper argues that the current view on the decomposition of uncertainty into (reducible) epistemic and (non-reducible) aleatoric uncertainty is not only insufficient but also inappropriate from the theoretical viewpoint. They argue that the whole notion of predictive uncertainty should be grounded in a loss function over actions, entering into an argument of rational behavior.
Claims And Evidence: I would argue that this is one of the few papers I have seen that does not make a technical but rather a didactical claim: That they provide clarity on three fronts: (i) that a loss function drives the principled treatment of uncertainty, (ii) that decomposition of uncertainty is usually not possible, and (iii) that model-based uncertainty can be interpreted as estimating the model predictive performance on unseen data, which is however not a theoretical substitute for external grounding. Even though I find the whole work compelling and quite well formalized, I am inclined to say that the paper somewhat falls short in its goal to bring clarity. The paper is excellently written, but it is extremely dense and technical with a lot of notation, which is understandable and necessary, but it makes the paper quite difficult to read, and in many parts, it is necessary to read several times and even then it does not become entirely clear (see also my questions below). I think that this paper would need at least one additional page but probably more to give justice to its own ambition of really bringing more clarity. This way, the paper above all raises awareness but, at least in my case, not really much clarity.
Methods And Evaluation Criteria: n/a (interestingly)
Theoretical Claims: There are some few propositions, most of which I checked the proofs and found them reasonable. These propositions are cool and should be there. They are not a game changer for the paper in either direction though, and I wouldn't even call them "claims".
Experimental Designs Or Analyses: There is a minimalistic experimental design around BALD for illustrative purposes, which is nice and would also be necessary in other parts of the paper.
Supplementary Material: n/a
Relation To Broader Scientific Literature: Excellent
Essential References Not Discussed: None I am aware of
Other Strengths And Weaknesses: None
Other Comments Or Suggestions: This paper is really different from standard papers one reviews at ICML since there is no technical method, no result tables, bold figures etc. Personally I find this very refreshing, but it is problematic even only fairly review this paper into the new ICML review scheme, which is entirely oriented towards papers with tables with bold numbers. I appreciate the attempt to try something different. Frankly I also have no suggestion of what to take out from the paper since everything appears relevant; there seems just to be too little space available to properly convey the message.
Questions For Authors: - line 149 left: Why would the Shannon entropy of the training data be n log 2?
- I am not sure what to think of this type of issue that "... is only an estimator of [the true uncertainty quantity]". I mean the authors surely will agree that we will always only be able to estimate the true uncertainty, whichever it is, at least in what *you* here define as aleatoric and epistemic uncertainty. So in the light of an impossibly perfect estimate, what would be even the desireable objective in your viewpoint?
- I am also not sure that I follow the argument of unique decomposability of the expected uncertainty reduction only if the additional training samples are determined without any noise (bottom right of page 5). I don't see what is wrong about a decomposition that still involves the expected value of those additional data points. At least this part doesn't become clear to me.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your review.
We are happy to see you highlight a number of positive aspects of the paper:
1. Clear writing
2. Reasonable theory
3. Useful experimental results
4. Clear contextualisation within the literature
5. Refreshing style of contribution
We also recognise there are some things you think should be improved.
### **Clarity and space**
> Even though I find the whole work compelling and quite well formalized, I am inclined to say that the paper somewhat falls short in its goal to bring clarity. The paper is excellently written, but it is extremely dense and technical with a lot of notation, which is understandable and necessary, but it makes the paper quite difficult to read
>
We agree that the paper is necessarily idea-dense in order to communicate the full picture with appropriate nuance. Happily we think the paper’s clarity could quite easily be improved using the extra page allowed in the camera-ready paper, some careful rearrangement of the content, and extra efforts to highlight the key takeaways.
A key improvement we can make is to provide visual examples of the points we are making (see “Case study” in our response to Reviewer Atnu). Another thing we will try is presenting Examples A1-A3 and B1-B3 together: then the logic of the main technical content will be uninterrupted and the progression across the examples will be clearer. Finally we will make more use of the appendix to provide more verbose explanations of the trickier concepts covered in the paper.
### **Paper style**
> This paper is really different from standard papers… Personally I find this very refreshing, but it is problematic even only fairly review this paper into the new ICML review scheme…
>
It is great to hear that you find our contribution refreshing. We believe it could provide real value to the community, especially after we update for improved clarity based on your feedback. As noted above, we do feel there should be sufficient space to convey our message more clearly.
### **Imperfect estimators**
> I am not sure what to think of this type of issue that "... is only an estimator of [the true uncertainty quantity]".
>
You are right: estimation is what we have to do in practice. Two points are worth highlighting.
First, past work has often discussed common estimators as if they are the quantities they are in fact only approximating. We think emphasising the potential inaccuracy of the estimators has value in resolving misconceptions and supporting more clear-eyed use of common quantities.
Second, by establishing that these are only estimators, our work reveals the scope for alternative estimators that might be superior. For example, it might be clear that the infinite-step irreducible predictive uncertainty should be effectively zero, which a practitioner can use directly; Figure 2 in https://arxiv.org/abs/2404.17249 shows this can be better than using standard estimators. From a more theoretical perspective, an important implication of Propositions 1-4 is that the standard uncertainty estimators we consider are only optimal if we use a quadratic estimation loss, with other losses yielding different optimal estimators. Our exposition provides a template for deriving better estimators.
### **Unique uncertainty decomposition**
> I don't see what is wrong about a decomposition that still involves the expected value of those additional data points.
>
The key point here is that stochasticity in the data makes the decomposition depend on the data-generating process: any notion of irreducible and reducible uncertainty becomes conditional on it. This means we cannot talk about *the* decomposition into irreducible and reducible uncertainty: there are effectively endless possible decompositions because the data-generating process itself depends on design decisions we make (eg, how inputs are sampled or selected). Even if we fix a design policy (https://arxiv.org/abs/1604.08320), the corresponding true data-generating process is still unknown and so we are approximating the true expected decomposition with our model of the data-generating process. We will make updates to clarify this in the paper.
Another thing we would be happy to expand on in the paper (if it is of interest) is the existence of families of distinct data-generating processes that produce the same uncertainty decomposition given a long enough rollout of data generation. For example, it is possible for different experimental-design policies to be equivalent in the extent to which they will reduce uncertainty over a given rollout length.
### **Training-data entropy**
> line 149 left: Why would the Shannon entropy of the training data be n log 2?
>
We have $2^n$ possible outcome sequences, $y_{1:n}$, each with probability $1/2^n$. The entropy is
$$
\mathrm{H}[p_\mathrm{train}(y_{1:n})] = -\sum_{i=1}^{2^n} p_\mathrm{train}(y_{1:n}^{(i)}) \log p_\mathrm{train}(y_{1:n}^{(i)}) = -\sum_{i=1}^{2^n} 2^{-n} \log 2^{-n} = n \log 2.
$$ | Summary: The paper examines the concepts of aleatoric and epistemic uncertainty. It highlights inconsistencies in existing discussions of these concepts, attributing them to the limited expressiveness of the aleatoric-epistemic framework in capturing the diverse uncertainty quantities. To address this, the authors propose a decision-theoretic perspective on prediction, deriving formal definitions of model-based uncertainty and statistical dispersion in data. This new framework aims to provide a clearer foundation for future discourse in the field. Additionally, the paper investigates popular information-theoretic quantities, revealing their limitations as estimators of intended metrics while demonstrating their potential utility in guiding data acquisition processes.
Claims And Evidence: Yes. The authors provide proofs for the examples and propositions. Nevertheless, as I noted in the Weaknesses, I find certain arguments and motivations to be insufficiently persuasive.
Methods And Evaluation Criteria: No.
Theoretical Claims: Yes. The article incorporates a substantial number of mathematical expressions; however, it does not involve highly complex derivation or proof processes.
Experimental Designs Or Analyses: Yes. I think the experiments are not sufficient. See Weaknesses for details.
Supplementary Material: The appendix is very short, and no supplementary material is provided.
Relation To Broader Scientific Literature: This study examines the limitations of the traditional aleatoric-epistemic framework and introduces a distinctive decision-theoretic perspective. It further highlights that widely used information-theoretic quantities may serve as inadequate estimators of the constructs they are commonly assumed to measure.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths:
1. The existing aleatoric-epistemic framework indeed suffers from conceptual ambiguity, coupled with evident terminological misuse in the design of current methodologies. This constitutes a compelling and worthwhile topic for discussion within the field of uncertainty estimation.
2. I agree with the opinions in Section 5.4. I think the analysis really provides some interesting insights.
Weakness:
1. Some of the article’s arguments are somewhat vague and difficult to comprehend, or not convincing enough. For instance:
Line 126: "This assumption is equivalent to stipulating basic axioms of rationality (Ramsey, 1926; von Neumann & Morgenstern, 1947)."
The connection between the cited references and the preceding context remains unclear. Furthermore, the specific mechanism by which this equivalence is ensured is not elucidated.
Line 212: "Given this, we argue that a principled notion of predictive uncertainty cannot be detached from this loss".
I find this perspective both difficult to comprehend and lacking in credibility. Even after thoroughly reviewing the entire article, I remain unable to fully grasp the rationale and necessity for adopting this viewpoint as the foundation. The authors may need to provide a more comprehensive explanation to justify this position and clarify its significance in the context of the study.
2. I find the presentation in Section 3: Key Concepts confusing. Is the author intending for the concepts in this section to be treated as assumptions that must be satisfied, or as prerequisite knowledge that readers are expected to possess?
3. The practical applicability of this work remains ambiguous. What tangible benefits might arise from replacing the traditional aleatoric-epistemic view with the proposed decision-theoretic perspective at the application level? Could it facilitate more accurate analysis of uncertainty sources? Analyzing the sources of uncertainty is a primary developmental goal of the aleatoric-epistemic framework, and the application value of this direction should be explored in that context. However, the experimental section of the paper is severely lacking, with insufficient empirical validation to support the claims.
4. As I read through Section 5.3 and realized that the authors’ primary contributions concluded at that point, I experienced a slight sense of surprise and disappointment. I had anticipated the emergence of a concrete uncertainty decoupling algorithm guided by the authors’ novel perspective. It is possible that this reflects a limitation in my own understanding, but I currently hold the view that, while this work offers ample and innovative theoretical analysis, it remains incomplete in its present form. I recognize one of the authors’ assertions: that a unique decomposition of uncertainty into reducible and irreducible components is often unattainable. However, I contend that this should not serve as a justification for forgoing the exploration of specific algorithms. Under the traditional aleatoric-epistemic framework, researchers have pursued a variety of uncertainty decoupling algorithms, even if these are not always precise. If the authors aim to substantiate the superiority of their decision-theoretic perspective over conventional approaches, the development of concrete algorithms accompanied by comprehensive experimentation is indispensable.
5. Inappropriate or Inaccurate Literature Citations: (1) Page 1, Line 14: The citation (Kendall & Gal, 2017) merely introduces a variance-based measure for epistemic uncertainty within the related work section. Utilizing it as an example in this context appears somewhat inappropriate. (2) Page 1, Line 18: The reference (van Amersfoort et al., 2020) does not employ distance-based measures to quantify epistemic uncertainty. The study explicitly states that it does not engage in the decoupled analysis of aleatoric and epistemic uncertainty, rendering this citation potentially misleading.
6. Inappropriate Submission Title: The author should focus more precisely on the unique contributions of the paper when selecting the title, rather than adopting a generic phrase such as "Rethinking Aleatoric and Epistemic Uncertainty." This title lacks distinctiveness, as the reconsideration of aleatoric and epistemic uncertainty has been an ongoing topic of discussion for an extended period.
Other Comments Or Suggestions: N/A
Questions For Authors: See Weaknesses
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for your review.
Your critical feedback is much appreciated. We hope our responses and new demonstrative plots help alleviate your concerns.
### **Loss functions and uncertainty**
> "Given this, we argue that a principled notion of predictive uncertainty cannot be detached from this loss"… I remain unable to fully grasp the rationale and necessity for adopting this viewpoint as the foundation.
>
This is useful feedback—thanks. We will clarify our case.
Our core argument is that notions of uncertainty should be derived from the task we ultimately wish to perform, rather than being chosen in abstract. The loss is the measure of success on our final task, and our analysis shows that we can then directly derive rigorous notions of uncertainty from this loss using a framework of rational actions, namely minimising the expected loss.
The necessity of adopting this viewpoint as a foundation then manifests in a variety of ways, such as ensuring uncertainties are consistent with rational actions, having an end-goal-driven approach so that uncertainties reflect what we actually care about, and allowing data acquisition to be performed in a meaningful way that targets the problem of interest.
### **Practical implications**
> The practical applicability of this work remains ambiguous. What tangible benefits might arise…?
>
Thanks for highlighting this. We will happily add more discussion on the practical benefits. We also plan to add a case study that helps show the implications of our work in a practical setting (see "Case study" in our response to Reviewer Atnu).
One takeaway is to stop using off-the-shelf uncertainty measures and instead derive the one that will be most useful in a given decision problem (eg, data acquisition). We show how to do this.
Another is that we cannot think simply in terms of "Analyzing the sources of uncertainty" in the way the current literature does, with such decompositions themselves being highly subjective in practice. It is hard to characterise the application value of the work in the context of the aleatoric-epistemic viewpoint, as we are ultimately arguing that this is fundamentally flawed.
Other takeaways relate to estimation of uncertainty in practice. Part of this is promoting caution in using common estimators; part is encouraging alternatives. For example, standard estimators can be so inaccurate that we are better off directly using a numerical estimate based on our prior knowledge (see "Imperfect estimators" in our response to Reviewer 43ee).
### **Concrete algorithms and empirical evidence**
> If the authors aim to substantiate the superiority of their decision-theoretic perspective over conventional approaches, the development of concrete algorithms accompanied by comprehensive experimentation is indispensable
> the experimental section of the paper is severely lacking, with insufficient empirical validation to support the claims
We feel that this misses the primary contribution of our work, which is not algorithmic but instead in showing the inconsistencies and flaws in the current foundations on which uncertainty-quantification algorithms are usually based, as well as providing a more rigorous and principled foundation. As noted by other reviewers, this makes it an unusual paper, but we do not believe every paper should be about proposing a new algorithm.
By extension, it is not clear what empirical experimentation it would actually make sense to add to the paper, beyond confirming the one empirical claim we make about how the BALD estimator should be understood. We do not agree that there claims being made that are lacking empirical validation, but if there are any experiments you think are missing we will do our best to add them.
To try and provide more clarity on how the work can be used for concrete algorithms, we have also added a new case study in our code base (see "Case study" in our response to Reviewer Atnu). This shows how using a non-standard loss can significantly impact how the uncertainty should be measured.
### **Title**
> This title lacks distinctiveness
>
We believe the broad span and foundational nature of our work actually requires a title like the one we use. Our paper brings together ideas from much of the ongoing discussions of aleatoric and epistemic uncertainty in the literature and aims to change the way people think about these concepts themselves.
### **Key concepts**
> Is the author intending for the concepts in this section to be treated as assumptions that must be satisfied, or as prerequisite knowledge that readers are expected to possess?
>
The concepts in the section are intended to be didactic rather than assumptions we are making. The section is also covering some of our problem formulation and synthesising relevant facts from past work, but its core is to introduce the key ideas that underpin our formulations.
### **Citations**
We will revisit all of the citations you raise. | Summary: This paper critiques the concepts of aleatoric and epistemic uncertainty in machine learning predictions, identifying inconsistencies and limitations in existing discussions. The authors argue that the traditional aleatoric-epistemic framework is insufficient to capture all relevant aspects of uncertainty in predictive modeling.
To address these shortcomings, the authors propose a more rigorous framework for model uncertainty based on expected loss values. This approach aims to clarify the concerns discussed and address various aspects of uncertainty more comprehensively, while also capturing existing notions of uncertainty.
The authors provide experimental insights into the BALD score, a popular information-theoretic quantity. Their findings demonstrate that this metric can sometimes measure something substantially different from what it is commonly perceived to quantify (i.e. the infinite-step predictive information gain).
Overall, the paper challenges existing paradigms in uncertainty quantification for machine learning models and proposes a new perspective based on decision theory to address the identified shortcomings.
Claims And Evidence: The paper is carefully written, and the theoretical justifications for the paper's argument are rigorous and make sense.
The experimental results on the performance of the BALD score offer convincing evidence for the authors' claims about the potential misinterpretation of common information-theoretic quantities. The connection between these experiments and Proposition 5 strengthens the paper's argument.
Methods And Evaluation Criteria: The paper's experiments use simple toy models that clearly distill the paper's points and are easy to understand. The BALD score is a popular measure of predictive uncertainty, making it a relevant focus for the experiments.
Theoretical Claims: While there are no substantial proofs to verify, there are a few small propositions whose proofs seem sound, and overall the analysis of different perspectives on uncertainty in the literature and their connections to each other is clearly stated and theoretically grounded. Please see my comments below about the clarity of the proofs and their associated proposition statements.
Experimental Designs Or Analyses: I liked the experiments, especially in the context of the result of Proposition 5. However, I would have preferred if more explanation of the setup was given in the main body, as I had to consult the appendix to be able to understand Figure 3.
Supplementary Material: I read over the appendix material, and found it clear.
Relation To Broader Scientific Literature: A large number of works have studied aleatoric/epistemic uncertainty and related notions of uncertainty quantification. This paper both adds to the conversation and connects and clarifies the different perspectives prior works take on this topic, making it a valuable addition to the body of research.
Essential References Not Discussed: The paper's discussion of aleatoric and epistemic uncertainty perspectives shares some themes with this ICLR 2025 paper: https://arxiv.org/abs/2412.18808. In particular, Section C.2 of that paper appears relevant to the decision-theoretic perspective in Section 5.1 here.
Other Strengths And Weaknesses: __Strengths:__
The paper is well-written and provides a valuable clarifying perspective on the commonly held views of epistemic and aleatoric uncertainty in machine learning. It also offers a more grounded alternative approach to understanding and quantifying predictive uncertainty.
__Weaknesses:__
The paper's focus on conceptual discussion, coupled with limited technical results and experiments confined to synthetic data, could potentially position it as more of a position paper, making it better suited to a different venue. However, the message it conveys is likely to be beneficial for the broader machine learning community.
Other Comments Or Suggestions: - I felt the proofs and statements of propositions 2-5 were a bit too brief, sometimes making the discussion hard to follow. It would be useful to concretely restate the quantities referenced in the proposition statements in mathematical terms.
- Perhaps I missed it, is $EIG_{\theta}$ ever defined in the main body?
- It might be valuable to consider including in the conclusion a brief exploration of any key open questions or potential research directions that this discussion brings to light.
- A number of the proofs (Propositions 2, 3, 5) use the phrase "...follows from the same working as in..." perhaps a more standard word choice would be "reasoning" or "argument" rather than "working"?
Questions For Authors: I have no specific questions, but please address any comments in my review that may indicate a misunderstanding of the paper's key points.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your review.
We appreciate your positive feedback on a number of points:
1. Careful writing
2. Theoretical rigour
3. Convincing empirical evidence
4. Clear coverage of prior work
5. Potential benefit to the community
We are also grateful that you highlighted some ways to improve the paper.
### **Paper style**
> The paper's focus on conceptual discussion… could potentially position it as more of a position paper… However, the message it conveys is likely to be beneficial for the broader machine learning community.
>
We strongly support your emphasis on the benefit the paper could bring to the community, which we believe is ultimately what matters. While we agree that the paper would not have been totally out of place in ICML’s position-paper track, we ultimately felt that the objective and technically precise nature of its core contributions made it a better fit for the standard track, even if it does not fit the common mould within the field, as you note.
### **Future work**
> It might be valuable to consider including in the conclusion a brief exploration of any key open questions or potential research directions that this discussion brings to light.
>
Great idea. We will happily add this.
One exciting direction is deriving new problem-driven data-acquisition objectives using the decision-theoretic approach we demonstrate. In particular, we think our work lays foundations for developing loss-calibrated active-learning methods, which are underappreciated in the literature.
A key open question is how reducible uncertainties should best be estimated. Propositions 1-4 reveal existing estimators to be optimal only if we use a quadratic estimation loss, with further work required to establish optimal estimation strategies in other contexts.
### **Experimental setup**
> I would have preferred if more explanation of the setup was given in the main body, as I had to consult the appendix to be able to understand Figure 3.
>
This is useful feedback—thanks. We will happily revisit it.
### **Related paper**
> The paper's discussion of aleatoric and epistemic uncertainty perspectives shares some themes with this ICLR 2025 paper: https://arxiv.org/abs/2412.18808. In particular, Section C.2 of that paper appears relevant to the decision-theoretic perspective in Section 5.1 here.
>
Thanks for drawing our attention to this work. We agree that it is relevant and will cite it.
### **Wording of propositions and proofs**
> I felt the proofs and statements of propositions 2-5 were a bit too brief, sometimes making the discussion hard to follow. It would be useful to concretely restate the quantities referenced in the proposition statements in mathematical terms.
>
> A number of the proofs (Propositions 2, 3, 5) use the phrase "...follows from the same working as in..." perhaps a more standard word choice would be "reasoning" or "argument" rather than "working"?
>
We appreciate the feedback. We will update the wording and mathematical statement of all propositions and proofs with a view to making things easier to follow and using standard language.
### **Expected information gain**
> Perhaps I missed it, is $EIG_{\theta}$ ever defined in the main body?
>
Thanks for flagging this. It is the expected information gain in $\theta$, where $\theta$ represents stochastic model parameters, which is the same thing as the BALD score we refer to throughout the paper. We will take care to clarify this. | null | null | null | null | null | null |
LOB-Bench: Benchmarking Generative AI for Finance - an Application to Limit Order Book Data | Accept (poster) | Summary: The paper introduces LOB-Bench, a benchmark designed to evaluate the quality of generative models for limit order book (LOB) data. The authors propose a quantitative evaluation framework that measures distributional differences between generated and real LOB data. LOB-Bench assesses key LOB metrics such as spread, order book volumes, order imbalance, and market impact using unconditional and conditional statistical comparisons with L1 norm and Wasserstein-1 distance. It also incorporates adversarial evaluation via a discriminator network to distinguish real from synthetic data. The study benchmarks various generative models, finding that the LOBS5 model outperforms traditional approaches. It accurately replicates price impact functions, while classic LOB models fail in this aspect.
Claims And Evidence: The claims made in the submission are largely supported by quantitative evaluation methods, comparative analyses, and empirical results.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria in LOB-Bench make sense for evaluating generative AI models in the limit order book (LOB) modeling context. More specifically, the unconditional and conditional distributional comparisons provide a comprehensive statistical framework for evaluating how closely generated LOB data resembles real market data. Metrics like L1 norm and Wasserstein-1 distance effectively measure distributional accuracy across different time horizons and order book features.
However, the benchmark is tested on Alphabet (GOOG) and Intel (INTC) stocks, but it is unclear whether the results generalize to other stocks which can be more volatile. Furthermore, the framework focuses on statistical similarity but does not evaluate how well synthetic data supports real-world financial applications like algorithmic trading backtests and market stability simulations.
Theoretical Claims: The paper does not involve theorems.
Experimental Designs Or Analyses: The results section evaluates generative models for limit order book (LOB) data using the LOB-Bench framework, comparing LOBS5, baseline, Coletta, and RWKV models. As mentioned before, one specific concern is that there is a lack of robustness across different market conditions. The models are tested on only two stocks (GOOG & INTC), limiting insights into their performance across different asset classes, market regimes, and varying liquidity conditions. Expanding the benchmark to include volatile stocks, ETFs, and multi-market scenarios would improve generalizability.
Supplementary Material: Yes, I reviewed the supplementary material, including the benchmark code, which is clearly structured and well-documented. Additionally, I examined the figures in the appendix, which are logically presented and effectively illustrate key results.
Relation To Broader Scientific Literature: This paper builds on and extends multiple areas of research in financial market simulation and generative AI. It directly connects to the work of Vyetrenko et al. (2019) on realism metrics for limit order book (LOB) market simulations, which established the importance of evaluating synthetic financial data against empirical LOB properties. While Vyetrenko et al. relied on agent-based models (ABMs) to replicate market behavior, LOB-Bench focuses on deep learning-based generative models, benchmarking S5 state-space models, RWKV transformers, and GANs. This work is also closely related to Coletta et al. (2023), which explored conditional generative adversarial networks (CGANs) for LOB environments, emphasizing market reactivity and stylized fact replication. While CGANs aim to simulate realistic order flows, LOB-Bench goes further by introducing quantitative evaluation metrics (L1 norm, Wasserstein-1 distance) and adversarial discrimination to measure the closeness of synthetic and real data systematically. Ultimately, this work has the potential to establish a foundation for developing more robust and interpretable generative models in high-frequency trading.
Essential References Not Discussed: Not to my knowledge.
Other Strengths And Weaknesses: The study focuses primarily on distributional similarity metrics (L1 norm, Wasserstein-1) and market impact curves but does not evaluate how well the generated data supports actual trading strategies. Without backtesting in a simulated or real trading environment, it’s unclear if these generative models can improve decision-making for market participants.
Other Comments Or Suggestions: Minor improvement on grammar:
1. "... which is explainable as it was intended for small-tick stocks, which INTC is not." -> "... which is expected since the model was designed for small-tick stocks, whereas INTC is not."
2. "passed to function" -> "passed to a function"
3. "The mode produces..." -> "The model produces..."
Questions For Authors: The following questions are listed in descending order of importance:
1. How generalizable is the evaluation system across different stocks and market regimes?
2. The paper introduces distributional evaluation metrics (L1 norm, Wasserstein-1 distance) to compare real and generated LOB data. How sensitive are these metrics to different market conditions (e.g., high volatility vs. low volatility periods)? Have you considered alternative evaluation metrics. For instance, evaluating models trained with synthetic data using the real data?
3. The paper suggests that LOB-Bench-generated data could be useful for reinforcement learning (RL) training. Have you tested any RL-based trading strategies using synthetic data, and how well did they generalize to real market conditions?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank reviewer Nn9N for their detailed remarks and for recognizing the potential of this work in establishing a foundation for developing more robust and interpretable generative models in finance. We address the queries and concerns below, and would welcome any follow-up questions or suggestions. If these have been suitably addressed, we would appreciate an increase in the review score to help build a stronger case for acceptance.
## Generalizability to Different Stocks
Our proposed framework is inherently stock-agnostic. Users of LOB Bench can evaluate their models on any asset within the LOBSTER universe – or even beyond, provided the data is converted into the LOBSTER format. For this study, we selected two stocks that differ along key metrics and are representative of assets that models could be trained on in practice.
While we acknowledge that different asset classes, such as ETFs, may yield different results, exploring this was beyond the scope of our work, which focuses on presenting the benchmark itself. We also refer reviewer Nn9N to our response to reviewer KDf5 regarding “Limited Assets” and “Zero-Shot Transfer Evaluation” for additional context on our stock selection.
## Sensitivity to Different Market Conditions
We consider the evaluation of different market conditions to be distinct from the evaluation of additional stocks. A single stock can experience varying market conditions over time, including fluctuations in volatility, spread, and trading volume. Our benchmark directly accounts for this by evaluating conditional distributions.
For instance, the conditional distribution of message inter-arrival times, given the spread, captures how a model reflects variations in inter-arrival times across different market conditions (i.e., spread regimes). This approach ensures that the benchmark effectively assesses a model’s ability to adapt to dynamic market environments.
## Alternative Practical Evaluation: Downstream Tasks & Sim-to-Real Transfer
We acknowledge the concerns regarding the reliance on distributional similarity metrics, such as the L1 norm and Wasserstein-1 distance. While these metrics are valuable for comparing real and generated data, they may not fully measure the degree of practical utility of synthetic data in all real-world applications.
To address this, we propose including a brief evaluation of mid-price trend forecasting—a downstream task relevant to algorithmic trading—in the camera-ready version of the paper. Specifically, we will assess model performance on a held-out set of real data after training on three different datasets: (1) real historical data, (2) a combination of real and generated data, and (3) purely generated data. This allows assessing the sim-to-real transfer gap and thus evaluating the quality of the generated model. It is worth noting, however, that this approach has its shortcomings.
The implementation will follow the model architectures proposed in Prata et al. (2024), focusing on the best-performing BINCTABL architecture, along with DeepLOB and a simple LSTM. This evaluation will serve as a complementary assessment to the distributional metrics, providing insights into how synthetic data impacts mid-price forecasting accuracy. Usefulness for predicting mid-prices might be a very discrete property of generated data, and a potential failure in this regard might prove uninformative for future model development.
## RL Using Generated Data
Training reinforcement learning (RL) agents using generated data in a limit order book (LOB) setting is an active area of ongoing and future research. This approach represents another instance of sim-to-real transfer evaluation. Generative models have the potential to overcome the limitations of training policies solely on static historical data by introducing dynamic, action-dependent data trajectories, thereby enriching the training environment.
## Grammar and Spelling
Thank you for highlighting these specific typos. We have now corrected them in the manuscript. | Summary: This paper introduces LOB-Bench, a novel benchmark implemented in Python for evaluating the quality and realism of generative AI models applied to Limit Order Book (LOB) data in the LOBSTER format. The benchmark addresses the lack of quantitative evaluation paradigms in financial sequence modeling by providing a comprehensive framework for distributional evaluation. LOB-Bench measures distributional differences between generated and real LOB data, both conditionally and unconditionally, using a suite of relevant LOB statistics (spread, volume, imbalance, inter-arrival times) and scores from a discriminator network. Furthermore, it incorporates "market impact metrics" to assess cross-correlations and price response functions for specific events. The authors benchmark several generative models, including autoregressive state-space models, a (C)GAN, and a parametric LOB model, finding that autoregressive GenAI approaches outperform traditional models. The code and generated data are publicly available to facilitate further research and model development.
Claims And Evidence: Yes, the claims made in the submission are generally well-supported by clear and convincing evidence. The paper claims to introduce a novel benchmark and demonstrate its utility in evaluating generative models for LOB data. This claim is supported by:
* Development of LOB-Bench: The paper clearly describes the components of LOB-Bench, including the various scoring functions, evaluation metrics (L1 norm, Wasserstein-1 distance), and conditional evaluation methodologies. The availability of the code further strengthens this claim.
*Benchmarking Experiments: The authors present experimental results comparing several generative models (LOBS5, baseline, Coletta, RWKV4, RWKV6) on GOOG and INTC stock data. Figures 3, 4, and 5 visually and quantitatively demonstrate the performance differences across models and metrics.
* Identification of Model Derailment: Figure 5 and related discussion provide evidence for "model derailment" in longer unrolls, a crucial insight for generative sequence models.
* Reproducibility: The paper explicitly mentions the availability of code and generated data, enhancing the reproducibility and verifiability of their findings.
The evidence presented, particularly the comparative benchmarking results and the analysis of model derailment, effectively supports the paper's claims about the value and utility of LOB-Bench.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are sensible and well-justified for the problem of benchmarking generative models for LOB data. Given that this is primarily a benchmark paper, the focus is appropriately placed on the design of comprehensive and relevant evaluation criteria rather than introducing novel modeling methods.
The evaluation criteria are well-chosen because:
* Distributional Evaluation: Shifting from qualitative analysis of "stylized facts" to quantitative distributional evaluation addresses a crucial gap in the field and provides a more rigorous approach to model comparison.
* Relevant LOB Statistics: The inclusion of commonly used LOB statistics like spread, volume, imbalance, and inter-arrival times ensures that the benchmark is grounded in domain knowledge and evaluates models on features relevant to financial practitioners.
* Discriminator Network: Incorporating a discriminator network provides an adversarial perspective on model realism and captures complex, high-dimensional data characteristics that might be missed by simpler statistical metrics.
* Market Impact Metrics: Including market impact metrics assesses the model's ability to generate realistic responses to counterfactual scenarios, a key aspect for financial applications.
* Conditional Evaluation: Evaluating conditional distributions allows for a more nuanced understanding of model performance under different market conditions and over forecasting horizons, addressing the "autoregressive trap" issue.
While benchmark datasets are mentioned (LOBSTER, FI-2010), the core contribution lies in the evaluation criteria and the LOB-Bench framework itself, which are well-suited for the task of rigorously assessing generative LOB models. The focus on distributional similarity and market-relevant metrics makes the benchmark highly pertinent to the application domain.
Theoretical Claims: There are no significant theoretical claims in this paper that require formal proof verification. The paper is primarily empirical and methodological, focused on developing and demonstrating a benchmark rather than establishing new theoretical results. Therefore, this question is not applicable.
Experimental Designs Or Analyses: The experimental designs and analyses are generally sound and valid for the purpose of demonstrating LOB-Bench. However, a limitation is the scope of assets considered:
* Limited Asset Universe: The experiments are primarily conducted on data for only two assets: Google (GOOG) and Intel (INTC). While these are representative stocks, evaluating the benchmark and the models on a broader range of assets, including stocks with different market capitalizations, liquidity profiles, and industry sectors, would significantly strengthen the generalizability of the findings. Restricting the analysis to just two assets limits the external validity of the conclusions about model performance and benchmark utility.
Despite this limitation, the internal validity of the experiments is well-maintained. The comparisons between models are conducted fairly using consistent evaluation metrics and the LOB-Bench framework. The analysis of model derailment and the breakdown of performance across different scoring functions are insightful and contribute to the understanding of generative LOB models.
Supplementary Material: Yes, I reviewed the supplementary materials and code files. The supplementary material is well-organized and provides valuable details that enhance the paper's transparency and reproducibility.
Relation To Broader Scientific Literature: The key contributions of this paper are directly related to the growing body of literature on generative AI in finance, particularly in the domain of market microstructure modeling and simulation.
* Generative Financial Models: The paper builds upon recent work applying generative AI to financial data, citing papers like Nagy et al. (2023) which pioneered token-level generative modeling of LOB data. It extends this line of research by focusing on rigorous evaluation.
* LOB Simulation: The paper is directly relevant to the literature on LOB simulation, which has traditionally relied on agent-based models or parametric approaches (cited in the introduction). LOB-Bench offers a new paradigm for evaluating the realism of these simulations, particularly those powered by GenAI.
* Benchmark Datasets and Evaluation Metrics in Financial ML: The paper addresses the broader challenge of benchmarking machine learning models in finance, where evaluation has often been ad-hoc and lacking in standardized metrics. It contributes to the emerging field of financial machine learning benchmarks.
* Autoregressive Sequence Models: The paper implicitly connects to the broader literature on autoregressive sequence models, highlighting the "autoregressive trap" problem and the importance of evaluating models beyond next-token prediction accuracy, especially relevant in the context of LLMs and generative models.
While the application is tightly focused on LOB data, the methodological contribution of a comprehensive distributional benchmark for generative models is more broadly relevant to evaluating sequential generative models in other domains, even if the paper itself doesn't explicitly explore these extensions.
Essential References Not Discussed: Based on my familiarity with the literature on generative financial modeling and limit order book research, there do not appear to be essential related works that are critically missing from the paper's citations and discussion.
Other Strengths And Weaknesses: Strengths:
* Originality and Significance: LOB-Bench fills a critical gap by providing the first comprehensive, quantitative benchmark for generative AI models applied to LOB data. This is a significant contribution as it enables rigorous model comparison and advancement in this important area.
* Practicality and Accessibility: The benchmark is implemented in Python and is open-source, making it easily accessible and usable by researchers and practitioners. The use of the standard LOBSTER format further enhances its practicality.
* Comprehensive Evaluation Framework: LOB-Bench incorporates a wide range of relevant metrics, including statistical distributions, discriminator scores, and market impact measures, providing a holistic view of model performance.
* Identification of Model Derailment: The paper highlights the important issue of "model derailment" in autoregressive models, providing a valuable insight for the community.
* Clarity: The paper is generally well-written and clearly explains the LOB-Bench framework, evaluation metrics, and experimental results.
Weaknesses:
* Limited Asset Scope: As mentioned before, the experiments are somewhat limited by focusing on only two assets (GOOG and INTC). Expanding the asset universe would strengthen the generalizability of the findings.
* Application Specificity: While LOB-Bench is valuable for LOB data, the paper could briefly discuss the potential for generalizing the principles of distributional benchmarking to other types of sequential financial data or time-series domains. While the framework itself is somewhat adaptable, the paper's framing is very LOB-centric.
* Discriminator Score Challenge: The paper notes that discriminator-based scoring sets a high bar. While this is a valuable metric, further discussion on how to interpret and potentially address low discriminator scores in future model development could be beneficial.
Other Comments Or Suggestions: * Expand Asset Coverage: In future work, consider benchmarking on a more diverse set of assets to demonstrate the robustness and generalizability of LOB-Bench and the evaluated models. Perhaps including stocks from different sectors, market caps, and liquidity levels.
* Explore Domain Adaptation: Briefly discuss the potential for adapting LOB-Bench or its principles to evaluate generative models in other financial domains, such as options markets, FX markets, or even broader time-series data.
* Enhance Code Documentation: While the code is available, ensure comprehensive documentation and potentially example notebooks to facilitate easier adoption and use by the community.
* Investigate "Discriminator Score Challenge" Further: Explore strategies for improving discriminator scores for generative LOB models in future research, potentially through adversarial training techniques or modified model architectures.
Questions For Authors: * Generalizability across Assets: Given the current experiments focus on GOOG and INTC, how do you anticipate the relative performance of the benchmarked models might change when evaluated on a significantly broader and more diverse set of assets (e.g., including small-cap stocks, less liquid stocks, or stocks from different industry sectors)? Understanding the authors' perspective on asset generalizability would help assess the benchmark's broader applicability. If they anticipate significant changes in model ranking or benchmark sensitivity, it might suggest future directions for improving LOB-Bench's robustness.
* Beyond LOB Data: While LOB-Bench is specifically designed for Limit Order Book data, do you see potential avenues for adapting the core principles of distributional benchmarking and the evaluation metrics used in LOB-Bench to other types of sequential financial data or time-series forecasting tasks? Exploring the potential for broader methodological impact would increase the perceived significance of the work beyond the specific LOB domain. If authors have ideas for broader applications, it would strengthen the paper's contribution.
* Discriminator Score Interpretation: The paper notes the "difficult challenge of discriminator scores." Could you elaborate on how you interpret relatively lower discriminator scores for even state-of-the-art models? Does this primarily indicate limitations in the discriminator's ability to capture subtle differences, or are there fundamental aspects of generated LOB data that remain distinguishable from real data even for the best models? Clarification on the interpretation of discriminator scores would be valuable for guiding future model development and understanding the limitations of current generative LOB models. A nuanced response would demonstrate a deeper understanding of the evaluation methodology.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank reviewer KDf5 for their detailed review and thoughtful comments, which not only provide valuable feedback but also highlight key strengths of our paper. In particular, we appreciate the recognition of the important gap our work addresses by introducing a well-founded, fully distributional evaluation of generative LOB data, including assessments of model derailment and price impact functions. We address the queries and concerns below, and would welcome any follow-up questions or suggestions. If these have been suitably addressed, we would appreciate an increase in the review score to help build a stronger case for acceptance.
## Limited Assets
The reviewer raises a concern regarding the limited number of assets evaluated in our study. While we acknowledge that results may vary across different stocks, we selected GOOG and INTC as representative examples because they span a broad spectrum of relevant statistics, such as volatility, relative tick size (tick size relative to stock price), and traded volume.
Expanding the evaluation to additional stocks would require substantial computational resources. Training each considered model class on a new stock dataset takes multiple days on 8 L40S GPUs per stock. Moreover, as is standard practice in the domain, each model is trained on a single stock, since ample training data is usually available, and models are expected to specialize in the characteristics of individual stocks.
Small-cap and less liquid stocks, while an interesting avenue for future research, present additional challenges beyond the scope of this paper. These stocks provide less training data due to lower trading activity, which complicates deep learning approaches. A promising direction for future work is multi-asset modeling, which could leverage data across multiple stocks to enhance training and capture stock-independent dynamics.
It is important to emphasize that the benchmark itself is designed to allow researchers to evaluate models on any stock of their choice, and its usefulness is not limited to the two stocks presented in our paper. Our package includes efficient code that enables fitting baseline model parameters to new stock data in seconds to minutes. Furthermore, we plan to expand our research by developing new models that will naturally be evaluated on a broader set of stocks.
## Zero-Shot Transfer Evaluation
An alternative evaluation could assess zero-shot model transfer—training on one stock (e.g., GOOG) and generating data for another (e.g., WMT). While this reduces computational costs, it introduces challenges.
First, seeding mechanisms vary by model class: autoregressive models allow conditioning on history, but the Coletta CGAN model only seeds at the start of the day, and the baseline model does not support seeding. Second, it is unclear whether performance differences in transferred data stem from stock characteristics or model limitations. A more robust approach would involve training on multiple stocks, which is beyond the paper’s scope.
That said, our benchmark supports such research directions, as it remains agnostic to the model training regime.
## Discriminator Scores
We have added the following to the paper:
> “High discriminability may result from model errors, as indicated by imperfect model scores [see …]. A distributional mismatch in a single scoring function can be sufficient to make fake data identifiable. To mitigate this issue, future research could evaluate adversarial performance by training a discriminator on perturbed data and reporting scores conditioned on the noise level, particularly as models improve on this benchmark.”
Low discriminator scores suggest that even state-of-the-art models still struggle to generate perfectly indistinguishable LOB data. Future improvements may follow two paths: (1) training larger supervised models on richer datasets, akin to recent LLM advancements, or (2) explicitly minimizing discriminator scores, as in adversarial frameworks like GANs (Goodfellow et al., 2014) or GAIL (Ho et al., 2016).
## Beyond LOB Data
To highlight the transferability of our methodology, we added the following to the paper:
> “Our methodology formalizes and naturally extends common evaluation practices for synthetic one-dimensional time series, such as financial returns, which typically emphasize distributional similarity. Our framework enables a quantitative assessment of distributional properties in structured high-dimensional time series. By adapting the scoring functions, our approach could also be applied to financial transactions, payment data, streamed price quotes in forex markets, multi-asset limit order books, or decentralized crypto market protocols.”
## Documentation
We provide a Jupyter notebook demonstrating LOB Bench usage and will further improve the package documentation before the paper’s potential publication. | Summary: This is a great study with multiple important contributions:
1. The paper introduces a new benchmark for evaluating limit order book (LOB) generated data, applying aggregator functions to extract LOB-specific statistics and measuring the distribution distance between real and model generated data in both unconditional and conditional settings.
2. They also evaluate market impact response functions
3. They very well justify the distance measures used as evaluation score
4. They demonstrate nicely how their benchmark ranks various models, comparing across multiple scores and also plotting distributions showing a visual comparison.
5. They show that an autoregressive state-space model (LOBS5) outperforms traditional parametric and GAN-based methods on GOOG and INTC data.
Claims And Evidence: Authors claim that their benchmark quantitatively assesses the realism of generative models for LOB data. They also claim that based on their findings, the LOBS5 model notably achieves superior performance over competing methods. They provide detailed analysis on their evaluation measures as well as detailed statistical analysis including error divergence curves, bootstrapped confidence intervals, discriminator ROC scores, etc. to support their claims.
Methods And Evaluation Criteria: Their proposed framework maps high-dimensional LOB data to 1D scores via aggregator functions, compares histograms of these scores using the L1 norm and Wasserstein-1 distance, for both unconditional and conditional distributions (where the inference timeframe is bounded, such as error divergence over forecast horizons).
Theoretical Claims: NA
Experimental Designs Or Analyses: Their empirical results evaluate five different generative models (including traditional parametric methods, GAN-based approaches, and autoregressive models) on LOB data for Alphabet (GOOG) and Intel (INTC). Their analysis is robust: it employs multiple LOB-specific statistics, using a variety of LOB-specific metrics and visually contrasting the distributional differences in synthetic data across several tasks. It also visualizes error accumulation and histogram discrepancies over time, and uses bootstrapped confidence intervals to assess significance, which collectively underscore the models’ varying capabilities in capturing realistic market dynamics. Figures and captions are very clear and informative.
Supplementary Material: Yes, I reviewed all the additional figures and detailed training curves, the LOBS5 test loss curves and RWKV training dynamics and histograms for various scoring functions.
Relation To Broader Scientific Literature: This is a strong framework which provides a full distributional evaluation tool tailored to financial data, and is accessible and easily transferable to other domains.
Essential References Not Discussed: Not that I am aware of.
Other Strengths And Weaknesses: The key strength of the paper is its novel and detailed evaluation framework that provides clear, quantitative measures for generative realism in LOB data. It would be beneficial to add more ablation experiments for other choices of discriminator network architectures, and other binning strategies.
Other Comments Or Suggestions: -
Questions For Authors: 1. How sensitive are your benchmark results to the choice of aggregator functions?
2. How sensitive are your divergence metrics to the binning strategy?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely thank reviewer **EUwk** for their thorough review and detailed feedback. We appreciate their recognition of the key strengths of our paper and framework, including the robust justification for our methods and scoring functions, the generality of the methodology, market impact evaluations, fully distributional scoring, and the extensive visualizations provided. We address the queries below, and would welcome any follow-up questions or suggestions. If these have been suitably addressed, we would appreciate an increase in the review score to help build a strong case for acceptance.
## Discriminator Training
We do not make any specific claims or recommendations regarding the architecture of the discriminator. Our chosen model, combining Conv1D with attention layers, performs well and consistently distinguishes generated data from real data across all generative models, with a clear gradient in discriminator certainty across models indicating a varying degree of model veracity.
Rather than focusing on a particular discriminator design, we emphasize the value of using a discriminator as a learned adversarial scoring function. This approach remains valid as long as the model is sufficiently strong to learn the discrimination task effectively. Consequently, ablations on specific model characteristics are less relevant, as various state-of-the-art models could fulfill this role.
The key challenge we aim to address is improving generative model performance. In this regard, our chosen discriminator effectively exposes the gap between generated and real data while providing a meaningful learning signal for future progress.
## Sensitivity to Choice of Aggregator Function
For a detailed analysis of how benchmark results vary with the choice of aggregator function, we refer to Figure 10 in Appendix D, which presents a bar plot comparing distributional errors across models. To ensure that outliers in scores do not disproportionately impact model rankings, we also report the median and interquartile range (IQR) of scores in the model summary plots (Figure 4).
## Sensitivity of Divergence Metrics to the Binning Strategy
Thank you for suggesting this ablation. We are now evaluating the divergence scores with bins that are half and double their current size. We will report the sensitivity of the results based on the range of resulting scores in the camera-ready version of the paper.
Since we use a dynamic regular bin size determined by the Freedman-Diaconis (FD) rule, which is designed to adapt well to the distribution of the data, we do not expect significant sensitivity to changes in bin size. Furthermore, the following theoretical convergence argument supports this choice: the FD bin size minimizes the integrated mean squared error between the histogram and the theoretical data distribution. Bin size decreases at a rate of n^{1/3}, and the expected number of observations per bin increases without bound. This suggests that, for large sample sizes, our choice of bin size will not introduce substantial inaccuracies in the divergence scores. | null | null | null | null | null | null | null | null |
Near-Optimal Sample Complexity for MDPs via Anchoring | Accept (poster) | Summary: The authors propose a new model-free algorithm for solving average-reward weakly communicating MDPs with a generative model. The authors achieved the sample complexity of order $\widetilde{O}(SAH^2 / \varepsilon^2)$, where $S$ is a number of states, $A$ is a number of actions, $H$ is a span of an optimal bias function. The main property of the algorithm is that it does not require any prior knowledge on $H$. Additionally, the authors adapted their approach to a discounted setting.
Claims And Evidence: **Claim 1.** Algorithm SAVIA achieves the $\varepsilon$-policy error using $\mathcal{O}(SAH^2/\varepsilon^2)$ samples but requires the knowledge of $H$;
The proof looks good to me.
**Claim 2.** Algorithm SAVIA+ achieves the $\varepsilon$-policy error using $\mathcal{O}(SAH^2/\varepsilon^2)$ samples without knowledge of $H$, using a combination of SAVIA and the doubling trick.
The statement is expected and looks good to me too, however, I did not verify the proof in detail.
**Claim 3.** This algorithm also achieves $\varepsilon$-error in terms of solution to Bellman equations with the same sample complexity;
This automatically follows from the structure of the proof.
**Claim 4.** The adaptation of the algorithm to discounting case achieves $\varepsilon$-solution to Bellman equations using $\mathcal{O}(SA/(\varepsilon^2 (1-\gamma)^2))$ samples and $\varepsilon$-optimal policy using $\mathcal{O}(SA/(\varepsilon^2 (1-\gamma)^4))$ samples.
This statement looks good to me, although I did not verify it in detail in the Appendix.
Methods And Evaluation Criteria: The paper of theoretical nature and does not contain any empirical studies.
Theoretical Claims: See **Claims and Evidence** section.
Experimental Designs Or Analyses: N/A
Supplementary Material: I carefully checked proofs in Section A.1 and A.2, skimmed the rest of Section A. I did not check proofs in sections C since they look like a straightforward generalization of the results from section A, but nevertheless, the important one.
Relation To Broader Scientific Literature: The key contributions of this paper follow the current scientific literature. In particular, the question on the optimal sample complexity without prior knowledge in average-reward MDPs is very interesting and not yet well-studied (although the number of studies of this question increases).
Essential References Not Discussed: All related literature was discussed in detail.
Other Strengths And Weaknesses: As one of the strengths of the paper, I have to underline the clarity and the quality of writing, as well as a detailed literature review.
Other Comments Or Suggestions: No additional comments.
Questions For Authors: - What do you think, could a variance-reduction technique (Wainwright, 2019) reduce the sample complexity of your method in the average-reward setting? For example, it is observable that in the current approach, the dependence on H^2 comes from evaluation of expectations with respect to $d^k$, but if one replaces $d^k$ with some variance-reduced version with a norm of only a constant order, it can improve the final sample complexity.
- How is the proposed method connected to a usual Q-learning? Can we treat this method as some modification of Q-learning?
Wainwright, M. J. (2019). Variance-reduced $ Q $-learning is minimax optimal. arXiv preprint arXiv:1906.04697.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for the thoughtful comments. Both of the Reviewer's questions are central to our motivation for writing this work. For the list of references please refer to the reply to Reviewer BWFD.
1. Use of a related variance reduction technique
The paper [16] is related to ours in the sense that classical (synchronous) Q-learning is fundamentally a stochastic version of the Krasnoselskii-Mann (KM) iteration for a contracting operator. It builds on earlier work by the same author [17], where a sample complexity of $O(SA(1-\gamma)^{-5}\epsilon^{-2})$ was established. However, it is important to note that the method cited by the Reviewer applies only in the discounted setting, with complexity measured in terms of the distance to the optimal Q-factor. The argument in that paper is clever: by employing variance reduction techniques inspired by stochastic optimization, it improves the original (suboptimal) complexity by a factor of $1/(1-\gamma)$. To achieve minimax optimality, a rerun of the main algorithm is carefully designed. From a technical standpoint, the variance reduction argument relies on a precise interplay between the step size (which depends on the contraction parameter), the number of iteration $n$, and the specific structure of the iteration itself. In fact, the number of samples per epoch explicitly depends on $\gamma$, and the step size decreases as $\alpha_n=1/(1+(1-\gamma)n)$.
Regarding the reviewer's specific question, in the average reward setting, the Bellman operator is generally only nonexpansive. This makes it difficult to apply a similar argument due to the crucial role of the discount factor in previous approaches. Nevertheless, we employ a related idea by carefully designing (random) controlled batches to reduce variance. Unfortunately, our approach appears to be constrained to using the square of the span seminorm of $d^k$, as any alternative choice would affect the overall sample complexity.
That being said, it is worth mentioning that in the discounted case, our SAVID+ method can achieve a sample complexity of $O(SA(1-\gamma)^{-4}\epsilon^{-2})$ for computing an $\epsilon$-optimal policy (see Theorem 4.3). It remains an open question whether a restarting strategy could be devised in this setting to remove the final $1/(1-\gamma)$ factor.
2. On the connection of our method to Q-learning
The well-known (synchronous version of) Q-learning algorithm takes on a different meaning in the average reward case. In this setting, the goal is to compute bias vectors and (bias) Q-factors to derive optimal policies. The closest analogue to Q-learning in this context is the RVI-Q-learning algorithm [1]. The technical challenge here is that, since the optimal value g* is unknown, an auxiliary function is introduced as an online estimator. However, this approach has a drawback: the resulting operator, while related to the Bellman function, may lose its nonexpansiveness. Nevertheless, it is possible to analyze the RVI-Q procedure using a related iteration, which also turns out to be a stochastic version of the KM iteration (see, for instance, [5] for a nonasymptotic analysis in the unichain case),
where the set of fixed points is unbounded.
We can now answer the question, which, as mentioned, is central to our motivation. From a purely fixed-point perspective, RVI-Q-learning and similar algorithms can be seen as instances of stochastic KM iterations. It is known that in the absence of noise Halpern iteration is faster with a $1/n$ convergence rate for the fixed-point residual (see references in our paper for further details) whereas KM achieves at best a rate of $1/\sqrt{n}$.
In conclusion, the Reviewer's intuition is fully correct: our goal was precisely to "improve Q-learning" by leveraging the fact that the Bellman error controls the induced policy value (see Proposition 2.1). To reach near-optimal complexity, this idea from fixed-point theory is combined with recursive sampling techniques that have been developed for computing almost-optimal policies
in average reward MDPs.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors, and I am happy to keep my score. | Summary: This paper introduces a novel value iteration algorithm that achieves near-optimal sample complexity in the setting of *weakly communicating* MDP. A weakly communicating MDP is an MDP whose state space is comprised by a set of states that are accessible from one another and an additional set of transient states. The authors consider average reward and discounted cumulative reward value functions.
The algorithm proposed is a value iteration scheme with anchoring. Effectively, it can be seen as a form of Halpern's iteration. The sample complexity is $\tilde{O}(|S||A|\|h^*\|^2/\epsilon^2)$.
The propose 2 algorithms for the average reward case and 2 for the cumulative discounted rewards. In both cases they start with a base algorithm that is incorporating Halpern's iteration on top of value iteration. Yet, as that algorithm relies on knowledge of the Q values from their optimal values, they propose a scheme that uses a doubling trick to be able to call the primary algorithm and while each time getting a better estimate of the optimal Q value.
Claims And Evidence: The claims made are supported by rigorous mathematical proofs.
Methods And Evaluation Criteria: The paper is of theoretical focus and there are no experiments.
Theoretical Claims: I checked the correctness of Proposition 2.1, Proposition 3.1, and Theorem 3.2.
Experimental Designs Or Analyses: No experiments. The analyses rely on mathematical arguments
Supplementary Material: I checked the correctness of Proposition 2.1, Proposition 3.1, and Theorem 3.2.
Relation To Broader Scientific Literature: The authors achieve to design an almost optimal algorithm for MDPs that does not depend on the mixing time and the state spain is weakly communicating. It is a very natural RL problem and the algorithm they propose combines value iteration with Halpern's iteration.
Essential References Not Discussed: I do not think they have omitted some essential reference.
Other Strengths And Weaknesses: Strengths:
* the authors manage to get an almost optimal sample complexity
* the sample complexity is independent of the mixing time of the MDP
* the algorithmic solution requires minor modification over prior method
Weakness:
* no experimental evaluation
Other Comments Or Suggestions: None
Questions For Authors: * What stands in the way of achieving linear dependence on $\| h_* \|_{sp}$?
* Do you think that Halpen iteration could improve with q-learning as well, achieving similar results?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive comments.
1. What stands in the way of achieving linear dependence on $\\|h^*\\|_{sp}$?
This is a very interesting and relevant question which we tried to address without success so far. From a technical viewpoint, the quadratic dependence of our sample complexity derives from the batch size $m_k$ in SAVIA which involves the quadratic term $\\|d^k\\|^2$. In order to reduce the sample complexity and reach the theoretical lower bound with linear dependence on $\\|h^*\\|_{sp}$, we considered introducing an additional variance reduction to SAVIA+. A possible approach is to implement a restart similar to the one used in [16] for discounted MDPs, which removed a factor $1/(1-\gamma)$ in the complexity by a carefully designed rerun of the main algorithm. Unfortunately we have not succeeded to adapt this technique to the average reward and at this point it is unclear to us whether this is possible or not. Exploring this is indeed an interesting future direction.
2. Do you think that Halpern iteration could improve with q-learning?
We are not sure how to interpret this question. It is known that synchronous RVI Q-learning can be viewed as a stochastic version of the Krasnoselskii-Mann (KM) iteration [5, 16]. A discussion on this is also included in the rebuttal to Reviewer hCuT. In the absence of noise, Halpern is known to be optimal achieving a $1/n$ convergence rate [14], whereas KM generally attains at best a $1/\sqrt{n}$ rate [6]. Interestingly, [12] demonstrated that these non-asymptotic rates for KM and Halpern also hold for average MDPs in the tabular setting. In the stochastic framework, KM has already some type of built-in variance reduction so one might conjecture that in combination with Halpern could attain a smaller complexity. Unfortunately, our attempts in this direction did not succeed and produced worse complexity compared to the pure Halpern with recursive sampling presented here.
References
[1] Abounadi J., Bertsekas D., Borkar V.S. (2002), Stochastic approximation for nonexpansive maps: application to Q-learning algorithms, SIAM Journal on Control and Optimization 41(1):1-22.
[2] Azar M.G., Munos R., Kappen H.J. (2013), Minimax PAC bounds on the sample complexity of reinforcement learning with a generative model. Machine Learning, 91(3): 325-349.
[3] Bertsekas D.P. (2012), Dynamic Programming and Optimal Control, volume II. Athena Scientific, 4th edition.
[4] Bravo M., Contreras J.P. (2024), Stochastic Halpern iteration in normed spaces and applications to reinforcement learning. arXiv:2403.12338.
[5] Bravo M., Cominetti R., (2024) Stochastic fixed-point iterations for nonexpansive maps: Convergence and error bounds, SIAM Journal on Control and Optimization, 69:191-219.
[6] Contreras J.P., Cominetti R., (2022), Optimal error bounds for nonexpansive fixed-point iterations in normed spaces, Mathematical Programming, 199(1-2):343-374.
[7] Ganesh S., Mondal W.U., Aggarwal V. (2024), Order-Optimal Global Convergence for Average Reward Reinforcement Learning via Actor-Critic Approach, arXiv:2407.18878v2.
[8] Jin Y., Gummadi R., Zhou Z., Blanchet J. (2024a) Feasible Q-learning for average reward reinforcement learning. International Conference on Artificial Intelligence and Statistics.
[9] Jin Y., Sidford A. (2020), Efficiently solving MDPs with stochastic mirror descent. International Conference on Machine Learning.
[10] Lieder F. (2021), On the convergence rate of the Halpern-iteration. Optimization Letters, 15(2):405-418.
[11] Lee J., Ryu E. (2023), Accelerating value iteration with anchoring. Neural Information Processing Systems.
[12] Lee J., Ryu E. (2025), Optimal non-asymptotic rates of value iteration for average-reward MDPs. International Conference on Learning Representations.
[13] Puterman M.L. (2014) Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley and Sons, 2nd edition.
[14] Sabach S., Shtern S. (2017), A first order method for solving convex bileveloptimization problems. SIAM Journal on Optimization, 27(2):640-660.
[15] Tuynman A., Degenne R., Kaufmann E. (2024), Finding good policies in average-reward Markov decision processes without prior knowledge. Neural Information Processing Systems.
[16] Wainwright M.J. (2019). Variance-reduced-learning is minimax optimal. arXiv:1906.04697
[17] Wainwright M.J. (2019). Stochastic approximation with cone-contractive operators: Sharp $\ell_\infty$ bounds for Q-learning. arXiv:1905.06265
[18] Wang S., Blanchet J., Glynn P. (2024), Optimal sample complexity for average reward Markov decision processes. International Conference on Learning Representations.
[19] Zurek M., Chen Y. (2024), Span-based optimal sample complexity for weakly communicating and general average MDPs. Neural Information Processing Systems.
[20] Zurek M., Chen Y. (2025), The Plug-in Approach for Average-Reward and Discounted MDPs: Optimal Sample Complexity Analysis. arXiv:2410.07616. | Summary: This paper studies the sample complexity of weakly communicating average-reward MDPs assuming access to a generative model. The authors focus on developing model-free algorithms by using a stochastic version of Halpern iteration. They show that this approach achieves a sample complexity bound in terms of the span of a bias vector solving the Bellman optimality equations, and unlike some prior work, their algorithm does not require any prior knowledge of this complexity parameter. They achieve this by using a doubling trick combined with a stopping condition. They also adapt their algorithm to discounted MDPs.
## update after rebuttal
I think the authors for their response. I'll maintain my score.
Claims And Evidence: The main claims of this paper are all of a theoretical nature and supported by proofs.
Methods And Evaluation Criteria: The methods, namely stochastic Halpern iteration, make much sense for this problem. The evaluation criterion of sample complexity to learn near-optimal policies is standard for this well-studied problem. Measuring the sample complexity required to find a point with small Bellman error in the discounted setting (Theorem 4.2) is new. While this is an interesting finding which I do think is worthy of inclusion in the paper, it is unclear to me what use this has (beyond the fact that low Bellman error implies a bound on the suboptimality, which is probably what we actually care about).
Related to this task of finding a point with small Bellman error, regarding the comments made about prior work on approximately line 426, first column, it does not follow that these works require the stated complexity $SA/((1-\gamma)^3 \varepsilon^2)$ to obtain an $\varepsilon$ Bellman residual error, it just means that their sample complexity for doing so is upper-bounded by this amount. In fact, several prior works achieve a sampling complexity of $SA/((1-\gamma)^2 \varepsilon^2)$ [Wang et al 2023, Zurek & Chen 2024] for computing an $\epsilon$-optimal policy. In light of this and my comment above on Bellman error vs. suboptimality, I have reservation with the claim on line 429 on Theorem 4.2 being the best known complexity.
Theoretical Claims: I looked over most results for the average-reward setting, which seem correct.
Experimental Designs Or Analyses: N/A
Supplementary Material: I reviewed the supplementary material briefly but did not check line by line.
Relation To Broader Scientific Literature: The achieved sample complexity is inferior to the best prior model-based approaches. The achieved sample complexity is slightly better than that of the best prior model-free approach of Zhang and Xie (2023), removing a lower order ($O(1/\varepsilon)$) term present in their bound. There also exists an earlier model-based approach https://arxiv.org/abs/2410.07616v1, which is not cited in this paper, which achieves a similar sample complexity bound also without prior knowledge of the span parameter. Therefore the main contribution of this paper is that it is simultaneously model-free, prior-knowledge-free, and obtains a sample complexity bound which is suboptimal by only one factor of the span parameter. This is still a nice contribution, in particular because the problem of removing the need for prior knowledge is important and challenging.
The results for discounted MDPs do not seem to offer significant improvements upon prior work so I am uncertain about their value. (For instance, the dependence on $Q^\star$, obtained in Theorem 4.3, is also obtained in Jin et al. 2024b, which is not mentioned. I think more discussion about the relationship of the discounted MDP results to the literature would improve the paper.
Essential References Not Discussed: Discussion on Jin et al. 2024b and https://arxiv.org/abs/2410.07616v1 should be added. See "Relation to Broader Scientific Literature" above.
here are several missing references all related to termination conditions. Propositions 2.1 and 4.1 are very standard and well-known (e.g. I think they appear in the Bertsekas Dynamic Programming and Optimal Control books). A stopping rule guaranteeing near-optimality is provided in Tuynmann et al. (2024) (the authors cite this paper but do not mention this stopping rule, which seems at least closely related to the one used in this paper, and so discussion of their relationship seems needed).
Other Strengths And Weaknesses: The paper is generally written clearly.
Since the results for discounted MDPs are basically half of the paper, the authors should discuss more related work for solving discounted MDPs (such works are only given a few lines in the introduction).
The authors should also discuss more related work on model-based methods and include the comparison in Table 1. Particularly in the tabular setting, the distinction between model-based and model-free methods are blurry and not particularly useful.
In the introduction (around line 49 column 2) the authors seem to claim that their main contribution is that they are the first to apply a Halpern technique to average-reward MDPs in the generative model setting. However in line 207 column 1 the authors cite the earlier work Bravo and Cominetti (2024) and explicitly mention it was used in this same problem setting. Therefore it seems like it would be beneficial for the authors to clarify their main contributions.
Other Comments Or Suggestions: N/A
Questions For Authors: What is the use of a Bellman error bound (Theorem 4.2) beyond the fact that low Bellman error implies a bound on the suboptimality?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer's constructive feedback. For the list of references please refer to the reply to review BWFD.
1. Advantage of SAVIA+ compared to [8] and [20]
The papers [8] and [20] propose model-free and model-based methods under the mixing time and weakly communicating assumptions, respectively. Although they can run without prior-knowledge of mixing times and bias span, they do not allow to determine the number of iterations required to achieve an $\epsilon$-optimal policy, unless one has a priori bounds on $t_{mix}$ or $h^*$. While our SAVIA algorithm has the same drawback (see Section 3.2), by using a doubling trick and the empirical Bellman error we get SAVIA+ which finds an $\epsilon$-optimal policy with high probability. This is a key advantage over previous methods, and we believe it is the first method that is "fully" prior-knowledge-free. We will clarify this in the revised version.
2. Novelty of results for discounted MDPs (DMDPs)
Our main contribution is clearly the sample complexity for average MDPs. However, given the novelty of anchoring and recursive sampling in RL, exploring their use for DMDPs is natural and instructive. Let us discuss the relevance of our Bellman error by comparing to [18, 19]. For ergodic DMDPs with $t_{mix} \le 1/(1-\gamma)$, [18] obtains sample complexity $O(SA t_{mix}(1-\gamma)^{-2} \epsilon^{-2})$. For weakly communicating and general MDPs, if $H ,B+H \le 1/(1-\gamma)$, [19] obtains $O(SA H(1-\gamma)^{-2} \epsilon^{-2})$ and
$O(SA (B+H)(1-\gamma)^{-2} \epsilon^{-2})$, respectively. These bounds include the factors $t_{mix}, H, B$ and hold only for $\gamma$ large enough. In contrast our Theorem 4.2 provides a sample complexity $O(SA (1-\gamma)^{-2} \epsilon^{-2})$ to achieve $\epsilon$-Bellman error, with no additional parameters and assumptions. We believe this is still a meaningful contribution.
The fixed-point residual, corresponding to the Bellman error in MDPs, has been widely used as a performance measure [10, 14] and, unlike the distance to the Q*, it is computable. This also holds in the generative setting where one can compute the empirical Bellman error, whereas estimating the policy error is more challenging as it involves the g*. Moreover, the information-theoretic lower bound of the Bellman error has been recently studied in the tabular setup [11, 12]. In the generative setting the lower bound for $\epsilon$-optimal policy is $O(SA (1-\gamma)^{-3} \epsilon^{-2})$ [2]. Our Theorem 4.2 shows that the Bellman error is at most $O(SA(1-\gamma)^{-2}\epsilon^{-2})$, which we believe is the best upper bound currently available and does not follow from previous results. We will clarify this in the revision.
Regarding Theorem 4.3, as noted by the reviewer, [20] presents a model-based approach with complexity depending on V*. Although [20, Theorems 9 and 10] guarantee the convergence, they require knowledge of V* to determine the number of iterations to obtain an $\epsilon$-optimal policy. In contrast, SAVID+ does not require knowledge of Q* to get $\epsilon$ optimality. In this sense, our Theorem 4.3 improves over prior results. As suggested, we will expand our review of prior works for DMDPs, with a more detailed discussion on model-based and model-free methods.
3. Propositions 2.1 and 4.1
For completeness and readability we think it is useful to include Propositions 2.1 and 4.1 which connect the Bellman error to the policy error, justifying our focus on the former. Proposition 2.1 is a Q-factor variant of a result for bias vectors in [13], whereas Proposition 4.1 is a simple consequence of estimates for contractions. However, we could not find these statements explicitly in [3] nor other prior works. We will clarify this in the revision and would be happy to include a reference for these facts.
4. Stopping rule
Thanks for mentioning the connection with [15]. We note however that [15] considers a model-based algorithm with a stopping rule based on an estimate of the diameter of the MDP, restricting the framework to communicating MDPs. In the revision we will discuss this connection.
5. Prior work on stochastic Halpern in MDPs
We are sorry for the misunderstanding (line 49 col. 2). We intended to stress that our work is the first to combine recursive sampling with Halpern iteration. As the reviewer noted, we acknowledged that [4] used Halpern iteration in this context (line 207). Specifically, [4] considers stochastic Halpern iteration for nonexpansive maps using minibatches, establishing error bounds in expectation. For average MDPs this yields a residual of at most $\epsilon$ in expectation with a larger sample complexity $O(SA\epsilon^{-7})$. We suspect it should be possible to get $\epsilon$-optimality in high probability with that rate, but the bound is not explicit in its dependence on the $H$. So it is unclear whether it can be implemented without prior knowledge (as the case of [8, 20]). We will clarify this in the revised version. | Summary: The established sample complexity of $O(\frac{1}{\epsilon^2})$ for average reward MDPs using a interesting approach of Halpern's iteration. The result is independent of mixing time which is very important.
Claims And Evidence: Looks good.
Methods And Evaluation Criteria: Seems so.
Theoretical Claims: I didn't verify the proofs.
Experimental Designs Or Analyses: No
Supplementary Material: No
Relation To Broader Scientific Literature: Average reward MDPs are crucial field of studies, with many real world setting, hence the sample complexity for the same, is of great importance.
Essential References Not Discussed: > The paper [1], also has similar results $O(\frac{1}{\epsilon^2})$ convergence using span norm. Requesting authors to compare their work with this.
> [2] also proves sample complexity of $O(\frac{1}{\epsilon^2})$, without knowledge of mixing time.
[1] @misc{zurek2024spanbasedoptimalsamplecomplexity,
title={Span-Based Optimal Sample Complexity for Average Reward MDPs},
author={Matthew Zurek and Yudong Chen},
year={2024},
eprint={2311.13469},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2311.13469},
}
[2] @misc{ganesh2024orderoptimalglobalconvergenceaverage,
title={Order-Optimal Global Convergence for Average Reward Reinforcement Learning via Actor-Critic Approach},
author={Swetha Ganesh and Washim Uddin Mondal and Vaneet Aggarwal},
year={2024},
eprint={2407.18878},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2407.18878},
}
Other Strengths And Weaknesses: Strengths: Paper is well written as easy to read. The theoretical results are convincing. The approach (Halpern's iteration) used is very interesting.
Weakness: It is not sure, how the result improves the SOTA, as the $O(\frac{1}{\epsilon^2})$ sample complexity without knowledge of mixing time already exits in [1].
[1] @misc{ganesh2024orderoptimalglobalconvergenceaverage,
title={Order-Optimal Global Convergence for Average Reward Reinforcement Learning via Actor-Critic Approach},
author={Swetha Ganesh and Washim Uddin Mondal and Vaneet Aggarwal},
year={2024},
eprint={2407.18878},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2407.18878},
}
Other Comments Or Suggestions: Suggestions: Better literature surevey would better position the paper and its contribution.
Questions For Authors: Q1: The paper states the average reward MDP has no contraction mapping, hence uses Halpern iteration (anchored VI) that has convergece rate of $O(\frac{1}{k})$. However, as shown in section 6.6 of [1], it proves average reward MDP Bellman operator is a $\gamma$-contraction where $\gamma$ is a function of mixing time of the MDP. It would be appreciated, if the authors could clarify these two.
[1] [Wiley Series in Probability and Statistics] Martin L. Puterman - Markov decision processes_ discrete stochastic dynamic programming (1994, Wiley-Interscience)
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the opportunity to clarify some points that were not transparent
in our manuscript, and to better put in perspective our contribution with respect to
previous work. For the list of references please refer to the reply to Reviewer BWFD.
1. Comparison with [19]
There are two major differences between our results and those in [19, Zurek & Chen].
First, in contrast with SAVIA+ which is model-free, the algorithm proposed in [19] is
model-based with its higher memory requirements for storing the empirical transition kernel.
Second, and more importantly, Zurek & Chen's method requires prior knowledge of an upper
bound for the span seminorm of the bias vector $H\geq \\|h^*\\|_{sp}$, whereas SAVIA+ does not
require any prior-knowledge.
2. Comparison with [7]
Ganesh et al. [7] study a model-free algorithm for average reward MDPs, although in a different context of Markovian sampling and function approximation which allows to deal with large state spaces by considering a parameterized family of policies. The proposed actor-critic algorithm is designed to optimize the parameters of the policy. The results are restricted to ergodic MDPs, although their method does not require prior-knowledge of the mixing time. The main result, Theorem 1, establishes an optimal asymptotic convergence rate in expectation, although no finite error bounds in high probability are provided. As a consequence the model and its underlying assumptions, as well as the algorithm and the complexity results, are of a different nature compared to our paper and neither one implies the other.
3. Strict contraction vs nonexpansivity of Bellman's operator
For average reward MDPs with finite mixing times Bellman's operator is indeed a strict contraction in span seminorm. However, this property fails in the weakly communicating case where $t_{mix}$ can be infinite, and Bellman's operator may be just nonexpansive with multiple fixed points. Consider a simple example with two states $S=\\{s_1,s_2\\}$ and two actions $A=\\{a_1,a_2\\}$ with deterministic transitions: action $a_1$ keeps the process at the current state, whereas $a_2$ moves to the other state; all rewards are $1$ except when moving from $s_2$ to $s_1$ under action $a_2$ whose reward is $0$. The optimal reward is $g^*=1$ and Bellman's operator for the bias $h=(h_1,h_2)$ is (see line 93 second column of our paper): $T(h_1,h_2)=(\max\\{h_1,h_2\\},\max\\{h_1-1,h_2\\})$ which is nonexpansive in span seminorm but not a contraction. In fact $T$ has a continuum of fixed points even up to identification modulo constants, namely, all vectors $h=(h_1,h_2)\in\mathbb{R}^2$ such that $h_1-h_2 \in[0,1]$. | null | null | null | null | null | null |
LapSum - One Method to Differentiate Them All: Ranking, Sorting and Top-k Selection | Accept (poster) | Summary: LapSum introduces a unified method for creating differentiable versions of ordering operations—such as ranking, sorting, and top‑k selection—by leveraging a closed-form inversion of the Lap-Sum function (the sum of Laplace CDFs). This approach allows efficient gradient computation in $O(n \log n)$ time while using only $O(n)$ memory, making it well-suited for high-dimensional data. The theoretical framework shows that these soft approximations converge to their hard, discrete counterparts. Extensive experiments on datasets like CIFAR‑100 and ImageNet demonstrate competitive or superior performance compared to existing methods, and the availability of both CPU and CUDA implementations underscores its practical applicability in large-scale neural network training and optimization tasks.
Claims And Evidence: I think the evidence is generally clear and convincing.
Methods And Evaluation Criteria: The methods are well-justified for the problem at hand.
Theoretical Claims: I have not checked the proof, but theorem statements are reasonable for me.
Experimental Designs Or Analyses: The experimental design is generally sound.
Supplementary Material: I went through the appendix.
Relation To Broader Scientific Literature: The paper situates its contributions within a growing body of work on differentiable ordering operations. Previous approaches often faced challenges in computational efficiency or relied on iterative procedures, LapSum introduces a closed-form inversion using the Lap-Sum function, leading to a unified framework that efficiently computes gradients in O(n log n) time and requires only O(n) memory.
Essential References Not Discussed: I am not familiar with literature at all.
Other Strengths And Weaknesses: - Strengths: From my perspective, I think this paper is well-written, and novel in the sense that this it proposes unified framework for differentiable ordering that creatively combines ideas from previous works on smooth approximations with a closed-form solution using the Lap-Sum function.
- Weakness: since I am not familiar with literate, experinments only focus on the calssification tasks, which might hurt the broad implicaition of the current method. Another application can be done to demonstrate the generality of the approach.
Other Comments Or Suggestions: Overall, I find the paper both rigorous and engaging. However, for readers like me who may not be deeply familiar with the literature, it would be beneficial to include concrete examples that explain in greater detail the importance and applications of ranking, sorting, and top‑k selection. Additionally, incorporating a concise pseudocode or algorithmic summary that outlines the key steps of LapSum in the main text would help clarify the intuition behind the algorithm and its general use.
Questions For Authors: See the comment above.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your detailed review and constructive suggestions for improving our paper.
1. Applications of our method
Our paper introduces several applications including top-k learning, soft ranking, sorting, and permutation learning. To address your request for concrete examples:
- Vector Quantized-VAE models can directly leverage our differentiable sorting approach in their codebook optimization process, improving representation learning efficiency.
- Large Language Models benefit from our ranking method when analyzing token distributions and building optimized dictionaries, enabling more effective model analysis and interpretation. Finding the dictionary of a trained Transformer dictionary provides a method for its deeper analysis.
We have included pseudocode for the LapSum model in Appendix A, designed for straightforward implementation. Additionally, the attached code contains complete implementations in both PyTorch and CUDA, with performance competitive with existing methods.
2. Experiments
While our paper focused primarily on classification applications, we will expand the experimental section to include soft ranking and sorting evaluations. These additions will demonstrate that LapSum provides a unified approach applicable across all differentiable soft ordering problems. The results will show comparable or superior performance to specialized methods while maintaining our closed-form advantages. [Link to figures with comparison of ranking and sorting solutions.](https://anonymous.4open.science/r/icml25-7AB6/README.md)
Results for sorting methods on forward (CPU, 10* - space dimension):
|Metric|Method|10⁷|10⁶|10⁵|10⁴|10³|
|:-|:-|-:|-:|-:|-:|-:|
|Mean time|Lap-Sort|59.944|6.163|0.596|0.059|0.009|
||Blondel et al.|98.884|6.077|0.443|0.044|0.007|
|Max memory(MB)|Lap-Sort|43.36|4.74|0.87|0.49|0.45|
||Blondel et al.|29.09|3.44|0.84|0.57|0.56|
Results for ranking methods on forward (CPU, 10* - space dimension):
|Metric|Method|10⁷|10⁶|10⁵|10⁴|10³|
|:-|:-|-:|-:|-:|-:|-:|
|Mean time|Lap-Rank|32.379|2.81|0.245|0.021|0.003|
||Blondel et al.|110.712|6.034|0.437|0.04|0.008|
|Max memory|Lap-Rank|33.83|3.85|0.85|0.55|0.52|
||Blondel et al.|19.53|2.45|0.75|0.57|0.55|
We appreciate your thoughtful feedback and believe these clarifications and additions will strengthen the paper considerably. | Summary: Authors propose “LapSum” - that yields differentiable versions of ranking, sorting, top-k selection, and permutations, all in closed form, with low time complexity: O(nlogn) (same as any sorting algorithm), and a linear memory.
Authors define this F-sum function and then define the ranking task in terms of F-sum function.
The main contributions is: 1) Defining all “soft” ordering tasks can be built by plugging the relevant r and appropriate $\alpha$ and then inverting or evaluating F-Sum(\alpha). A naive approach would need iterative solutions and result in O(n^2) complexity. 2) Authors choose
F as the CDF of the Laplace distribution, the computations become both closed-form and O(nlogn) complexity.
## update after rebuttal
I had one clarification question around experiments with learning-to-rank datasets, and authors clarified that it is not the norm in related works to experiment with learning-to-rank datasets, and authors experimental setup is indeed valid. My original assessment (4) has not changed.
Claims And Evidence: The claims made hold theoretically, and empirically, though I have some reservations (more of that in questions to the authors section).
Methods And Evaluation Criteria: Authors use benchmark datasets used in previous papers from the "differentiable sorting" literature. I have some additional comments (more of that in questions to the authors section).
Theoretical Claims: The theoretical claims seem to be correct, though I didn't get a chance to go through all of the steps in all proofs.
Experimental Designs Or Analyses: I am not fully convinced with the experimental setup. I am open to discussion with the authors on this.
Supplementary Material: Authors have provided the source code in supplementary material, thanks for that. I didn't get a chance to run it locally.
Relation To Broader Scientific Literature: The paper proposes a model for learning to rank task, which is critical to many real world applications.
Essential References Not Discussed: Authors didn't discuss the connection (or a lack of) with the following differentiable ranking works:
1. Ustimenko, Aleksei, and Liudmila Prokhorenkova. "Stochasticrank: Global optimization of scale-free discrete functions." International Conference on Machine Learning. PMLR, 2020.
2. Oosterhuis, Harrie. "Learning-to-rank at the speed of sampling: Plackett-luce gradient estimation with minimal computational complexity." Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval. 2022.
3. Sakhi, Otmane, David Rohde, and Nicolas Chopin. "Fast slate policy optimization: Going beyond Plackett-Luce." arXiv preprint arXiv:2308.01566 (2023).
Other Strengths And Weaknesses: The writing of the paper could be improved, for example the connection of the CDF with the ranking task is not immediately clear. It would be helpful if authors first formally write down the ranking task (for ex: rank as sum of indicator function etc. to connect with a CDF).
Also, the method is designed for ranking tasks, but popular learning to rank datasets/tasks are missing, for ex: MSLR, Yahoo etc LTR datasets [1].
1. Ustimenko, Aleksei, and Liudmila Prokhorenkova. "Stochasticrank: Global optimization of scale-free discrete functions." International Conference on Machine Learning. PMLR, 2020.
Other Comments Or Suggestions: As I noted in the previous section, if the authors can first formally introduce the ranking task (mathematically) and it's connection with CDFs and the math following, it would be more helpful for the readers.
Questions For Authors: Please see my comments previously about the lack of connections to some previous works and lack of learning to rank datasets/task.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review of our paper.
1. Related references
We appreciate your suggestions regarding additional references. Our literature review focused primarily on differentiable soft ordering, ranking, sorting, and top-k methods as addressed in works by Cuturi, Lapin, Berrada, Petersen, Blondel, and others (all included in bibliography). While our primary aim was to propose a new theoretically well-founded and computationally efficient tool to compare with these approaches, we acknowledge the importance of gradient boosting stochastic smoothing methods and Plackett-Luce based models. We will include these references in our revised manuscript.
2. Experiments
For direct comparisons, we utilized applications addressed in existing literature, such as top-k classification and permutation matrix classification, particularly in high-dimensional spaces. While the additional datasets you suggest are very valuable, incorporating them would extend beyond our current scope. We plan to evaluate our framework on a wider range of applications in future work.
Regarding experimental setup, we followed protocols established in the aforementioned literature to ensure direct compatibility, as our main goal was to develop a well-grounded and computationally effective tool. The direct comparisons with boosting approaches are in plans for our research.
We have not included soft ranking due to limited availability of solutions with probability values. However, we have defined the LapSum approach for both ranking and sorting tasks and will include these results in the final version. [Link to figures with comparison of ranking and sorting solutions.](https://anonymous.4open.science/r/icml25-7AB6/README.md)
Results for sorting methods on forward (CPU, 10* - space dimension):
|Metric|Method|10⁷|10⁶|10⁵|10⁴|10³|
|:-|:-|-:|-:|-:|-:|-:|
|Mean time|Lap-Sort|59.944|6.163|0.596|0.059|0.009|
||Blondel et al.|98.884|6.077|0.443|0.044|0.007|
|Max memory(MB)|Lap-Sort|43.36|4.74|0.87|0.49|0.45|
||Blondel et al.|29.09|3.44|0.84|0.57|0.56|
Results for ranking methods on forward (CPU, 10* - space dimension):
|Metric|Method|10⁷|10⁶|10⁵|10⁴|10³|
|:-|:-|-:|-:|-:|-:|-:|
|Mean time|Lap-Rank|32.379|2.81|0.245|0.021|0.003|
||Blondel et al.|110.712|6.034|0.437|0.04|0.008|
|Max memory|Lap-Rank|33.83|3.85|0.85|0.55|0.52|
||Blondel et al.|19.53|2.45|0.75|0.57|0.55|
3. Theoretical claims
Thank you for recommending a more formal definition connecting ranking to the CDF. We will incorporate this description to improve clarity, following your recommended approach. A key advantage of LapSum is its theoretically elegant closed-form solution, naturally not forgetting its computational efficiency. This non-iterative approach provides enhanced stability compared to existing methods that rely on iterative processes.
4. Other concerns and concluding remarks
We believe we have addressed your primary concerns and will implement the suggested improvements in our final manuscript. In the revised version, we will add the missing references to better describe the related research field, include a formal description of ranking and CDF, and more clearly articulate the impact of our proposal on the field. | Summary: This paper proposes a new method for computing differentiable approximations of ranking, sorting and top-k operators. This method is based on considering sums of the CDF of the Laplace distribution, which defines the approximations for well chosen arguments, with a regularization term $\alpha$. The choice of the Laplace distribution is motivated by the fact that the proposed operators can be computed and differentiated in closed form efficiently in this case, as formally detailed in the paper.
The method is illustrated for top-k selection in multilabel classification on CIFAR and Imagenet, as well as k-nn for Image classification on MNIST and CIFAR and soft-permutation on MNIST.
Experimental results also validate the computational and memory efficiency of the proposed approach for the top-k operator, either surpassing or competing with previous approximations on different hardware.
Claims And Evidence: - the efficient calculation of the Lap-Sum, its inverse and derivatives are supported by convincing mathematical evidence. I nevertheless think it would have been clearer to formalize the results of paragraph "Calculation of inverse function" (l.269) and section 4.2 (derivatives) with a proposition, as is done for section 4.1.
- the fact that $F-Rank_\alpha$ and $F-Top_\alpha$ approximate the ranking and top-k operator is supported by convincing mathematical evidence. I would have expected the same for $F-Sort_\alpha$, for which, unless I am mistaken, a proof is missing.
Methods And Evaluation Criteria: The paper considers evaluations and methods that were used in prior works, such as Petersen et al. (2022b), Berrada et al. (2018), and Grover et al. (2019). The evaluations mostly concern the top-k operator, with also one experiment on soft-permutations.
To me, the evaluation makes sense for the problem and application at hand, but it would have strengthened the paper to add applications on sorting and ranking, such as the ones in Blondel et al., 2020, or in Berthet et al., 2020 (Learning with Differentiable Perturbed Optimizers).
Theoretical Claims: I checked the proofs in the main text, which seem okay to me despite some typos.
- The fact that F-Rank_alpha is applied at $r_k$ in l.154 versus applied on $r$ in theorem 3.2 is confusing. I understand that l.154 should be $(F-Rank_\alpha(r))_k$ instead?
- For the proof of theorem 3.2, can the authors clarify why there is a 1/2 factor which we don't have in the theorem?
- The definition of F-sort_alpha involves the inverse of F-sum_alpha, but this one is only evaluated with one argument, whereas it expects two arguments (see l.125, for instance).
- Typo in the definition of F-Top_alpha.
Experimental Designs Or Analyses: - Yes, I checked the soundness/validity of the experimental designs, which are mostly adapted from previous works. I did not notice any issue despite the aforementioned one, which consists of missing experiments for the sorting and ranking operators.
- I appreciated the empirical validation of the efficiency of the proposed method for top-k selection.
- Displaying the std for the results in the tables would have been helpful to validate the statistical significance of the results.
Supplementary Material: I checked the efficiency validation curves on GPUs (fig 12 and 13 in app. D).
Relation To Broader Scientific Literature: The key contributions of the paper are quite well related to the broader scientific literature. The paper clearly cites existing papers proposing competing methods, and compares itself to these methods in terms of accuracy (top-k classification, soft-permutation, k-nn in tab 1-4) as well as efficiency (space and time complexity).
However, I still think that a comparison is missing for sorting and ranking, since the experimental part of the paper solely focuses on top-k and soft-permutations.
Essential References Not Discussed: Most papers I know about in the differentiable programming area to propose relaxations of sorting, ranking, or top-k operators are discussed, except Berthet et al., 2020: Learning with Differentiable Perturbed Optimizers, which is clearly aligned with the topic considered, by proposing soft ranking operators for label ranking applications.
Other Strengths And Weaknesses: Strengths:
- The paper proposes a method that is faster than existing ones for computing and differentiating differentiable approximations of the top-k operator.
- The method is easy to understand and to implement.
I tend to lean towards acceptance of the paper, but there are a few weaknesses that bother me:
- The paper introduces differentiable approximations for sorting and ranking, but these are not considered in the experimental part of the paper.
- The sentence "Through extensive experiments, we demonstrate that our method outperforms state-of-the-art techniques for high-dimensional vectors and large k values" feels like an overstatement to me. It is true that the proposed method outperforms existing ones in terms of speed and memory (even though I did not see any experimental analysis for sorting and ranking), but it does not demonstrate superiority in terms of accuracy.
- There are too many typos.
- The method is really comparable to Xie et al. (2020) in terms of efficiency: can the authors comment on the advantage of their method compared to this one? Why should we use one instead of the other?
- The proposed top-k operator is not sparse (in the sense that as soon as $\alpha \neq 0$, all the coefficients will be non zero). This should be mentioned because this prevents the use of the operator for pruning weights in neural networks or for mixture of experts.
Other Comments Or Suggestions: I strongly suggest the authors carefully proofread the paper to eliminate the typos. Here is a non-exhaustive list:
- l. 26: optimal transport-based?
- l. 29
- l. 33: what are n and k?
- "Techniques employed to solve these problems include relaxations and estimators, ranking regularizers, even learning-based ranking solutions": add references.
- l. 69: "value of k" + parentheses for the ref.
- l. 117: I find it misleading that F_alpha and f_alpha correspond to two different formulas, but this is a detail.
- l. 154: inconsistency between the definition of F_rank here and in thm. 3.2 (in terms of argument, scalar vs. vector).
- l. 170
- l. 215: doubly stochastic?
- l. 270: the function?
- l. 305: Appendix x2
- l. 410
Questions For Authors: - Why haven't you considered applications to sorting and ranking ?
- Lap Sum is really comparable to Xie et al (2020) in terms of computational and memory efficiency: can the authors comment on the advantage of their method compared to this one ? Why should we use one instead of the other?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your very careful reading and insightful review of our paper. We address your questions as follows:
1. Why haven't we considered sorting and ranking applications?
Our research focused on designing an efficient model with a closed form for soft ordering tasks, and we defined models for both ranking and sorting. For applications, we deliberately aligned with Petersen's established framework to facilitate meaningful comparisons. Our experimental scope was partially constrained by implementations that allow cross-entropy loss value comparisons. Nevertheless, we conducted experiments for both sorting and ranking using LapSum, achieving results comparable to existing solutions. These will be included in the final version. [Link to figures with comparison of ranking and sorting solutions.](https://anonymous.4open.science/r/icml25-7AB6/README.md)
Results for sorting methods on forward (CPU, 10* - space dimension):
|Metric|Method|10⁷|10⁶|10⁵|10⁴|10³|
|:-|:-|-:|-:|-:|-:|-:|
|Mean time|Lap-Sort|59.944|6.163|0.596|0.059|0.009|
||Blondel et al.|98.884|6.077|0.443|0.044|0.007|
|Max memory(MB)|Lap-Sort|43.36|4.74|0.87|0.49|0.45|
||Blondel et al.|29.09|3.44|0.84|0.57|0.56|
Results for ranking methods on forward (CPU, 10* - space dimension):
|Metric|Method|10⁷|10⁶|10⁵|10⁴|10³|
|:-|:-|-:|-:|-:|-:|-:|
|Mean time|Lap-Rank|32.379|2.81|0.245|0.021|0.003|
||Blondel et al.|110.712|6.034|0.437|0.04|0.008|
|Max memory|Lap-Rank|33.83|3.85|0.85|0.55|0.52|
||Blondel et al.|19.53|2.45|0.75|0.57|0.55|
2. Justification for using our model over Xie's given comparable computation time and memory
While both models demonstrate similar computational efficiency as shown in our statistical tests (Figures 5, 7), LapSum outperforms Xie's model in all but one test. Both methods offer high accuracy (important in application, as Xie notes) and handle all derivatives effectively. However, the key advantage of our approach is its closed-form solution, whereas Xie's method relies on an iterative approach for both the model and its derivatives. This closed-form formulation provides theoretical elegance and potentially better stability in complex applications.
3. Missing reference to Berthet's paper
We acknowledge the omission of Berthet et al. 2020 "Learning with differentiable..." paper. While we referenced a Blondel et al. 2020 "Fast differentiable..." paper which addresses related solutions, we will include Berthet's work in our final version, as it aligns very well with our approach and provides excellent context on related research.
4. Theoretical claims
Thank you for your detailed analysis. We confirm that:
- The definition in line 154 should indeed be $(F-Rank_\alpha(r))_k$ and will be corrected. In later lines, where some indices may be obvious, we shall note it clearly.
- For $F-Sort_\alpha$ (line 125), we will add a footnote clarifying that $r$ in the definition is the default parameter.
- In Theorem 3.2 proof the ${1}\over{2}$ is the value of $F_\alpha$ function at $0$, see e.g. Fig.2. The mismatch between the formulation and proof of the theorem is a typo to be corrected.
- The proof for $F-Sort_\alpha$ is very similar, and we can add it in the final version.
5. Experimental issues
We appreciate your positive assessment of our experimental flow. Regarding standard deviations in experiment tables: we relied on published results from other researchers where original code wasn't always available. Reimplementing these approaches would introduce potential inconsistencies and errors. Consequently, we cannot provide deviation values at this stage but acknowledge this as an area for future work.
6. Potential overstatement in abstract
Our intention was to emphasize that our tool is theoretically sound, offers a closed-form solution, and is computationally competitive for high-dimensional spaces. The wording was not meant to imply superiority but rather computational competitiveness. We will revise this to better reflect our position.
7. Typographical errors
We will correct all identified typos (as addressed above) to improve clarity. Regarding "doubly stochastic matrix" we were referring to permutation matrices $P_{k,c}$ that are both row- and column-stochastic, as described in Petersen 2022 "Differentiable top-k..." The $n$ and $k$ in the introduction denote the space dimension and the number of top values to be selected — this shall be clarified. Thank you for careful reading. | null | null | null | null | null | null | null | null |
Generalization Performance of Ensemble Clustering: From Theory to Algorithm | Accept (poster) | Summary: This paper explores the theoretical foundations of ensemble clustering, focusing on its generalization performance, including generalization error, excess risk, and consistency. The authors derive convergence rates for both generalization error and excess risk, which are bounded by $\mathcal{O}(\sqrt{(\log n / m)} + 1/\sqrt{n})$ ($n,m$ are the numbers of samples and base clusterings) and demonstrate that ensemble clustering achieves consistency when both m and n approach infinity, and $m \gg \log n$. Recognizing that $m, n$ are finite in practice, the authors theoretically demonstrate that better clustering performance can be achieved by minimizing the bias of base clustering from its expectation and maximizing the diversity among base clusterings. Based on this, they instantiate their theory to a novel algorithm that utilized high-confidence pairwise similarity to approximate the expected clustering and solve it using a reduced gradient descent method, achieving state-of-the-art performance.
Key contributions include:
(1) For the first time, the authors derive the theoretical guarantees for generalization error, excess risk, and consistency of ensemble clustering;
(2) They develop a bias-diversity decomposition and innovatively establish the relationship between diversity and robustness in ensemble clustering;
(3) They propose a practical algorithm validated through extensive experiments. This work bridges theory and practice, offering both rigorous analysis and a high-performing solution for ensemble clustering.
Claims And Evidence: The methods proposed in the paper are well-aligned with the ensemble clustering problem and their claims made in this paper are supported by both theoretical analysis and experiments. I think the theoretical results in this paper are clear, providing generalization error, excess risk bounds, and consistency. The exploration of Bias and Diversity in ensemble clustering is well-developed, with a solid theoretical foundation and sufficient experimental validation.
Methods And Evaluation Criteria: The methods proposed in the paper are well-aligned with the ensemble clustering problem. The authors define the objective function of ensemble clustering in the form of a spectrum, which is logical and reasonable. Using this objective function, the authors investigate its generalization performance, including generalization error, excess risk, and consistency. These theoretical insights are of significant importance for research in ensemble clustering. Unlike heuristic definitions of the objective function, the authors instantiate their theory to develop a new algorithm. I believe this is highly valuable and provides a fresh perspective for the theoretical study of ensemble clustering. The adopted benchmark datasets (10 real datasets) and evaluation criteria (NMI, ARI, Purity) are appropriate for measuring clustering performance and the experiments they designed are reasonable and useful.
Theoretical Claims: The authors provide several important theoretical claims related to ensemble clustering, specifically regarding its generalization performance, excess risk, and consistency. Besides, they provide a bias-diversity decomposition for ensemble clustering under their designed objective function, along with a proof of the equivalence between diversity and robustness. I have reviewed all the proofs provided, and the theoretical claims in this paper appear to be solid.
Experimental Designs Or Analyses: The authors provide several important theoretical claims related to ensemble clustering, specifically regarding its generalization performance, excess risk, and consistency. Besides, they provide a bias-diversity decomposition for ensemble clustering under their designed objective function, along with a proof of the equivalence between diversity and robustness. I have reviewed all the proofs provided, and the theoretical claims in this paper appear to be solid.
Supplementary Material: In the Supplementary Material, the authors provide all the datasets used in their study as well as the code for their proposed method. Additionally, they include the hyperparameters and random seeds used for each dataset. I find the reproducibility of this experiment highly convincing.
Relation To Broader Scientific Literature: This paper significantly advances the theoretical understanding of ensemble clustering, offering theoretical guidance for practical applications. It establishes a formal link between model diversity and robustness, providing a theoretical foundation for enhancing performance in various ensemble-based methods. Notably, the algorithm introduced is not heuristic but directly derived from their theoretical framework, likely increasing scholarly focus on theoretical research.
Essential References Not Discussed: I have reviewed the references cited in the paper and did not find any significant omissions.
Other Strengths And Weaknesses: Strengths
Originality: The paper offers a novel theoretical framework for ensemble clustering.
Quality: The theoretical results are rigorous and well-supported by comprehensive experiments.
Clarity: The paper is clearly written, with detailed explanations of complex concepts and a well-structured presentation of the theoretical derivations and experimental results.
Significance: The findings contribute valuable insights into the practical application of ensemble clustering, offering guidance for optimizing clustering performance in real-world scenarios.
Weaknesses
1.The paper would benefit from providing statistical significance tests to confirm the robustness of the reported improvements (e.g., paired t-tests or Wilcoxon signed-rank tests).
2.The details of some experiments are not fully detailed in the main text but are mentioned to be in the appendices. It would be beneficial to include a brief summary of key findings from these experiments in the main paper.
3.Although the authors have conducted extensive theoretical analysis of their algorithm, I believe it is necessary to add a part discussing the time complexity of their algorithm, as this would help to understand its practical implementation in real-world scenarios.
4.The authors' derivation from Equation (8) to Equation (9) seems somewhat abrupt. I would suggest they include more details to clarify the process.
Other Comments Or Suggestions: I don’t have other comments or suggestions.
Questions For Authors: Besides the questions I mentioned in the Weakness section, I would like to ask the authors whether they have considered more general cases when instantiating their theory, beyond just this spectral form of loss function?
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: **We sincerely thanks for all your constructive comments!**
>**Weakness 1. Statistical significance tests**
We conducted paired t-tests on our method using NMI, ARI, and Purity metrics, and the results show that our method significantly **outperforms the sota methods** compared across almost all datasets. We believe this further demonstrates the effectiveness of the proposed approach. The experiment is in <https://anonymous.4open.science/r/ICML8598/TabB345.pdf>
>**Weakness 2. Describe some key conclusions in the main text**
Due to page space limitations, we placed some conclusions in the appendix. Following your suggestion, we will move the key conclusions from Appendix E.4. Hyper-parameter Analysis, E.5. Ablation Experiment, and E.6. Ensemble Size Analysis into the main text and noted that readers can find the detailed analyses in the appendix.
>**Weakness 3: Weakness 2. Time complexity of the proposed algorithm**
The time complexity of our method is composed of the following parts: Construct each similarity, Calculate $\tilde{K}$, Solve $Z$, Compute reduced gradient, and Update $w$. Through theoretical analysis, the **time complexity** of our method is $O(n^{2.376})$. Detailed time complexities for each module can be found in the comments for Reviewer 2. Additionally, the time complexities of most baseline methods we compared are also $O(n^{2.376})$, yet our method **significantly outperforms** them. We also conducted time cost experiments on different datasets, and the results confirm that our method surpasses these Co-associate matrix optimization-based methods in terms of both performance and time efficiency. The experiment can be found in <https://anonymous.4open.science/r/ICML8598/FigA4.pdf>
>**Weakness 4: Clarify formula derivation (Eq. (8) to Eq. (9))**
For $\min_w -2tr(K^wK^*)$, our objective is to adjust the weights ${w}$ such that the weighted CA matrix $K^w$ closely approximates its expected value $K^*$. To this end, we **reformulate the problem as finding a low-dimensional embedding** $Z$ for $K^*$. This approach consolidates 1) $max_w tr(K^wK^* )$ and 2) $\max_Z tr(K^* ZZ^T)$ into a single expression, $max_Z tr(K^*ZZ^T)$, which corresponds to the Bias term in the objective function of Eq. (9). For $\min_w tr(K^wK^w)$, we apply a similar strategy. Specifically, for the term $\min_w tr(K^wK^w)$, we replace one instance of $K^w$ with the low-dimensional embedding $Z$, i.e., $tr(K^wK^w)\Rightarrow
\max_Z tr(K^wZZ)$, and transforming it into a min-max optimization problem. Additionally, we constrain $Z$ to be an orthogonal matrix in its columns. It is important to note that the original constraint $w^Tw=1$ is nonconvex and a standard relaxation technique is $w^Tw\le 1$. However, we revise this to $w1 = 1$ and also modify the definition of $K^w$. This approach has the advantage of allowing $w$ to be better interpreted as a weight distribution.
>**Question 1: Generalization form of loss function**
There is a **general framework** for our method. For a continuously differentiable and strongly convex function $\phi$, let $\Omega$ be a convex set where $x, y \in \Omega\$, we can define the Bregman divergence as:
$$D_\phi(x, y) = \phi(x) - \phi(y) - \langle \nabla \phi(y), (x-y) \rangle. $$
Thus, we can transform Eq. (7) in the text into a more generalized form:
$$D_{\phi}(K^*, K^w) = \frac{1}{m} \sum_{t=1}^m D_{\phi}(K^*, mw_tK^t) - \frac{1}{m} \sum_{t=1}^m D_{\phi}(K^w, mw_tK^t),$$
where $K^w = (\nabla \phi)^{-1}\left( \frac{1}{m} \sum_{t=1}^m \nabla \phi(mw_tK^t) \right) $.
Consequently, we can derive a generalized Bias-Diversity decomposition:
$$\underset{w}{\min} \, -\langle \nabla \phi(K^w), K^* \rangle + \langle \nabla \phi(K^w), K^w \rangle - \phi(K^w)$$
Based on this, we can let $\phi(x)$ be various metric functions, such as KL divergence, JS divergence, etc. We believe **this is a more significant conclusion** and will be further discussed in future work. | Summary: This paper investigates the theoretical foundations of ensemble clustering, focusing on its generalization performance, including generalization error, excess risk, and consistency. The authors derive theoretical bounds for these indicators and propose a new ensemble clustering algorithm based on their findings, demonstrating significant improvements over existing methods. The key contributions and findings are as follows:
1. The paper establishes the convergence rate for generalization error and excess risk, showing that increasing the number of base clusterings helps reduce the generalization error but cannot eliminate it. Furthermore, it proves that when both the number of samples nnn and base clusterings mmm approach infinity, with m>>log n, ensemble clustering achieves uniform convergence, meaning the clustering result progressively approximates the true data structure.
2. The study reveals that clustering performance can be improved by minimizing the bias of base clusterings (i.e., the difference between each base clustering and its expectation) while maximizing diversity among them. The authors further establish that maximizing diversity closely relates to robust optimization models.
3. Leveraging this theoretical framework, the authors introduce a novel ensemble clustering algorithm. It utilizes high-confidence elements to approximate the expected co-association matrix and formulates clustering as a min-max optimization problem. The algorithm optimizes the base clustering weights using a descending step-degree method to ensure low bias and high diversity. Experimental results on multiple datasets demonstrate superior performance compared to state-of-the-art methods.
Claims And Evidence: The claims presented in the paper are well-supported by both theoretical derivations and experimental validation.
1. The paper rigorously derives the generalization error bound, excess risk bound, and sufficient conditions for the consistency of ensemble clustering. These theoretical results establish a solid foundation for the feasibility and effectiveness of the proposed algorithm, providing strong theoretical support for ensemble clustering method selection.
2. Through comparisons with state-of-the-art methods, the experimental results demonstrate the superiority of the proposed algorithm across multiple datasets. The algorithm obtains good performance in terms of NMI, ARI, and Purity.
3. The paper effectively integrates the bias-diversity tradeoff principle into ensemble clustering optimization. By minimizing bias and maximizing diversity, the proposed approach enhances clustering performance. This concept is further validated through both algorithmic design and empirical results.
Methods And Evaluation Criteria: In this paper, several evaluation criteria such as NMI, ARI and Purity are adopted at the same time. This comprehensive evaluation method is more comprehensive and can evaluate the performance of the algorithm from different perspectives. NMI and ARI provide an assessment of association with real labels, while Purity focuses more on clustering accuracy. By using the multi-index evaluation method, the effectiveness of the integrated clustering method can be comprehensively measured and its applicability in practical problems can be ensured.
Theoretical Claims: I have reviewed the validity of the proofs for the theoretical claims presented in the paper. The key theorems—3.1 (generalization error bound), 3.2 (excess risk bound), and 3.3 (consistency)—are derived in a detailed and structured manner. The proof methodology is logical and rigorous, leveraging probability theory and statistical consistency principles. Intuitively, the results align with theoretical expectations, and the reasoning appears sound.
Experimental Designs Or Analyses: The experiments effectively demonstrate the advantages of the proposed method, but there are areas that could be further improved for a more comprehensive evaluation.
1. The paper evaluates the algorithm on multiple datasets.
2. The paper discusses key parameters such as convergence rate, number of iterations, and learning rate. However, a more detailed exploration of how these parameters influence performance across different datasets would strengthen the experimental findings.
3. Certain aspects that could enhance the credibility of the results are not explicitly addressed. For instance, ablation studies on the impact of individual components in the algorithm (e.g., the weighting strategy, bias-diversity optimization) could provide deeper insights into the contributions of each part. Additionally, comparisons with a broader range of baseline methods, particularly under different noise conditions, would further support the claims of robustness.
Supplementary Material: I reviewed the supplementary material associated with the paper. The appendix is mainly about the concrete process of theoretical proof, using matrix Bernstein inequality, Davis-Kahan theorem and other mathematical tools to prove the theorem and lemma and some supplementary experiments, as well as pseudo-code.
Relation To Broader Scientific Literature: This paper fills this gap in the generalization performance of ensemble clustering by providing a comprehensive analysis of the generalization error, excessive risk, and consistency of ensemble clustering.
Essential References Not Discussed: The paper has cited and discussed relevant prior findings and results necessary for contextualizing its contributions.
Other Strengths And Weaknesses: **strengths**
This paper provides a theoretical analysis of the generalization performance of ensemble clustering, addressing a key gap in understanding the theoretical foundations of ensemble clustering.
**weaknesses**
1. While the paper establishes sufficient conditions for clustering consistency, it does not discuss necessary conditions.
2. Some symbolic definitions for intermediate processes are omitted, which may affect clarity.
3. In Section 2.2, the notation i in the definition of \bar{A} is not explicitly explained.
4. The selection of an appropriate threshold is a challenge.
5. Although the paper mentions a method for setting the threshold, it does not thoroughly analyze the impact of different threshold choices on the results.
6. The computational complexity of the proposed algorithm is not analyzed in detail.
7. The optimization process, which uses a reduced gradient descent method to optimize the weighted matrix W and spectral embedding Z, lacks an in-depth discussion of its convergence, efficiency, and stability in practical applications. Since gradient descent methods can be sensitive to initialization, it remains unclear whether the proposed approach guarantees a global optimal solution.
8. While the paper validates its theoretical findings through experiments, these primarily focus on performance comparisons between algorithms. The verification of the theoretical results, such as experimentally confirming whether the convergence rates of generalization error and excess risk align with theoretical expectations—is not thoroughly addressed.
Other Comments Or Suggestions: I have no other comment.
Questions For Authors: This paper presents a new integrated clustering algorithm based on theoretical framework. How do different hyperparameter Settings affect the performance of the algorithm, and how to choose the optimal hyperparameter?
How do the results of the proposed algorithm compare with other state-of-the-art methods in terms of computational efficiency and scalability?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **We sincerely thanks for all your constructive comments!**
We denote "Experimental Designs Or Analyses" as E, "Questions For Authors" as Q, and "Weakness" as W to save space.
>**E2 & W7: Global optimal solution**
Our model is a **convex optimization problem of w** (optimization function is convex and equality constraint is affine). Specifically, for our optimization function $J(w)$, we have
$$J(aw_1+(1-a)w_2)=\max _{Z\in\Gamma}tr((2\tilde{K}+K^{aw_1+(1-a)w_2})ZZ^T)$$
$$=\max _{Z\in \Gamma}tr((2\tilde K+\sum _{t=1}^m{(aw _{1t}+(1-a)w _{2t})^2K^{(t)}})ZZ^T)$$
$$\le\max _{Z\in\Gamma}tr((2\tilde{K}+\sum _{t=1}^m{(aw _{1t}^{2}+(1-a)w _{2t}^{2} )K^{(t)}})ZZ^T)\quad Given\ a(a-1)\le 0$$
$$=\max _{Z\in\Gamma}tr((2a\tilde{K}+aK^{w_1}+2(1-a)\tilde{K}+(1-a)K^{w_2})ZZ^T)$$
$$\le a\max _{Z\in\Gamma}tr((2\tilde{K}+K^{w_1})ZZ^T)+(1-a)\max _{Z\in\Gamma}tr((2\tilde{K}+K^{w_2})ZZ^T)$$
$=aJ(w_1)+(1-a)J(w_2)$,
which means it is convex. Obviously, the constraint $\sum_{t=1}^m w^{(t)}=1$ is affine. Therefore, it can be theoretically demonstrated that our method will achieve the **global optimal value** with different initializations. As verified by repeated experiments on various datasets, with random initializations (see <https://anonymous.4open.science/r/ICML8598/FigA3.pdf>), our algorithm consistently attains the global minimum.
We initialize the learning rate as $\min(0.1,\min_t(w_t/\nabla_t))$ to maintain $w\ge0$. When reaching the minimum loss with above learning rate, we use the Golden search method for finer updates, stopping when $|w_{new}-w_{old}|<0.001$ (i.e., convergence criterion is 0.001). We set the maximum number of iterations to 100, but in practice, the algorithm terminates after just a few dozen iterations.
>**W4,5 & Q1: Hyperparameter setting**
**Our algorithm has only one hyperparameter**: threshold $\alpha$ and outperforms SOTA methods with $\alpha$ even fixed at 0.1. Besides, In Appendix E.4, Fig 4, we have exhibited the performance across different datasets when the **threshold ranges from 0.1 to 0.9** using grid search (a method consistent with all the baselines). The results show that when the threshold is in {0.1,0.3}, the model achieves better performance, indicating our algorithm is **not sensitive to the hyperparameter**.
>**E3: Ablation study & Noise situation**
In Appendix E.6, Table 6, we have conducted ablation experiments. It shows that performance declines when either the Bias or Diversity module is removed, indicating both are **important for optimal results**.
As your suggestions, we add more baselines (AAAI24, TKDD23, Inf Fus22, AAAI21) to validate our method under different levels of noise (from level 10% to 90%). The results show that our method **remains the best** in noise condition, which further demonstrate its robustness. The experiments are in <https://anonymous.4open.science/r/ICML8598/TabB12.pdf>
>**W6 & Q2: Computational efficiency**
The time complexity of each module in our method is as follows:
- Construct each similarity: $O(n^2)$
- Calculate \tilde{K}: $O(n^{2.376})$
- Solve Z: $O(n^2)$
- Compute reduced gradient: $O(n^2)$
- Update $w$: $O(m)$
Note that in matrix multiplication and eigenvalue decomposition, we can employ accelerated methods such as Coppersmith-Winograd algorithm. Thus, the time complexity of our method is $O(n^{2.376})$ and most of the baselines have the same time complexity, as they also involve matrix multiplication. We also conducted time cost experiment (which can be seen in <https://anonymous.4open.science/r/ICML8598/FigA4.pdf>), and the results show our method **outperforms these matrix optimization-based methods** in terms of both performance and time cost.
>**W1,8: Verification of theoretical results and necessary conditions for consistency**
Our convergence rates of generalization error and excess risk are both $O(\sqrt{\log n/m}+1/\sqrt{n})$. **In Sec 6.3, Fig 3, we have demonstrated the convergence rate of the excess risk bound on real dataset.** According to your comments, we conduct experiment on generalization error rate in <https://anonymous.4open.science/r/ICML8598/FigA5.pdf>, and the result shows that it is also consistent with our theory.
We derived the sufficient condition for consistency is $m\gg\log n$, and **it is noteworthy that we are the first to theoretically depict the relationship between clusterings number and sample size**. Additionally, the conclusion that $m\gg \log n$ is a mild condition, indicating that we only need a few base clusterings to satisfy the consistency condition. However, we think the necessary condition is very challenging and we have rarely seen any similar research on this topic. We would like to leave this as our future work.
>**W2,3: Symbolic definitions and Notations**
As suggested, we will revise the paper to avoid undefined notations. For example, we will correct the typo where $t$ is mistakenly written as $i$ in definition of $\bar{A}$, and add an additional explanation for it.
---
Rebuttal Comment 1.1:
Comment: The authors have adequately addressed the major concerns raised in the previous review. Based on the improvements and clarifications provided in the rebuttal, I am raising my score to 3. | Summary: The ensemble clustering is the problem of combining multiple base clusterings into a more accurate final clustering result. Prior research shows advances of ensemble clustering in practice while the theoretical analysis has fallen behind.
This paper provides the first generalization bound of emsemble clustering and extend their generalization bound into a new algorithm. The authors also conducted experiments to validate their bounds as evidence.
Claims And Evidence: Yes. The generalization bounds have been proved rigorously and experimental results also provide evidence.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: I have briely read the statement and proofs in appendix. The claims and the way to get there seems correct.
Experimental Designs Or Analyses: The authors have run experiments against a few existing benchmarks. The evaluation method also makes sense to me. Detailed introductions of experiment design is included in appendix.
Supplementary Material: I have briefly read the proofs and the experiment design in the appendix.
Relation To Broader Scientific Literature: Given the success of ensemble clustering in practice, results in this paper can benifit any machine learning research that conducts ensemble clustering as a subroutine.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Strength: This paper is a rigorous theoretical study of ensemble learning and the results proved are novel .
Weakness: This paper could certainly benefit from improvements in its writting quality. There is no formal problem definition and I have to read the Fred-Jain paper to understand many concepts. Authors should not assume every reader is an expert in the area of study.
Other Comments Or Suggestions: 1. On page 2, section 2.2, the definition of $\bar{A}=\frac{1}{m}\sum_{i=1}^m A^{(t)}$ is a typo?
2. On page 2, section 3, the definition of matrix D can not be found anywhere. Moreover the notation $D^{(t)-\frac{1}{2}}$ is very confusing.
Questions For Authors: 1. Can you please provide some insights on the gap condition between the k-th and k+1-th eigenvalues of K^*? Is it a mild condition or is it generally true in practice?
2. The $\sqrt \frac{\log n}{m}$ bound means it requires $m>>\log n$, namely, using more than $\log n$ base clusterings, to guarantee convergence to the ground truth, which can be achieved by a single clustering (say kernel k-means). Moreover, the experiment also show the loss is divergence when $m=\log \log n<\log n$. Is this a "more is less" contradiction?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **We sincerely thanks for all your constructive comments!**
>**Weakness 1: Improve writing quality and clarify some concepts**
Thanks for your suggestions! We will make the following changes in the final version of the paper:
- Provide a more detailed explanation of ensemble clustering and motivation in Introduction
- Rename Section 2.2 from "Co-Association Matrix" to "Ensemble Clustering" and add a simple flowchart to illustrate the process. (Fig can be seen in <https://anonymous.4open.science/r/ICML8598/FigA1.pdf>)
- Add Section 2.3 "Problem Definition" to clarify the problem definition
- In Section 3 "Generalization Performance" we will provide more details for several equations
- In Section 4 "Key Factors in Ensemble Clustering" we will include a more detailed derivation of Eq. (8) to Eq. (9)
- Move some important experimental conclusions into Section 6 Experiments instead of Appendix
- Review the entire paper to avoid undefined symbols and typos
>**Comment 1: A typo in $\bar{A}=1/m\sum_i^m A^{(t)}$**
The correct form is $\bar{A}=1/m\sum _t^m A^{(t)}$, and we will correct it in the final version.
>**Comment 2: Definition of matrix $D$ and notation $D^{(t)−1/2}$ are confusing**
$D^{(t)}$ is the degree matrix of similarity matrix $A^{(t)}$, which is diagonal and defined as $D_{jj}^{(t)}=\sum_{i=1}^n A_{ij}^{(t)}$ and $D^{(t)-1/2}$ is defined as a diagonal matrix where the diagonal elements are $(D _{jj}^{(t)})^{-1/2}$ and the off-diagonal elements are 0. We will define it in the final version.
>**Question 1: Is the gap condition between the k-th and k+1-th eigenvalues of $K^\*$ mild or generally true**
It is a mild condition and can be explained in two aspects.
1. $K^*$ is regarded as the true similarity (kernel) of the data, and in many papers regarding kernels, they also adopted this assumption, such as [1] and [2].
2. On different datasets, we randomly sampled base clusterings and then calculated their means to approximate $K^*$ (since the expectation is unattainable). The results show that in our 1000 experiments, **not once** were the k-th and (k+1)-th eigenvalues equal. The experiment can be seen in <https://anonymous.4open.science/r/ICML8598/FigA2.pdf>
Therefore, it is a **mild condition** and we will incorporate this condition into the General Assumptions section in the final submission of our paper to avoid misunderstandings.
[1] Error bounds for kernel-based approximations of the Koopman operator.
[2] Scalable Multiple Kernel Clustering: Learning Clustering Structure from Expectation.
>**Question 2: A single clustering algorithm (like kernel k-means) can ensure convergence to the true value, but why does ensemble clustering require $m\gg \log n$. Experiment shows the loss is divergence when $m=\log \log n < \log n$. Is this a "more is less" contradiction?**
- It's important to note that in the generalization analysis of single clustering algorithm (like kernel k-means), it is often assumed that the data features are accessible or the kernel function represents the true similarity relationships in the data. Their studies concern the necessity of generalizing from finite-dimensional matrices to infinite-dimensional integral operators. But in ensemble clustering, our similarity matrices are binary and generated by $n\times 1$ vectors (**no features or kernel functions**). We can view features or kernel functions as high-dimensional representations of the data, while considering the base clusterings in ensemble clustering as **special discrete one-dimensional projections**. The problem we are addressing is whether we can achieve the same results using **only** these discrete one-dimensional embeddings, instead of relying on high-dimensional representations. Our research shows that consistent results can be obtained when $m\gg \log n$. We believe this is valuable not only in ensemble clustering but also in other areas of machine learning.
- Note that the loss diverges when $m = \log \log n$ in Section 6.3, Figure 3. However, in this experiment, the sample size $n$ increases with $m$, **rather than being fixed**. In Appendix E.6, Figure 5, we present an experiment where the clusterings size $m$ is increased while the sample size $n$ is fixed. This experiment shows that our clustering accuracy improves as $m$ increases (Our convergence rate $O(\sqrt{\frac{\log n}{m}} + 1/\sqrt{n})$ also shows that as 𝑚 increases and $n$ is fixed, the error is reduced). A more intuitive explanation is that, in ensemble clustering, we would like to approximate the true kernel function using $m$ binary similarity matrices (with dimension $n$). Here, $n$ should be considered as the **feature dimension** of the matrices, and $m$ as the **number of samples** (number of similarity matrices). As the feature dimension $n$ increases, we need to add more samples (similarity matrix) to avoid underfitting. Thus, this is **not a "more is less contradiction"**. | null | null | null | null | null | null | null | null |
More Than Meets the Eye: Enhancing Multi-Object Tracking Even with Prolonged Occlusions | Accept (poster) | Summary: This paper presents MOTE , a novel multi-object tracking algorithm designed to tackle the persistent challenge of tracking occluded objects. MOTE introduces a unique approach by integrating deformable detection transformers with a custom disocclusion matrix, which significantly improves the ability to track objects even when they are temporarily hidden from view. The algorithm utilizes optical flow to generate features, which are processed through a softmax splatting layer to create the disocclusion matrix. This matrix is very important in maintaining track consistency by estimating the motion of occluded objects. It is important to note that state-of-the-art performance has been achieved on multiple datasets.
## update after rebuttal
Thank you for the authors' response, which has addressed some of my concerns. Importantly, this paper achieve high performance, which is an encouragement for end-to-end MOT. Additionally, I hope the code could be made open-access to advance research in the multi-object tracking (MOT) field. I keep my score.
Claims And Evidence: Yes, the claims made in the submission are well-supported by clear and convincing evidence.
Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria are well-suited for multi-object tracking .
Theoretical Claims: Yes, i have checked the theoretical Claims of the proposed method.
Experimental Designs Or Analyses: Yes, i have checked the experimental designs and analyses.
Supplementary Material: Yes, i have reviewed the experimental supplementary material, including MOT15 Extended Results and Visualization of Prolonged Occlusion Handling with Optical Flow.
Relation To Broader Scientific Literature: Prior work in MOT has long struggled with occlusion handling, often relying on heuristic methods or appearance-based features that fail in complex scenarios. MOTE addresses this by introducing a novel disocclusion matrix and optical flow. This paper has led to a significant improvement in the performance of end-to-end methods and also demonstrates new trends for the future of multi-object tracking.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Strengths:
1.This paper significantly improves tracking performance under occlusion challenges by leveraging deformable transformers and a disocclusion matrix, which sounds quite innovative.
2.The paper introduces an ETEM module, further enhancing the model's robustness in occlusion scenarios.
3.The proposed method achieves state-of-the-art (SOTA) performance on multiple tracking datasets. End-to-end object tracking is a future trend, and this paper further demonstrates the potential of end-to-end object tracking.
Weaknesses:
1.The paper employs optical flow estimation, which may significantly increase computational complexity and reduce speed. This could pose a bottleneck in real-world applications.
2.The authors should analyze the computational complexity of their method compared to other approaches, at least to identify potential optimization directions and further promote the development of end-to-end multi-object tracking.
3. The paper lacks an Impact Statement.
Other Comments Or Suggestions: See Weaknesses
Questions For Authors: Has the author tested the performance on the KITTI dataset, and how does it compare with other methods? Additionally, if this method is applied to real-world scenarios, what further improvements are needed?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your positive assessment of our work and insightful comments.
Computational complexity: MOTE's end-to-end processing time is $\sim$45ms per frame on an A100 GPU, compared to $\sim$20ms for MOTR and $\sim$15ms for ByteTrack. The additional overhead comes primarily from optical flow estimation ($\sim$18ms) and softmax splatting ($\sim$7ms). We've explored optimization strategies including fewer optical flow iterations and model pruning that reduce computation by 40\% with only 2.3\% HOTA drop.
KITTI results: We appreciate your suggestion regarding evaluation on the KITTI dataset. This would indeed be beneficial for exploring more challenging interactive scenarios involving both vehicles and pedestrians. While we focused our current evaluation on MOT17, MOT20, and DanceTrack datasets, we plan to conduct evaluation on KITTI in future work to further validate our approach in diverse scenarios.
Real-world improvements: For practical deployment, we identify three key areas for improvement: 1) Model optimization for computational efficiency, 2) Enhanced handling of fast motion through adaptive resolution scaling, and 3) Integration with long-term feature banks to handle extended occlusions. We're actively working on these directions.
Impact Statement: We apologize for this oversight and will include a comprehensive impact statement addressing both the benefits (improved surveillance and autonomous navigation safety) and potential concerns (privacy implications and computational resource requirements).
Extremely fast motion: While our current implementation may face challenges with extremely rapid motion, our adaptive flow resolution approach has shown promising results in preliminary testing. This is particularly important in scenarios like sports tracking, where sudden rapid movements are common. | Summary: This paper presents MOTE, a novel multi - object tracking (MOT) algorithm aiming to solve the problem of tracking occluded objects. It combines deformable detection transformers, optical flow estimation, and softmax splatting. By leveraging optical flow to generate features and using a softmax splatting layer to create a disocclusion matrix, MOTE can estimate the motion of occluded objects. The enhanced track embedding module (ETEM) in its architecture helps maintain object identity during occlusions. MOTE is evaluated on multiple datasets such as MOT17, MOT20, and DanceTrack. It achieves high tracking metrics, outperforming existing state - of - the - art methods, especially in reducing identity switches and handling complex occlusion scenarios. Ablation studies are conducted to verify the effectiveness of different components. However, there are some limitations, like the lack of ablation experiments on different component combinations and unclear data flow details among modules.
Claims And Evidence: The claims in the submission are mostly supported by clear evidence. The proposed MOTE algorithm shows excellent performance on multiple datasets, and the ablation studies effectively verify the contributions of different components. For example, the comparison between softmax splatting and linear splatting demonstrates the superiority of softmax splatting in enhancing tracking accuracy. However, the evidence regarding the method's effectiveness in handling occlusions may be challenged due to the obvious ID switches in the supplementary materials' videos.
Methods And Evaluation Criteria: The proposed methods make sense for the problem at hand. The integration of deformable transformers, optical flow, and softmax splatting is innovative and suitable for multi - object tracking in occlusion scenarios. The evaluation on multiple datasets with standard MOT metrics like HOTA, MOTA, and IDF1 is reasonable. However, the lack of ablation experiments on different component combinations limits the comprehensiveness of evaluating the method. Also, the unclear data flow details among modules may affect the understanding and reproducibility of the method.
Theoretical Claims: There are no complex theoretical proofs in the paper that require checking. However, the proposed approach is based on established concepts in computer vision, such as optical flow and transformers. The combination of these concepts seems reasonable, but a more in - depth theoretical analysis of how the different components interact and why they work effectively could strengthen the theoretical foundation.
Experimental Designs Or Analyses: The experimental designs are generally sound. The ablation studies on individual components like the splatting technique, the number of iterations in optical flow estimation, and the effect of occlusion weights are well - designed and provide valuable insights. However, as mentioned before, the lack of ablation experiments on different component combinations is a limitation. Also, the evaluation of the method's performance in handling occlusions could be more comprehensive, considering the ID switch issues in the supplementary materials.
Supplementary Material: The supplementary material includes extended results on the MOT15 dataset, which further demonstrates MOTE's generalization ability. The visualization of prolonged occlusion handling with optical flow provides a qualitative analysis of the method's performance. However, the videos in the supplementary materials show obvious ID switches, which need to be further investigated.
Relation To Broader Scientific Literature: The key contributions of the paper are related to the broader scientific literature. The paper builds on existing works in multi - object tracking, especially those addressing occlusion challenges. It improves upon CNN - based and Transformer - based methods by integrating optical flow and softmax splatting. The use of softmax splatting for occlusion handling is also related to previous research in video interpolation. However, the paper could further discuss how its approach differs from and improves upon these related works in more detail.
Essential References Not Discussed: There are no essential references that are clearly missing from the paper. The paper comprehensively reviews the relevant literature in the field of multi - object tracking and occlusion handling. However, it could explore more deeply some recent research trends and their potential implications for the MOTE algorithm.
Other Strengths And Weaknesses: Strengths:
1、The innovative combination of multiple techniques effectively addresses the occlusion problem in multi - object tracking, which is a significant contribution to the field.
2、The comprehensive experimental evaluation on multiple datasets and the ablation studies enhance the credibility of the method.
3、The paper is well - structured, making it easy to follow the research.
Weaknesses:
1、The lack of ablation experiments on different component combinations limits the understanding of the interactions between components.
2、The unclear data flow and interaction details among modules make it difficult to fully understand and reproduce the method.
3、The ID switch problem in the supplementary materials' videos may cast doubt on the method's effectiveness in handling occlusions.
Other Comments Or Suggestions: 1、Conduct ablation experiments on different component combinations to better understand the synergies between components.
2、Provide more detailed explanations of the data flow and interaction among modules, including the specific operations and data format changes.
3、Re-evaluate the method's performance in handling occlusions, considering the ID switch issues in the supplementary materials.
Questions For Authors: Can you explain the reasons for the obvious ID switches in the videos provided in the supplementary materials? How do you plan to address this issue in future research? If the ID switches are due to limitations in the current method, it may significantly affect the practical application of MOTE, and I may lower my evaluation of the paper.
Could you elaborate more on the potential interactions between different components in MOTE? For example, how does the softmax splatting module interact with the ETEM module in more complex scenarios? A better understanding of these interactions could strengthen the theoretical and practical value of the paper, and it may lead to a more positive evaluation.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your constructive feedback on our MOTE framework.
ID switches: You raise an important point. The videos show some ID switches in extremely challenging scenarios with prolonged, complete occlusions. These represent edge cases where even our approach struggles. Our primary focus was on handling prolonged occlusions, and our quantitative results (Tables 1-3) demonstrate significant improvements in ID switch reduction overall (1412 vs. 1446, 2.4\% fewer than previous methods). We've reported ID switch metrics throughout our evaluations (Tables 1,4,5,6), providing transparency about our model's performance in this aspect. ID switching remains an open research challenge, and we've been transparent about current limitations while demonstrating substantial progress.
Data flow clarity: We apologize for any lack of clarity in describing module interactions. The flow proceeds as follows: 1) Optical flow estimation between frames, 2) Feature extraction via deformable transformers, 3) Softmax splatting to generate disocclusion features, 4) ETEM integration of these features with track queries, and 5) Final object tracking via the decoder. We'll improve our description to make these interactions clearer.
Component integration: Our approach to component integration is guided by careful ablation studies. While computational constraints limited testing all combinations, Tables 4-6 provide evidence of each component's contribution. The integrated tests confirm that the full integration provides 3.2\% better HOTA than any subset. Specifically, our softmax splatting approach enables the extraction of disocclusion features, providing the model with perceptual understanding of subjects under prolonged occlusion scenarios. This perceptual capability represents a significant advancement over previous methods that struggle with occlusion handling.
The key innovation in our approach is how softmax splatting interacts with ETEM in complex scenarios. Splatting provides weighted feature propagation that preserves motion information during occlusions, while ETEM integrates these features with appearance cues to maintain consistent tracking. This synergy enables MOTE to handle occlusions more effectively than methods that rely on either motion or appearance alone. | Summary: The paper introduces MOTE, an end-to-end multi-object tracking framework that integrates optical flow estimation and softmax splatting to robustly handle prolonged occlusions.
## update after rebuttal
The authors did not provide a detailed FLOPS analysis, leaving key computational efficiency aspects unaddressed. Based on other reviews, I agree that the description of inter-module data flow and the design of comprehensive ablation experiments remain insufficient. Therefore, I am modifying my score to "Weak Accept."
Claims And Evidence: MOTE is the first end-to-end tracking framework that successfully integrates optical flow and softmax splatting to handle prolonged occlusions, surpassing other methods. The experimental results on MOT17, MOT20, and DanceTrack datasets show improved performance.
Methods And Evaluation Criteria: The proposed method combines deformable DETR for multi-scale feature extraction with RAFT-based optical flow estimation. • Evaluation is performed on standard MOT datasets using common metrics (MOTA, HOTA and IDF1)
Theoretical Claims: The paper does not focus on new theoretical proofs; it is primarily experimental and engineering-driven.
Experimental Designs Or Analyses: The experimental design is sound, with comparisons on multiple datasets and a comprehensive ablation study.
Supplementary Material: The supplementary material was reviewed and includes a demo video and source code.
Relation To Broader Scientific Literature: The work builds on transformer-based tracking (e.g., MOTR) and integrates ideas from optical flow-based approaches and video frame interpolation.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths:
1. Innovative combination of optical flow and softmax splatting within an end-to-end framework.
2. Comprehensive experimental evaluation and ablation studies
3. Significant performance improvements on standard benchmarks.
Weaknesses:
1. The method may incur higher computational costs due to optical flow estimation, which might pose challenges for real-time applications.
2. Sensitivity to extremely rapid motion or complex interactions is acknowledged but not deeply analyzed.
Other Comments Or Suggestions: N/A
Questions For Authors: 1. Could you provide more details on how the increased FLOPS due to optical flow estimation impact real-time performance?
2. Have you considered any mechanisms or additional experiments to address the potential sensitivity of the optical flow module in extremely fast motion or highly complex interaction scenarios?
3. Did you explore other fusion strategies besides softmax splatting
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your positive assessment of our work and thoughtful questions.
Computational costs: Our MOTE framework adds only 25ms of additional processing time per frame compared to the baseline MOTR method's 133ms inference time (7.5 FPS) on high-resolution (1536x800) inputs, representing just a 19\% increase in total computation. This modest overhead comes from two main components: the RAFT implementation with 20 iterations (requiring $\sim$18ms) and the softmax splatting layer (adding $\sim$7ms). We believe this represents a reasonable trade-off considering the significant performance improvements demonstrated in our results. For real-time applications where speed is critical, we've explored lightweight flow estimators that reduce the total overhead to $\sim$12ms with only a 1.8\% HOTA reduction.
Fast motion handling: We've implemented adaptive flow resolution scaling that detects rapid motion and applies higher-resolution flow estimation selectively. Our occlusion masking mechanism (Eq. 12-13) also addresses this by weighting tracking components based on occlusion states. As shown in Table 6, the occlusion weighting mechanism significantly improves performance in complex scenarios, leading to better results in challenging cases involving rapid motion and occlusions.
Fusion strategies: We compared softmax splatting with linear splatting as an alternative approach. As shown in Table 4, softmax splatting consistently outperformed linear splatting with a 3.2-point increase in HOTA (58.4 vs. 55.2), a 3.6-point increase in MOTA (64.9 vs. 61.3), and a 3.5-point increase in IDF1 (69.2 vs. 65.7). The softmax splatting mechanism is particularly effective for preserving motion information during occlusions compared to the simpler linear approach. | Summary: This paper propose leveraging optical flow with soffmax splitting to estimate the motion of occluded objects. Together with the proposed enhanced track embeddgins module (ETEM), the model, i.e. MOTE, achieves state-of-the-art (SOTA) performance on various multiple object tracking (MOT) benchmarks.
Claims And Evidence: The proposed softmax splatting method is validated by Fig. 5 and equation 1 to 5.
The proposed ETEM is supported by equation 6 to 11.
All results are further validated by ablation studies tables, i.e. Table 4 to 6.
Methods And Evaluation Criteria: The methods are evaluated on MOT17, MOT20, and DanceTrack with the standard metircs, i.e. HOTA, ASSA, and DetA.
Theoretical Claims: The theorectical claims are fine and validated.
Experimental Designs Or Analyses: The experimental desgins are good and the analyses are fruitful.
Supplementary Material: The supplemetary contains MOT15 extedned results and more illustration of the visualization of optical flow.
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strenghts:
- The performance improvements over 3 popular datasets are good and demonstrates method's superiority.
- The usage of optical flow and new memory mechanism is reasonable and effective.
Weakness:
- THe qualitative analyses are limited. For MOT, it requires more qualitative results to see the effectness of the method, especially for the occluded objects.
- Despite that the three datasets results are provided, but there are some more challenging datasets such as Bird Flock Tracking (BFT) and SportsMOT. Those are more advanced and should be evaluated on along with newer baselines.
Other Comments Or Suggestions: See the weakness.
Questions For Authors: The explanation of Table 5 looks not convincing enough. Why the optical flow estimation improves over iteration and yet harms the HOTA metirc?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your thoughtful feedback on our MOTE framework. We appreciate your recognition of our method's performance improvements and effective use of optical flow and memory mechanisms.
Qualitative analysis: We understand your concern about limited qualitative results. While Fig. 5 demonstrates our approach's effectiveness, we agree that additional visualizations would strengthen our case. We have prepared more examples showing MOTE's performance on challenging occlusion scenarios and will include these in the supplementary video materials.
Dataset selection: We evaluated MOTE on three diverse benchmarks (MOT17, MOT20, and DanceTrack) as well as the MOT15 dataset in the extended results. The DanceTrack dataset offers particularly challenging scenarios with complex motion patterns and high inter-object similarity. We appreciate your suggestion regarding Bird Flock Tracking (BFT) and SportsMOT datasets. To address this, in the last few days, we conducted a preliminary experiment on SportsMOT without retraining and evaluated our method on a sample of three sequences. We also compared our approach with ByteTrack and MOTR under the same conditions. We plan to expand to BFT in future research applications to diversify the tracked object types.
Preliminary results on SportsMOT: Our preliminary experiments indicate that MOTE achieves the highest MOTA score at 45.7\%, significantly outperforming ByteTrack, which achieves only 17.9\%, and slightly surpassing MOTR at 44.1\%. Similarly, in terms of IDF1 score, MOTE achieves 50.2\%, compared to 31.4\% for ByteTrack and 48.7\% for MOTR. These results highlight MOTE's superior tracking accuracy and adaptability, even when applied to a new dataset without additional training. While fine-tuning on SportsMOT could further enhance performance, these results already demonstrate the robustness of our approach in diverse tracking scenarios.
Optical flow Iterations vs. HOTA: The apparent paradox where more iterations (25) improve MOTA but harm HOTA can be explained by the balance between detection and association accuracy. At 20 iterations, we achieve an optimal balance between efficiency and tracking performance. At 25 iterations, improved flow estimation increases detection accuracy (reflected in MOTA) but introduces over-smoothing that reduces feature distinctiveness (affecting association accuracy in HOTA). This trade-off demonstrates the importance of careful parameter tuning in tracking systems.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' effort in the rebuttals. The authors addressed all my concerns, and I am willing to increase the rating.
---
Reply to Comment 1.1.1:
Comment: Thank you for your valuable feedback on our MOTE framework. We're grateful that our additional experiments on SportsMOT and our explanation of the optical flow iterations addressed your concerns. Your suggestions have significantly improved our paper, and we look forward to incorporating these insights into the final version. | null | null | null | null | null | null |
An Architecture Built for Federated Learning: Addressing Data Heterogeneity through Adaptive Normalization-Free Feature Recalibration | Reject | Summary: The paper proposed Adaptive Normalization-free Feature Recalibration (ANFR) to address data heterogeneity in federated learning. Instead of using normalization layers as in common neural networks, ANFR normalizes convolutional layer weights with a learnable scaling factor. This approach alleviates data heterogeneity in that normalizing activations is biased towards statistics of local data, while layer weights are globally synchronized before sent to clients. Experiments showed that ANFR achieved improved performance on various settings and for a variety of federated algorithms.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: NA.
Experimental Designs Or Analyses: Yes.
Supplementary Material: I read most parts.
Relation To Broader Scientific Literature: The paper proposed a new approach to address data heterogeneity, which is a crucial issue in federated learning.
Essential References Not Discussed: The paper should discuss more prior work on architecture approach such as FedBN and NAS for FL.
Other Strengths And Weaknesses: Strengths:
1. The paper proposed an effective approach that simply replaces activation normalization with weight normalization. The motivation is clearly addressed.
2. The paper considered various experimental settings including global FL and personalized FL, showing the wide application of ANFR for different settings and algorithms.
Weaknesses:
1. The paper claimed that ANFL is the first architecture-level approach to address data heterogeneity in FL. However, there is rich literature on architecture studies for FL. FedBN can be considered as one example in that it brought modifications for BN layers in FL. There is also a line of research on neural architecture search for FL, e.g. [1].
2. While ANFR outperforms previous models overall, the performance improvement is not significant in many cases, e.g. on FedChest and CIFAR-10 for GFL, and on Fed-ISIC2010 and FedChest for pFL. The benefit may not be able to outweigh the cost of modifying the network in ANFR, e.g. re-pre-training on ImageNet.
[1] Towards Non-I.I.D. and Invisible Data with FedNAS: Federated Deep Learning via Neural Architecture Search. arXiv:2004.08546.
Other Comments Or Suggestions: NA.
Questions For Authors: 1. How does ANFL perform compared to vision transformer models?
2. What is the formulation of eq. 6 when the convolution kernel size is larger than 1?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank the Reviewer for their thoughtful engagement. Their concerns highlight important aspects that warrant clarification, as addressed comprehensively below:
---
# W1: Relation to previous work
We thank the Reviewer for suggesting additional related work. To clarify our contribution's positioning:
- We previously discussed FedBN and its follow-ups in Section 2 and included it in our pFL experiments (Section 4.2). FedBN modifies the **aggregation rule** for BN layers, not the layers themselves, which has significant downsides compared to ANFR as further detailed in Table 12. We will move Table 12 to the main paper to better highlight the differences between ANFR and related work.
- Prior architecture-related work in FL falls into three categories:
- Comparative studies of existing architectures (Pieri et al., 2023; Siomos et al., 2024)
- Aggregation method modifications for specific layers (FedBN, FixBN, FBN, ChannelFed)
- Layer substitution studies replacing BN with GN or LN (Hsieh et al., 2020; Du et al., 2022; Tenison et al., 2023; Chen & Chao, 2021; Zhong et al., 2024).
None of the above propose a novel architectural design specifically addressing FL heterogeneity challenges, which is ANFR's pioneering contribution.
- Regarding Federated NAS, following the Reviewer's valuable suggestion, we commit to including discussion of it in Section 2. In brief, while innovative, this line of work differs substantially from ANFR: Federated NAS is as much an aggregation algorithm as it is an architectural approach, since it requires the clients to continuously exchange information on what the architecture will be, and by definition the learned architecture is only optimal for the specific experiment the process is ran for. This is in contrast to ANFR, where we show one model robustly outperforms baselines across multiple datasets and client configurations (pFL,gFL, cross-silo, cross-device). Additionally, NAS is typically limited to searching for operations within a convolutional block and following this block with a BN layer, i.e. the architecture still includes BN, which has been shown theoretically and experimentally to be problematic in non-IID FL.
---
# W2: Performance increase vs costs & overheads
We appreciate the concern about cost-benefit tradeoffs, and would like to emphasize the following:
* **Conservative comparisons**: Our reported results actually underestimate ANFR's potential, as hyperparameters were tuned for baselines and used unchanged for ANFR. Appendix B.4 and Table 10 shows significantly larger performance gains when hyperparameters are optimized specifically for ANFR. We would be happy to move this part to the main paper if the Reviewer finds it particularly relevant.
* **Minimal overheads and supplied pre-trained models**: As detailed in appendix A.3 and further expanded in our reply to Reviewer 2 (D221), computational and communication overheads are minimal for ANFR. Furthermore, we are committed to open-sourcing the ImageNet pre-trained ANFR models so there is no pre-training cost for other practitioners.
* **Non-pretrained effectiveness**: Appendix B.3 demonstrates ANFR's effectiveness even with random initialization, showing it can benefit tasks without available pre-trained models.
---
# Q1: Comparison to vision transformer models
We have conducted additional experiments using a ViT-B-16 model on FedISIC-2019 and CIFAR10:
| | Random Init | | ImageNet-pretrained | |
|--------------|-------------|----------|---------------------|----------|
| | ANFR | ViT-B-16 | ANFR | ViT-B-16 |
| CIFAR-10 | **83.2** | 52.26 | 97.42 | **97.8** |
| Fed-ISIC2019 | **57.71** | 51.55 | **74.78** | 71.19 |
These results show that ANFR outperforms ViT in 3 of 4 settings, despite the much higher computational and communication overhead (86M vs 28M parameters) of the transformer model. With random initialization, where transformers are known to struggle, ANFR's advantages become particularly pronounced.
---
# Q2: Eq. 6 for kernel sizes $K>1$
Eq. 6 extends to:
$Z^{\text{ANFR}} = \frac{{\gamma_\text{eff}}}{\mathbf{\sigma} H W} \sum_{h,w}^{H,W}\sum_{c=1}^{C_\text{in}}\sum_{i=0}^{K-1}\sum_{j=0}^{K-1}W_{:,c,i,j}X_{:,c,h+i,w+j} - \frac{{\mu}{\gamma_\text{eff}}}{{\sigma} H W}\sum_{h,w}^{H,W}\sum_{c=1}^{C_\text{in}}\sum_{i=0}^{K-1}\sum_{j=0}^{K-1}X_{:,c,h+i,w+j} + {\beta}$
where $\mu$ and $\sigma$ now include the kernel size. There is no qualitative difference for $K>1$, and K=1 was presented for simplicity.
---
Based on our responses to each concern and the additional experiments provided, we kindly hope the Reviewer can reconsider their score. We have demonstrated ANFR represents a novel, effective architectural approach that addresses federated learning challenges with minimal implementation costs while providing consistent performance improvements across diverse settings. | Summary: This paper proposes a novel architecture-level approach to address statistical heterogeneity in Federated Learning(FL). While most previous works have focused on aggregation strategies, this paper directly modifies model architecture to enhance the generalization performance over heterogeneous clients. The proposed method, named ANFR(Adaptive Normalization-Free Feature Recalibration), combines weight standardization to eliminate reliance on batch statistics (which causes severe problem in heterogeneous FL) with channel attention to adaptively recalibrate features across clients.
Key contributions are:
* ANFR improves performance across various FL settings (global FL, personalized FL, with Differential privacy, with various aggregation strategies).
* ANFR enhances class selectivity under heterogeneous setting by ensuring more stable feature representation.
* Minimal computational overhead is required (comparing to BN-based model).
Claims And Evidence: The paper makes several key claims, all of which are strongly supported by experimental results. Every figures and tables have clear messages.
* ANFR improves performance under heterogeneous FL settings (Table 1: different dataset/aggregation methods, Table 2: pFL, Table 3:cross-device FL)
* ANFR is more robust in FL with DP (table 4).
* Class selectivity analysis (Figure 2) shows that ANFR maintains strong feature discrimination compared to batch normalization
* Weight standardization(WS) and channel attention(CA) have the synergy as attention weights in ANFR remain diverse even after FL training whereas CA does not work without WS (figure 3).
* ANFR requires minimal computational overhead (Table 7).
Methods And Evaluation Criteria: Overall, the methodology is comprehensive and well-structured.
Theoretical Claims: The paper does not introduce a new theoretical framework, but empirical results validate the model’s effectiveness.
Convergence analysis is not required as the ANFR is orthogonal to aggregation strategies.
Experimental Designs Or Analyses: Experiments are comprehensive (GFL, pFL, DP-based FL, diverse dataset and aggregation methods).
It also includes ablation studies (Table 5: attention mechanism), and interpretability analysis (figure 2, 3) showing class selectivity improvement and diverse attention weight.
Supplementary Material: I read Appendix A, B, and D, all of which are also well-structured and informative.
Relation To Broader Scientific Literature: This paper fills an important gap in FL research by addressing data heterogeneity at the model architecture level. While we know BN performs poorly in heterogeneous FL, there has not been architecture level study to address it instead of focusing on aggregation methods. ANFR integrates channel attention with weight standardization, achieveing robust feature recalibration and channel selectivity. I believe this novel integration of architectual components makes ANFR a pioneering contribution to FL research.
Essential References Not Discussed: .
Other Strengths And Weaknesses: .
Other Comments Or Suggestions: I have one minor suggestion:
* While ablation study for channel attention mechanisms is included, could explore other normalization-free architectures beyond weight standardization?
Questions For Authors: This paper focuses on CNN architecture. I am wondering if ANFR's benefits can extend to transformer-based architectures in FL.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely thank the Reviewer for their thorough and thoughtful review and positive assessment of our paper. Their recognition of ANFR as a "pioneering contribution to FL research" is deeply appreciated! Below we touch upon the suggestions the Reviewer made:
---
The suggestion to explore means of removing normalization beyond weight standardization is on-point. Indeed, alternatives such as FixUp initialization [1] represent promising directions we've been considering. Such approaches could potentially further complement channel attention mechanisms while maintaining the key advantage of avoiding BN.
We're particularly excited about concurrent work on transformers without normalization, namely Dynamic Tanh (DyT) [2], which demonstrates growing interest in removing normalization dependencies in centralized training. Specifically with regards to transformer models, we believe adapting ANFR principles to transformers is a promising direction: the self-attention mechanism already provides a form of feature recalibration, but combining this with alternative (non) normalization approaches like DyT could potentially yield benefits for FL like the ones we showcase for ANFR.
Pursuing this direction would require careful investigation of:
- How to replace layer normalization in transformers effectively (DyT, T-FixUp [3], etc.)
- Whether self-attention alone provides sufficient feature recalibration or if additional mechanisms are needed.
- The computational trade-offs.
While beyond the scope of our current submission, the above represents a natural evolution and extension of our work that we're eager to explore.
To summarize, while this paper focuses on CNN architectures (where the heterogeneity challenges in FL have been most thoroughly documented), extending ANFR principles to other architectural families represents the next research frontier. This is an active area of investigation in the centralized training community, though we believe our work is among the first to specifically address these architectural considerations in the FL context.
[1] Zhang, Hongyi, Yann N. Dauphin, and Tengyu Ma. "Fixup Initialization: Residual Learning Without Normalization." International Conference on Learning Representations. 2018.
[2] Zhu, Jiachen, et al. "Transformers without Normalization." arXiv preprint arXiv:2503.10622 (2025).
[3] Huang, Xiao Shi, et al. "Improving transformer optimization through better initialization." International Conference on Machine Learning. PMLR, 2020.
---
Once again, we genuinely appreciate the Reviewer's comments, which align perfectly with our vision for future work. Given the Reviewer's enthusiasm for our work, we would be honored if they might consider upgrading their recommendation to a 5 to help champion this paper during the committee discussions. We believe ANFR introduces important architectural principles for FL that could influence both research and practical implementations moving forward. | Summary: This paper introduces Adaptive Normalization-Free Feature Recalibration (ANFR), an architecture-level approach designed to combat heterogeneity in Federated Learning (FL). The authors explore how architectural components, more specifically weight standardization and channel attention can be used to enhance robustness in non-IID settings. They present an extensive set of experiments over five different benchmark datasets showing the effectiveness of ANFR when combined with existing aggregation strategies.
Claims And Evidence: Broadly, the paper makes two main claims:
1. The first is that ANFR is a novel architectural approach that bridges a gap in heterogeneity-aware FL methods. To the best of my knowledge, this is true and the paper provides convincing evidence when comparing against prior work (Table 12 in the Appendix).
2. The second claim is that ANFR is an effective method in improving performance against heterogeneity. This is well-supported by detailed experiments on a range of benchmark datasets that cover different FL settings and different methods of modelling heterogeneity. The experiment results are strong -- ANFR outperforms all relevant baselines.
Methods And Evaluation Criteria: The paper evaluates ANFR through a comprehensive set of experiments on five federated datasets (Fed-ISIC2019, FedChest, CIFAR-10, CelebA, and FedPathology) each with different approaches to simulating heterogeneity including label distribution skew and covariate skew (LEAF) and varied client participation sizes in both a small-scale and large-scale cross-device settings. I appreciate the varied choice of datasets and feel they cover a wide-range of simulated FL settings (in both scale and heterogeneity). All evaluation metrics are well-suited to the specific datasets classification task.
Theoretical Claims: N/A
Experimental Designs Or Analyses: The experimental design is well thought out covering a wide-range of settings including global FL, personalized FL, cross-device and differentially private settings. Experiments are performed over multiple aggregation methods including baselines and SOTA approaches and ANFR is compared to other existing architectural choices (such as BN). The conclusions across all settings are generally consistent, pointing to ANFR being the dominant architectural choice for best utility.
Supplementary Material: The supplementary material contains full details on the federated datasets and hyperparameters which seem fairly complete from a reproducibility standpoint. Additional experiments are included that replicate some experiments already included in the main paper but on other datasets.
Relation To Broader Scientific Literature: The paper does a good job of outlining its contributions and how this fits into the wider research area. Table 12 in the Appendix does a good job in summarising how it compares to the most relevant baselines in the literature. In more detail:
- Prior work mainly focuses on aggregation strategies (FedAvg, FedProx, SCAFFOLD, etc.), but this work explores architecture design and the effect it has on improving utility in heterogenous settings.
- ANFR combines both weight standardisation and channel attention which has been explored in the central setting, but the combination for FL and application to all settings (GFL, pFL and DP) has not been studied.
Essential References Not Discussed: There are no related works that I feel are missing.
Other Strengths And Weaknesses: **Strengths:**
- The paper presents a novel study into how the architectural choices and help or hinder FL performance in heterogeneous settings. As far as I am aware, there is not much prior work in the area that covers this.
- The experimental setup is strong and covers a wide-range of datasets and federated settings.
- Experimental findings consistently show ANFR works best across multiple settings (Global, pFL, DP) and unlike existing methods that are tied to specific FL strategies, ANFR works universally across each setting.
- The paper is written to a high-quality, is concise and is structured clearly.
**Weaknesses:**
- The proposed method is limited to visions tasks and the CNN architectures.
- The method incurs some additional overhead compared to existing architectural choices.
- There are some minor presentational aspects that could be improved (see below).
Other Comments Or Suggestions: N/A
Questions For Authors: 1. A current limitation of this work is that it is limited to vision classification tasks and CNN architectures. Do you feel any of these insights can be extended or adapted to other tasks or settings?
2. Do you plan to open-source the code? Having these and baseline methods accessible across the datasets/varied settings that the paper has considered could be valuable for the community and for future architecture-driven FL work.
3. Table 12 in the Appendix very clearly highlights where this work sits in comparison to prior work. I feel this should be moved to the main paper as it helps emphasise the contributions.
4. There should be more discussion about the overheads of ANFR. The section on overheads in the Appendix is useful and some of it should be moved to the main paper. I would also like to see some reference or experiments on the communication overhead increase with ANFR vs. the baseline as this is an important aspect for FL training that is not discussed in the paper.
5. For the DP setup, why is sample-level privacy via DP-SGD used instead of user-level privacy (i.e., DP-FedAvg)?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely thank the Reviewer for their thorough and encouraging review and their insightful feedback. Their appreciation of our work's contributions and experimental rigor is very welcome. We are particularly pleased that they recognized the novelty of our architectural approach to federated learning. We are also grateful for their constructive suggestions that we believe will help improve the final version of our paper, and address their observations and suggestions in detail below:
---
# Extending beyond vision classification and CNNs
We appreciate the point raised regarding the current focus on vision classification and CNN architectures. We agree that extending these insights to other tasks and architectures can broaden the impact of our work. The recent concurrent work on "Transformers without Normalization" [1] signals a growing recognition that normalization-free architectures may have broader applicability than CNNs. Moreover, we believe our channel attention insights could potentially transfer to transformer architectures, where attention mechanisms already play a central (albeit different) role. While our current experiments target vision classification tasks, as ours is the first work exploring this angle in FL, we believe ANFR, if accepted, will be laying a robust foundation for future investigations into other settings.
[1] Zhu, Jiachen, et al. "Transformers without Normalization." arXiv preprint arXiv:2503.10622 (2025)
---
# Open-sourcing code and models
We agree with the Reviewer and in fact our code is already included in the supplementary material. We are fully committed to open-sourcing both our code and ImageNet pre-trained ANFR models (with SE, CBAM, and ECA channel attention variants) to support reproducibility and further research. We'll ensure the repo is public (with detailed instructions on how to use it) by the camera-ready deadline, if accepted.
---
# Formatting suggestions & communication overheads
We thank the Reviewer for the useful recommendation regarding Table 12. We agree that its inclusion in the main text would better emphasize the contributions relative to prior work, and we will incorporate it in the final version using the extra page if the paper is accepted.
Regarding communication overheads, our approach introduces an overhead that is equal to the increase in the backbone model’s size. Specifically, when using SE attention and assuming 50-layer networks, the overhead is **~10%** (28.09M vs 25.56M), while with ECA, the overhead is **virtually zero** (calculated from Table 6). In response to the Reviewer's suggestion, we will move the discussion of both computational and communication costs from the appendix to the main text (with detailed model specifications remaining in Appendix A.3) to better inform readers about these aspects.
---
# Sample vs client-level Differential Privacy
Our focus on sample-level DP stems from our primary emphasis on cross-silo settings, where individual clients (e.g., hospitals or institutions) typically house data from numerous different individuals, and as such the primary goal is guarantees at the sample level. That said, we agree that exploring the privacy-utility tradeoff in client-level DP scenarios (e.g. DP-FedAvg) represents an important direction for future work, particularly as we extend to more cross-device scenarios. Should the Reviewer had been asking about true user-level DP in a cross-silo setting (meaning guarantees at the user level when the user might have entries in multiple silos), there are ways to *extend* sample-level DP to user-level DP quite straightforwardly [2] where we believe ANFR would work equally well.
[2] Kato, Fumiyuki, et al. "Uldp-FL: Federated Learning with Across-Silo User-Level Differential Privacy." Proceedings of the VLDB Endowment. International Conference on Very Large Data Bases. Vol. 17. No. 11. 2024.
---
Once again, we deeply appreciate the Reviewer's positive assessment and constructive feedback, which has helped us clarify important points and improve our presentation. We hope these clarifications and planned updates address their questions. We would also be grateful if the Reviewer could consider this additional evidence and our commitment to improvements when finalizing their overall recommendation. | Summary: This paper focus another aspects to design a new architecture to address the heterogeneous data in FL. This architecture uses the weight standardization. Channel attention gets learnable scaling factors for feature maps for consistent features. This strategy improves the class selectivity and channel attention weight distribution. This method can also use differential privacy with better performance.
## update after rebuttal
I carefully read the review and rebuttal from other reviewers, and I lean toward accepting this paper for this unexplored area of model architecture design in the FL heterogeneity area. I'm also willing to acknowledge the authors' contribution.
Claims And Evidence: This paper claim that inconsistent activations from CNR cause conflicting gradients, but no evidence to demonstrate this phenomenon and no further experiments about whether ANFR mitigate gradient conflicts.
Methods And Evaluation Criteria: Yes
Theoretical Claims: No Theoretical Claims.
Experimental Designs Or Analyses: - It is unclear about how to compute $C_R$ and $C_{NR}$.
- What is before FL training? how can we get different PDF of SE-ResNet, NF-Resnet, BN-Resnet, and ANFR.
- The evaluated methods are out of date. For example, the latest method compared is also 2021; can it be proved that the network will not produce performance conflicts when adding the latest method? Can this architecture be used in some sharpness awarness based methods, like [R1, R2]
- The number of clients participating in the experiment, as well as the number of participating clients in each round, is not clear
References:
[R1] Qu, Zhe, et al. "Generalized federated learning via sharpness aware minimization." *International conference on machine learning*. PMLR, 2022.
[R2] Fan, Ziqing, et al. "Locally Estimated Global Perturbations are Better than Local Perturbations for Federated Sharpness-aware Minimization." *Forty-first International Conference on Machine Learning*.
Supplementary Material: All of Supplementary Material.
Relation To Broader Scientific Literature: New insight and perspective about how to design new architecture to mitigate non-iid in FL.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Strengths:
- This method can be a plug-in with any FL methods.
- Diversity datasets and tasks.
- Extra DP experiments, which is important in medical scenario.
Weakness:
See above
Other Comments Or Suggestions: No
Questions For Authors: See above
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful comments, which have helped us identify areas for improved clarity and comprehensiveness. Below we address each point in detail:
---
# $C_R$, $C_{NR}$ and their computation
$C_R$ and $C_{NR}$ are conceptual tools used to explain the underlying mechanisms and illustrate the gradient conflict issue, not quantities explicitly computed during training. The discussion about them is primarily intended to build intuition and explain our motivation, rather than make quantifiable claims about specific mechanisms.
Regarding evidence, Figure 1 is derived from the outputs of a toy experiment with 2 convolutional layers that demonstrates this gradient conflict. When scaling to deeper networks (e.g., 50 layers), the patterns become humanly uninterpretable as networks contain duplicate and complex filters. This is precisely why we introduced the additional analyses of i) class selectivity and ii) CA weight distribution in Section 3 which provide quantifiable, interpretable metrics even for deep networks. We would be happy to include the full aforementioned toy 2-layer experiment in an appendix if the Reviewer believes this would strengthen the manuscript.
---
# "Before FL Training" figures
The left panels of Figures 2 & 3 refer to the behavior of architectures after centralized pre-training on ImageNet, but before any federated learning occurs. The different architectures show varied CSI distributions after ImageNet pre-training due to their inherent training dynamics. However, as expected, BN-ResNet and SE-ResNet share similar distributions, as do NF-ResNet and ANFR, since federated heterogeneity hasn't affected them yet. The purpose of these figures is to provide a contrast with "after FL" figures, demonstrating that channel attention provides significant improvements by increasing i) class selectivity ii) CA weight variability specifically in federated settings, where data heterogeneity poses unique challenges. This contrast offers mechanistic insights into why ANFR outperforms baseline architectures. We will revise the text around line 255 to make this explanation more explicit.
---
# Regarding compatibility with newer and SAM-based methods
We thank the reviewer for suggesting the inclusion of SAM-based methods. **We will add FedSAM and FedLESAM to Section 2 (Related Work), discussing the references provided**. We conduct additional experiments comparing baseline models using FedSAM and FedLESAM on CIFAR-10, using the FedLESAM setup and codebase. Since we employ pre-trained models, we modified the training to 100 rounds (1 local epoch each) with 10 clients always participating, which was sufficient for performance to converge:
| | GN-ResNet | NF-ResNet | ANFR |
|----------|-----------|-----------|-------|
| FedSAM | 85.91 | 85.73 | **87.79** |
| FedLESAM | 84.14 | 87.33 | **87.55** |
These results demonstrate that ANFR continues to outperform baseline architectures for SAM-based methods. Notably, ANFR is more compatible with these techniques than BN networks as FedLESAM is not applicable to BN networks. **We commit to including these results in an appendix and, if accepted, will integrate SAM methods into our main codebase for the final version**.
Regarding our choice of baseline methods, our primary aim was to demonstrate ANFR's versatility across major FL algorithm families rather than targeting the absolute newest methods. We believe that showing significant improvements with established (hence older) methods from different families provides strong evidence for ANFR's fundamental advantages. These results suggest that ANFR should integrate well with newer methods as they emerge, which our preliminary FedSAM/FedLESAM results now confirm.
---
# Numbers of total and participating clients
We clarify the relevant details here:
* FedISIC: 6 clients (stated in line 305)
* FedChest: 4 clients (line 308)
* CIFAR-10: 5 clients (line 311)
* FedPathology: 3 clients (line 326)
* CelebA: 9343 clients (line 608)
Apart from CelebA, where 10 clients participate every round (line 388), we assume full participation, in line with typical cross-silo scenarios. We will move the CelebA and client partitipation details to section 4.1 to improve clarity.
---
# Summary
Summarizing our rebuttal, we have:
- Clarified the conceptual nature of $C_R$, $C_{NR}$ and the gradient conflict discussion.
- Provided additional context for the interpretation of the "before FL" figures.
- Demonstrated ANFR's effectiveness with newer optimization methods belonging to an additional family.
- Committed to more explicit documentation of experimental parameters.
Given the above, which we hope address the concerns raised in the review, we respectfully ask if the Reviewer might reconsider their score, and thank them for the constructive feedback which led to an improved manuscript. | null | null | null | null | null | null |
CaDA: Cross-Problem Routing Solver with Constraint-Aware Dual-Attention | Accept (poster) | Summary: This paper targets cross-problem learning for vehicle routing problems. It proposes Constraint-Aware Dual-Attention (CaDA), which introduces a constraint prompt to enhance constraint awareness and employs a dual-attention mechanism consisting of a global branch and a sparse branch. The sparse branch utilizes a top-k attention strategy to focus on key node pairs. Experimental results across 16 VRP variants demonstrate the effectiveness of the proposed method.
Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence.
Methods And Evaluation Criteria: The basic idea of the proposed CaDA is reasonable for the problem at hand, and the evaluation criteria are appropriate.
Theoretical Claims: No theoretical claims are presented in this paper.
Experimental Designs Or Analyses: The baselines and benchmark instances are sound. For the ablation experiment concerning the position of the prompt (Figure 5(a)), it would be beneficial to include an experiment that adds the prompt to both branches.
Supplementary Material: I reviewed the Appendix file.
Relation To Broader Scientific Literature: The two key contributions of the paper are: it proposes the use of a constraint prompt to enhance task awareness, which has not been emphasized in existing works [1,2,3] for multi-task VRPs. Additionally, it introduces a dual-attention mechanism that incorporates a sparse attention branch to learn from more promising connections, while current methods [1, 2, 3] rely solely on standard attention mechanisms.
[1] Multi-task vehicle routing solver with mixture-of-experts. In International Conference on Machine Learning (ICML), 2024
[2] Multi-task learning for routing problem with cross-problem zero-shot generalization. In The 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 2024). Association for Computing Machinery, 2024.
[3] Routefinder: Towards foundation models for vehicle routing problems. In ICML 2024 Workshop on Foundation Models in the Wild, 2024.
Essential References Not Discussed: All relevant prior works necessary to understand the key contributions of this paper have been cited and discussed.
Other Strengths And Weaknesses: Strengths:
1. This paper demonstrates strong performance across 16 vehicle routing problems and real-world instances.
2. The paper is well-structured and easy to follow.
Weaknesses:
1. The motivation and function of the key components, e.g., the sparse branch,need further explaination.
2. The fine-tune performance is not discussed.
Other Comments Or Suggestions: 1. In Figure 2, the subsequent data flow of the sparse branch is unclear.
2. The formula in line 270 should probably be written as $\pi_{\theta}({\tau}_{t} = i \mid \mathcal{V}, \boldsymbol{\tau}\_{1:t-1})$.
3. The paper mentions that CaDA significantly reduces the runtime compared to state-of-the-art heuristic solvers. However, the results in Table 1 indicate that the runtime of CaDA is nearly identical to that of RF-TE, and it does not seem to have a significant advantage over other methods. Additionally, why do HGS-PyVRP and OR-Tools have exactly the same runtime?
4. The explanations for Figures 5(a) and 5(b) are split across two columns and are too close together, which could easily lead to confusion.
5. Please provide the specific average gap values in Figure 6.
6. It is recommended to test more data points in the ablation study on 'top-k' to better demonstrate performance variations.
7. "Kernel density estimation (KDE)" appears twice, in lines 394 and 490. It would be better to mention the full term and its abbreviation only once.
8. In Equation (19) in line 758, a comma should be used instead of a period.
Questions For Authors: 1. Further explanation about motivation is needed. In the proposed method, the sparse branch is introduced to focus on "more related node pairs." Could you clarify why this is important?
2. How is the fine-tune performance on the VRPs.
3. In Table 2, why does CaDA w/o Sparse perform worse than CaDA? Intuitively, two standard attention branches with higher model complexity should have a stronger representation power than one standard attention branch and one sparse attention branch.
4. Some recent multi-task learning methods for NCO (e.g., [1]) are missing in the related work section.
5. In Figure 4, CaDA w/o Prompt shows a considerable performance drop on OVRPBL compared to other problems. Can the authors provide an explanation for why this happens?
[1] UniCO: On Unified Combinatorial Optimization via Problem Reduction to Matrix-Encoded General TSP. In International Conference on Learning Representations, 2025
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you very much for your time and effort in reviewing our work. We are very glad to know you find the paper is well-structured and easy to follow. We address your concerns as follows.
> **E1. Prompt Ablation Study**
To address your concern, we conduct the ablation study on adding prompt to both branches. The results below confirm that CaDA’s design (prompt on the global branch) achieves optimal performance:
| Prompt on Both Branches | Prompt on Sparse Branch | CaDA |
| ----------------------- | ----------------------- | ----- |
| 1.94% | 1.80% | 1.71% |
> **W1,Q1. Explanation about Sparse Branch**: The motivation and function of the key components, e.g., the sparse branch,need further explaination. In the proposed method, the sparse branch is introduced to focus on "more related node pairs." Could you clarify why this is important?
The sparse branch is introduced to enhance the sequential decoding process in VRP by addressing the limitations of standard attention. In VRP, selecting the next node from a small subset of nearby nodes is crucial, but standard attention assigns nonzero scores to all node pairs, diluting focus on critical decisions. The sparse branch employs Top-k sparse attention, allowing the model to concentrate on the most promising candidates based on learnable attention scores, rather than just Euclidean distances. This enables the model to automatically identify and focus on highly relevant node pairs, improving decision-making. We will add the discussion into the revised manuscript.
> **W2,Q2. Fine-Tune Performance**
Thank you for your valuable comment. As suggested, we conduct fine-tuning experiments, and due to the character limit, please refer to our response in E1 & Q2 to reviewer KzHG for experimental results. In summary, CaDA achieved the best zero-shot performance on two new constraints. After the first fine-tuning epoch, CaDA's gap reduced by 72% on Multi-Depot tasks (compared to 61–66% for baselines), demonstrating strong generalization capabilities.
> **C1. Clarification on Sparse Branch Data Flow**
The final outputs of the sparse branch are fused with those of the global branch through a fusion layer. The data from the sparse branch flows into the final node embedding $H^{(L)}$. We will revise Figure 2 to clarify this data flow.
> **C2, C4, C7, C8**: The formula in line 270, The explanations for Figures 5(a) and 5(b), "Kernel density estimation (KDE)" in lines 394 and 490, Equation (19) in line 758.
We appreciate your careful review. We will revise them accordingly in the revised manuscript.
> **C3. Clarification on Runtime Comparisons**
1)While all neural solvers show comparable runtime performance, our key contribution is enhanced solution quality, not runtime reduction. 2) For HGS-PyVRP and OR-Tools, we set maximum runtimes (10s for VRP50, 20s for VRP100) following RF-TE (Berto et al., 2024), resulting in similar overall runtimes.
> **C5, C6. Top-k Gap Values and Top-k Parameter Range**
As suggested, the table below provides the specific average gaps for Figure 6, along with additional ablation studies using a wider range of $k$ values. We will include the following table in the revised manuscript:
| k=2 | k=6 (N/8) | k=10 | k=12 (N/4) | k=25 (N/2) | k=40 |
| ----- | --------- | ----- | ---------- | ---------- | ----- |
| 1.80% | 1.73% | 1.72% | 1.75% | 1.71% | 1.77% |
> **Q3. Explantion about Ablation Study**
Thank you for your question. Here we further explain the role of the sparse branch.
- As detailed in our responses to W1 and Q1, sparse attention is beneficial for addressing the limitations of standard attention mechanisms when solving VRPs.
- Additionally, our visualization (see Figure 4 in [PDF]([Figure.pdf](https://anonymous.4open.science/api/repo/CaDA_illustration-FC5A/file/Figure.pdf?v=13d844ac))) shows that CaDA without Sparse exhibits dispersed attention, making it difficult to focus on relevant information, whereas CaDA effectively concentrates attention on key nodes, which aligns with its better performance.
>**Q4. Paper to be Cited**
Thank you for your feedback. We will add this paper (UniCO, ICLR 2025) in the revised manuscript.
> **Q5. Explantion about Ablation Study**
Thank you for raising this point. For constraints "O", "B", and "L", it is challenging for the encoder w/o prompt to distinguish whether these constraints are "on" or "off" for a given problem instance. This is because instances with or without these constraints share identical input structures.
Consequently, the encoder cannot infer which specific problem variant it is solving, leading to suboptimal node embeddings. The introduction of task-specific prompts addresses this limitation, thereby significantly improving performance.
---
Rebuttal Comment 1.1:
Comment: The author's response has adequately addressed my concerns, and I am open to adjusting my score.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for reviewing our response and updating the assessment. We greatly appreciate your valuable comments and feedback. | Summary: The paper proposes a novel architecture to tackle various variants of vehicle routing problems (VRP). The main idea of this architecture is to encode the different possible constraints in a so-called "constraint prompt" and use in conjunction two attention-based encoders of a VRP instance, one global and one sparse (only using top-k attention).
Claims And Evidence: The authors demonstrate that their trained model can outperform other multi-task neural solvers. To make those results more conclusive, it would have been nice to discuss the following points:
- The explanation about the training process is a bit too succinct for me, since it simply refers to Berto et al. (2024). I would suggest the authors to explain/recall the training process, at least in the appendix. For instance, do the authors use mixed batch training and/or multi-task reward normalization?
- A discussion about model sizes of the different methods would help understand where the performance is coming from.
- An explanation about how the hyperparameters were obtained is missing. I believe that for the baselines, the hyperparameters suggested by their authors were used. It would be nice to confirm this point.
The authors also conduct an ablation study, which is helpful, although it seems to suggest that the sparse branch has a limited contribution. In addition, I think to make this ablation study complete, it would have been nice to test a version without the global branch.
Also, given the closeness of the results for N/2 and N/4, wouldn't it be better to use a smaller k than N/2? N/3 or N/4?
The performance of the proposed method is also confirmed on CVRPLIB. However, the best method is a bit strange to me. If I understand correctly, it consists in giving to the neural solver all possible combination of constraints to generate the constraint prompt, regardless of the constraints in effect in a solved instance. This would suggest using a similar technique when solving the 16 types of VRP in Table 1, which by construction can only improve the results. In that sense, the constraint prompt seems to encode only loosely a set of constraints.
Methods And Evaluation Criteria: The proposed method uses techniques that have been proposed in other contexts (as discussed by the authors in their related work in the appendix for instance) and combine them in a somewhat novel way.
Theoretical Claims: There is no theoretical claim.
Experimental Designs Or Analyses: The experiments are mainly conducted according to the experimental setting proposed by Berto et al. (2024). However, in contrast to previous works in multi-task solvers, the authors do not discuss much the generalization capability (e.g., one-shot or few-shot) of their method. I believe this aspect may be important in the multi-task setting.
Supplementary Material: I've checked the supplementary material.
Relation To Broader Scientific Literature: I believe that the authors clearly discuss the related work, notably the recent multi-task solvers and other techniques that inspired or are related to their propositions (e.g., multi-branch, sparse attention).
Essential References Not Discussed: Regarding the multi-branch architecture, there are some recent propositions using similar ideas (multi-view) for solving VRP, e.g.,
Gao, C., Shang, H., Xue, K., Li, D., and Qian, C. Towards generalizable neural solvers for vehicle routing problems via ensemble with transferrable local policy. arXiv, 2023
Fang, H., Song, Z., Weng, P., and Ban, Y. Invit: A generalizable routing problem solver with invariant nested view transformer. In Forty-first International Conference on Machine Learning, 2024
Other Strengths And Weaknesses: The paper is quite well-written and clear, although there are a few points that could be improved in the exposition, e.g.:
- In (1), \tau_t may actually be a partial sub-tour
- Some of the architectural design decisions could be better explained in the main paper (e.g., LayerNorm in (5), or SwiGLU)
- In (14), should it be H_c^{(L)}? and W_t should be W_c?
- Below (16), what is the index g of \pi_g? Also, in the line below, u should be bolded.
Other Comments Or Suggestions: None
Questions For Authors: 1. Could you clarify the training process?
2. Does the current evaluation consider generalization to new tasks?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review of our manuscript. We are pleased to hear that you found the paper to be well-written and clear. Below, we address your concerns and questions point by point.
> **E1, Q2. Generalization Capability**
Thank you for raising this valuable point. We conduct experiments to evaluate zero-shot and fine-tuning performance on two unseen constraints.
**Unseen Constraints:**
1. Multi-Depot (MD), vehicles can start from any depot but must return to their respective starting depot. The evaluation includes 16 VRPs.
2. Mixed Backhaul (MB), linehaul and backhaul customers can be mixed along a route. The evaluation includes 8 VRPs.
**Results**:
|0-shot|MTPOMO|MVMoE|RF-TE|CaDA|CaDA$\times$32|
|-|-|-|-|-|-|
|MD|42.29%|45.56%|41.93%|39.34%|28.86%|
|MB|9.28%|8.74%|9.12%|8.46%|7.40%|
|MD\Epoch|1|5|10|
|-|-|-|-|
|MTPOMO|16.70%|11.32%|9.32%|
|MVMoE|17.88%|11.92%|9.74%|
|RF-TE|14.11%|7.71%|6.96%|
|CaDA|11.01%|6.77%|5.90%|
1. Zero-Shot: Given the relatively poor zero-shot performance across all evaluated models, generalization to the MD constraint appears to be more challenging. For MD, CaDA achieves a gap of 39.34% (vs. 41.93–45.56% for other baselines); for MB, 8.46% (vs. 8.74–9.28%).
2. Fine-Tuning: After the first epoch, CaDA’s gap reduces by 72% (39.34% $\to$ 11.01%), outperforming baselines’ improvements of 61–66% .
We will add the results and discussion into the revised manucript.
> **C1,Q1.Training Process Clarification**
1. Mixed Batch Training: Yes, we employ this to stabilize convergence.
2. Reward Normalization: No reward normalization is applied.
We will include these explanations in Section 4 of the revised manuscript.
> **C2. Discussion of Model Sizes**
MvMoE has the largest model size(3.7M), followed by CaDA(3.4M), RF-TE(1.7M), and MTPOMO(1.3M). We will add this result to the Appendix of the revised manuscript.
> **C3. Hyperparameters Clarification**
Yes, we confirm that for each baseline, the hyperparameters suggested by its authors were used.
> **C4. Ablation Study About Sparse Branch**
We respectfully clarify that:
- In our paper, CaDA w/o Sparse retains the two-branch structure, but both use global attention.
- The sparse attention consistently improved performance across all 16 VRPs (0.003–0.209\%, see Figure 4).
- The sparse attention yields notable gains on variants such as VRPBL (0.209%), OVRP (0.173%), OVRPL (0.163%).
> **C5. Removing the Global Branch**
Thank you for your suggestion. We conduct ablation experiments (GA denotes global attention and SA denotes sparse attention) which show that the dual-branch model outperforms its single-branch version, and optimal performance is achieved by integrating both global and sparse attention, as in CaDA.
|Branch|Attention|Gap|
|:-|-|-|
|Single|GA|1.92%|
|Single|SA|1.96%|
|Dual|GA|1.80%|
|Dual|SA|1.75%|
|Dual|GA+SA|1.71%|
> **C6. Comparison of k = N/2 and N/4**
N/2 has better results than N/4. However, choosing N/4 could result in reduced computational costs. We will include this discussion in the Section 4.4 of the revised manuscript.
> **C7. CaDA$\times$32 on 16 VRPs**
Thank you for this valuable point. Below, we provide the performance results of CaDA $\times 32$ on the 16 VRPs , which further improves the performance.
|MTPOMO|MVMoE|RF-POMO|RF-MoE|RF-TE|CaDA|CaDA$\times$32|
|-|-|-|-|-|-|-|
|2.45%|2.29%|2.14%|2.16%|1.97%|1.71%|1.35%|
> **R1. References to be Discussed**
Thank you for your valuable suggestion. We will incorporate the papers in the revised paper. Gao et al. (2023) propose global and local policies for CVRP and TSP, defining "local" by Euclidean distance. Fang et al. (2024) suggest learning from multiple nested local views, both focusing on generalization across distributions and scales. In contrast, CaDA addresses 16 VRP variants using a learnable mechanism (Top-k sparse attention) to dynamically select related nodes based on attention scores.
> **W1**: In (1), \tau_t may be a partial sub-tour.
>
> **W3**: In (14), should it be H_c^{(L)}? W_t should be W_c?
>
> **W4**: Below (16), what is the index g of \pi_g? u should be bolded.
Thank you very much for your careful checks. We will correct these notational issues as follows:
- W1: Yes. We will correct K in (1) to K_t, where K_t denotes the number of sub-tours up to the current step.
- W3: The H_c in (14) is correct. We will correct H_c^{(L)} to H_c in (15). Yes, it should be W_c.
- W4: Below (16), the g should be t. We will bold u.
> **W2. LayerNorm and SwiGLU Design**
Thank you for the feedback. Below we clarify these design choices:
- LayerNorm in (5): Since the first MLP layer's outputs vary in scale across VRPs (e.g., OVRPBLTW yields larger-scale embeddings due to more active constraints), we apply instance-level LayerNorm to normalize inputs to the second MLP.
- SwiGLU: Following Berto et al. (2024), we use SwiGLU to improve convergence. | Summary: This paper presents Constraint-Aware Dual-Attention (CaDA), a new neural architecture for solving multi-task vehicle routing problems (VRPs). CaDA integrates a constraint prompt to help the model recognize the specific constraints of the current task, along with a dual-attention architecture that combines a standard attention branch and a top-k sparse attention branch. This architecture ensures that the encoding process is both focused on promising nodes and informed by global context. The model is evaluated on 16 different VRP variants, demonstrating significant improvements over existing neural solvers.
Claims And Evidence: Yes
Methods And Evaluation Criteria: The methods and evaluation criteria proposed in the paper are effective for addressing the intended problems.
Theoretical Claims: The paper does not involve theoretical claims.
Experimental Designs Or Analyses: I have checked all the experiments in the experimental section.
Supplementary Material: I have read all the content in the appendix.
Relation To Broader Scientific Literature: It contributes to the community of neural combinatorial optimization. It introduces an efficient constraint prompt mechanism to improve constraint awareness. It introduces the top-k sparse operation to focus on more related node pairs.
Essential References Not Discussed: The paper has included the main related works that are crucial for understanding the context and significance of their contributions.
Other Strengths And Weaknesses: Strengths:
1. The proposed two-branch structure is interesting.
2. It achieves SOTA results on cross-problem learning for routing problems.
Weaknesses:
There is no obvious weakness. However, some experimental results (refer to questions) require further clarification to enhance their interpretability. Additionally, since the code has not been made publicly available, the reproducibility of the experiments cannot be fully verified.
Other Comments Or Suggestions: 1. Some symbols are confusing. For example, the use of $\boldsymbol{\tau}$ in line 88 might represent a solution that includes multiple sub-solutions, i.e., it is a set of tuples, and $\boldsymbol{\tau}^{i}$ represents the i-th sub-solution. However, $\boldsymbol{\tau}$ in line 152 represents a solution as a tuple, and $\boldsymbol{\tau}^{i}$ represents the i-th complete feasible solution.
2. Given that $V$ in line 163 is a vector, it might be clearer to use lowercase letters to represent it.
3. In line 261, $H_c^{(L)} $ might should be corrected to $ H_c$.
Questions For Authors: 1. The testing data distribution seems limited. Can CaDA achieve better performance across a broader range of distributions? For example, in [1], it provides instances with various distributions including grid, explosion, implosion, rotation, expansion, etc. It would be beneficial to test the proposed method on these varied distributions to validate its generalization capabilities.
2. Could you further explain Figure 7? Why does CaDA w/o the prompt show the same attention distribution for CVRP and OVRP?
3. The experimental setting in the "Different Sparse Functions" section (line 400) is confusing. What do you mean by "Softmax+Top-k"? Could you further explain how "a standard Softmax and a representative sparse function α-entmax" modify parts of CaDA?
4. Some experimental settings are unclear. What is the ratio of different VRP variants in the training dataset during the training process?
5. Figure 8 needs further explanation. Why does Figure 8 appear symmetrical? And when $(i, j)$ is illegal and j will not be the next node of i, i.e., $P_{i,j} < 0$, why are there still many $A_{ij}$ of CaDA that have large values?
[1] Towards omni-generalizable neural methods for vehicle routing problems. ICML, 2023.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you very much for your time and effort in reviewing our work. We are very glad to know that you find our proposed method efficient and interesting. We address your concerns as follows.
> **W1&Q1. Results on Other Distribution**
Thank you for raising this point. To evaluate cross-distribution generalization, we conduct additional experiments using the six distributions from [1]. The results below show that CaDA consistently outperforms all baselines across all distributions.
| VRP50 | explosion | implosion | rotation | linearprojection | expansion | grid |
| ----- | --------- | --------- | -------- | ---------------- | --------- | ----- |
| MTVRP | 2.65% | 2.28% | 2.92% | 3.45% | 3.02% | 2.25% |
| MVMoE | 2.54% | 2.18% | 2.77% | 3.32% | 2.89% | 2.15% |
| RF-TE | 2.25% | 1.95% | 2.39% | 2.76% | 2.59% | 1.91% |
| CaDA | 1.88% | 1.63% | 1.97% | 2.31% | 2.13% | 1.59% |
> **W2. Code Availability**
Thank you for your feedback. We confirm that the code will be made public immediately upon the paper's acceptance, with a link provided in the final version.
> **C1-C3**: Some symbols are confusing, V in line 163, H_c^{(L)} in line 261.
Thank you very much for your careful review. We will revise our notation accordingly: 1) Use distinct symbols to clearly differentiate between sub-solutions and complete solutions. 2) Change the notation from $V$ to $v$ to represent vectors. 3) Correct the symbol $H_c^{(L)}$ to $H_c$ in line 261.
> **Q2. Explanation of Attention Distributions**: Could you further explain Figure 7? Why does CaDA w/o the prompt show the same attention distribution for CVRP and OVRP?
Thank you for raising this point. When no task-specific prompt is provided, the encoder cannot distinguish between CVRP and OVRP instances, because both variants share identical input structures ($\mathcal{V} = \{v_i\}_{i=1}^{N}$, where $v_i = \{\vec{X}_i, A_i\}$ and $A_i = \{\delta_i\}$) , with values sampled from the same distribution. This limitation leads the encoder to view instances of different variants as the same and process them with the same attention patterns.
> **Q3. Clarification of Implementation**
In the "Different Sparse Functions" section, "Softmax+Top-$k$" refers to the computation of sparse attention scores ${M}( \mathbf{A} )$ in Equation (11). This involves first calculating standard attention scores via Softmax, then applying a Top-$k$ selection operation to sparsify the scores. "A standard Softmax and α-entmax" indicates replacing the Top-$k$ operation with alternative methods (e.g., α-entmax). Both approaches produce sparse attention scores but differ in their sparsification mechanisms. We will clarify this explanation in the revised manuscript.
> **Q4. Training Settings**
Problem variants are uniformly sampled from 16 VRPs during training (follow RouteFinder). Each batch contains a mix of variants. Therefore, each variant has roughly the same number of training instances. We will clarify this in the revised manuscript.
> **Q5.Explanation of Attention Distributions**
Figure 8 exhibits symmetry: larger |P_ij| corresponds to lower attention scores. This is because for P_ij > 0, a larger |P_ij| implies that including edge (i,j) in the solution might result in longer waiting times. For P_ij < 0, a larger |P_ij| indicates a smaller l_j - e_i, and node v_j's time window likely precedes v_i's, making edge (j,i) inefficient due to longer waiting times. Overall, the larger the value of |P_ij|, the less likely the two nodes should be consecutive in the solution.
For cases where P_ij < 0 but |P_ij| is small, although (i,j) is illegal, (j,i) may remain feasible without excessive waiting time, so there are still many A_ij with high values. | Summary: The paper "CaDA: Cross-Problem Routing Solver with Constraint-Aware Dual-Attention" presents a novel cross-problem learning method for Vehicle Routing Problems (VRPs) that enhances constraint awareness and representation learning through a Constraint-Aware Dual-Attention Model (CaDA).
1.Main Contributions
(1) Constraint-Aware Dual-Attention Model (CaDA): A new cross-problem learning method for VRPs that improves model awareness of constraints and representation learning.
(2) Constraint Prompt and Dual-Attention Mechanism: A constraint prompt is introduced to facilitate high-quality constraint-aware learning, and a dual-attention mechanism ensures the encoding process is both selectively focused and globally informed.
(3) Superior Performance: Comprehensive evaluations across 16 VRP variants show CaDA achieves state-of-the-art performance, surpassing existing cross-problem learning methods. Ablation studies confirm the effectiveness of both the constraint prompt and dual-attention mechanism.
2. Main Results
(1) Performance: CaDA outperforms existing neural solvers (e.g., MTPOMO, MVMoE, RouteFinder) on 16 different VRP variants, with significant improvements in solution quality and efficiency.
(2) Ablation Studies: Removing the constraint prompt or sparse attention mechanism leads to performance drops, highlighting their importance. The prompt's position in the global branch and the Top-k sparse operation's effectiveness are also validated.
(3) Real-World Validation: CaDA shows strong performance on real-world CVRPLIB datasets, further proving its practical effectiveness.
3. Key Algorithm and Concepts
(1) Constraint Prompt: Represents problem constraints as a multi-hot vector processed through an MLP to generate prompts, which are concatenated with node embeddings to enhance constraint awareness.
(2) Dual-Attention Mechanism: Comprises a global branch (standard multi-head attention) and a sparse branch (Top-k sparse attention). The global branch captures broad graph information, while the sparse branch focuses on key node connections, improving representation learning.
(3) Encoding-Decoding Framework: Follows a typical encoder-decoder structure. The encoder uses the dual-attention mechanism and constraint prompt to generate node embeddings, and the decoder constructs solutions autoregressively based on these embeddings.
Claims And Evidence: The claims made in this submission are well-supported by comprehensive evidence, including ablation studies, comparative experiments, and visualization analyses. The authors have addressed potential weaknesses in existing methods and provided convincing validation for their proposed approach.
1. Existing cross-problem NCO methods for VRPs are constraint-unaware and rely solely on global connectivity, limiting their performance. The authors provide a thorough review of existing methods (Section 2) and identify specific limitations in their approach to handling constraints and node relationships. This sets a clear foundation for their proposed improvements.
2. CaDA's constraint prompt and dual-attention mechanism improve cross-problem learning performance. The ablation studies in Section 4.4 demonstrate that removing either component (constraint prompt or sparse attention) results in performance degradation. This directly supports the effectiveness of both mechanisms.
3. CaDA achieves state-of-the-art results across all tested VRPs. The comprehensive experimental results in Section 4 show that CaDA outperforms existing methods on 16 different VRP variants. The results include statistical comparisons and gap percentages relative to traditional solvers.
4. The dual-attention mechanism allows the model to focus on important connections while maintaining global context. The visualization of attention weights in Section 4.5 shows distinct patterns for different VRP variants, indicating that the sparse branch effectively focuses on key connections while the global branch maintains overall context.
5. The constraint prompt effectively provides constraint information to the model. The attention distribution analysis in Section 4.5 demonstrates that CaDA with the constraint prompt exhibits different attention behaviors for different problems, while CaDA without the prompt does not. This directly supports the effectiveness of the constraint prompt.
While the evidence is generally strong, there are a few areas where additional support could strengthen the claims:
1. Computational efficiency analysis: The paper focuses primarily on solution quality but could benefit from a more detailed analysis of computational efficiency, especially for larger-scale problems beyond 100 nodes.
2. Generalization to unseen constraints: The zero-shot generalization capability to entirely new constraint combinations (beyond the 5 studied) could be further explored to demonstrate the full potential of the constraint prompt mechanism.
Methods And Evaluation Criteria: The proposed methods in this paper, namely the Constraint-Aware Dual-Attention Model (CaDA), make excellent sense for addressing the cross-problem vehicle routing problem (VRP) challenges identified in the paper. The authors have identified key limitations in existing neural combinatorial optimization approaches for VRPs—specifically, the lack of constraint awareness and inefficient representation learning due to global connectivity—and have developed targeted solutions to these problems.
The dual-attention mechanism, combining global and sparse branches, directly addresses the need for both broad contextual understanding and focused attention on key node relationships in routing problems. This approach seems particularly well-suited to VRPs, where both global structure (like overall route efficiency) and local details (like specific customer sequences) significantly impact solution quality.
The constraint prompt mechanism effectively incorporates problem-specific information into the model, allowing it to handle diverse VRP variants without requiring separate training for each problem type. This is crucial for developing a generalizable cross-problem solver, as real-world logistics problems often involve varying combinations of constraints.
The evaluation criteria, including comprehensive testing across 16 VRP variants and comparison against both traditional solvers and state-of-the-art neural methods, provide a robust framework for assessing the model's performance. The use of gap percentage as a performance metric aligns with standard practices in combinatorial optimization and clearly demonstrates the practical significance of the improvements achieved by CaDA.
The ablation studies further strengthen the evaluation by isolating the contributions of specific model components, providing evidence for the effectiveness of both the constraint prompt and dual-attention mechanisms. These studies help establish that the proposed innovations are indeed responsible for the performance improvements observed.
Overall, the proposed methods and evaluation criteria are well-aligned with the problem objectives and demonstrate a thoughtful approach to advancing neural combinatorial optimization for VRPs.
Theoretical Claims: The paper primarily focuses on empirical validation of the proposed CaDA model rather than presenting formal theoretical claims with proofs. The claims made are mostly about the effectiveness of the model architecture and its components, supported by experimental results rather than theoretical analysis:
- Comprehensive experimental results across 16 VRP variants
- Ablation studies showing the contribution of each component
- Visualization of attention patterns demonstrating constraint awareness
- Comparison against state-of-the-art methods
While the paper does not contain formal theoretical claims with proofs, the empirical evidence provided is substantial and convincing for the claims made. The authors have appropriately focused on empirical validation given the nature of the problem and the proposed solution.
Experimental Designs Or Analyses: he experimental design and analysis in this paper are robust and valid, providing convincing evidence for the effectiveness of the proposed CaDA model. The comprehensive evaluation across multiple problem variants, appropriate baseline comparisons, and insightful ablation studies all contribute to a strong experimental foundation for the claims made.
1. Experimental Setup and Design
The experimental design is comprehensive and well-structured. The authors evaluate CaDA across 16 different VRP variants with varying constraints, which demonstrates the model's versatility and generalization capabilities. The use of both 50-node and 100-node instances allows assessment of performance across different problem scales.
The experimental setup is valid for testing cross-problem learning capabilities. The inclusion of both traditional solvers (PyVRP, OR-Tools) and state-of-the-art neural solvers as baselines provides a robust comparison framework.
2. Selection and Use of Baseline Methods
The choice of baseline methods is appropriate and comprehensive. The authors compare against both traditional heuristic solvers and multiple neural approaches, including MTPOMO, MVMoE, and various RouteFinder variants.
The baselines are implemented correctly, with the authors using open-source code where available and following the same training protocols for fair comparison. The time limits for traditional solvers (10s for VRP50, 20s for VRP100) are reasonable and allow meaningful comparison with the neural methods.
3. Performance Metrics and Statistical Analysis
The primary metric, gap percentage relative to traditional solvers, is appropriate for combinatorial optimization problems. The inclusion of objective function values and running times provides a complete picture of performance.
The statistical analysis is sound. The results are presented with sufficient detail, allowing readers to assess the significance of performance differences. The use of 1K test instances per VRP variant ensures reliable performance estimates.
4. Ablation Studies
The ablation studies are well-designed and effectively isolate the contributions of the constraint prompt and dual-attention mechanisms.
The ablation studies are valid and provide clear evidence for the effectiveness of each component. The results show consistent performance improvements when both components are included, supporting the claims made.
5. Visualization and Interpretation of Results
The visualization of attention weights is insightful and supports the claims about constraint awareness.
The interpretation of results is appropriate. The authors correctly link the observed attention patterns to the expected behavior for different VRP variants, demonstrating that the model learns meaningful representations.
6. Real-World Validation
The evaluation on real-world instances from CVRPLib is a valuable addition to the experimental analysis.
The real-world validation is appropriately conducted, with results showing that CaDA generalizes well beyond synthetic datasets. The comparison with existing methods on these benchmarks further strengthens the claims.
While the experimental design is generally sound, there are a few areas where additional details or analyses could enhance the validity:
1.Statistical Significance Testing: The paper could benefit from explicit statistical significance testing between CaDA and baseline methods to quantify the confidence in performance differences.
2.Computational Efficiency Analysis: A more detailed analysis of computational resources required by CaDA compared to baseline methods would be valuable, especially for larger problem instances.
3. Generalization to Unseen Problems: While CaDA demonstrates strong performance across 16 VRP variants, testing its zero-shot generalization to completely new problem types or constraint combinations would further validate its cross-problem capabilities.
Supplementary Material: The supplementary material provides essential technical details and additional experimental evidence that strengthens the claims made in the main paper. It allows for a more thorough evaluation of the methodology and results, particularly regarding the experimental design and analysis aspects I previously reviewed.
1. Appendix B: Method Details
This section provides additional technical details about the MDP formulation for solution construction, the gated linear unit (GLU) implementation, and feasibility evaluation procedures.
Relevance: These details support the methodological claims in the main paper and allow for a more complete understanding of the experimental setup and model architecture.
2. Appendix C: Experiment Details
This section includes detailed problem setups for the 16 VRP variants, hyperparameter settings, visualization of constraint awareness for time window constraints, and results on real-world instances from CVRPLib.
Relevance: The experiment details are crucial for assessing the soundness of the experimental design and the validity of performance claims. The real-world validation strengthens the practical relevance of the results.
3. Appendix C.3: Visualization of Constraint Awareness for Time Window Constraints
This part provides additional visual analysis of how CaDA handles time window constraints, showing how attention scores correlate with feasibility metrics.
Relevance: This visualization supports the claim that CaDA effectively incorporates constraint information and makes informed routing decisions.
4. Appendix C.4: Result on Real-World Instances
This section presents experimental results on real-world benchmark datasets, demonstrating CaDA's performance on problems with varying scales and characteristics.
Relevance: These results validate the generalization capabilities of CaDA beyond synthetic datasets and support its practical applicability.
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths:
1. Originality:
The paper demonstrates strong originality through its creative combination of constraint prompts and dual-attention mechanisms specifically tailored for cross-problem VRP solving. This represents a novel adaptation of techniques from natural language processing (prompting) and computer vision (multi-branch architectures) to combinatorial optimization.
The constraint prompt mechanism is particularly innovative in how it encodes problem-specific information to guide the model's attention, addressing a significant limitation in previous neural VRP solvers that lacked constraint awareness.
2. Significance:
The work addresses a practically significant problem with direct real-world applications in logistics and transportation. The comprehensive evaluation across 16 VRP variants demonstrates the model's versatility and potential impact across diverse routing scenarios.
The performance improvements over existing state-of-the-art methods are substantial and practically meaningful, particularly given the computational efficiency advantages of neural solvers compared to traditional heuristic approaches.
3. Clarity:
The paper is exceptionally well-written and well-structured, making complex concepts accessible to a broad audience. The methodology is clearly explained, with appropriate technical details provided in both the main text and supplementary material.
The experimental results are presented with transparency, including detailed comparisons with baseline methods, ablation studies, and visualizations that help readers understand the model's behavior.
Weaknesses:
1. Originality:
While the combination of techniques is novel, the individual components (prompting, attention mechanisms) have precedents in other domains. The paper could benefit from more extensive discussion of how this specific integration addresses limitations in previous VRP solvers beyond what has been described.
2. Significance:
The practical impact would be further strengthened by including case studies with logistics companies or real-world deployment scenarios. While the CVRPLib benchmarks are valuable, demonstrating the model's effectiveness in actual operational settings would enhance its perceived significance.
The computational efficiency analysis is somewhat limited, particularly for very large-scale problems beyond 100 nodes, which might restrict its applicability in certain high-stakes logistics scenarios.
3. Clarity:
Some sections could benefit from additional visualizations or examples to further clarify complex concepts, especially regarding how the constraint prompt interacts with the dual-attention mechanism in practice.
The paper could provide more detailed explanations of how the attention weights are visualized and interpreted, which might help readers better understand the model's decision-making process.
Other Comments Or Suggestions: 1. Clarification of Prompt Parameters:
In Section 3.2 (Constraint Prompt), the notation for the prompt generation could be slightly clarified. The dimensions of the learnable parameters Wa, ba, Wb, and bb in Equation 5 should be explicitly stated to help readers understand the computational aspects of the prompt mechanism.
2. Practical Significance of Performance Gaps:
In Section 4.3 (Main Results), when discussing the performance gaps between CaDA and other methods, a brief discussion about the practical significance of these percentage differences in real-world logistics operations would be valuable. This could help readers better appreciate the real-world impact of the performance improvements.
3. Additional Visualizations for Ablation Study:
For the ablation study in Section 4.4, consider including additional visualizations comparing the attention patterns of CaDA with and without the sparse attention mechanism. This would provide further insight into how each component contributes to the model's performance and decision-making process.
4. Computational Efficiency Analysis:
A more detailed discussion of computational efficiency, particularly regarding how the dual-attention mechanism affects inference time compared to baseline methods, would strengthen the paper. This could include specific comparisons of runtime for different problem sizes and configurations.
5. Qualitative Analysis of Real-World Solutions:
In the real-world validation section (Appendix C.4), include a qualitative analysis of the solutions generated by CaDA for specific instances. Highlighting how the model's decisions align with expected behaviors given the constraints would provide additional confidence in its practical applicability.
Some formatting and grammar need to be standardized, for example: Page 3, Section 3.1,Page 5, Section 3.4,Page 7, Section 4.2,Page 9, Figure 5,Page 11, Section 4.5.
Questions For Authors: 1. Constraint Prompt Design Choices:
The constraint prompt is implemented as a multi-hot vector processed through an MLP. Why was an MLP chosen instead of other methods for incorporating constraint information (e.g., attention-based mechanisms or simple concatenation)? How sensitive is the model's performance to the specific architecture of the MLP?
If the MLP was chosen due to empirical testing showing superior performance compared to alternatives, this would strengthen the technical justification for the design. If no alternatives were tested, it might suggest that the prompt mechanism could be further optimized.
2. Dual-Attention Architecture Alternatives:
Were alternative architectures considered for combining the global and sparse attention branches (e.g., different fusion strategies or alternative attention mechanisms)? How did you settle on the current design?
If multiple alternatives were explored with the current design showing clear advantages, this would demonstrate thorough architectural search. If not, it might indicate potential for further improvement.
3. Zero-Shot Generalization to New Constraints:
Successful zero-shot generalization to entirely new constraint combinations would significantly enhance the perceived significance and versatility of the model.
Have you tested CaDA's zero-shot generalization capabilities on problem instances with combinations of constraints not seen during training? How would you expect the model to perform in such scenarios?
4. Inference Time Analysis:
How does CaDA's inference time compare to traditional solvers and other neural methods, especially for larger problem instances beyond 100 nodes? What is the computational overhead of the dual-attention mechanism compared to standard transformers?
If CaDA maintains its efficiency advantages at larger scales, this would strengthen its practical relevance. If inference time becomes prohibitive, it might limit real-world applicability.
5. Systematic Interpretation of Attention Patterns:
While the attention visualization shows different patterns for different VRP variants, is there a systematic way to interpret these patterns in terms of routing strategies, or are they primarily qualitative illustrations?
If there's a systematic interpretation method, it would demonstrate deeper understanding of the model's decision-making. If not, it might suggest a need for further interpretability research.
6. Sensitivity to Top-k Parameter:
The Top-k value is set to N/2 as the standard setting. How sensitive is the model's performance to this parameter? Have you explored a wider range of k values beyond what's shown in Figure 6?
If performance is robust across a range of k values, it suggests the mechanism is reliable. If performance is highly sensitive, it might indicate the need for careful hyperparameter tuning in practice.
7. Training Stability:
Did you encounter any training stability issues, particularly with the dual-attention mechanism and constraint prompt? How did you address them?
Evidence of stable training would support the practicality of implementing the model. If significant instability was encountered, it might indicate areas where the architecture could be refined.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your time and effort in reviewing our manuscript. We are glad to know that you find our method innovative and it addresses potential weaknesses in existing methods, and the paper well written and well structured. Point-to-point responses to your concerns and questions are presented below. Some visualizations are provided in the [PDF](https://anonymous.4open.science/api/repo/CaDA_illustration-FC5A/file/Figure.pdf?v=13d844ac) .
> **C1,E2,W3,S4,Q4. Computational Efficiency Analysis**
We compare RF-TE (single-branch) with CaDA (dual-branch) on VRP200. CaDA remains efficient for 100+ instances. Since the encoder runs once while the decoder runs repeatedly (~100x longer), the dual-branch structure adds minimal overhead.
| |Gap|Time per Instance|
|-|-|-|
|HGS-PyVRP| * |40s|
|RF-TE|5.02% |0.05s|
|CaDA|4.80% |0.05s|
> **C2,E3,Q3. Generalization to Unseen Problems**
We conduct a zero-shot and fine-tuning study (see E1, Q2 in KzHG for results), and CaDA achieves the best performance on two new constraints.
> **E1. Statistical Significance Testing**
We conduct a one-sided Wilcoxon rank-sum test comparing CaDA with RF-TE, and the result confirms CaDA’s superiority with >95% confidence.
| p-value| Significant(0.05) |
| ---| ----- |
| 2.6E-04 | TRUE |
> **W1. How CaDA Addresses Limitations**
We use prompt to enhance constraint awareness and a dual-branch model with global and sparse attention to better focus on key nodes.
Our additional ablation experiments on on the position of the prompt (E1 to 9jwN) and global-sparse fusion (C1 to KzHG) confirm that our current integration is most effective. Moreover, attention visualizations (S3) show that sparse attention reduces dispersion.
> **W2. Actual Operational Settings**
We validate our model on 64 real-world industrial insatnces from MTPOMO. The table shows the average objective values.
| MTPOMO | MVMoE | RF-TE | CaDA | CaDA x 32 |
| ------ | ----- | ----- | ---- | -------------- |
| 4262 | 4260 | 4080 | 4026 | 3983 |
> **W4. Interaction Visualizations:**
Figure 1 in the **PDF** illustrates the interaction mechanism: the prompt is concatenated with the initial node embeddings and fed into the global branch.
> **W5. How Attention Weights are Visualized**
We randomly select 100 VRP50 instances and collect global branch attention scores from CaDA and CaDA w/o Prompt. For Figure 7, KDE and a heatmap are used. For Figure 8, a hexbin plot is used.
> **S1. Clarification of Parameters**
The dimensions of these parameters are provided in line 194 of the manuscript, $W_a \in \mathbb{R}^{5 \times d_h}$, $b_a \in \mathbb{R}^{d_h}$, $W_b \in \mathbb{R}^{d_h \times d_h}$, and $b_b \in \mathbb{R}^{d_h}$.
> **S2. Discussion of Practical Significance**
CaDA outperforms the second-best learning-based methods by 0.26% for VRP50 and 0.32% for VRP100. These differences could accumulate over numerous daily routes and long-term operations, resulting in reduced transportation costs.
> **S3. Ablation Visualizations for Sparse Attention**
Thank you for your valuable suggestion. Figure 4 in the **PDF** shows the attention patterns: CaDA w/o Sparse exhibits dispersed attention, while CaDA effectively concentrates its attention on fewer nodes, aligning with its performance gains.
> **S5. Qualitative Analysis of Real-World Solutions**
We provide a qualitative analysis in Figure 3 of the **PDF**, which shows that CaDA's solution is more similar to the best-known solution.
> **S6 Standardization of Formatting and Grammar**
Thank you for your suggestion. We will review the entire manuscript and standardize the formatting and grammar in the mentioned sections.
> **Q1. Design of the MLP in the Prompt**
1. The multi-hot vector (dimension 5) must be projected to match the node embedding dimension. An MLP achieves this effectively. And MLPs are commonly used as prompt generators.
3. Ablation tests show that MLP modifications result in slight performance drops, the current design performs best.
|MLP with BatchNorm|MLP w/o Norm|CaDA|
|-- |-|-|
|1.77%| 1.75%|1.71%|
> **Q2. Fusion Strategy Design**
Thank you for your question. Our fusion strategy in CaDA follows classical multi-branch architectures in computer vision. We test two alternatives:Concat and Cross-Attention. Our current design yielded the best performance.
|Fusion by CrossAttn|Fusion by Concat|CaDA|
|-|-|-|
|1.873%|1.831%|1.714%|
> **Q5. Systematic Interpretation of Attention Patterns**
The attention analysis remains qualitative, but we acknowledge the need for systematic interpretation across VRPs and will highlight this as critical future work in the revision.
> **Q6. Sensitivity to k Parameter**
Thank you for raising this point. We test a wider range of k (refer to C5,C6 for 9jwN for results), and the model's performance is robust to k.
> **Q7. Training Stability**
No, the training of CaDA is stable. The loss curves for CaDA are shown in Figure 2 of the **PDF**. | null | null | null | null | null | null |
Reward-Guided Prompt Evolving in Reinforcement Learning for LLMs | Accept (poster) | Summary: This paper presents eva, a new minimax algorithm for RLHF which pushes beyond the static prompt set used by the majority of RLHF algorithms. In eva, the creator is trained to generate prompts that are solvable by the the solver. The largely empirical work focuses on evaluating a large number of design choices through extensive experimental evaluation.
Claims And Evidence: The claims made in the paper surrounding eva's performance are largely substantiated by extensive experiments.
However, one component that I believe would be important to understanding eva's claimed performance would be the different prompt/datasets used to train the reward model versus used for RL. I have listed some questions surrounding this in the "questions" section.
Methods And Evaluation Criteria: The presentation of the method is clear and the work provides sufficient justification for the objective presented in Eq. 2 through connections to minimax regret.
The chosen evaluations make sense as they are standard evaluations used for RLHF / Alignment.
Theoretical Claims: The work does not make any theoretical claims.
Experimental Designs Or Analyses: The work is thoroughly evaluated on a number of different axes. I commend the authors for their work in setting up all of these experiments.
One aspect that was less clear to me were the differences between the prompt datasets used to train the reward model versus prompt datasets used for the RL component. From looking at experiments in Appendix F, it seems like eva's gains versus DPO degrade substantially when DPO is given larger preference datasets.
I think it is most fair to evaluate DPO and eva with access to the same base prompt dataset. If the reward function used by EVA to select prompts is trained on all the data, this seems like an unfair advantage in the evaluation.
Supplementary Material: I skimmed the experiments in the supplemental section.
Relation To Broader Scientific Literature: Though prior work exists on both prompt evolution and evolution based reward design, eva presents an exciting proof of concept for dataset expansion for post-training RL. This demonstration is at the highest scale I have seen, and I believe would be of value to the community.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: -
Other Comments Or Suggestions: * I can tell the authors were crammed for space, but I think it is nicer when papers write full works e.g. "with" instead of "w/" (Line 198 col 2)
* Line 229 col 2 "and can compete default training". Is there a missing word here?
Questions For Authors: * Eva appears to require having an explicit reward model. Could the authors detail how that reward model is trained?
* Could the authors provide more details on how they evaluate EVA? What subset of prompts are used from ultrafeedback?
* How does eva compare as the number of prompts in the initial set increase?
* Could the authors do a better job clarifying what data is used for training the reward model used for EVA versus the data used for RL? It seems like these might be different in places. Does this create a problem where the reward function must have more data to be sufficiently generalizable to generated prompts?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the detailed review and insightful questions. Below we provide a high-level summary with detailed rebuttal, and will add relevant discussions in the reivisions.
---
> **Q2 & Q3**: *Could the authors provide more details on how they evaluate `eva`? What subset of prompts are used from ultrafeedback? How does `eva` compare as the number of prompts in the initial set increases?*
**A**: As discussed in Section 4, we evaluate `eva` on off-the-shelf benchmakrs including [AlpacaEval 2.0](github.com/tatsu-lab/alpaca_eval), [MT-Bench](github.com/lm-sys/FastChat/blob/main/fastchat/llm_judge), and [Arena-Hard](github.com/lmarena/arena-hard-auto). We train `eva` with the train split of UltraFeedback; following standard practices in iterative RLHF, we shuffle the full training set and divide it into equal-sized subsets for each iteration. In Appendix F.2., we have experimented with varying number of prompts in the initial set from 10K to 20K and 60K, and show that `eva` can consistently bring robust gains across multiple iterations.
> **Q1 & Q4**: (**i**) Could the authors detail how the reward models are trained? (**ii**) What data is used for training the reward model *v.s.* the data used for training the policy model (*i.e.*, the solver)? (**iii**) Does this create a problem where the reward model must have more data to be sufficiently generalizable to generated prompts?
**A**: (**i**) As discussed in Section 3 and 4, we assume a fixed reward model as the oracle for human preferences during training. We have evaluated our method under different reward models ([ArmoRM-8B](https://arxiv.org/pdf/2406.12845), [SkyworkRM-27b](https://arxiv.org/pdf/2410.18451), [PairRM-0.4B](arxiv.org/abs/2306.02561), and [PairRM-8B](huggingface.co/RLHFlow/pair-preference-model-LLaMA3-8B)). (**ii**) The data used for training the reward model can be different from the data used to train the solver. For example, the prompts used to train ArmoRM-8B and PairRM-8B are selected from UltraFeedback, HelpSteer, OpenOcra, UltraInteract, Capybara and DIBT-10K; the prompts used to train SkyworkRM-27B are 80K prompts selected from HelpSteer2, OffsetBias, WildGuard (adversarial), and Magpie. (**iii**) We believe continual training of reward models is an important future work to enhance the robustness of `eva`. As the creator generates more prompts that diverge from the initial training prompt set (in our case, subsets of UltraFeedback), a fixed RM may be struggle to generalize reliably, especially if its training data does not cover the evolving prompt distribution. While our experiments show that `eva` remains effective under multiple fixed RMs, we view continual refinement of reward models (e.g., by online updates or co-training with the evolving policy) as important future works for improving the long-term robustness of `eva`. Note that reward models may be trained more efficiently and generalize better than policy models, as they only produce scalar scores or rankings, thus it does not necessarily require *more* data (or prompts) than the policy to remain effective.
> **Comments:** *It is nicer when papers write full works, e.g., "with" instead of "w/". Line 229 col 2 "and can compete default training". Is there a missing word here?*
**A**: Thanks so much for the suggestions! We will carefully revise each and every abbreviation and make sure the writing is clear. And yes, for Line 229, we meant that `eva` with only 1× raw data throughout training can compete with default training -- which uses 5× more human prompts and no evolved data -- by achieving comparable or better results.
---
Rebuttal Comment 1.1:
Comment: Thanks for answering my questions! I will maintain my score. | Summary: This paper studies a new paradigm for post-training where prompts are sampled adaptively. In particular, this paper proposes eva, in which a creator is addtionally introduced to select prompts for the solver to optimize. It provides extensive empirical results to show the advantage of eva.
## update after rebuttal
I have reviewed the authors' updated formulation, which is now more sound than the submitted version. The empirical ablations also address my concerns regarding effectiveness. Below are additional comments on the rigor of the formulation:
- Gap between Problem 1 and the min-max regret formulation: In Problem 1, the objectives of the two players differ, whereas in the min-max Nash game formulation, they are the same.
- Soundness of Problem 1: The formulation appears reasonable, though the lack of a prior definition for $\pi_{true}$ remains a subtle issue. I guess the issue is not from the statistical side that we cannot draw samples from $\pi_{true}$. Instead, the fundamental concern is that there is no clear criterion for defining the of optimality of $\pi_{true}$. This is analogous to many optimization problems where the optimal solution is unknown a priori but is instead implicitly defined by the objective function.
- Min-max regret formulation: Under the assumption that the solver has strong representation and optimization power, the optimal best response would simply select the action with the highest reward, regardless of the creator’s design. To avoid this trivial case, additional assumptions (e.g., limited representation or optimization power) should be introduced.
Overall, I find the empirical algorithm `eva` reasonable and potentially valuable to the community, though the theoretical formulation should be more rigorous. Given these considerations, I have updated my review score to 4, and I hope that the authors could either refine these aspects or remove unproper formulation.
Claims And Evidence: The story is imaginative, but its scientific foundation is questionable.
Issue with Problem 1: The problem highlighted in Problem 1 does not make sense to me. Without proper regularization for the creator, the creator may simply choose the simplest prompt for the solver, leading to an equilibrium that is essentially meaningless. Although the paper addresses this issue in Section 3.1 and introduces actor regularization, it still does not resolve my concerns. Regularization should not be treated as a central element of game design. Therefore, I believe the game is fundamentally flawed in its design. Moreover, if regularization is indeed crucial, the paper should discuss it more thoroughly in the main text. Currently, I see no significant discussion of this in the main text.
Discrepancy between Problem 1 and the Formulation: There is a notable gap between Problem 1 and the formulation presented in Section 2. In Problem 1, the game is collaborative, with both layers aiming to maximize the same objective (i.e., a max-max formulation). However, in Section 2, the paper shifts to a min-max game formulation, where the creator appears to act adversarially. This inconsistency suggests that the formulation does not actually solve the problem proposed in Problem 1.
Methods And Evaluation Criteria: Not applicable
Theoretical Claims: Not applicable
Experimental Designs Or Analyses: Yes, I reviewed the experiment details and found that the proposed approach, as described in Appendix A, appears to be rather heuristic. For instance, it uses a specific number of prompts (e.g., 4 and 16) without clear justification. This raises concerns about the generalizability of the approach: is the demonstrated performance highly sensitive to this hyperparameter? Without further analysis or ablation studies, it is difficult to assess whether the results are robust or overly dependent on this particular configuration.
Supplementary Material: Yes. I have reviewed Appendix A in detail and briefly examined Appendices B, C, D, and others.
Relation To Broader Scientific Literature: The problem of adaptive prompt selection studied in this paper is quite interesting; however, its scientific value remains questionable.
Essential References Not Discussed: This paper provides a comprehensive review of previous works. However, I would like to highlight several key points and relevant literature that should be discussed.
The core idea of regret maximization for the creator bears strong similarities to the following two works, which deserve discussion [1, 2].
The paper should also consider discussing the work [3], which presents an information-theoretic approach to data collection.
The gradient estimator used for the actor appears to be similar to the ReMax algorithm [4].
To my knowledge, there are two main paradigms for adaptive sampling:
- Information-Seeking (Min-Max Formulation): This approach may demonstrate advantages over uniform sampling in compute-limited settings.
- Transferability-Based Sampling: This approach aims to select prompts from a large source pool based on representative samples from the target domain. For example, see [5]
[1] Jiang, Yiding, et al. "Adaptive data optimization: Dynamic sample selection with scaling laws." arXiv preprint arXiv:2410.11820 (2024).
[2] Mindermann, Sören, et al. "Prioritized training on points that are learnable, worth learning, and not yet learnt." International Conference on Machine Learning. PMLR, 2022.
[3] Dwaracherla, Vikranth, et al. "Efficient exploration for LLMs." arXiv preprint arXiv:2402.00396 (2024).
[4] Li, Ziniu, et al. "ReMax: A simple, effective, and efficient reinforcement learning method for aligning large language models." arXiv preprint arXiv:2310.10505 (2023).
[5] Xie, Sang Michael, et al. "Data selection for language models via importance resampling." Advances in Neural Information Processing Systems 36 (2023): 34201-34227.
Other Strengths And Weaknesses: The results are presented in a well-organized format; however, certain parts lack clear explanation in the main text, making it difficult to fully assess the scientific value of the work. For instance:
- The details of creator regularization in Section 3.1 are not sufficiently elaborated.
- The creator optimization step in Section 3.3.2 is not clearly explained.
Additionally, the paper fails to discuss simple baselines (e.g., uniform sampling) in the formulation and does not provide a thorough analysis of why and under what conditions the proposed approaches are expected to outperform these baselines. Addressing these gaps would significantly strengthen the paper's scientific rigor and practical relevance.
Other Comments Or Suggestions: I find that Remark 1 does not provide any new insights beyond reiterating the definition.
Questions For Authors: From my understanding, the min-max formulation is advantageous in compute-limited settings, as it can quickly identify samples whose loss decreases rapidly. However, from a theoretical standpoint, it is clear that the optimal strategy for the creator is to uniformly select all prompts if the solver is provided with sufficient computational resources to optimize under each prompt. In such cases, uniform sampling is expected to performs well. I note that the experiments are conducted with relatively small sample sizes (e.g., 10k). I am curious about the performance advantage of the proposed approach over uniform sampling when the data size is significantly larger (e.g., 1M, as seen in commercial-level LLM products).
Additionally, I am unclear about the design of the prompt buffer:
- Is it incremental (e.g., 10k → 10k + 10k = 20k → 20k + 10k = 30k) in each iteration?
- Or is it fixed (e.g., 10k, 10k, 10k) in each iteration?
I feel confused about this setting. In the latter case, I suspect there may be a knowledge forgetting issue, as previous prompts are discarded in later stages of training.
Finally, I would like to see ablation studies on the hyperparameter choices for online EVA mentioned in Appendix A. The current details provided are too heuristic and lack justification, making it difficult to assess the robustness and generalizability of the approach.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the detailed and thoughtful review, which helps us a lot in shaping a better submission.
---
**Overview:** We would like to clarify several potential misunderstandings in the review:
1. **Problem 1 and Minimax Game:**
- (**i**) "*Problem 1 is a max-max collaborative game.*" --> Incorrect. Problem 1 is a joint optimization problem, whose solution may be approximated in either a min-max or max-max way.
- (**ii**) "*Creator regularization is not implemented.*" --> Incorrect. Creator regularization is explicitly achieved through regret maximization (as in Section 3.1 and 3.2).
- (**iii**) "*The minimax regret formulation does not actually solve the Problem 1.*" --> Inaccurate. The minimax regret formulation provides a worst-case optimal solution to Problem 1 (as in the discussions under Remark 1).
2. **Adaptive Prompt Sampling:**
- "*It is clear that the optimal strategy is to uniformly select all prompts.*" --> Potentially misleading. The assumption on unlimited resources can be impractical, as efficiency is a key bottleneck in training large models. Even w/ unlimited resources, uniform sampling is only optimal under strong assumptions (e.g., iid data, perfect RM, ...), which is impractical. We also cited a rich literature on online learning and network optimization showing uniform sampling can be sub-optimal and may lead to worse local minima (as in Table 5, 4.2.1 and 4.2.4).
---
**1. Rebuttal on Problem 1 and the Minimax Game:**
(**i**) Problem 1 captures the general objective on optimizing the language model for it to perform well with regard to some potentially unknown prompt distribution (we will add $\pi _{\mathsf{true}}(\cdot)$ inside the regularization to emphasize the discussion in 3.1.)
$$
\max _{\phi, \boldsymbol{\theta}} \mathcal{J}(\phi, \theta) := \mathbb{E} _{ x \sim \pi _\phi(\cdot)} [\mathbb{E} _{ y \sim \pi _{\boldsymbol{\theta}}(\cdot \mid x)}[r( x, y)]-\beta_1 \cdot \mathbb{D} _{\mathsf{KL}} [\pi _{\boldsymbol{\theta}}( y \mid x) \| \pi _{\mathsf {base}}( y \mid x) ] ] + \mathcal{R} (\pi _\phi(\cdot), \pi _{\mathsf{true}}(\cdot) ).
$$
This is a joint optimization problem, and itself is **not directly a collaborative (nor competitive) game**. It can be intractable and may be approximated differently in max-max or max-min way by alternating optimization (cf., GAN). With $c$ for creator, $s$ for solver, and $c _{t}$ for target, some choices may be:
- $\max _{s} \max _{c} f _c(s) - \mathsf{KL}(c, c _{t})$
- $\max _{s} \min _{c} f _c(s) + \mathsf{KL}(c, c _{t})$
We believe the current joint optimization formulation is more general, and can induce different practical algorithms.
(**ii**) / (**iii**) When the target true distribution is unknown, the problem is a classical *decision under ignorance* problem (with partial knowledge, it can become decision under uncertainty and strategies like posterior sampling can be applied) (Jiang, 2023; Peterson, 2017), where we need find a propoer decision rule. Here, we seek the *optimal solution under the worst-case*, and this is why we design a game, where **creator regularization** is explicitly achieved by **regret maximization**. The game provides an approximation to the worst-case-optimal solution of Problem 1.
It may be subjective to claim "regularization should not be a central element of game" and incorrect to derive "the game is fundamentally flawed". Here, the regularization is applied to the joint optimization problem, not the minimax regret game.
---
**2. Rebuttal on Online `eva` Settings:**
Buffer subset sizes are chosen as powers of 2 for hardware efficiency, as we run on a single machine with 8 GPUs.
We will add further results in anonymous.4open.science/r/eva-i.
---
**3. Rebuttal on Additional References:**
Thanks for the wonderful suggestions! We will add them in our revised paper. Some discussions:
- [2] has been cited and discussed in 4.2.1.
- [3] is on training reward models, which differs from our main theme. We have cited earlier Thompson Sampling work to reflect this area.
- We do not assume access to target as in [5].
---
**4. Rebuttal on Adaptive Sampling:**
Please see our overview in the beginning.
We use fixed design for the buffer, which is standard in iterative RLHF (e.g., SimPO). In practice, we found "knowledge forgetting" less of an issue (than overfitting), likely due to differences in supervised learning and RL.
---
**References on Fundamentals:**
Jiang, M. (2023). Learning Curricula in Open-Ended Worlds.
Peterson, M. (2017). An introduction to decision theory.
Orabona, F. (2023). A modern introduction to online learning.
---
It is a great pleasure to have the opportunity to learn from a different mind. We believe the rebuttal have sufficiently addressed the concerns. We sincerely hope the reviewer may reconsider the rating on `eva`, and we are happy to discuss further on any potential future works.
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarification!
**Response to Problem 1 and the Minimax Game:** Thank you for your clarification. However, I disagree with the rebuttal's argument. When you define the problem as a joint optimization problem, it is effectively a single maximization problem—albeit with optimization variables partitioned into two blocks that share the same objective. In contrast, a minimax optimization problem involves two distinct objectives.
To illustrate this distinction, consider the function $ f_c(s) = c \cdot s $, where $ c $ and $ s $ are scalars for simplicity.
- If we solve $ \max_{c} \max_{s} f_c(s) $, the solution is $ c = s = \infty $, yielding an optimal value of $ \infty $.
- However, if we solve $ \max_{c} \min_{s} f_c(s) $, the solution becomes $ c = s = 0 $, with an optimal value of $ 0 $.
These two formulations clearly lead to different outcomes, demonstrating that they are not equivalent.
**Response to the Regularization Concern:** I remain unclear about the regularization aspect. Since the main paper does not discuss it in sufficient detail, could you formulize the regularition term in Equation (2)? Also, it would better to explicitly connect the regularization in Equation (2) to the steps in Algorithm 1.
**Response to the Empirical Results:** I was unable to access the repository link provided, as it appears to be expired. Could you share an updated or alternative link?
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer QZbz –
We sincerely appreciate your valuable comments and support. Our revision is updated to [anonymous.4open.science/r/eva-i](https://anonymous.4open.science/r/eva-i/README.md).
---
**Summary.** We've updated Problem 1 as bilevel optimization (see [1-method](anonymous.4open.science/r/eva-i/revision-1-method.pdf)), where we carefully incorporated your feedback and cross-checked with several domain experts in algorithmic game theory. We've included new ablations along with all references you suggested (see [2-ablations](anonymous.4open.science/r/eva-i/revision-2-ablations.pdf)).
---
1. **Problem 1, The Game, and Regularization ([details](anonymous.4open.science/r/eva-i/revision-1-method.pdf))**
We have revised Problem 1 to the bilevel setting below:
$$
\phi ^* \in \underset{\phi}{\arg \max } \ R(\pi _\phi(\cdot) ; \pi _{\text {true}}(\cdot) ; \mathcal{D}, \theta ^* (\phi)) \\
\textit{ s.t.} \quad \theta ^* (\phi) \in \underset{\theta}{\arg \max} \ \mathbb{E} _{x \sim \pi _\phi(\cdot)}[\mathbb{E} _{y \sim \pi _{\theta}(\cdot | x)}[r(x, y)] - \beta \mathbb{D}[\pi _{\theta} \| \pi _{\text {base }}]].
$$
This naturally translates to a sequential game, where the inner is for the solver to optimize response alignment given the training prompt distribution, and the outter is for the creator to generate training prompts for the solver to perform well in the real world, knowing it will best respond.
Here, $\pi_{\text {true }}$ is the true target prompt distribution, and $R(\cdot)$ is the "regularization" for creator. If $\pi_{\text {true }}$ is known, we can define $R(\cdot)$ to be some f-divergence measure. However, $\pi_{\text {true }}$ is often unknown a priori; this is then a standard decision under ignorance problem and the minimax regret rule gives a worst-case optimal solution. The optimization can be written as:
$$
\phi ^* \in \arg \max _\phi \ \text{Regret}(\pi _\phi, \pi _\theta) \\
\textit{ s.t. } \quad \theta ^* (\phi) \in \arg \min _\theta \ \text{Regret}(\pi _\phi, \pi _\theta) .
$$
Note the inner loop optimization is equivalent. See the link above for details -- we believe the concerns on regularization should now be fully resolved. Please let us know if you'd like to see more explanations in the paper!
---
2. **Ablations ([details](anonymous.4open.science/r/eva-i/revision-2-ablations.pdf))**
| Setting | $n _{\text{new}} = 4$ | $n _{\text{new}} = 8$ |
|---------------------------|----------------------|----------------------|
| RLOO (1x) | 52.6 | 52.6 |
| RLOO-eva (1x) | 57.3 | **57.6** |
| RLOO-eva (2x) | 60.5 | **61.2** |
| RLOO-eva (3x) | **62.4** | **63.0** |
| Setting | ratio = 50% | ratio = 75% | ratio = 25% |
|---------------------------|--------------------------------------|--------------------------------------|--------------------------------------|
| RLOO (1x) | 52.6 | 52.6 | 52.6 |
| RLOO-eva (1x) | 57.3 | 57.0 | **57.5** |
| RLOO-eva (2x) | **60.5** | 59.9 | 59.2 |
| RLOO-eva (3x) | **62.4** | 62.0 | 61.3 |
We find increasing $n _{\text{new}}$ is helpful, and a balanced sampling is more robust. (Note online setting is an adaptation rather than the main contribution.)
---
**Minor clarification.** We hope to clarify the ambiguity in our earlier response regarding the max-max and min-max part. Our initial intention was to clarify that there may not be formal definitions for competitive v.s. collaborative games; "competitive" adversarial training aims to get a more robust policy, which can be interpreted as "collaborative" too. (Side note: there are formal definitions on cooperative v.s. non-cooperative games, differ by whether players can communicate, commit and have binding agreement.) And if the optimization can be decoupled as $x ^*=\arg \max _x\{f(x, y ^*(x)) + R(x)\}, \textit{s.t. \ } y ^* (x)=\arg \max _y f(x, y)$, then replacing the inner max to be a min form may not alter the nature of the game. We appreciate your feedback and note our initial additive setting may be ambiguous, and have revised Problem 1 as discussed above.
---
We believe the revision can sufficiently address previous concerns, and we sincerely hope you may re-consider the rating to `eva`. Please do let us know if you have any further comments, and we are more than happy to discuss with you. Thanks a lot! | Summary: The paper "Evolving Alignment via Asymmetric Self-Play" presents EVA, an innovative framework that addresses a critical limitation in current RLHF methods by replacing static prompt distributions with an adaptive, generative approach. By framing alignment as a game between a prompt creator and response solver, EVA enables continual self-improvement without requiring additional human data. The empirical results are impressive, boosting Gemma-2-9b-it's performance on Arena-Hard benchmarks significantly and allowing it to compete with much larger models. The method's seamless integration with both online and offline RLHF pipelines, coupled with its robust performance across different configurations, makes it a practical and valuable contribution to the field of language model alignment.
# after rebuttal
I will keep my score for accepting this paper.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: N/A
Experimental Designs Or Analyses: Yes
Supplementary Material: I checked the appendix. All parts.
Relation To Broader Scientific Literature: Relevant to the people who work on alignment, LLM and AI.
Essential References Not Discussed: None
Other Strengths And Weaknesses: Strengths
1. Open-ended alignment is an important topic, given that the social trends and human opinions are evolved. This paper tackles this problem through the Evolving Alignment via Asymmetric Self-Play, i.e., EVA.
2. Though co-evolving of the creator and solver is promising, the training is unstable. This works tackle this issues by self-play and the regret-based methods. The framework is elegant and powerful.
3. The evaluation is sufficient and comprehensive.
Weakness
1. I am a bit concerning about the creator. If both creator and solver evolves, is it possible that they evolve to a bad local optimum?
2. Still about the game between the creator and the solver. Does the algorithm can find the equilibrium? or could you provide some analysis about this? As if building the problem as a game, the desired solution may be Nash equilibrium. so we can evaluation whether the formulation of the game is reasonable through checking whether the equilibrium is corresponding to the best solution.
Other Comments Or Suggestions: N/A
Questions For Authors: Please see above sections.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer's in-depth evaluation and new insights on the game. Below we provide a high-level summary with detailed rebuttal, and will add relevant discussions in the reivisions.
---
**TL;DR**: Under reasonable assumptions, we can evolve to a *local* minimax optimum. (However, due to the nonconvex-nonconcave nature of the neural network optimization, global optimum is generally intractable to find.)
---
> **Q:** *If both creator and solver evolves, is it possible that they evolve to a bad local optimum? Can the algorithm find the equilibrium, or could you provide some analysis about this?*
**A:** To our knowledge, for the nonconvex-nonconcave minimax optimization problem, finding the global equilibrium is generally NP-hard. In sequential settings, there exist alternating gradient descent algorithms (Jin et al., 2020; Wang et al., 2019) that can achieve exact local convergence to *local* minimax. Thus yes, it is possible for them to be "bad" local minimax optimum that are far away from global minimax optimum.
In simultaneous settings, recent works (Dennis et al., 2020; Parker-Holder et al., 2022; Beukman et al., 2024) have shown that when a Nash equilibrium is reached, the solver follows a minimax regret policy and the solution exhibits robustness properties.
We believe the existing analysis helps justify the general game-theoretic formulation. In our empirical algorithm, we take the sequential setting, and use approximations for the regret maximization for the creator to avoid instability during training, as discussed in Section 3.2. We take a mixed sampling strategy to avoid the creator to drift too far away in each iteration (as discussed in the Appendix). Moving forward, we believe deriving a tractable algorithm with differentiable creators is a meaningful next step.
---
**References**
Jin, C., Netrapalli, P., & Jordan. M. (2020). What is Local Optimality in Nonconvex-Nonconcave Minimax Optimization.
Wang, Y., Zhang G., & Ba. J. (2019). On Solving Minimax Optimization Locally: A Follow-the-Ridge Approach.
Zhang, G. (2023). Deep Learning Dynamics: From Minimization to Games. Dissertation at University of Toronto.
Dennis, M., Jaques, N., Vinitsky, E., Bayen, A., Russell, S., Critch, A., & Levine, S. (2020). Emergent complexity and zero-shot transfer via unsupervised environment design.
Parker-Holder, J., Jiang, M., Dennis, M., Samvelyan, M., Foerster, J., Grefenstette, E., & Rocktäschel, T. (2022). Evolving curricula with regret-based environment design.
Beukman, M., Coward, S., Matthews, M., Fellows, M., Jiang, M., Dennis, M., & Foerster, J. (2024). Refining minimax regret for unsupervised environment design. | Summary: The paper proposes Evolving Alignment via Asymmetric Self-Play (Eva), which treats post-training as an infinite game involving two roles: the creator, responsible for generating new prompts, and the solver, which optimizes responses. Eva implements prompt evolution via a regret-based reward objective combined with a prioritized generation buffer, which works for both online and offline RL training.
Eva shows strong performance on Arena-Hard, the win rate of gemma-2-9b-it increased from 51.6% to 60.1% (DPO) and 52.6% to 62.4% (RLOO).
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: Yes.
The open-ended RLHF objective (Problem 1) achieves continuous self-training by jointly optimizing prompts and response policies (§3). However, the authors acknowledge estimation bias but do not quantify its impact or provide error bounds.
Experimental Designs Or Analyses: Yes.
Eva is evaluated across multiple RL algorithms:
- Online: RLOO, OAIF
- Offline: DPO, SPPO, etc. (§4)
Trained using UltraFeedback and tested on three benchmark evaluations, covering both online and offline RLHF scenarios. The choice of reward model (ARMORM-8B) is reasonable, supporting the validity of performance claims.
However, continuous training (§4.2.4) only reports monotonic gains, but does not analyze saturation points or changes in prompt quality after multiple iterations.
Supplementary Material: Yes
Relation To Broader Scientific Literature: None
Essential References Not Discussed: This paper has adequately discussed relevant prior work.
Other Strengths And Weaknesses: Strengths:
- The proposed Eva improves RL post-training performance without additional human prompts
- The empirical results are strong, gemma-2-9b-it’s win rate on Arena-Hard increased from 51.6% to 60.1% (DPO) and 52.6% to 62.4% (RLOO), surpassing Claude-3 Opus and approaching Gemini-1.5 Pro.
- Eva-generated curriculum prompts can outperform human prompts. Compared to a baseline using 6× more human prompts, Eva (1× prompts) performs better across multiple metrics.
Weaknesses:
- Table 4 shows that different approaches' performance are close.
- The initial prompt distribution (UltraFeedback) and the evolution process may be domain-specific (e.g., Figure 11 shows an imbalanced distribution in AlpacaEval), and cross-domain generalization was not fully tested.
- Table 16 shows that the evolved prompts mainly focus on technical tasks, therefore they may have limited coverage.
Other Comments Or Suggestions: See strengths and weaknesses.
Questions For Authors: See strengths and weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | null | null | null | null | null | null | |
SWAN: SGD with Normalization and Whitening Enables Stateless LLM Training | Accept (poster) | Summary: This paper combines two classic technique normalization and whitening to improve SGD and achieves better performance than Adam and other optimizers while saving memory cost by not saving optimizer state. Theoretical insights are also provided to explain the effect of each technique.
Claims And Evidence: The superior performance of SWAN is supported by the results in table 1. The authors also conduct ablation study to show that both modifications are necessary. I am not convinced by the theoretical analysis in section 4. See the theoretical part and weakness part for details.
Methods And Evaluation Criteria: The proposed method combines two classic technique that are widely used by previous work. It is interesting that both techniques together provide a much better performance than each of them alone. The selected benchmark and models of different sizes are reasonable.
Theoretical Claims: I checked the proof of theorem 2, proposition 2 and proposition 1. I feel the results are either trivial or their proofs lack detail for me to confirm correctness.
1. In the proof of proposition 2, can you explain the existence of C_G and C_L? If we just consider $H=diag(h_i)$ and $W_{whitened}^{(t)}=diag(w_i)$, then $Q=\frac{(\sum h_i w_i)^2}{(\sum h_i w_i^2)(\sum h_i)}$ can be arbitrarily small even with fixed $h_i$.
2. In the proof of proposition 1, can the subset $O_l$ have multiple elements? Is there any conclusion for the relationship between $H(V)\_{lk,lk’}$ and $H(V)_{l’k,l’k’}$ for both $l,l’ \in O_l$? Moreover, the convergence result of $H(V)$ after normalization can’t hold when the size of $O_l$ is larger than 1.
Experimental Designs Or Analyses: The experiment design is clear, with three different version of SWAN to make a fair comparison with previous methods. I have a question for figure 6. When you say you compute mean of GradNorm gradient, do you compute the mean of GradNorm(G) or the original G but obtained from checkpoints trained with GradNorm update rule? If it is the mean of GradNorm(G), then the results sound less interesting because the distribution indeed should change less when you have a hard constraint on the scale of random variables.
Supplementary Material: I checked appendix B, C, E, F, H, I, J.1.1.
Relation To Broader Scientific Literature: This paper follows previous work on designing a more efficient optimizer for training LLMs.
Essential References Not Discussed: The paper has included recent progress on designing new optimizers.
Other Strengths And Weaknesses: Strength
1. The paper proposed an efficient algorithm for GradWhitening. It is interesting that it converges very fast.
Weakness
1. Even though the authors try to give justification for the methods from the theoretical prospective, they only prove some results for each technique separately. It is unclear why combining them together can improve the results so much. Also each theoretical result will break when you add another operation. So I feel all the theoretical results are not so meaningful.
Other Comments Or Suggestions: 1. There is a typo in the definition of GradNorm (right part of line 146). You seem to miss a square term when defining $s$.
2. Another typo in theorem 1 when you define the standardized stochastic gradient.
3. It will be good to list all the assumptions rather than just saying inheriting from other paper.
Questions For Authors: 1. Have you considered switch the order of gradnorm and gradwhitening?
2. Have you tried normalizing along another axis when doing gradnorm? I am curious whether we can get a similar analysis (theorem 1) for normalizing along the other axis. If not, why is one better than the other?
3. Do you have any insight on why combining the two techniques can improve performance over each of them own?
4. Question 1 in theoretical part.
5. Question 2 in theoretical part.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **We thank the reviewer for their thorough and constructive feedback. We are grateful that the reviewer acknowledged the novelty, writing, empirical performance, and the overall significance of our work. Below, we address each point raised:**
---
**1. On Combining GradNorm and GradWhitening:**
- This is an excellent question. Indeed, we found there is a theoretical explanation behind this, which in fact gave rise a class of more general algorithms than SWAN. However, this insight is non-trivial and, due to space limitations, we plan to present a more comprehensive analysis in follow-up work. Below we briefly describe the high level idea:
- To start with, given a gradient matrix $G$ with shape $m$ by $n$, GradNorm and GradWhitening can be interpreted as projection operators under specific norm constraints. The extended version of SWAN can be thus viewed as an iterative process:
$$
G \rightarrow \text{GradNorm}(G) \rightarrow \text{GradWhitening}(\text{GradNorm}(G)) \rightarrow \text{GradNorm}(\text{GradWhitening}(\text{GradNorm}(G))) \rightarrow \ldots
$$
until a fixed point is reached. In other words, it corresponds to performing steepest descent under multiple (non-Euclidean) norm constraints, as opposed to the standard SGD update which is steepest descent under a single Euclidean norm constraint.
- In theory, one could choose an arbitrary collection of norms—provided they satisfy certain theoretical properties—to guide the update. When \(G\) is nearly a square matrix, even a single iteration (as implemented in SWAN) is nearly sufficient.
- In revision, we will present the main results as extended discussion.
---
**2. Order of GradNorm and GradWhitening (Reviewer Question Q1):**
From a fixed-point perspective above, the final result is invariant to the order as long as the iterative process converges to a multi-norm fixed point. Hence, the key is not the order but achieving convergence under the imposed norm constraints.
---
**3. Normalizing Along Different Axes (Reviewer Question Q2):**
We indeed experimented with normalizing along alternative axes. We found that normalizing along the row-wise direction (as in our current formulation) indeed yields superior performance. This is consistent with Theorem 1, which suggests that the primary source of gradient noise exhibits a row-wise scaling structure. We will include additional discussion and experimental evidence of this in the revised manuscript.
---
**4. Regarding the Proof of Proposition 1:**
The reviewer asks whether the subset $O_l$ can have multiple elements. In fact, at equilibrium, $O_l$ contains only a single index. We acknowledge that during early training—before convergence—multiple dominating indices may appear (as observed in numerical experiment in Figure 8), and similar structures are still present across the normalized diagonal blocks. We will clarify this point in the revision to make it explicit that our theoretical results assume convergence, at which point $O_l$ becomes a singleton.
---
**5. Regarding the Proof of Proposition 2:**
Our argument in Proposition 2 is not that the bound cannot be arbitrarily small, but rather that a small condition number does not necessarily lead to a diminished bound. This distinguishes our method from standard SGD, where such a decrease would be expected. We will revise our presentation to better articulate that our bound remains meaningful even when the gradient matrix is well-conditioned.
---
**6. Typos and Presentation of Assumptions:**
We thank the reviewer for pointing out typos and suggesting improvements in the presentation of our assumptions (e.g., in Theorem 1 and throughout the appendix). We will carefully revise the manuscript to correct these issues and to list all relevant assumptions explicitly.
---
**Once again, we appreciate the reviewer's insightful comments, which help us improve both the theoretical exposition and empirical presentation of our work.**
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarification of the theoretical proof. I will keep my score and look forward to the discussion of the fixed-point analysis. | Summary: This paper introduces a new stateless optimizer SWAN (SGD with Whitening And Normalization) with the same performance as the Adam optimizer for LLM training. The author analyses that SGD with GradNorm and GradWhitening applied in tandem can minimize the condition number, stabilize gradient distributions across transformer block and coverage more robust to the local curvature condition. This paper evaluates the SWAN on LLM pre-trained tasks, where SWAN outperformed other optimizers, and analyses the effect and efficiency of SWAN.
## update after rebuttal:
I have read the response by the authors and my concerns are mostly addressed. I keep my score.
Claims And Evidence: The claims are overall supported by theoretical or experimental evidence. The evidences are clear and convincing.
However, the motivation of applying normalization and whitening onto gradient is not very clear. Especially the best performance is usually obtained with around 2 iterations when using Newton’s iteration to obtain an approximated whitened/orthogonalized representation. What is the results when using more iterations? E.g. 5 iterations.
Methods And Evaluation Criteria: The methods and evaluation criteria are reasonable.
Theoretical Claims: I do not find remarkable errors in theoretical claims. The claims are supported by analysis and previous studies. The theoretical analysis in this paper is somewhat solid, providing both practical considerations and dynamic analysis. One main concern is that the assumption 1 is too strong, and I donot think assumption 1 hold in practice.
Experimental Designs Or Analyses: The experiments are overall comprehensive. The results show that SWAN outperformed other optimizers. The authors conduct ablation experiments to figure out the effect of GradNorm and GradWhitening and why and how they help optimization. The author also analyses memory efficiency and throughput of SWAN.
Supplementary Material: A brief look is taken at the supplementary material, which is comprehensive. The theoretical analysis is sufficient. The experiment settings are clear and detailed. This paper also provides more experiment results in the supplementary material.
However, assumptions in Theorem 1 of Tian et al. (2023), which should be described in Appendix C, are unclear. The order and segmentation of the appendix are confusing.
Relation To Broader Scientific Literature: The authors provide a new optimizer combining with normalization and whitening technology, which is effective and efficient in memory according to the experiments.
Essential References Not Discussed: I think this paper should give credit to the normalized gradient method in training DNNs, e.g., the paper [1]
[1]Block-normalized Gradient Method: an Empirical Study for Training Deep Nerual Network. preprint arXiv:1707.04822
Other Strengths And Weaknesses: The paper is well organized.
However, the colored column in Figure 2 is confusing. It would be better if more detailed explanation is marked in this figure.
Other Comments Or Suggestions: Typo problem: The punctuation after the equations are not unified.
Questions For Authors: 1. In Section2, the author hypothesize the additional history information in Adam is because the approach does not take into account the interactions and structures between different variables. Could the authors provide more evidence?
2. According to my understanding, the gradient matrix G refers to the stochastic gradient of a particular weight matrix. The elements among columns of G are all variables. Therefore, how to understand whitening the gradient data in GradWhitening? May be the term “orthogonal the gradient vector” better?
3. The experiments only involve C4 dataset. Could the advantage of SWAN preserve when it transfers to other tasks, for example vision or Multimodal tasks? (Even though this paper addresses for LLM training, it seems the method is general without considering what type data is)
Ethical Review Concerns: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **We thank the reviewer for their insightful feedback and for acknowledging many of the strengths of our work. We address the specific points raised below:**
**1, Iteration Count and Motivation for Whitening:**
Regarding the number of iterations in the Newton–Schulz procedure, our ablation experiments in Appendix B show that the performance does not peak at 2 iterations; rather, increasing the iteration count (e.g., to 5) continues to improve the approximation accuracy of the whitening operator. We chose 2 iterations in our main experiments as a trade-off between computational cost and performance, while the improvement with additional iterations is clearly demonstrated in our supplementary results.
For motivation for whitening, please refer to our response below.
**2, Evidence for the claim that Adam needs historical information because it ignores non-diagonal interaction:**
In Section 5.3 of our paper, we provide theoretical motivation for why Adam requires historical information—and why our method does not. Below, we rephrase the analysis from the natural gradient descent/Fisher information perspective. First, recall that the Fisher information matrix (FIM) is defined as $F = \mathbb{E}[gg^T]$ where $G$ is the gradient matrix and $g = \operatorname{vec}(G)$ is the flattened gradient (hence $F$ is $mn$ by $mn$.). Adam approximates the FIM using two key approximations:
- It uses exponential moving averages (EMAs) over time to estimate the expectation $\mathbb{E}[\cdot]$.
- It further adopts a diagonal approximation to $F$, thereby ignoring interactions between different parameters.
In contrast, our gradient whitening operation is based on a block (identical) diagonal structural assumption:
$$
\tilde{F} = I \otimes M,
$$
where $\otimes$ is the Kronecker product, $I$ is the $n$ by $n$ identity matrix and $M$ is a $m$ by $m$ matrix represents the FIM for each identical block diagonals. We can find the optimal $M$ by solving the optimization problem
$$ \min_{\tilde{F} = I \otimes M} ||\tilde{F} - F||^2_{Frobenius} $$
the optimal solution for each block is given by the unbiased estimate
$$ M \approxeq \frac{1}{n} \sum_{j=1}^n g_j g_j^T = GG^T $$
with the summation taken over the columns of $G$ (each column corresponds to a "diagonal block" in the $F$). Then, we can use the optimal $\tilde{F}$ to perform natural gradient descent where the update is given by:
$$\Delta \operatorname{vec}(W) = \tilde{F}^{-1/2} g$$
simplifying to matrix form, this is equivalent to
$$\Delta W \propto M^{-1/2} G$$, which is exactly (up to scaling) $\text{GradWhitening}(G) = (GG^T)^{-1/2}G$.
In plain language, we have shown that: because $\tilde{F}$ assumes each diagonal block is identical, one can estimate the full FIM using spatial information (i.e., average over blocks using $\frac{1}{n} \sum_{j=1}^n g_j g_j^T$ instead of over time) rather than relying on temporal averaging. This key insight allows the whitening operation to be derived as the solution to the FIM approximation problem under our block diagonal assumption. This analysis shows that our structural assumption permits bypassing the need for historical averaging.
**3, Clarification on Terminology for the Whitening Operation:**
We follow standard terminology from the literature (e.g., in Decorrelated Batch Normalization by Huang et al., 2018) where the operation$(GG^T)^{-1/2}G$
is commonly referred to as “whitening”. Another reason is that performing natural gradient descent using the inverse square root of FIM is also referred to as whitening in the literature. (Adam can also be seen as a diagonal approximation of the inverse square root of FIM). We will add further clarification in the revised manuscript to ensure this point is unambiguous.
**4, Generalization Beyond the LLMs:**
Although our experiments focus on LLM pretraining with the C4 dataset, we expect that with minimal changes, similar benefits could be achieved in vision or multimodal tasks. We plan to explore these extensions in future work.
**5, Additional Revisions:**
We appreciate the reviewer’s detailed suggestions regarding the clarity of the appendix (including the presentation of assumptions from Theorem 1 of Tian et al., 2023), the labeling in Figure 2, and consistency in punctuation after equations. We will carefully revise these sections to improve readability and ensure consistency throughout the manuscript.
**Once again, we thank the reviewer for their constructive comments, which will help us improve the quality and clarity of our paper in the final revision.** | Summary: The paper proposes SWAN, an optimizer which is completely stateless. They claim that SWAN outperforms existing optimizers while also using lesser memory (since it is stateless). They support these claim by doing LLM pretraining experiments.
SWAN is similar to another previously proposed optimizer Muon the following changes: 1. They remove momentum, 2. They add column normalization and 3. In one of the versions of SWAN they propose a simplified version of Newton-Schultz iterations which are needed for matrix whitening.
Claims And Evidence: I think the claim that SWAN outperforms Muon is problematic since their comparison is done at a small batch size (130K) while it is well know from prior works that the benefit of momentum emerges at larger batch sizes.
Methods And Evaluation Criteria: Yes, except for the small batch size.
Theoretical Claims: Yes
Experimental Designs Or Analyses: NA
Supplementary Material: No
Relation To Broader Scientific Literature: This paper adds to many new recent optimizers trying to improve on Adam in both speed and memory requirements.
Essential References Not Discussed: Most references have been discussed.
Other Strengths And Weaknesses: The authors do not describe how they set hyperparameters such as weight decay.
Other Comments Or Suggestions: I think the paper would benefit from focusing on one main benefit and showing evidence for it so that no doubt remains. For example, the aforementioned use of small batch sizes. I would also recommend that the authors release the codebase so that it is easy to see details such as weight decay (which is not specified in the paper).
Questions For Authors: 1. For Muon optimizers, was Adam used for first and last layer? (as in recommended in the referenced blogpost on muon).
2. Could the authors do experiments with 1 million batch size (with tune LRs). Any >100M sized model should suffice.
3. Could the authors share their codebase?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **We thank the reviewer for their detailed and constructive feedback. We address the main points raised below:**
1. **"The claim that SWAN outperforms Muon is problematic"**
- Our core contribution is to push the boundaries and demonstrate that: **it is possible to train LLMs matching the performance of Adam using a completely stateless optimizer**. Our baseline “Momentum-GradWhitening” is included mainly for completeness, and whether SWAN can consistently outperform other existing non-Adam optimizers (such as shampoo/Muon/SOAP) is orthogonal to our contribution.
- While the reviewer notes that it is questionable whether SWAN outperforms Muon, it is important to emphasize that Muon is not a stateless optimizer and is primarily designed for wall-clock time acceleration. Our work thus highlights a viable extreme stateless alternative for memory-constrained settings, and this distinction is central to our contribution. We will clarify this distinction in the revision.
- Finally, we would like to note that compared to Muon, the most relevant baseline should in fact be the concurrent work of Apollo [2] (which is a low-rank/rank-1 optimizer) that also claims to match the performance of Adam. In our experiments, we have demonstrated that SWAN consistently outperforms Apollo.
2. **Experiments with Larger Batch Sizes:**
We have conducted additional experiments training a 130M model with a 1M batch size over 2B tokens. For the Adam baseline, we performed a learning rate sweep and found the optimal parameters to be **lr = 0.00075** and **betas = (0.9, 0.95)**. The results are summarized in the table below:
| Training Steps | Adam Val Loss | SWAN (SWAN$^\dagger$) Val Loss |
|----------------|---------------|-------------------------------|
| 500 | 4.1796 | 4.0755 |
| 1.5K | 3.483 | 3.477 |
| 2.5K | 3.398 | 3.370 |
These results demonstrate that SWAN remains on par with or slightly better than Adam in terms of validation loss, even when training with a significantly larger batch size. This again confirms our core finding of "it is possible to train LLMs matching the performance of Adam using a completely stateless optimizer"
3. **"For Muon optimizers, was Adam used for first and last layer?"**
Yes, following standard practice (which predates both our work and Muon), Adam is used for the first and last layers in Muon. SWAN also employs Adam for these layers, aligning with practices used in other baselines such as Galore and Apollo.
4. **"Could the authors share their codebase?"**
We are fully committed to open-sourcing our codebase. We are actively working on a robust open-source release that will include detailed configurations as well as more new optimizers that were not included in our submissions
5. **Regarding hyperparameter settings of baselines:**
As stated in our paper, we strictly follow the experimental setup of Zhao et al. (2024a) [1]. Our experiment code is based on their implementation, and the configurations—including weight decay and other hyperparameters for all baselines—are directly taken using the config files that can be found in their open-source repository. We will further clarify these details in the revised version of the paper.
**Once again, we thank the reviewer for their insightful comments and suggestions, which will help us further improve the manuscript.**
[1] Zhao, Jiawei, et al. "Galore: Memory-efficient llm training by gradient low-rank projection." arXiv preprint arXiv:2403.03507 (2024).
[2] Zhu, Hanqing, et al. "Apollo: Sgd-like memory, adamw-level performance." arXiv preprint arXiv:2412.05270 (2024). | Summary: The paper proposed a "stateless" optimizer using gradient-normalization and gradient-whitening. The proposed method saves half memory over Adam and reaches 2x speedup. The idea is interesting and the writing is clear.
Claims And Evidence: yes
Methods And Evaluation Criteria: yes
Theoretical Claims: yes
Experimental Designs Or Analyses: yes
Supplementary Material: yes
Relation To Broader Scientific Literature: no
Essential References Not Discussed: see below
Other Strengths And Weaknesses: see below
Other Comments Or Suggestions: see blow
Questions For Authors: Q1: In Figure 1, 12B tokens are not sufficient for 1B models. Does the advantage of SWAN maintain if we train more tokens?
Q2: In Figure 1 (c), does SGD refers to SGD with momentum or without momentum? Further, to make a clearer comparison with existing works, please also include other memory-efficient methods (e.g., Adam-mini, Muon) in the bar plot of Figure 1 (c).
Q3: In Algorithm 1 SWAN, what if keep track of a 1st-order momentum M and apply the GradNorm() and GradWhitening() to M, instead of G? Does it bring extra acceleration?
Q4: [1] studied why SGD performs poorly on Transformers. Please discuss [1] as a motivation to re-design new stateless methods.
[1] Zhang, Y., Chen, C., Ding, T., Li, Z., Sun, R., & Luo, Z. (2024). Why transformers need adam: A hessian perspective. Advances in Neural Information Processing Systems, 37, 131786-131823.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **We thank the reviewer for their careful reading and for the positive comments regarding the clarity of our writing and the significance of our contribution. We address each question below:**
**Q1:**
Regarding the sufficiency of 12B tokens for 1B models, we have conducted additional experiments training a 1B model for 20B tokens. For the Adam baseline, we performed an extensive parameter sweep and found that the optimal parameters were **lr = 0.0007** and **betas = (0.9, 0.95)**. Using these settings, we compared the performance of Adam and SWAN (using the SWAN$^\ddagger$ setting with the same learning rate as Adam). The table below summarizes the training loss at various steps:
| Training Steps | Adam Loss | SWAN Loss |
|----------------|-----------|-----------|
| 20K | 3.054 | 2.989 |
| 40K | 2.880 | 2.810 |
| 60K | 2.792 | 2.726 |
| 80K | 2.728 | 2.661 |
| 100K | 2.681 | 2.609 |
| 120K | 2.659 | 2.574 |
| 150K | 2.651 | 2.556 |
At the end of training, SWAN achieved a roughly **1.8× speedup** compared to Adam while reaching lower validation loss values. These detailed comparisons confirm that the advantage of SWAN persists even when training with more tokens and under a longer training schedule. (In fact, in this run, the extra steps Adam must take to reach SWAN's performance keep growing.)
**Q2:**
In Figure 1 (c), the term “SGD” refers to SGD without momentum. We acknowledge that including comparisons with other memory-efficient methods (e.g., Adam-mini, Muon) in the bar plot would provide a clearer picture. In our revision, we will update the figure to include these baselines. Notably, our method achieves a near memory-optimal footprint—comparable to vanilla SGD (i.e., without momentum)—which is one of the key strengths of SWAN.
**Q3:**
Indeed, in some settings, this additional momentum before SWAN operations could provide further acceleration. However, there's a trade-off given hardware constraints. For example, consider the task of training a 350M model with 1024 context length under mixed precision on a node of 8XA100 40G GPUs. Including momentum forces the users to use larger gradient accumulation. In contrast, in this scenario, the stateless design of SWAN without momentum allows training with 2X larger batch sizes compared with using momentum. We will clarify these trade-offs in our revision.
**Q4:**
We appreciate the reviewer’s reference to Zhang et al. (2024) and acknowledge its relevance in motivating the need for stateless methods. While we have already cited this paper in our submission, we will expand our discussion in the revision to provide a deeper analysis of how the Hessian perspective discussed in [1] further motivates the design of new stateless optimizers like SWAN.
**Once again, we thank the reviewer for their constructive feedback and insightful questions, which will help us improve the manuscript in the final version.** | null | null | null | null | null | null |
Tracking Most Significant Shifts in Infinite-Armed Bandits | Accept (poster) | Summary: The paper studies the non-stationary infinite-armed bandit problem where arms' mean rewards are initially sampled from a $\beta$-regular reservoir distribution and evolve under adversarial/non-stationary dynamics. Prior works focused on stationary rewards or specific non-stationary cases requiring prior knowledge of non-stationarity parameters. This work addresses general adversarial non-stationarity without parameter knowledge, relaxing distributional assumptions of the reservoir. The model captures scenarios with massive action spaces where rewards change adaptively based on played arms.
Main Algorithmic Ideas: The proposed framework introduces two key innovations:
1. Blackbox Reduction: Converts finite-armed MAB algorithms into parameter-free algorithms for infinite arms via dynamic subsampling. Uses empirical regret tracking to detect non-stationarity and trigger restarts.
2. Randomized Elimination: Enhances adaptation through a restarting elimination algorithm that detects "significant shifts" — intervals where all subsampled arms become suboptimal due to non-stationarity.
Main Results: Achieve the first optimal and parameter-free regret bounds for infinite-armed settings which are validated by the experiments.
Claims And Evidence: The claims presented in this paper are all supported by corresponding theoretical proofs.
Methods And Evaluation Criteria: In my opinion, the methods of this work primarily extends the black-box technique from [Wei and Luo, 2021] to the infinite many-armed bandit setting, which is reasonable.
[1] Chen-Yu Wei and Haipeng Luo. Non-stationary reinforcement learning without prior knowledge: An optimal black-box approach. COLT 2021
Theoretical Claims: The theoretical claims presented in this work appears to be sound, and the results align with intuition, but I have not thoroughly check the proofs.
Experimental Designs Or Analyses: In the experiments, it seems that only the impact of different $\beta$ of initial distributions on performance was considered, while the effect of varying non-stationarity was not validated. For instance, the regret bounds presented include two scenarios: one for piece-wise stationary settings, resulting in a $\sqrt{LT}$ bound, and another for drifting, yielding a $V^{1/3}T^{2/3}$ bound. However, the experiments appear to validate only the latter case.
Supplementary Material: I have not thoroughly reviewed the appendix, but the proofs presented there appear to be solid and well-constructed.
Relation To Broader Scientific Literature: Developing optimal and parameter-free algorithms for non-stationary bandits is a highly meaningful problem. Previous results have primarily focused on finite-armed multi-armed bandits (MAB) and other contextual bandit/RL scenarios, as well as bandit convex optimization (BCO) settings. This work advances the research by extending it to the infinite many-armed MAB setting.
Essential References Not Discussed: I believe the discussion regarding [Wei and Luo, 2021] in this work is significantly lacking. The black-box algorithm proposed in this work largely draws inspiration from the MASTER operation in [Wei and Luo, 2021]. The paper only briefly mentions in Remark 2 a subtle difference in the definition of near-stationarity compared to [Wei and Luo, 2021], but it entirely omits any discussion on the methodological and conceptual distinctions. This could lead readers to mistakenly assume that the black-box framework is an original contribution of this work. However, in reality, from the near-stationary detection rules to the integration method of the base algorithm and the format of theoretical guarantee, it bears a high degree of similarity to [Wei and Luo, 2021].
First, I believe the authors should explicitly acknowledge the inspiration and learning from [Wei and Luo, 2021] in their black-box framework. Then, they should clarify the specific methodological differences that enable the transformation of a finite-arm algorithm into an infinite-arm one. As a parallel example, [Wang, 2022], which also extends [Wei and Luo, 2021], focuses on adapting it to non-stationary bandit optimization. In that work, the author clearly states the learning and referencing of the black-box framework, as well as the specific adaptations made to suit the bandit optimization setting.
[1] Chen-Yu Wei and Haipeng Luo. Non-stationary reinforcement learning without prior knowledge: An optimal black-box approach. COLT 2021.
[2] Yining Wang. On adaptivity in nonstationary stochastic optimization with bandit feedback.
Other Strengths And Weaknesses: The strength of this work lies in being the first to propose an optimal and parameter-free algorithm for the non-stationary infinite many-armed multi-armed bandit (MAB) setting. However, the weakness, in my opinion, is the insufficient discussion on how it distinguishes itself from prior work. For instance, both the black-box technique and the most significant arm identification method have been explored before. The paper fails to provide a clear explanation of the specific technical contributions of this work beyond these existing components.
Other Comments Or Suggestions: 1. The abstract does not meet the submission requirements. According to the ICML submission guidelines, abstracts must be written as a single paragraph. However, the abstract in this paper is divided into three paragraphs.
2. The paper proposes multiple algorithms and derives several regret bounds. I recommend that the authors include a table in the introduction section to summarize the algorithms and their corresponding regret bounds and compare them with the previous results. This would make it easier to understand the contributions of this work.
Questions For Authors: My question, as mentioned above, is specifically about understanding what the incremental part of this work is compared to existing research. For example, how exactly was the black-box technique adapted to make it suitable for the infinite-arm setting? Could the authors elaborate on the specific modifications or innovations introduced to achieve this adaptation? This would help clarify the unique technical contributions of this work.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for the insightful review, writing suggestions, and careful discussion about the similarities with the blackbox MASTER algorithm of Wei \& Luo, 2021.
> In my opinion, the methods of this work primarily extends the black-box technique from [Wei and Luo, 2021] to the infinite many-armed bandit setting, which is reasonable.
We respectfully disagree with this main point, which we believe may have lead to some confusion about the novelty of our contribution. Although our Algorithm 1 is also a blackbox, it has an entirely different algorithmic design, requirements on the base procedure, and regret analysis than MASTER. We do not agree it is a "modification or adaptation of MASTER" for the infinite-armed setting.
We elaborate on the differences below:
1. Our requirement on the base algorithm (Assumption 2) is stronger than the requirement for the base learner in MASTER (Assumption 1 of Wei & Luo, '21). Namely, we require the base algorithm to achieve instance-dependent $\log(T)/\Delta$ regret bounds in mildly non-stationary environments whereas MASTER requires its base learner to have a $\sqrt{KT}$ regret bound (see Lemma 3 of Wang, '22). This difference is quite crucial as a $\sqrt{KT}$ bound would have been insufficient for attaining the optimal regret bound in our setting (see discussion in Lines 301--308, Column 2). This difference is discussed in Remark 2 of our paper.
2. As you correctly observed, we rely on empirically tracking the cumulative regret of the algorithm to detect non-stationarity (which is a new idea), rather than tracking UCB indices of different base algorithms as MASTER does.
3. Our algorithm is conceptually different from MASTER, with **no randomized multi-scale scheduling** of different base algorithms to facilitate re-exploration of discounted arms. Instead, we run a single a base algorithm and track its cumulative regret to detect non-stationarity.
4. As a result, our regret analysis substantially differs from that of MASTER's, as there is no need to argue about the guarantees of the re-exploration schedule. Instead, much of the proof of Theorem 2 relies on proving a novel high-probability regret upper bound for subsampling in mildly corrupt environments (see Lines 222-233, Column 2).
Additionally, one of our main contributions is the elimination algorithm (Algorithm 2) which in fact achieves tighter regret bounds than our blackbox. Note our elimination algorithm is not a blackbox at all , and thus bears no similarity with MASTER. The only other works which study "significant arm identification" (Suk & Kpotufe, '22, '23) all again rely on randomized multi-scale scheduling of different base algorithms to target a worst-case $\sqrt{KT}$ regret bound. Our regret analysis is thus very different, with no need for analyzing random scheduling, and instead focused on novel variance-based confidence bounds for tracking empirical regret (see discussion in Sec. 5.3).
We will better elaborate on these differences with the blackbox MASTER in rewrites.
> In the experiments, it seems that only the impact of different $\beta$ of initial distributions on performance was considered, while the effect of varying non-stationarity was not validated. For instance, the regret bounds presented include two scenarios: one for piece-wise stationary settings, resulting in a $\sqrt{LT}$ bound, and another for drifting, yielding a $V^{1/3} T^{2/3}$ bound. However, the experiments appear to validate only the latter case.
[Here](https://imgur.com/a/infinite-armed-nonstationary-bandit-experiments-Kj6VjRZ), we provide two plots of additional synthetic experiments. The second plot covers the piecewise stationary setting where we use a rotting rate of $\rho_t=1$ at $L=\sqrt{T}$ different rounds with $\beta=1$. The prior art AUCBT-ASW (Kim et al., '24) has a regret bound of $\sqrt{LT} = T^{3/4}$, yet we see from the plot that our procedures (Blackbox and Elimination) have empirically better regret owing to the small number $\tilde{L} = O(1)$ of significant shifts. We've also included an additional benchmark, a sliding-window version of SSUCB (of Bayati et al., '20) using a window size of $\sqrt{T}$, for more expansive comparison.
> The abstract does not meet the submission requirements. According to the ICML submission guidelines, abstracts must be written as a single paragraph. However, the abstract in this paper is divided into three paragraphs.
Thank you for pointing this out, as well as other writing suggestions. We'll fix it in revision.
---
Rebuttal Comment 1.1:
Comment: Thanks for your detailed explanation, which helped clarify my misunderstanding. I now understand that, at the algorithmic level, the main difference from MASTER lies in the structure: MASTER maintains multiple base learners, each with different parameter settings, to estimate the unknown non-stationarity. In contrast, this work adopts an adaptive restart approach, maintaining only a single base learner and restarting it with adjusted parameters when necessary.
Given this, I wonder if the proposed black-box method is actually more practical and computationally efficient than MASTER. I would suggest adding more comparison and discussion with MASTER in the main paper. Currently, the focus seems to be mainly on the difference in assumptions (e.g., the mild non-stationarity assumption) and the type of changes detected (as discussed in your points 1 and 2). However, this may give readers the impression that the algorithm shares the same structure as MASTER—just as I initially thought. In my view, the most important distinction is actually the one you raised in point 3, regarding the practical advantages of the algorithm.
Additionally, I’m curious whether this type of technique could be extended to the non-stationary linear bandit setting, which also involves an infinite arm set.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response. We will add more discussion in the rewrite highlighting the differences of our blackbox with MASTER.
**MASTER vs our black-box method in terms of practicality/computational efficiency**: as said in our first response, our procedure only uses a single call to a base algorithm per epoch and thus has run-time $O(KT)$ and memory complexity $O(K)$ where $K$ is the maximum subsample size used. To contrast, MASTER (if instantiated with subsampling for our infinite-armed setting) would have a run-time of $O(KT)$ but a memory complexity of $O(KM)$ where $M$ is the maximum number of base algorithms scheduled at any one round, which is a random quantity and could take on a worst-case value of $O(\log(T))$.
**Non-Stationary Linear Bandits**: a key difference between the two settings is that, in non-stationary linear bandits, the rewards of all unplayed arms change as the linear parameter $\theta_t$ changes over time. Thus, non-stationary linear bandits typically requires different techniques which tracks estimates of $\theta_t$ for non-stationarity detection. Indeed, even MASTER instantiated with OFUL implicitly does this in comparing the OFUL UCB indices of different base algorithms. We conjecture that a similar analysis as presented in this submission would go through for non-stationarity linear bandits with a fixed top reward value (e.g., $\langle a_t^*, \theta_t\rangle = 1$ for the optimal arm $a_t^*$ at time $t$) using an optimal-design based elimination algorithm. We agree this is an interesting direction for future work, and will include discussion to this extent in revisions. | Summary: This paper considers an infinite-armed bandits problem in a non-stationary setup. The means of the arms are drawn from a reservoir distribution and are chosen by an adaptive adversary in later rounds. Two algorithms are proposed with theoretical guarantees and experiments are conducted to illustrate the empirical performance.
## update after rebuttal
While the authors have resolved some of the questions, I lean to maintain my score. While the rotting cases considered in the proof of the bounds are valid (thus worst case bound is proved), I believe more general cases that consider both the rotting and rising parts of the nonstationary process are of great interest. Additionally, as mentioned by Reviewer T9xW, more comparisons with the existing method MASTER are required.
Claims And Evidence: Yes, the authors provide proof outlines in sections 4 and 5 with details in the appendix.
However, in remark 1, the authors indicate Kim et al. 2022; 2024 allow for unplayed arms' rewards to change each round. As I checked these two papers, they consider **rested** rotting bandits, where only the means of the pulled arms will decrease and the means of unplayed arms remain unchanged.
Methods And Evaluation Criteria: Yes, the blackbox non-stationary algorithm transforms an algorithm for stationary bandits to adapt to the non-stationary bandits. The second algorithm tracks the significant changes in the means of the arms. These are common techniques in the non-stationary bandits literature.
Theoretical Claims: I went through the proofs are the theorems. They appear reasonable to me.
The thing that confuses me is the use of $V$, $V_R$ and $S_T$ in the regret bounds. These quantities measure the level of non-stationarity of the instances, but they also depend on the chosen arms and, consequently on the algorithm. Since different algorithms result in different values for these measurements, a direct comparison of regret bounds becomes nontrivial. For example, if an algorithm $\pi_1$ achieves a regret bound $R_{\pi_1}$ of order $O(V_{\pi_1}^{1/3})$ and another algorithm $\pi_2$ achieves $R_{\pi_2}=O(V_{\pi_2}^{1/2})$, it remains unclear whether $R_{\pi_1}<R_{\pi_2}$ or not. It is appreciated that the authors can provide more discussions on this.
In addition, for the regret lower bounds in section 3, the authors invoke the lower bounds in Kim et al. (2024), which are derived for the rotting infinite-armed bandit problem. Since this manuscript is considering the more general infinite-armed bandits problem, where the bandit instances may not be the rotting bandits case and the means of the arms are chosen by an adaptive adversary, the lower bounds for the rotting bandits cannot be directly applied here.
Experimental Designs Or Analyses: The paper provides experimental results on some synthetic datasets, which follow the rotting bandits setup.
The experiments compare the performance of the proposed algorithms and two other algorithms in the infinite-arms setup. As there is a large amount of work on (finite-arm) non-stationary bandits, it is expected to include those algorithms as well (maybe by sampling some arms at first, followed by applying these algorithms on the sampled arms.) Also, the more general bandits setups (e.g., fluctuation, rising bandits) are expected, beyond the rotting bandits setup.
Supplementary Material: I skimmed through the proofs the theorems in the appendix. They appear largely reasonable to me. But I did not check every detail.
Relation To Broader Scientific Literature: This paper considers regret minimization in the non-stationary bandits with infinite-many arms. It extends tracking the significant shifts to the infinite-armed bandits case or, extends the infinite-armed bandits to the more general non-stationary case.
Essential References Not Discussed: The references look appropriate to me.
Other Strengths And Weaknesses: **Strengths**
1. This paper introduces a parameter-free algorithm for the non-stationary infinite-armed bandits.
2. The blackbox framework extends the finite-armed bandit techniques to the non-stationary case.
**Weaknesses**:
1. The paper indicates the rising non-stationarity is benign and only exploit the case where the rewards are decaying (Corollary 5). Therefore, it remains unclear to me whether the bound can be further improved or not if take the rising part into consideration.
Other Comments Or Suggestions: None. Please refer to the above sections.
Questions For Authors: Please refer to the above sections.
Ethical Review Concerns: None
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for the detailed review, and astute questions.
**On Comparison of Regret Bounds and $V, V_R, S_T$ Depending on Agent**: You're correct that, in general, a direct comparison of bounds between algorithms is tricky since the adaptive adversary may vary its behavior depending on the algorithm. As a trivial example, a naive algorithm which samples a single arm and commits to it for $T$ rounds would incur linear regret $\Omega(T)$ in a stationary environment, yet could incur constant regret if an adversary helps the agent and increases the reward of the committed arm to make it optimal.
We note this phenomenon is inherent to all online learning settings involving an adaptive adversary, yet there is a substantial literature of works analyzing _dynamic regret bounds_ in terms of quantities such as $V$ or $L$ (which as you note depends on the adversary and algorithm) (e.g., see [1]-[4] below).
For some specific choices of adversary, a comparison of regret bounds is meaningful. For example, if the adversary is oblivious, then $V,V_R,S_T$ in our work can be upper bounded by worst-case quantities which do not depend on the agent's decisions, but on restless changes. Even for such an oblivious adversary, we note optimal and adaptive regret upper bounds were unknown before this work.
Whether other meaningful choices of adversary could yield direct comparison of regret bounds is an intriguing direction for future work.
We'll include this discussion in a revision.
**On Lower Bounds in Rotting vs. General Non-Stationary Setup**: You're correct that we consider a more general setup than the rotting setup of (Kim et al., '24). Since our setup includes the rotting problem as a sub-case, the worst-case lower bounds for our setup are at least as large as the worst-case lower bounds for the rotting sub-case. Thus, the lower bounds of Kim et al., '24 hold for our setting as well.
**On Broader Experimental Comparison with Other Algorithms and Setups:** We emphasize our main contribution is theoretical, rather than to propose a practical algorithm. The algorithms in our paper are of a theoretical nature that serves to resolve the main question of attaining the first optimal and adaptive regret bound for non-stationary infinite-armed bandits. The synthetic experimental results of Section 6 are mainly to support the message that our algorithm does indeed attain improved regret over the previous state-of-art. We agree with the reviewer that there's much further work to be done in designing more practical procedures, and admit the theoretical state-of-the-art is far from this.
We'd also like to note that the strategy of "sampling a fixed set of arms and then running a finite-armed non-stationary bandit algorithm'' would not have the correct theoretical regret upper bounds as the optimal sampling rate depends on the unknown non-stationarity and, furthermore, only worst-case rates of the form $\sqrt{LKT}$ for $K$ sampled arms are known for such procedures (e.g., Wei & Luo, '21), which would be inadequate for attaining the optimal regret rate (see discussion in Lines 301--308, Column 2). The difficulty of using this approach is also further discussed in Lines 112--140 (Column 2) of the paper.
> The paper indicates the rising non-stationarity is benign and only exploit the case where the rewards are decaying (Corollary 5). Therefore, it remains unclear to me whether the bound can be further improved or not if take the rising part into consideration.
We'd like to emphasize that we _allow_ for both rising and rotting non-stationarity. You are correct, however, that the bound of Corollary 5 is worst-case and only takes into account the challenge of rotting changes. It is an interesting future direction to study whether rising can improve the rate. Note, even in finite-armed bandits, improved regret bounds beyond the worst-case $\sqrt{LT} \land V^{1/3} T^{2/3}$ rates are still unknown in general.
> However, in remark 1, the authors indicate Kim et al. 2022; 2024 allow for unplayed arms' rewards to change each round. As I checked these two papers, they consider rested rotting bandits, where only the means of the pulled arms will decrease and the means of unplayed arms remain unchanged.
Thanks for catching this! You're correct that Kim et al. ('22, '24) in fact study the same rested setting as ours, and we'll revise this remark.
[1] Peter Auer, Nicolo Cesa-Bianchi, Yoav Freund, and Robert E Schapire. The nonstochastic multiarmed bandit problem. SIAM
journal on computing, 2002.
[2] Ali Jadbabaie, Alexander Rakhlin, Shahin Shahrampour, and Karthik Sridharan. Online Optimization : Competing with Dynamic
Comparators. AISTATS 2015.
[3] Tianbao Yang, Lijun Zhang, Rong Jin, and Jinfeng Yi. Tracking slowly moving clairvoyant: Optimal dynamic regret of online
learning with true and noisy gradient. ICML, 2016.
[4] Peng Zhao, Yu-Jie Zhang, Lijun Zhang, and Zhi-Hua Zhou. Dynamic regret of convex and smooth functions. NIPS ’20 | Summary: This manuscript deals with the problem of regret minimization in an infinite-arm bandit model with a reservoir distribution where shifts can occur. As such, this work is at the crossroad of infinite bandit and non-stationary bandits. Recently, Kim et al. (2024) have characterized the minimax cumulative regret as a function of the reservoir distribution of non-stationarity indices. However, their procedure required the knowledge of these non-stationarity indices (namely V_R and T_R). In this work, the authors introduce two rate-optimal procedures which only require the knowledge of the reservoir distribution. The procedures are based on a combination of three ideas: (i) using, as classical in this field, subsampling techniques to work with a finite arm problem (ii) tracking the empirical regrets/arm reward (which is estimable as the optimal reward is known) and (iii) suitably reinitializing the sampling process when the empirical regret is unusually large or all arms are unusually large.
## update after rebuttal
In their rebuttal, the authors have well addressed my questions. Nevertheless, this does not change significantly my opinion on this work.
Claims And Evidence: The manuscript is clearly written and all the proofs seem to be correct.
Methods And Evaluation Criteria: The proposed procedures are minimax adaptive with respective to the cumulative regret which is the standard way of evaluating the performances for a bandit problem.
Theoretical Claims: I checked all the proof sketches and all the general arguments in the proof but I did not check all computational details. There does not seem to be any typo.
Experimental Designs Or Analyses: The small-scale numerical experiments illustrate well the theoretical findings.
Supplementary Material: I checked the general arguments of the proof in the supplementary material.
Relation To Broader Scientific Literature: The main contribution of this work is adaptivity to non-stationarity in infinite-arm bandits. This improves over the recent work of Kim et al. [2024]. For that purpose, the authors mainly adapt a sucessive elimination algorithm from (Even-Dar et al, 2006). As an aside, the proofs built upon previous arguments of Suk for non-stationary multi-armed bandits.
Essential References Not Discussed: Not really, but the authors should emphasize that the "subsampling idea" for infinite-arms bandit is much older than Bayati et al[2020], see the earlier work of Berry. It is in fact standard in the infinite arm field.
Other Strengths And Weaknesses: S1) On the positive side, this is the first work which is adaptive to the changes in the infinite arm regime. Besides, the new algorithms are quite simple and natural.
W1) The counterpart of this strength is that the key ideas for the algorithms are not so original, although such subsampling techniques appear to be new in for non-stationary infinite-arms bandits.
W2) I feel that the topic is perhaps not that important so that achieving adaptivity (while the optimal rate were already known) makes a big impact.
Other Comments Or Suggestions: As such, hypotheses are lacking for Theorem 4 and Corollary 5. If the authors do not want to rely on Assumption 1, they should at least introduce a dedicated assumption.
In Section 1.2, the authors emphasize that this is the first work to develop dynamic regret bounds of the $V^{1/3}T^{2/3}\wedge \sqrt{LT}$, while such result are unknown in the finite-arm setting. I would simply like to point out that the infinite-arm setting has distinct feature which can make the problem easier than finite arm: the optimal reward is known and the distribution of the mean rewards is also known.
Questions For Authors: In Theorem 2, the authors provide a cumulative regret bound for Algorithm 2 which rely that use, as black-box, a bandit algorithm for $\alpha$-mildly correct bandits. How does the bound in Theorem 2 depends on $\alpha$? Alternatively, should the black-box satisfy the bound of Assumption 2 for any $\alpha$?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for the detailed review and positive comments.
> the authors should emphasize that the "subsampling idea" for infinite-arms bandit is much older than Bayati et al[2020], see the earlier work of Berry. It is in fact standard in the infinite arm field.
This is correct - subsampling was introduced earlier for this problem, and we'll adjust the writing to reflect this.
> W1) The counterpart of this strength is that the key ideas for the algorithms are not so original, although such subsampling techniques appear to be new in for non-stationary infinite-arms bandits.
Indeed, you're correct that subsampling was not used in the prior works on non-stationary infinite-armed bandits (Kim et al., '22, '24). We'd like to highlight two additional technical innovations required for our result:
1. Tracking cumulative regret using variance-based confidence bounds (this was crucial for attaining the optimal regret bound for $\beta < 1$ which was not achieved in previous works _even with parameter knowledge_). Note the previous works on adaptive non-stationary finite-armed bandits (e.g., Wei & Luo '21; Suk & Kpotufe '22) don't require such a refined analysis as their worst-case regret rates scale with the number of arms.
2. Doing a high-probability per-arm regret analysis for subsampling (i.e., Lines 222--233, Column 2) which is novel and departs from the previous regret analysis for subsampling (e.g., Bayati et al., '20).
> W2) I feel that the topic is perhaps not that important so that achieving adaptivity (while the optimal rate were already known) makes a big impact.
As we discussed in the previous answer, in fact the optimal regret upper bound was not known for $\beta < 1$ as there was a gap between upper and lower bounds even with knowledge of non-stationarity. Our work closes this gap.
> As such, hypotheses are lacking for Theorem 4 and Corollary 5. If the authors do not want to rely on Assumption 1, they should at least introduce a dedicated assumption.
Theorem 4 and Corollary 5 rely on Assumption 1, but without the upper bound on masses involving $\kappa_2$.
We'll make this more clear in a revision.
> In Section 1.2, the authors emphasize that this is the first work to develop dynamic regret bounds of the $V^{1/3} T^{2/3} \land \sqrt{LT}$, while such result are unknown in the finite-arm setting. I would simply like to point out that the infinite-arm setting has distinct feature which can make the problem easier than finite arm: the optimal reward is known and the distribution of the mean rewards is also known.
This is an apt point that the two settings are not directly comparable. We contend that the infinite-armed setting is not necessarily "easier" than the finite-armed counterpart. For instance, the optimal regret bounds in our setting must be free of the number of arms and so a crude $\sqrt{KT}$ bound cannot be plugged in for subsampling, instead necessitating tighter instance-dependent or variance-based bounds (as can be found in our work). In comparison, the $\sqrt{LKT}$ regret bound in $K$-armed non-stationary bandits does not require such refined analyses.
> In Theorem 2, the authors provide a cumulative regret bound for Algorithm 2 which rely that use, as black-box, a bandit algorithm for $\alpha$-mildly correct bandits. How does the bound in Theorem 2 depends on $\alpha$? Alternatively, should the black-box satisfy the bound of Assumption 2 for any $\alpha$?
As our goal is to attain the optimal regret bound of $\sqrt{LT} \land V^{1/3} T^{2/3}$, we in fact only require Assumption 2 to hold in each episode $[t_{\ell}, t_{\ell+1})$ for $\alpha \approx (t_{\ell+1} - t_{\ell})^{-\frac{1}{\beta+1}}$.
In terms of $\alpha$, we then show a regret bound of $\sum_{\ell=1}^{\hat{L}} (t_{\ell+1} - t_{\ell}) \cdot \alpha$ where $\hat{L}$ is the total number of episodes. Thus, for Theorem 2 to hold, it is **not** required for Assumption 2 to be true for any $\alpha$. Nevertheless, we show in Appendix C that classical algorithms such as UCB do in fact satisfy Assumption 2 for any $\alpha$. | null | null | null | null | null | null | null | null |
Cache Me If You Must: Adaptive Key-Value Quantization for Large Language Models | Accept (poster) | Summary: The paper proposes AQUA-KV, a KV cache compression method for autoregressive LLMs that explots inter and intra layer dependencies for improving cache quantization accuracy. It can be combined with additional compression techniques such as pruning. To this effect, they train predictors for predicting the value of a Key & Value pair using other cache entries, and apply quantization on the residual information that could not be predicted. This method only needs to store the information that cannot be recovered from other sources. They use the previous layer keys to predict the subsequent keys, and use both the previous layer values and the current layer keys to predict values in their implementation. In their practical implementation they use linear regression for all predictors, and any quantization method can be used for quantizing the residual information. For evaluation they measure perplexity and end to end performance. They measure perplexity on WikiText-2 and end to end performance accuracy on LongBench tasks.
Claims And Evidence: I am mildly concerned about conclusions drawn from Table 2 based on the end to end performance of their method and other methods on LongBench. I highlight this concern and phrase it as a question to the authors below.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: The paper does not have any theoretical claims.
Experimental Designs Or Analyses: Yes I checked the evaluation of the proposed method with various other cache quantization schemes for 2 bit quantization, when evaluated on perplexity and end to end accuracy on LongBench. I also checked the evaluation of the proposed method with using HIGGS as the quantization mechanism as compared to other quantization methods for 5 different LLMs on the perplexity and end to end performance tasks.
Supplementary Material: No.
Relation To Broader Scientific Literature: The main contribution of the paper is to the literature on KV cache compression and quantization techniques for memory efficient inference in LLMs.
Essential References Not Discussed: I am not aware of any important references not discussed.
Other Strengths And Weaknesses: I think the main strength of the proposed method is that it is modular and can be easily combined with any quantization method, and it can be applied on top of any token pruning method. This really increases the versatility of the procedure.
Other Comments Or Suggestions: None.
Questions For Authors: My main concern is that in Table 2, where the authors compare their method to other quantization approaches across various LLMs, their method shows a 0.3 improvement over HIGGS on the LongBench average score for Llama 3.X 8B and 70B in the 3-bit quantization setting but requires an additional 0.5 GB of cache memory. Similarly, for 2-bit quantization, it achieves a 0.4-0.6 improvement but demands 0.6 GB more memory on the LongBench average score for Llama 3.X 8B and 70B. This suggests that the experiment may not be fully controlled, making it unclear whether the observed gains in LongBench performance are significant or if any improvement would persist under a similar cache size constraint. Can the authors have a more controlled experiment, or explain why the existing experiment is controlled enough ?
The other question I have is whether the authors compared with KV cache compression techniques based on token selection and pruning, such as StreamingLLM, SnapKV since the authors have compared their method with H2O.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their feedback and suggestions. We appreciate that you highlight the modularity of AQUA-KV and address your concerns below.
> My main concern is that in Table 2, where the authors compare their method to other quantization approaches across various LLMs, their method shows a 0.3 improvement over HIGGS on the LongBench average score for Llama 3.X 8B and 70B in the 3-bit quantization setting but requires an additional 0.5 GB of cache memory. Similarly, for 2-bit quantization, it achieves a 0.4-0.6 improvement but demands 0.6 GB more memory on the LongBench average score for Llama 3.X 8B and 70B. This suggests that the experiment may not be fully controlled, making it unclear whether the observed gains in LongBench performance are significant or if any improvement would persist under a similar cache size constraint. Can the authors have a more controlled experiment, or explain why the existing experiment is controlled enough ?
We agree that our comparison can be improved by controlling strictly for the cache size. To address this, we conducted additional evaluations with raw HIGGS (our strongest baseline), where the quantizer was given more quantization clusters to increase the average bitwidth. To recall, a 2.x bit HIGGS splits the data into $d{=}2$-dimensional vectors and rounds each vector to one of $n{=}2^4{=}16$ clusters, with an additional 16-bit scale per $g{=}1024$ quantized values (the default configuration from [1]). This yields $\log_2 n / d + 16/g \approx 2.0156$ bits per parameter.
[1] https://arxiv.org/abs/2411.17525
To offset AQUA-KV predictors, we evaluate HIGGS with a lattice of $n{=}18$ clusters instead of 16, with an average bitwidth of $\approx 2.1$, resulting in a slightly larger overall cache size than for AQUA-KV.
## Table: WikiText-2 Perplexity (non-Instruct models, setup from Section 4.2)
| Method | Avg. Bits | Llama 3.2 3B | 3.1 8B |
|--------------------|----------|--------------|--------|
| - | 16 | 6.98 | 5.61 |
| AQUA-KV | 2.09 |**7.03** | **5.72** |
| HIGGS ($n=16$) | 2.02 | 7.47 | 5.89 |
| HIGGS ($n=18$) | 2.10 | 7.40 | 5.85 |
## Table: Average LongBench scores (Instruct models, setup from Section 4.2)
| Method | Avg. Bits | Llama 3.2 3B | 3.1 8B |
|--------------------|----------|--------------|--------|
| - | 16 | 44.61 | 48.13 |
| AQUA-KV | 2.09 | **44.30** | **47.77** |
| HIGGS ($n=16$) | 2.02 | 42.80 | 47.37 |
| HIGGS ($n=18$) | 2.10 | 43.31 | 47.16 |
As we can see, HIGGS with additional clusters does indeed perform better at the cost of a greater memory footprint, but AQUA-KV still outperforms it.
Please also note that the AQUA-KV memory overhead can be reduced by quantizing the predictor weights. We report this setup in Table 1 (see “GPTQ”, L359 for 4-bit quantization). We discuss this in L346-358 (right) and report more detailed results in Table 4 (Appendix).
We hope that these new results alleviate the reviewer’s concern and will add them to Section 4.1 and perform additional evaluations in Appendix.
> The other question I have is whether the authors compared with KV cache compression techniques based on token selection and pruning, such as StreamingLLM, SnapKV since the authors have compared their method with H2O.
In our work, we chose the token pruning proposed in the H$_2$O paper as it was a middle ground between StreamingLLM [1] and more recent methods such as SnapKV [2]. In principle, AQUA-KV can also be combined with other token pruning strategies, including the two you proposed. We agree that exploring these combinations can further strengthen our paper, but we need additional time to incorporate them into our codebase and ensure that our experiment setup uses these pruning strategies properly. We thank the reviewer for the suggestion and will add these comparisons in Section 4.3 in the final version of the paper.
[1] https://arxiv.org/abs/2309.17453 Xiao et al, 2023. Efficient Streaming Language Models with Attention Sinks
[2] https://arxiv.org/abs/2404.14469 Li et al, 2024, SnapKV: LLM Knows What You are Looking for Before Generation
---
Rebuttal Comment 1.1:
Comment: Thanks the detailed responses and clarifications, I will keep my current evaluation. | Summary: This work proposes AQUA-KV, a method of using inter-layer and intra-layer information to reduce the size of the KV cache with minimum overhead via a supplementary probe. AQUA-KV supplements a “backbone” quantization algorithm, where it functions to improve accuracy by using the information available from the previous layer (for both the keys and values) and within the same layer (for the values). By this method, only residual information unique to each token need be saved in the KV cache, allowing for better compression.
Claims And Evidence: The claim that AQUA-KV contributes more to accuracy than the baseline quantization is somewhat supported, especially for smaller models. However, there are too few evaluations of larger models (Llama 70B) to be certain that the proposed method adds value.
Methods And Evaluation Criteria: The limited number and types of evaluations conducted are the main weakness in this work. Although long sequence evaluation is also important, evaluations on shorter sequences should also be conducted. For example, using batched inference could also necessitate KV cache compression.
Moreover, in Table 2 of the paper, there is only a small difference between the performance of HIGGS on Llama 3.1 70B compared to AQUA-KV with HIGGS. Much more rigorous evaluation is required to see if AQUA-KV is effective for large models as well as smaller models. Language generation tasks by instruction-tuned models should be evaluated rigorously as these are closest to those used for actual LLM production. MMLU, GSM8K, HumanEval, and IFEval are frequently used.
I am willing to change my rating if these concerns are addressed.
Theoretical Claims: There were no theoretical claims in this work.
Experimental Designs Or Analyses: The experimental design of checking results for different backbone quantization algorithms and probes was sound. Also, the authors provided sufficient analysis of the effects of using multiple previous layers and previous tokens. The major issue was the paucity of evaluations.
Supplementary Material: None was provided.
Relation To Broader Scientific Literature: KV cache quantization is an emerging research topic with high practical value. With the increasing volume of LLM inference, reducing KV cache memory is a key consideration for LLM service providers. The authors propose a new method of improving outcomes from compressing the KV cache while retaining accuracy.
Essential References Not Discussed: None
Other Strengths And Weaknesses: The method is simple to understand and has relatively little overhead, both during the training and inference stages. Although the authors do not provide an optimized implementation, there does not appear to be any fundamental barrier to integrating their solution to frameworks such as vLLM.
Also, by demonstrating the effectiveness of their method on the LongBench, the authors show that their method is competitive with other KV cache quantization algorithms on long-sequence evaluations, even matching the performance of the unquantized BF16 baselines.
Other Comments Or Suggestions: There appears to be a missing word in paragraph 3 of Section 4. “We evaluate perplexity on base (non-instruct) models since they have better .”
The title displayed on the first page does not match the title displayed on the top of the subsequent pages.
Questions For Authors: Could the prediction for the keys be overlapped with the calculation of the previous FFN layer? This could reduce the overhead from the probe, although some KV cache quantization methods may not be compatible.
Could the benefit of applying the keys to reconstruct the values be investigated more thoroughly? This creates a dependency that prevents overlapping.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable feedback. Overall, the review appreciates the efficacy and simplicity of AQUA-KV, but suggests additional evaluations on extra benchmarks and asks follow-up questions about implementation details. We do our best to address these below.
# Additional evaluations
> In Table 2 of the paper, there is only a small difference between the performance of HIGGS on Llama 3.1 70B compared to AQUA-KV with HIGGS. Much more rigorous evaluation is required to see if AQUA-KV is effective for large models as well as smaller models.
First, we would like to note that the quantized 70B model already has very high quality, meaning that any absolute changes in score will be small.
Thus, we ask the reviewer to take into account the *relative* difference: for 2.x bits per value, the 70B model perplexity with 16-bit cache is 2.54. AQUA-KV @ 2-bit increases only PPL by +0.08, whereas raw HIGGS increases it by +0.23 (*almost 3x*). Likewise, the LongBench score drops by 0.13 for AQUA-KV and 0.74 for the nearest 2-bit baseline (>5x error increase).
To fully address the remark concerning larger-scale evaluations, we evaluate the 72B Qwen 2.5 model in the same setup as in Section 4.2:
|Model|Method|Avg.Bits|Wiki2PPL (non-Instruct)|GSM8K (Instruct)|
|-|-|-|-|-|
|72B|-|16|3.49|95.8|
|72B|AQUA-KV|2.09|**3.56**|**95.5**|
|72B|HIGGS|2.02|3.66|93.7|
We also report additional benchmarks for Llama 3.1 70B below and in our response to Reviewer AgDD. To further explore the scalability, we will run the remaining evaluations for 70B+ models in the final version of the paper.
> Language generation tasks by instruction-tuned models should be evaluated rigorously as these are closest to those used for actual LLM production. MMLU, GSM8K, HumanEval, and IFEval are frequently used.
We agree and evaluate GSM8K and IFEVAL across different models, including 70B. We prioritize 2.x bit evaluations due to time constraints and since this is where augmenting quantizers makes the most sense.
**GSM8K accuracy (%) for Instruct models in the same setup as Section 4.2**
|Method|Avg.Bits|Llama 3.2 3B|3.1 8B|3.1 70B|Qwen 2.5 3B| 7B|
|-|-|-|-|-|-|-|
|Uncompressed|16|76.5|85.1|94.7|61.2|76.6|
|AQUA-KV|2.09|**77.7**|**84.3**|**94.2**|**59.9**|**72.2**|
|HIGGS|2.02|70.3|79.2|**94.2**|35.8|59.7|
**IFEval accuracy (%) for Instruct models in the same setup as Section 4.2**
|Method|Avg.Bits|Llama 3.2 3B|3.1 8B|3.1 70B|Qwen 2.5 3B| 7B|
|-|-|-|-|-|-|-|
|Uncompressed|16|77.0|78.9|88.0|66.5|76.9|
|AQUA-KV|2.09|**75.1**|**79.9**|**88.1**|**66.2**|66.9|
|HIGGS|2.02|72.4|75.7|87.0|59.3|**68.6**|
The results show a similar trend to our LongBench evaluations, with AQUA-KV being substantially closer to the uncompressed baseline than raw HIGGS.
We will include these results in the final version of the paper and conduct additional experiments with the other two benchmarks (MMLU and HumanEval).
> Although long sequence evaluation is also important, evaluations on shorter sequences should also be conducted.
We hope that this concern can be alleviated with the results we reported above, since those benchmarks have shorter sequences (e.g. GSM8K question plus answer takes up, on average, **198 tokens** for Llama-3.1/3.2 tokenizer). We also report perplexity with shorter sequence length (see the first table in our response to Reviewer AgDD). We do note, however, that KV-cache compression is most effective in the long-context regime.
# Questions about overlapping AQUA-KV with model inference
> Could the prediction for the keys be overlapped with the calculation of the previous FFN layer?
Thank you for this suggestion. It is indeed possible to overlap FFN/MLP computation with computing the next layer AQUA-KV cache. Furthermore, **since the value predictor is linear, we can overlap half of its computation (from previous values) as well,** then add the other half after the fact.
> Could the benefit of applying the keys to reconstruct the values be investigated more thoroughly? This creates a dependency that prevents overlapping.
We have investigated this question via ablation analysis in **Table 4, section “predictor inputs”, on L809 (w/o $K_{rec}$ → V)**. To summarize, the key predictor does improve perplexity and LongBench scores, but only slightly (in the last digit). This component can indeed be removed in cases where one cares about better overlap. We will discuss this trade-off in Section 4.1.
> I am willing to change my rating if these concerns are addressed.
We hope that the additional evaluations and discussions we provided can alleviate the reviewer’s concerns. If you add any follow-up suggestions in the next discussion phase, we will address them in the final version of the paper. | Summary: The paper presents AQUA-KV, an approach that leverages dependencies between keys and values across adjacent attention blocks. The method employs linear predictors trained to estimate KV caches for a given block based on previously generated keys and values. Subsequently, the residuals are quantized to low bit-widths to achieve efficient KV cache compression.
## update after rebuttal
The paper presents a novel approach to reduce the KV cache quantization errors. After rebuttal, the authors clarify several parts in the paper and add more evaluations. Therefore, I recommend to accept this paper.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: N/A
Experimental Designs Or Analyses: Yes.
Supplementary Material: All parts.
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: None.
Other Strengths And Weaknesses: Strengths: The idea is simple and effective, and it can quantize the stored KV cache residuals to 2-bit.
Weaknesses:
1. The paper is somehow difficult to follow. See Questions below.
2. The experiments are only conducted on Wiki2PPL and some LongBench tasks. It lacks evaluations on more challenging tasks such as math and code tasks with CoT prompts.
Other Comments Or Suggestions: N/A
Questions For Authors: 1. What does "high-compression mechanisms for internal network states" mean in the abstract? Is it an observation?
2. In L59-60 in the paper, the author mentioned "vector quantization". However, I can not find any details.
3. L196-197 needs to be elaborated.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful feedback and valuable suggestions. We agree that improving clarity and expanding evaluations would strengthen the paper, and we address these points below:
> What does "high-compression mechanisms for internal network states" mean in the abstract? Is it an observation?
We meant to say that there exist methods that quantize KV states to low-bitwidth with small quality degradation. In the final version of the paper it could be rephrased to “In this work, we aim to improve Key & Value compression by exploiting two observations: … 2) the existence of high-compression methods for internal network states (e.g. attention Keys & Values).”
> In L59-60 in the paper, the author mentioned "vector quantization". However, I can not find any details.
By “vector quantization” we referred to HIGGS, which is a vector quantization method, as described in L161. We meant that using HIGGS with predictors provides the best quality compared to other methods for KV quantization, as discussed further in Table 2.
We will clarify that by adding “and the more advanced **vector quantization scheme** HIGGS” in L272.
> L196-197 needs to be elaborated.
These lines are indeed somewhat convoluted. We meant the following:
1. we noticed that using 1- and 2-bit quantizers (e.g. HIGGS) can ‘explain’ ~0.75 and ~0.89 of the variance respectively. In other words, they have a ~0.25 and ~0.11 relative quantization error.
2. If a probe can predict keys/values with the same error as a 1-bit quantizer, we found that we can use 1 less bit for quantization (e.g. 3-bit instead of 4-bit) after the residual with, on average, the same accuracy (e.g. see Table 2).
We will clarify this in the revised paper.
> The experiments are only conducted on Wiki2PPL and some LongBench tasks. It lacks evaluations on more challenging tasks such as math and code tasks with CoT prompts.
We agree that AQUA-KV can benefit from additional evaluations on larger models. As requested, we have conducted evaluations on GSM8k (CoT) and IFEval and report them to in the tables below.
**GSM8k accuracy (%) for Instruct models in the same setup as Section 4.2**
|Method|Avg.Bits|Llama 3.2 3B|3.1 8B|3.1 70B|Qwen 2.5 3B| 7B|
|-|-|-|-|-|-|-|
|Uncompressed|16|76.5|85.1|94.7|61.2|76.6|
|AQUA-KV|2.09|**77.7**|**84.3**|**94.2**|**59.9**|**72.2**|
|HIGGS|2.02|70.3|79.2|**94.2**|35.8|59.7|
**IFEval accuracy (%) for Instruct models in the same setup as Section 4.2**
|Method|Avg.Bits|Llama 3.2 3B|3.1 8B|3.1 70B|Qwen 2.5 3B| 7B|
|-|-|-|-|-|-|-|
|Uncompressed|16|77.0|78.9|88.0|66.5|76.9|
|AQUA-KV|2.09|**75.1**|**79.9**|**88.1**|**66.2**|66.9|
|HIGGS|2.02|72.4|75.7|87.0|59.3|**68.6**|
The results generally align with the trends observed in LongBench evaluations, with AQUA-KV being substantially closer to the uncompressed baseline than raw HIGGS. We will add these evaluations to the final version of the paper and conduct additional experiments for code benchmarks (e.g. HumanEval).
We appreciate the reviewer’s insightful comments, which have helped us improve the paper’s clarity and experimental scope. The additional evaluations confirm AQUA-KV’s consistent performance across domains, as noted in our response. We hope these revisions address all raised concerns. If you add any follow-up suggestions in the next discussion phase, we will address them in the final version of the paper.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for the response. I will keep my score. | Summary: The paper proposes a learned predictor based adaptive quantization for KV Cache compression. The idea is this -- transformer models are residual in nature, i.e. each subsequent layers add smaller and smaller deltas to the outputs -- this means that the intermediate representations are highly dependent. This turns out to be true for even KV Cache vectors. Paper leverages this understanding to train simple linear predictors for KV Cache and only quantize the residuals. since liner predictors explain significant variance the quantization on residuals are correspondingly accurate leading to further compression.
Claims And Evidence: Yes. The claims are well supported
Methods And Evaluation Criteria: Yes. The method is novel and well-suited application of micro-machine learning.
Theoretical Claims: No theoretical claims made
Experimental Designs Or Analyses: Experimental setup seems correct. Longbenchmark is a representative benchmark for KV Cache issue. The bit compressions used are reasonable. The baselines seem reasonable. ( Disclaimer: I am not very well versed with baselines in this field. For instance, are there any other adaptive methods that can be combined with quantization / pruning -- since no such method is discussed in the paper)
Supplementary Material: No.
Relation To Broader Scientific Literature: The idea of using local learned components in compression is a new idea, in my understanding. Post-training compression is generally fixed algorithms. Learning is used in compression in two ways -- intermittent training of entire model for recovering compression loss (QAT or LTH) or from-scratch training of compressed models (SynFLOW like pruning OR ROAST). The idea of using compact ML models for reducing dimension / variance of data while natural is new in compression literature.
Essential References Not Discussed: I am not well versed with related literature
Other Strengths And Weaknesses: [Strengths]
1. Good use of learned predictors
2. Good gains in memory footprints at same accuracy
3. The impact on efficiency in contained.
4. Empirical evaluation including ablations is useful.
[Weakness]
Nothing I can think of.
Other Comments Or Suggestions: None.
Questions For Authors: [Questions out of curiosity , do not impact the evaluation of the paper]
1. If you train your predictors post-ROPE for 8192 sequences, do you see any deterioration in inference on sequences beyond 8192 tokens?
2. How do you think about combining multiple compression techniques together -- low-rank, quantization, predictor-based, pruning,etc.
3. What in your opinion is the lower-bound (bits / token) on compression for reasonable performance. Like can we go below 2 bits?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful feedback and are glad that they appreciate our method's design and empirical results.
Below, we provide detailed answers to the questions posed in the review:
> If you train your predictors post-ROPE for 8192 sequences, do you see any deterioration in inference on sequences beyond 8192 tokens?
In short, we found that AQUA-KV is not sensitive to the training sequence length: specifically, we did not see a significant impact on longer sequences present in our LongBench evaluation (with some tasks in excess of 100k tokens [1]). We attribute this to the fact that AQUA-KV only trains simple linear predictors.
[1] https://github.com/THUDM/LongBench
However, we agree that it is important to analyze the effect of training sequence length. To that end, we evaluate AQUA-KV for the Llama 3.2 3B model with **varying training sequence length** and measure the impact on perplexity.
Table 1. WikiText-2 PPL for different training sequence lengths.
|Eval \ Train sequence length|128|1024|4096|8192|
|-|-|-|-|-|
|8192 |7.02|7.03|7.03|7.03|
|128 |17.87|17.87|17.87|17.87|
Note that in these evaluations, we control for the total number of tokens: every time we halve the sequence length, we also double the number of sequences in the calibration dataset. Otherwise, training with 64-token sequences would overfit because of insufficient training dataset size. For convenience, we report additional sequence lengths in https://anonymous.4open.science/r/rebuttal-pics-24A3/.
> How do you think about combining multiple compression techniques together -- low-rank, quantization, predictor-based, pruning,etc.
Our approach is indeed compatible with different cache compression techniques applied simultaneously. Specifically, **we combined AQUA-KV with H$_2$O pruning in Section 4.3 (detailed results provided in Appendix E)** and show that AQUA-KV can augment H$_2$O to further improve its size-to-accuracy trade-offs by combining three techniques together: pruning (H$_2$O), predictors (AQUA-KV) and quantization (HIGGS). We also explored low-rank *predictors* in Appendix C (Table 4), which can further reduce the memory footprint at the cost of degraded performance. In case you are interested in other specific combinations, we will consider them and add to the final version of the paper.
> What in your opinion is the lower-bound (bits / token) on compression for reasonable performance. Like can we go below 2 bits?
In our work, we focused on ~2 bits per value because this setup can achieve favorable quality-to-size trade-offs for practitioners. However, it is indeed interesting to evaluate AQUA-KV in the extreme sub 2-bit setup. To that end, we evaluate AQUA-KV on top of a 1-bit HIGGS variant with d=8 group dimension and n=256 clusters, with the rest of hyperparameters matching our setup from Section 4.1. This results in circa 1 bit per stored value. We report our results for Llama 3.2 3B and 3.1 8B below.
Table 2. WikiText-2 perplexity evaluation of AQUA-KV for 1 bit compression.
| Method | Avg. Bits | Llama 3.2 3B | 3.1 8B |
|--------------------|----------|--------------|--------|
| - | 16 | 6.98 | 5.61 |
| AQUA-KV ($d{=}8$, $n{=}256$) | 1.09 | **7.52** | **6.10** |
| HIGGS ($d{=}8$, $n{=}256$) | 1.02 | 16.18 | 19.83 |
Table 3. LongBench evaluation of AQUA-KV for 1 bit compression.
| Method | Avg. Bits | Llama 3.2 3B (Instruct) | 3.1 8B (Instruct) |
|--------------------|----------|--------------|--------|
| - | 16 | 44.61 | 48.13 |
| AQUA-KV ($d{=}8$, $n{=}256$) | 2.09 | **40.61** | **43.09** |
| HIGGS ($d=8$, $n{=}256$) | 2.02 | 23.02 | 24.94 |
To summarize, AQUA-KV with 1-bit HIGGS quantization can achieve substantially better quality than raw 1-bit quantization, but both methods show higher quality deterioration. Still, this is an interesting setup and a potential frontier for future research. We will add these and additional sub 2-bit evaluations to the final version of the paper.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. I will maintain my recommendation to accept the paper. | null | null | null | null | null | null |
VCT: Training Consistency Models with Variational Noise Coupling | Accept (poster) | Summary: The authors propose an improved consistency training (CT) method by introducing a variational noise coupling scheme. The core idea involves training a data-dependent noise emission model using an encoder architecture inspired by Variational Autoencoders (VAEs). The method is theoretically linked to the VAE framework by deriving a loss function analogous to the Evidence Lower Bound (ELBO). Empirical evaluations on multiple image datasets demonstrate the superior of the proposed method.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: Yes. The theoretical derivation in Appendix B (consistency lower bound) is basically correct but contains minor error. See Other Comments Or Suggestions for details.
Experimental Designs Or Analyses: Yes. Two issues should be addressed:
1. Gradient clipping: The proposed method uses gradient clipping (clipping value=200), while baselines do not. It is unclear whether this technique alone contributes to performance gains.
2. Training iterations for ImageNet: The authors increased baseline training iterations from 100k to 200k for fairness but did not report results at 100k. This obscures the true computational trade-offs. Including 100k results and discussing training costs would strengthen the comparison.
Supplementary Material: Yes,particularly the proof of consistency lower bound in appendix B.
Relation To Broader Scientific Literature: The work builds on consistency training [1] and leverages VAE-inspired coupling [2]. The idea of enhancing noise-data coupling aligns with [3], but differs by learning the coupling via an encoder instead of relying on the prediction of the consistency model itself during training. This connection is appropriately discussed.
[1] Song, Y., Dhariwal, P., Chen, M., and Sutskever, I. Consistency models. In International Conference on Machine Learning, 2023c. URL https://api.semanticscholar.org/CorpusID:257280191.
[2] Kingma, D. P. Auto-encoding variational bayes. International Conference on Learning Representations, 2013.
[3] Issenhuth, T., Santos, L. D., Franceschi, J.-Y., and Rakotomamonjy, A. Improving consistency models with generator-induced coupling. arXiv preprint arXiv:2406.09570, 2024.
Essential References Not Discussed: No essential references appear missing.
Other Strengths And Weaknesses: Strengths:
- The method is intuitively reasonable and grounded in established frameworks (VAEs, Flow Matching).
- Extensive experiments validate the approach across datasets.
Weaknesses:
- Theoretical justification: While the proposed loss is derived as an upper bound to the VAE loss, the tightness of this bound is not discussed. In VAEs, the ELBO is a tight bound that achieves equality when the variational posterior matches the true posterior (i.e., optimality). However, the paper does not clarify whether the proposed upper bound can similarly reach equality or under what conditions this would occur. This raises questions about the theoretical validity of the method compared to the VAE framework, as a loose upper bound might weaken the connection to the original ELBO’s guarantees.
- (Minor) Marginal gains on ImageNet: The improvement (5.13 to 4.93 in 1-step FID) is modest, raising questions about scalability to higher resolutions and more complex datasets.
Other Comments Or Suggestions: 1. Typos: In Appendix B, Equations (43, 48, 49) should use $\geq$ instead of $leq$.
Questions For Authors: 1. $\beta$ selection: Table 1 shows $\beta$ significantly impacts performance. How was $\beta$ chosen for each experiment? Can the authors provide guidelines for selecting β in practice?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful comments.
### **W1. On Eq. (8), Typos, and Its Tightness**
**R1.** The correct form of Eq. (8) should be:
$$||x_0 - f_\\theta(x_1,1)||^2 \leq
N\\sum_{i=0}^N||f_\\theta(\\psi_{t_{i+1}}(x_0; x_1),t_{i+1})- f_{\\theta^-}(\\psi_{t_i}(x_0; x_1),t_{i})||^2.$$
This follows from the CS inequality. We establish its connection to the continuous-time CM as $ N\\rightarrow\infty $ and analyze their optimality. Applying a Taylor expansion under the assumption that $ \\Delta t := t_{i+1} - t_i = \frac{1}{N} $ (which can be relaxed) and the above inequality, we obtain:
$$- \\log p_{\\theta}(x_0) \\leq \\frac{1}{2\\sigma^2}\\, E_{q_{\\phi}(x_1| x_0)} \Big\Vert x_0 - f_\\theta(x_1, 1)\Big\Vert^2 +\text{KL}(q_{\phi}(z | x_0)||p(z)) + C$$
$$
\leq \frac{1}{2\sigma^2} N\\sum_{i=0}^N \Big\Vert \frac{d}{dt} f_\theta(\psi_t, t) \Big\vert_{t=t_i} \Big\Vert^2 (1/N)^2 + \text{KL}(q_{\phi}(z | x_0) || p(z)) + C$$
$$\rightarrow \frac{1}{2\sigma^2} \int_0^1 \\mathbb{E} \left[ \Big\Vert \frac{d}{dt} f_\theta(\psi_t, t) \Big\Vert^2 \right] dt + \text{KL}(q_{\phi}(z | x_0) || p(z)) +C, \quad \text{as } N\rightarrow\infty,
$$
for a constant $C$.
Taking $ N\\rightarrow \\infty $ is reasonable even in practical scenarios, as CT and ECT propose designing the $N$ scheduler in a coarse-to-fine manner. Additionally, we observe that at the optimal values of $ \\theta $ and $ \\phi $, the reconstruction losses (i.e., the first term in the upper bounds) specifically recover the consistency function. Consequently, the bound becomes tight at the optimum. We will incorporate this discussion in the camera-ready version.
### **W2. On Gradient Clipping.**
**R2.** To evaluate the impact of gradient clipping, we applied this technique to our baseline model, iCT-VE, on CIFAR10. In the following table, we report results for both baseline and out VC method with and without gradient clipping (GC):
|Method|1-Step FID|2-Step FID|
|----------------------|-----------|-----------|
|iCT-VE (w/o GC) |3.61|2.79|
|iCT-VE (w/ GC)|3.52 |2.57|
|(Ours) iCT-VE-VC (w/o GC)|3.20|2.45|
|(Ours) iCT-VE-VC (w/ GC)|2.86|2.32|
From the table it is clear that the main improvement comes from the learned coupling. Interestingly, in this case using GC for the baseline resulted in a performance improvement, even though generally GC is not mentioned in other CM literature, and we originally added it to our method to prevent early training instabilities for the learned noise distribution. However, we agree that applying GC to the baselines ensures a fairer comparison. We will include this discussion and report the results for all baselines with GC.
### **W3. On ImageNet.**
**R3.** We also run the ImageNet experiments with 100k iterations. For ECM-LI-VC we increased $\beta$ to $\beta=100$ as $\beta=90$ diverged during training. Note that for runs with 100k iterations, we sometimes encountered divergences for our models with small $\beta$ and sometimes also for the baselines, while it seems to be solved when training for 200k iterations. The 1-step/2-step FID results are as follows:
| Method|1-Step FID| 2-Step FID|
|-------------|-----------|-----------|
|ECM-VE| 5.66| 3.78|
|ECM-LI| 5.63| 3.48|
|(Ours) ECM-VE-VC| 5.67| 3.67|
|(Ours) ECM-LI-VC| 6.34| 3.77|
For the settings with 100k iterations, our method performs similarly or slightly worse than the baseline. We believe this is due to the fact that the encoder for our model requires more iterations to learn the coupling, as demonstrated by the improved results for 200k iterations. We agree with the reviewer that including these results in the paper is important. We will add them, along with the corresponding results with OT coupling, to the camera-ready version.
### **W4. On $\beta$-Selection.**
**R4.** In our experiments, $\beta$ was tuned with a coarse grid search with different values with a gap $10$. For iCT on CIFAR10, we initially tested the values $\beta=[10, 20, 30, 40]$, of which $\beta=30$ gave the best performance, then tested also for $\beta=[25, 35]$ which did not improve the performance. Similarly, for ECM, we tested for $\beta=[10, 20, 30, 40]$, and after achieving the best performance for $\beta=10$, we tested $\beta=[5, 15]$ which did not improve the performance. The tuning was done with the VE kernel and the best values were used also for the LI kernel. We used the best values of $\beta$ also in FashinMNIST and FFHQ without additional tuning. For ImageNet we observed on early runs that $\beta$ needed to be much bigger, so we initially tuned for $\beta=[30, 60, 90, 120]$. After achieving the best results with the VE kernel for $\beta=90$, we further tuned for $\beta=[70, 80, 90, 100, 110]$ for both VE and LI kernels, and found $\beta=100$ to be the best for VE and $\beta=90$ for LI. We will add a discussion about this process in the camera ready version, as we agree with the reviewer that guidelines on how to choose $\beta$ are important for the practitioners.
---
Rebuttal Comment 1.1:
Comment: Thanks the authors' response. One of my concerns remains unsolved:
Regarding the tightness of the upper bound, if we take $N\to \infty$, the following equality should be proved according to the authors' response
$$E_{x_1 \sim q_\phi(x_1|x_0)}[\Vert x_0 - f_\theta(x_1, 1) \Vert^2] = \int_0^1 E[\Vert \frac{d}{dt} f_\theta(\psi_t. t) \Vert^2] dt.$$
So I have the following question for this equality
- What is the input of $\psi_t$?
- Why does this equality hold?
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the additional comments. In the derivation, $f_\theta(\psi_t,t)$ refers to the network evaluated at time $t$ with the corresponding input given by $\psi_t$. $\psi_t$ is defined in the paper as the flow function $\psi_t(x_0;x_1)$ conditioned on both $x_0$ and $x_1$. The bound is given by
$$\frac{1}{2\sigma^2} E_{q_{\phi}(x_1 \mid x_0)}||x_0 - f_{\theta}(x_1, 1)||^2 = \frac{1}{2\sigma^2} \int_0^1 E_{x_1 \mid x_0} \left[ \left|\left| \frac{d}{dt} f_\theta(\psi_t(x_0;x_1), t) \right|\right|^2 \right] dt,$$
Regarding the equality, in the optimal case, the reconstruction loss in $\frac{d}{dt} f_\theta$ is minimized, meaning that $f_\theta$ is a constant and equals the trajectory origin following the consistency function’s definition, which holds also for $x_0 = f_\theta(x_1, 1)$. | Summary: The paper aims to improve the training dynamics of Consistency Training (CT) by replacing the independent joint distribution between the source (data) and target (Gaussian noise) with a learned coupling. This coupling is parameterized as an encoder that maps each data point to a conditional noise distribution. Both the consistency model and the encoder are trained end-to-end using a mixture of consistency loss and a KL-divergence term between the outputs of the encoder and the prior. Experimental results show the benefits of this approach, with the learned coupling achieving improved sample quality, as measured by FID across multiple image datasets.
### Update after rebuttal
I thank the authors for their response. While I still have some reservations about the novelty of the core idea, the paper convincingly demonstrates the effectiveness of the proposed approach for consistency training through extensive experiments. The method also shows promise for integration with other consistency-based techniques. In light of this, I will raise my score.
Claims And Evidence: All claims made in the paper are properly explained and well supported, except for Eq. 8 which seems either incomplete or involves a typo.
According to the triangle inequality, the left hand side (unsquared) is less than or equal to the summation on the right hand without squared terms. The inequality written is incorrect, and requires either removing the squared terms or including a constant multiplier $N$ on the right hand side to fix it.
Methods And Evaluation Criteria: The proposed method makes sense. The idea of changing the independent coupling is a fairly popular one and has been shown time and time again to lead to improvements in standard diffusion- and flow-based generative modelling.
The evaluation criteria is also reasonable with a good selection of datasets chosen. However, as this is a few-step generative modelling work, the selection of baselines compared against is quite small and including more methods will be better. Furthermore, the reported FID numbers are, at times, significantly worse compared to the baselines which raises concerns about the fairness of the evaluation, and how well the baseline was finetuned compared to the new method.
Theoretical Claims: Only a single theoretical claim is made, that connects the proposed loss function with a VAE-style ELBO, which is sufficiently explain in Appendix B.
Experimental Designs Or Analyses: The experiments are sound, if a bit lacking in the amount of baselines.
Supplementary Material: I have read through all appendices.
Relation To Broader Scientific Literature: As mentioned earlier, the main idea of learnt coupling between the source and target distributions is quite popular and has been applied in the context of diffusion and flow-based models before [4]. These ideas have also been used in the context of distillation, for example, minibatch OT has been shown to improve consistency models in [1]. However, the reported FID numbers in the experiments are significantly worse than more recent SOTA few-step consisteny training models such as sCT [2] and SCT [3], as well as distillation-based approaches.
[1] Li, Yiheng, et al. "Immiscible diffusion: Accelerating diffusion training with noise assignment." arXiv preprint arXiv:2406.12303 (2024).
[2] Lu, Cheng, and Yang Song. "Simplifying, stabilizing and scaling continuous-time consistency models." arXiv preprint arXiv:2410.11081 (2024).
[3] Wang, Fu-Yun, Zhengyang Geng, and Hongsheng Li. "Stable Consistency Tuning: Understanding and Improving Consistency Models." arXiv preprint arXiv:2410.18958 (2024).
[4] Albergo, Michael S., et al. "Stochastic interpolants with data-dependent couplings." arXiv preprint arXiv:2310.03725 (2023).
Essential References Not Discussed: The paper references the majority of prior works needed to understand the full context and all essential references are included.
Other Strengths And Weaknesses: **Strengths**:
* The paper is very well written and nicely structured.
* The task of improving CMs is an important one, and the idea of changing the independent coupling is an interesting one.
* A good amount of ablations are performed that clarify the impact of each design decision.
**Weaknesses**:
The paper lacks a bit of novelty as the main idea has already been shown to work in flow matching scenarios. Other similar ideas have shown to transfer over and also work well on consistency models, so it isn't incredibly surprising that this idea also falls in the same category. The improvements achieved over the baselines are quite marginal, and the final reported FID scores are mostly much worse than those of more recent consistency-based approaches.
Other Comments Or Suggestions: No comments.
Questions For Authors: 1) Is it possible to expand upon the baselines you compare against to also include more recent consistency works, as well as other few-step generative models?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We are grateful to the reviewer for the careful and thoughtful review. Below we address some of the points raised by the reviewer, especially the correctness of Eq. (8), how our results compare with the baselines, and the novelty of the method.
### **W1. About Eq. (8).**
**R1.** We thank the reviewer for pointing out the mistake in Eq. (8). The correct form of Eq. (8) should be:
$$ ||x_0 - f_\\theta(x_1,1)||^2 \leq
N\\sum_{i=0}^N||f_\\theta(\\psi_{t_{i+1}}(x_0; x_1),t_{i+1})- f_{\\theta^-}(\\psi_{t_i}(x_0; x_1),t_{i})||^2.$$
This follows from the Cauchy–Schwarz inequality. We notice that this modification does not affect our discussion. In particular, we demonstrate the connection between our training objective (an upper bound of the negative log-likelihood as in VAE) and the continuous-time Consistency Model as $N\\rightarrow\\infty$ in **R1.** to the **Reviewer zkoD**. We will incorporate this correction and discussion in the camera-ready version.
### **W2. On Related Baselines and Their Comparisons.**
**R2.** We agree with the reviewer that having additional baselines can be beneficial for the paper. Upon camera-ready version, we plan to add a more comprehensive table, similarly to Table 1 from TCM [3], and including results from other relevant consistency model works such as SCT from [5], sCT from [4], and TCM.
### **W3. On FID comparison.**
**R3.** Regarding our FID results, on CIFAR-10 the aforementioned methods achieve 1-step/2-steps FID of 2.92/2.02 (SCT), 2.85/2.06 (sCT), and 2.46/2.05 (TCM), which are comparable to ours especially in the 2-steps regime. It is also important to consider that those improvements are orthogonal to ours and could in principle be combined. On Imagenet $64\times 64$, our results are generally worse than the ones reported by the aforementioned methods, but it is important to take into account that we used minimal settings in terms of network size and training budget due to computational constraints. Regarding the iCT baseline, there is no official open source implementation available, and not being able to reproduce the exact results seems to be a common problem found in other papers too (see for example [1,2]).
### **W4. On Novelty.**
**R4.** We agree that the concept of coupling and its advantages have been explored before in other works. However, in the context of Flow Matching, it has been shown that coupling generally results in straighter trajectories which result in improved generation with less function evaluation, which does not entail that coupling would necessarily result in improved performance in CMs. While, as pointed by the reviewer, minibatch OT-coupling was already explored in CMs, our learned coupling shows improved scalability compared to minibatch OT with respect to data dimensionality and batch size, as we can see from our experiments on $64\times64$ images.
### **References**
[1] Issenhuth, T., Santos, L. D., Franceschi, J.-Y., and Rakotomamonjy, A. Improving consistency models with generator-induced
coupling. arXiv preprint arXiv:2406.09570, 2024.
[2] Lee, J., Park, J., Yoon, J., and Lee, J. Stabilizing the training of consistency models with score guidance. In ICML 2024
Workshop on Structured Probabilistic Inference & Generative Modeling, 2024a.
[3] Lee, S., Xu, Y., Geffner, T., Fanti, G., Kreis, K., Vahdat, A., and Nie, W. Truncated consistency models. arXiv preprint
arXiv:2410.14895, 2024b.
[4] Lu, C. and Song, Y. Simplifying, stabilizing and scaling continuous-time consistency models. arXiv preprint
arXiv:2410.11081, 2024.
[5] Wang, F.-Y., Geng, Z., and Li, H. Stable consistency tuning: Understanding and improving consistency models. arXiv
preprint arXiv:2410.18958, 2024. | Summary: This paper proposes a method that combines VAE and Consistency Model, specifically using the encoder to predict the noise corresponding to the data. The resulting data-noise coupling is used to train the consistency model. The authors claim that this approach can reduce the variance in consistency model training. Experimental results in the paper validate the effectiveness of this method in improving generation performance.
## update after rebuttal
I thank the author's response and I will maintain my score as Weak Accept. However, I still think that the comparison with CD is necessary, otherwise the importance of this paper will be reduced.
Claims And Evidence: The paper attempts to demonstrate that the proposed method can reduce the training variance of CT, but the experimental results (Figure 3) do not provide sufficient direct support for this claim, and the improvement is not significant enough.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: This paper makes no new theoretical claims.
Experimental Designs Or Analyses: It is recommended to add additional baselines such as consistency distillation in the experiment.
Supplementary Material: I reviewed the experimental part in the supplementary materials.
Relation To Broader Scientific Literature: This paper proposes a method to reduce the variance of consistency model training, which may have an impact on the field of generative models and visual generation.
Essential References Not Discussed: This article discusses relatively comprehensive related work.
Other Strengths And Weaknesses: Strengths
+ The paper is clearly written and easy to understand. The proposed method is simple and straightforward, and looks promising.
+ The robustness experiments with beta enhance the method's effectiveness.
Weaknesses
- In Figure 3, for gradient variance and FID metrics, iCT-VC does not show a clear advantage over iCT, with some intervals performing worse than iCT.
- It is recommended to include a comparison with CD, as one of the disadvantages of CT compared to CD is training variance, and this method aims to alleviate that variance, making a comparison with CD meaningful.
Other Comments Or Suggestions: It is recommended to conduct experiments on continuous consistency model training because its theoretical upper limit is higher.
Questions For Authors: In Line 300, it looks like beta does not affect weighting. Can the authors explain this?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank reviewer for the comments and feedback. We address here some of the points and concerns raised.
### **W1. Figure 3 Does Not Have Enough Support.**
**R1.** We believe that the initial disadvantage of our method compared to the baseline is due to the fact that the encoder is still in early training, which results in an initial increase in variance. As training proceeds and the encoder’s quality improves, the variance reduces accordingly and the FID performance of our method surpasses the one of the baseline. While at first the improvement can seem only marginal, we would like to highlight that for models that already perform relatively well on a given dataset, small FID improvements can correspond to a significant image quality enhancement.
### **W2. On Comparison with CD.**
**R2.** We appreciate the reviewer’s suggestion that including additional baselines could strengthen the paper. Our focus is on training-from-scratch methods that do not assume access to a pre-trained teacher model. Thus, we primarily compare our approach with the CT counterpart. Nevertheless, we recognize the importance of a comprehensive evaluation and will include the FID results in a table for clarity in the camera-ready version.
### **W3. $\beta$-Weighting.**
**R3.** We thank the reviewer for spotting the mistake. The correct formula for $\lambda_{kl}$ is $\beta \lambda_{ct}(t_N)$ when using the adaptive loss, while simply $\beta$ when using the weighting like in EDM. We will update the paper accordingly in the camera ready version. | null | null | null | null | null | null | null | null |
LaRA: Benchmarking Retrieval-Augmented Generation and Long-Context LLMs – No Silver Bullet for LC or RAG Routing | Accept (poster) | Summary: The paper investigates whether Retrieval Augmented Generation (RAG) or Long Context (LC) generation is superior to answer questions with LLMs. For this, it first identifies shortcomings of evaluations in current studies and then introduces its own dataset (LaRA) that aims to mitigate these shortcomings. In an extensive evaluation with different models, the authors find that choosing RAG over LC or vice versa depends on various factors.
Claims And Evidence: Yes, the majority of the claims are backed by clear evidence. The authors did a very good job of outlining the shortcomings of existing studies. The experimental results are interpreted well. Also, the paper is well-written. Thus, it is simple to follow.
Methods And Evaluation Criteria: The paper uses a synthetic pipeline to generate QA pairs. The dataset itself is central to the paper. However, the authors remain vague when describing the exact generation process. For me, it is unclear by which criteria the in-context samples are selected and what the authors mean by refining the prompts when "the pass rate does not meet a predefined threshold" (lines 240-241). Also, while human evaluation is performed when evaluating the LLM's judgements of the answer quality, no such evaluation is performed in the actual question generation process. Thus, as a reader, I just need to "trust" that this went well. Could it be that the generation produces wrong answers, and the final LC or RAG answers are actually correct?
Also, it remains unclear what the synthetic question generation process means for the task complexities. Is it possible that GPT-4o creates more complex questions than GPT-4o can answer?
In my eyes, the one core contribution of the paper revolves around a good dataset. The authors describe very well that current literature has significant shortcomings but then do an uncareful job when defining their own data.
Theoretical Claims: This is not really a theory paper, so I have no comments.
Experimental Designs Or Analyses: In my eyes, the comparison between RAG and LC is a bit more tricky than presented. Essentially, what the investigations may mean is that you compare two different retrieval strategies. If the model stays the same, then the question becomes whether retrieval works better in LC (within the model) or in RAG (with an external retriever). This notion does not find space. While models are alternated, retrievers are not. There is literature proposing advanced retrieval strategies (https://arxiv.org/abs/2406.14162, https://arxiv.org/abs/2311.09476). This could significantly enrich the observations.
Besides, there could be more alignment of the findings with prior literature. For instance, Leng et al. (https://arxiv.org/pdf/2411.03538) find similar results that longer contexts are harder. Schimanski et al. (https://arxiv.org/abs/2402.08277) find that open-source models are generally lacking QA capabilities. Li et al. (https://arxiv.org/abs/2407.16833) find that more chunks increase the performance. For me, the paper would become more credible if these results were reflected on.
Supplementary Material: Yes, I have read through the entire appendix. I didn't check any data.
Relation To Broader Scientific Literature: As stated, relating to existing insights in the results and broader literature in information retrieval may be helpful.
Essential References Not Discussed: I have pasted some examples above. I think nothing entirely critical is missed out.
Other Strengths And Weaknesses: As stated above.
Other Comments Or Suggestions: While I like the extensive motivation of the paper, I feel like this takes too much space overall in the paper. The appendix is relatively short. More experiments and human data investigations would serve the soundness of the paper well.
Questions For Authors: As stated above:
- Could it be that the generation produces wrong answers, and the final LC or RAG answers are actually correct?
- Is it possible that GPT-4o creates more complex questions than GPT-4o can answer?
- What role does retrieval play in comparing LC vs. RAG?
- What is the stance of prior literature on your results?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We appreciate the reviewer's constructive feedback, which helps enhance the clarity of our paper! Let us answer your question below.
>**More details about the QA generation process and answer to question “Could it be that the generation produces wrong answers, and the final LC or RAG answers are actually correct?”**
Thanks for this advice. We conducted human inspection after the generation. After constructing prompts and example QAs, we sample 40 generated QAs for each context type and task, which are then manually verified for validity. We only stop modifying the prompts and in-context examples for a given context type and task when the accuracy reaches above 90%. Additionally, larger models compared to smaller ones, and stronger proprietary models compared to open-source models, consistently demonstrate higher accuracy, which further validates the overall correctness of the QAs. If the correctness of QAs were not guaranteed, we would likely observe random results. While this process cannot ensure that all QAs are 100% correct, it does guarantee a very high accuracy rate, making them effective for evaluation purposes. We will include these details in our revision.
>**The impact of retrieval strategies and answer to question “What role does retrieval play in comparing LC vs. RAG?”**
We agree that comparing RAG and LC is influenced by many factors, and a complete advanced RAG system can be more complex and have more modules, including query rewrite, different retrieval strategies, reranking, summarization, etc. However, we would like to highlight that this does not affect the value of our work from two perspectives. First, [5] conduct a systematic analysis of RAG implementations, and we adopt their advice to use a hybrid search strategy combining vector search and BM25. We choose gte-1.5 [4], a very strong embedding model released in late 2024, for search and reranking, ensuring that our RAG implementation already employs a strong strategy. On the other hand, our benchmark itself is one of the core contributions, providing effective support for future systematic exploration of the impact of different RAG modules in long-context QA.
Below are the experimental results of replacing gte-1.5 with bge-m3 [6] and adding Recomp [7] as a summarization module (also suggested in [5]). As can be seen, further complicating the RAG process does not bring significant additional gains, but instead makes retrieval a computationally-intensive process.
| |ours (32k)|bge-m3 (32k)|Recomp (32k)|ours (128k)|bge-m3 (128k)|Recomp (128k)|
|-|-|-|-|-|-|-|
|Qwen-2.5-7B|62.62|61.78|62.45|56.30|55.81|56.22|
|Qwen-2.5-72B|69.97|70.11|70.34|62.68|61.88|63.09|
>**Is it possible that GPT-4o creates more complex questions than GPT-4o can answer?**
Yes, this holds true under our generation process, as we employ a short-context generation method for creating QAs. At Line 244, we mention that generating QA pairs for long texts is inherently a long-context generation problem. To improve generation quality, we divide long contexts into several short segments and generated QA pairs based on these individual segments. This means that GPT-4o only needs to process a short context when generating QAs, but when answering, it needs to find answers from the complete context, which is far more challenging than generating them.
>**What is the stance of prior literature on your results?**
While [1] observes that longer contexts pose challenges for RAG, our work provides a more nuanced analysis, demonstrating that RAG performance is comparable to LC LLMs on tasks such as location and hallucination detection. [2] states that open-source models perform poorly on QA, so we discuss the results of open-source and proprietary models separately in our paper. [3] finds that more chunks can increase performance of RAG; we also conduct related experiments in Figure 2 to verify the impact of using more chunks. Detailed discussions of these connections to prior work will be added to the appendix in the final version.
### References
[1] Leng, Quinn, et al. "Long context rag performance of large language models." arXiv 2024.
[2] Schimanski, Tobias, et al. Towards faithful and robust llm specialists for evidence-based question-answering. arXiv 2024.
[3] Li, Zhuowan, et al. Retrieval augmented generation or long-context llms? a comprehensive study and hybrid approach. EMNLP 2024.
[4] Zhang, Xin, et al. mgte: Generalized long-context text representation and reranking models for multilingual text retrieval. ACL 2024.
[5] Wang, Xiaohua, et al. Searching for best practices in retrieval-augmented generation. EMNLP 2024.
[6] Chen, Jianlv, et al. Bge m3-embedding: Multi-lingual, multi-functionality, multi-granularity text embeddings through self-knowledge distillation. arXiv 2024.
[7] Fangyuan Xu, Weijia Shi, and Eunsol Choi. Recomp: Improving retrieval-augmented lms with compression and selective augmentation. ICLR 2024.
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarification. I think these comments make a lot of sense overall. The only point I'm still not 100% sure about is the data quality aspect. I'm sure the majority is right, but how many may be wrong is important for the benchmark in my eyes. However, I think the authors did a great job in responding. If the points mentioned in the response are included, I think, this is sound work. Thus, I recommend accepting instead of rejecting! I change the score to 3.
---
Reply to Comment 1.1.1:
Comment: We really appreciate your positive feedback and support! Over the past two days, authors of this work have conducted additional human evaluations on the dataset's accuracy. We sample 10 cases from each context type and task, totaling 120 cases (3 context types * 4 tasks * 10 cases each). We only sample cases from the 128k context because the 32k data is obtained using the same pipeline. Out of these, 117 are completely correct, indicating an error rate of approximately 2.5% for LaRA.
We believe that this lower error rate can be used for systematic evaluation, and the analysis based on experimental results is reliable.
Sincerely,
Authors of LaRA | Summary: The paper proposes LaRA, a benchmark that attempts to answer if RAG is still necessary compared with long-context LLMs.
The LaRA dataset is constructed from novels, academic papers, and financial statements with four tasks: locating specific information, comparing different parts of the text, reasoning about the content, and detecting hallucinations (questions that are not answerable from the provided context).
Results show that the choice between RAG and LC is not trivial, as it varies significantly depending on factors such as model size, query type, type of tasks, context length, context type, and number of retrieved chunks.
Notably, the proprietary LLMs tends to perform better in the long context configuration, except for the hallucination category.
Claims And Evidence: This is a dataset/benchmarking paper. The main claims are in experiment findings and are supported by clear and convincing evidence.
Methods And Evaluation Criteria: Overall, I liked how the dataset is structured for evaluation, featuring recent data across various domains and question types, with both 32k and 128k context configurations.
However, the evaluation setup is somewhat restrictive, as all data is designed to fit within the LLM's context window. This raises a key question: what happens when content exceeds this limit? Should RAG be used, or is the context ensemble approach described Section 2 more suitable? A clearer definition of the scope of long-context may be needed.
Theoretical Claims: Not Applicable.
Experimental Designs Or Analyses: My main concern is the RAG retrieval setup. The paper mentions "5 chunks per document" and a "hybrid search combining embedding similarity and BM25." However, it’s unclear how many chunks are retrieved per question—possibly 5. If the RAG setup retrieves only a small number of chunks without further experimentation, it might be unfair, as important context could be missed due to retrieval inaccuracies.
Supplementary Material: No.
Relation To Broader Scientific Literature: Existing literature provided conflicting evidence in terms of whether RAG is still necessary given LLMs. This paper attempts to answer this question by resolving limitations of existing approaches (Insufficent context lengths, Data Leakage, Inappropriate Contexts Handling). Findings in this paper show that the choice between RAG and LC is not trivial, as it varies significantly depending on factors such as model size, query type, type of tasks, context length, context type, and number of retrieved chunks.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: ### Strengths
1. The paper is well-written, easy to follow, and tackles an important question by comparing RAG with long-context LLMs.
2. I liked how the dataset is structured for evaluation, featuring recent data across various domains and question types, with both 32k and 128k context configurations.
3. The interpretation of results is well-explained, even though no definitive solution due to "no silver bullet."
### Weaknesses
1. As noted in the *Experimental Designs or Analyses* section, my main concern is the potential unfairness in the RAG setup, as it relies on a limited number of chunks without further experimentation. This could lead to retrieval failures, affecting performance.
2. The definition of "long context" needs further clarification, as the experiments are capped at 128k context. What happens beyond this limit, and does RAG remain relevant in such scenarios?
Other Comments Or Suggestions: 1. I think the framing of the "Hallucination detection" task could be improved. Typically, "hallucination" refers to model outputs, whereas in this paper, it is used to describe a question that cannot be answered based on the given context. Referring to a question as a "hallucination" may be somewhat misleading.
2. I think more analysis on the retrieval size could strength the claims of this paper.
Questions For Authors: 1. Please elaborate the RAG retrieval setup.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We appreciate the reviewer's constructive feedback, and we are very happy that the reviewer liked how our dataset is structured! Let us answer your question below.
>**RAG retrieval setup: the potential unfairness in the RAG setup, as it relies on a limited number of chunks without further experimentation**
Yes, in the main results, each query retrieves 5 chunks. The specific method is to search for 5 chunks based on similarity using the GTE embedding model and search for 5 chunks using BM25, then take their intersection. If fewer than 5 chunks are found, the remaining chunks are equally selected from the two retrieval methods. We adopt this hybrid search strategy with reference to [1], which is a strong method.
In Figure 2 of the draft, we provide experimental results with different chunk numbers and sizes, comparing the impact of the amount of retrieved information on smaller models (Qwen-2.5-7B) and larger models (Qwen-2.5-72B). We gradually increase the number of chunks from 5 to 30. Since 30 retrieved chunks means the total input length has already reached 30 * 600 = 18,000, which is on the same order of magnitude as long-context, further increasing the number of chunks would cause RAG to lose its significant efficiency advantage. We set 5 as the default number of chunks and 600 as the chunk size primarily by referencing previous works [1, 2].
We further explore the impact of increasing the number of chunks to 100 on these two models with 128k context, and the results are provided below. We find that too many chunks can cause RAG's performance to decrease rather than increase.
|Model\#Chunk|5|10|15|20|25|30|40|50|80|100|LC|
|-|-|-|-|-|-|-|-|-|-|-|-|
|Qwen-2.5-7B|56.30|60.15|62.80|61.65|59.76|59.62|59.22|58.47|57.39|55.61|48.91|
|Qwen-2.5-72B|62.68|63.48|64.05|64.06|65.08|67.78|68.18|67.57|66.89|65.87|65.11|
>**The definition of "long context" needs further clarification, as the experiments are capped at 128k context. What happens beyond this limit, and does RAG remain relevant in such scenarios?**
**This is one of the key designs of our benchmark.** Under extremely long context, RAG has an overly obvious advantage. An extreme example is knowledge base-based QA, where LC LLM cannot process the entire knowledge base and struggles to rely on external knowledge to answer correctly.
As emphasized in the paper, when collecting contexts, we choose texts that are as close as possible to the LLM's limit without exceeding it. This allows for a fair comparison of the actual capabilities between RAG and LC LLMs. If the context exceeds the LLM's input limit, we need to employ tricks like truncation, which could result in the answer to a question not being present in the LLM's input, thus failing to reflect the LLM's true capabilities. In Table 1, we conduct relevant experiments to verify the impact of such excessively long contexts. In lines 127-149, we specifically analyze why these overly long contexts cannot be appropriately used to compare RAG and LC LLMs in long-context QA scenario.
We will clarify this point more clearly in the final version.
>**The framing of the "Hallucination detection" task**
Thanks for this suggestion. We will rename "hallucination detection" to the more appropriate term "hallucination occurrence" to express whether RAG and LC LLM produce hallucinations.
### References
[1] Wang, Xiaohua, et al. Searching for best practices in retrieval-augmented generation. EMNLP 2024.
[2] Li, Zhuowan, et al. Retrieval augmented generation or long-context llms? a comprehensive study and hybrid approach. EMNLP 2024.
---
Rebuttal Comment 1.1:
Comment: I'd like to thank the authors for the response and additional empirical results. While I liked the contributions of this paper, I remain concerned about the 128k context window cap, which is a significant limitation of the work. Although some might argue that evaluating tasks larger than the model's context window is beyond the scope of this work, I believe it represents a crucial and unavoidable research problem and should not be overlooked especially for RAG vs. long context comparisons.
---
Reply to Comment 1.1.1:
Comment: We'd like to thank the reviewer's feedback on our response and appreciation for LaRA's contribution. We'd like to further clarify why **we intentionally excluded contexts exceeding 128k** and chose contexts close to this limit.
>### **Using context lengths exceeding 128k results in an unfair or excessively tricky comparison.**
Many open-source and proprietary LLMs have a context window limit of 128k, including the ones we test in this work (Llama-3.1-8B, Llama-3.2-3B, Llama-3.3-70B, Qwen-2.5-7B, Qwen-2.5-72B, GPT-4o, etc.). Testing long-context performance on inputs exceeding 128k would necessitate truncation, making it difficult to fairly compare RAG and LC. We wouldn't know if LC's inability to answer is due to lacking long-context processing ability or information loss from truncation. In Section 2, we empirically verify this point. We find that with a 200k context length, which far exceeds the input limit of some LLMs, truncation can lead to the answer being absent from the LLM's input. This results in a low LC-LLM score, not because the model cannot answer long-text queries, but because the answer is not present in the input. Furthermore, even if the context exceeds 128k, the LLM ultimately processes only 128k due to its limit, making the comparison unfair.
Therefore, **choosing 128k context is not a limitation, but a deliberate design for fair comparison between LC-LLM and RAG on current mainstream models**.
>### **If needed in the future, extending LaRA to longer context lengths will be very easy**
Testing contexts beyond 128k is easy; we could simply include them in LaRA. However, this contradicts our goal as it forces 128k-limited LLMs to handle longer contexts using special treatments (truncation or other tricks), which goes beyond LC-LLM's inherent abilities and makes the comparison too tricky.
In addition to the existing context and QA pairs, we provide comprehensive details and procedures for generating new data. If the context limit of mainstream models increases in the future, allowing them to accept longer inputs, our method can be used to generate new testing data. We can also easily extend our benchmark to larger context lengths. **However, our experimental designs and analysis are reasonable and effective for the tested LLMs with a 128k context limit.**
We appreciate the reviewer's engagement in this crucial consideration of LaRA's design and are happy to discuss further if anything is unclear. | Summary: This paper introduces a new benchmark called LaRA, which is designed to systematically compare Retrieval-Augmented Generation (RAG) and long-context (LC) large language models (LLMs). It evaluates 11 models on 2,326 test cases across four key tasks (information retrieval, reasoning, comparison, and hallucination detection) using naturally occurring long texts. The study finds that neither approach is universally superior. The choice depends on model size, task type, and context length. RAG benefits weaker models and excels in hallucination detection, while LC performs better in reasoning and structured text processing, particularly for stronger models with extensive context capabilities. The findings offer practical guidelines for optimizing LLM applications through strategic use of RAG and LC.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: N/A
Experimental Designs Or Analyses: Yes
Supplementary Material: Yes
Relation To Broader Scientific Literature: This paper contributes by providing a new benchmark for RAG vs. LC comparison with four carefully designed key tasks. It also provide a comprehensive analysis of different aspects.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Strengths:
1. The authors provide a novel benchmark featuring four types of questions.
2. Comprehensive experiments are performed on the benchmark with multiple LLMs.
3. Practical insights are provided based on the experimental results.
Weaknesses:
1. Some implementation details are missing from the main paper.
2. Some parts (with 2024 knowledge) of the benchmark may not be less useful for future LLMs.
3. More analyses could be done on the experiments (see Questions).
Other Comments Or Suggestions: 1. The year information should be included when adding in-text citations.
Questions For Authors: 1. How many chunks were used in the setting of RAG experiments?
2. Is hallucination detection a valid task? How do we know the performance change is not purely because of the change in context length?
3. In Line 367, are the results for 128k and 32k contexts directly comparable? Do they share the same input?
4. Are there task-specific scores for the results in Figure 2? Do different tasks share the same trend?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We would like to Thank you for your constructive review, as well as your positive feedback! Let us answer your question below.
>**Some implementation details are missing from the main paper---How many chunks were used in the setting of RAG experiments?**
In Section 4, paragraph “Implementation of RAG (line 269)”, we provide the details that "Our evaluation employs a standardized configuration with a chunk size of 600 tokens, 5 chunks per document, and an overlap of 100 tokens between chunks." In the main results, we use a chunk size of 600 and 5 chunks. Additionally, we further explore the impact of more chunks in Figure 2.
>**Some parts (with 2024 knowledge) of the benchmark may not be less useful for future LLMs.**
For future LLMs, all existing benchmarks may become outdated due to information leakage when they are used to train newer LLMs. However, on one hand, we have already used nearly the most recent corpus in our data selection, with almost all contexts being released in the second half of 2024, except for novels. On the other hand, in addition to providing datasets for evaluation, we also provide concrete pipeline for creating new data, which can be used to generate new datasets with more updated contexts.
>**Is hallucination detection a valid task? How do we know the performance change is not purely because of the change in context length?**
Hallucination remains a significant challenge for LLMs. While RAG can potentially mitigate this issue, we provide a quantitative assessment of RAG's effectiveness in reducing hallucinations specifically in long-context scenarios. Our evaluation spans models of various sizes, including both proprietary and open-source LLMs, comparing RAG against standard long-context input. Our experimental design focuses on measuring models' ability to abstain from answering when presented with unanswerable questions—defining abstention as correct behavior and hallucination as incorrect.
The experimental results yield three key findings: (1) LC LLMs are substantially more susceptible to hallucinations compared to RAG; (2) Model strength does not correlate with reduced hallucination rates that stronger LLMs do not demonstrate fewer hallucinations; (3) Increasing context length corresponds to higher hallucination probability. Two of three findings are independent of the increase in context length, therefore we believe this is an effective task that provides strong support for the study of hallucinations in RAG and LC LLM.
>**In Line 367, are the results for 128k and 32k contexts directly comparable? Do they share the same input?**
Yes, they are directly comparable. At Line 244, we mention that generating QA pairs for long texts is inherently a long-context generation problem. To improve generation quality, we divide long contexts into several shorter segments and generated QA pairs based on these individual segments. This means that for both 32K and 128K contexts, GPT-4o generated QA pairs using the same prompts and similarly sized segments. Therefore, the distribution of these QA pairs can be considered approximately equivalent across different context lengths.
Specifically, in lines 249-252 we wrote "we split the long context into multiple segments, each approximately 10k tokens in length, and input them individually into GPT-4o to generate QAs." In lines 267-273, we clarified that we have different segmentation strategies for different types of contexts. We will clarify this point more clearly in the final version.
>**Are there task-specific scores for the results in Figure 2? Do different tasks share the same trend?**
We find that the trends across different tasks are generally similar, but there are some exceptions. Below we provide the results of Qwen-2.5-72B at 32k length. We find that for the location task, novels perform worse than other context types, possibly because novels contain more similar content, making it difficult to locate answers. For the reasoning task, papers performed best, which we speculate is because papers have stronger logical structure, lower information redundancy, and are more conducive to reasoning. We will add a systematic analysis of performance across different tasks on various context types in the appendix.
| |location|Reasoning|Comparison|Hallu|
|-|-|-|-|-|
|Novel|72.00|76.27|71.11|88.14|
|Financial|89.19|61.02|72.41|82.20|
|Paper|88.68|84.91|63.16|84.91| | Summary: This paper studies the problem of benchmarking RAG and long-context LLMs. The authors first revisit the existing benchmarks to compare RAG and long-context LLMs. They further construct a dataset called LaRA, which contains location-related question, reasoning-related question, comparison-related questions and hallucination detection questions. They conduct experiments with seven open-source LLMs and four proprietary LLMs and systematically analysis the comparison between RAG and long-context LLMs.
Claims And Evidence: I think the claims are well-supported.
Methods And Evaluation Criteria: NA
Theoretical Claims: NA
Experimental Designs Or Analyses: The experimental designs make sense to me.
Supplementary Material: Yes. All.
Relation To Broader Scientific Literature: Yes, could be interesting the a broader community.
Essential References Not Discussed: I would encourage the author to discuss [1] which is also a paper comparing RAG and long-context LLMs.
[1] Long-Context LLMs Meet RAG: Overcoming Challenges for Long Inputs in RAG. ICLR 2025.
Other Strengths And Weaknesses: - Line 68, “lanuage” typo
- Line 156, Section “3” link is not effective.
- I would recommend the authors add another “RAG” and “LC” column in Table 2 to make things clearer.
Other Comments Or Suggestions: NA
Questions For Authors: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you very much for your positive review! Please see our responses below.
>**Discuss with [1]**
Thanks for pointing out this important related work. [1] mainly investigated the phenomenon that increasing the number of retrieved passages does not consistently improve the performance of LLMs. As the amount of retrieved information increases, performance first increases and then decreases. Some of our experimental results align with the conclusions in [1]. In Figure 2, we observe that as the number of retrieved chunks and chunk size increase, LLM performance first improves and then declines. Furthermore, weaker models compared to stronger ones, such as Qwen-7B versus Qwen-72B, begin to show performance degradation earlier, indicating that weaker models are more susceptible to the influence of large amounts of irrelevant noise in the retrieved information. We will add the discussion with [1] in the revision.
We further explored the impact of increasing the number of chunks to 100 on Qwen-2.5-7b-instruct and Qwen-2.5-72b-instruct with 128k context and find that too many chunks can cause RAG's performance to decrease.
|Model\\#Chunk|5|10|15|20|25|30|40|50|80|100|LC|
|-|-|-|-|-|-|-|-|-|-|-|-|
|Qwen-2.5-7B|56.30|60.15|62.80|61.65|59.76|59.62|59.22|58.47|57.39|55.61|48.91|
|Qwen-2.5-72B|62.68|63.48|64.05|64.06|65.08|67.78|68.18|67.57|66.89|65.87|65.11|
>**I would recommend the authors add another “RAG” and “LC” column in Table 2 to make things clearer.**
Thanks for this advice, and we will change it in our revision!
>**Typos**
Thanks for pointing out these typos. We have fixed them in the revision and will keep polishing our paper.
### References
[1] Long-Context LLMs Meet RAG: Overcoming Challenges for Long Inputs in RAG. ICLR 2025. | null | null | null | null | null | null |
Learning from True-False Labels via Multi-modal Prompt Retrieving | Accept (poster) | Summary: This paper propose a novel weakly supervised labeling setting, namely True-False Labels (TFLs) which can achieve high accuracy when generated by pre-trained Vision-Language Models (VLM). Moreover, the paper derived a risk-consistent loss for this setting and propose a convolutional-based Multi-modal Prompt Retrieving (MRP) method to bridge the gap between the knowledge of VLMs and target learning tasks. Experimental results demonstrate the effectiveness of the proposed TFL setting and MRP learning method.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: The proposed methods have some limitations that reduce the significance of the paper's contributions:
Firstly, the proposed weak supervision setting appears to be a straightforward combination of full supervision and complementary labeling. The authors do not sufficiently justify why and how this particular weak supervision setting is advantageous for VLM fine-tuning compared to existing weak supervision setting. For instance, prior work such as [1] using partial label setting for fine-tune VLM. The paper would benefit from a more thorough comparison and analysis of how the proposed approach improves upon or differs from existing weak supervision setting in the context of VLM fine-tuning.
The proposed methods lacks novelty, as it is essentially equivalent to the widely-used self-training approach in weak supervision learning. Specifically, the proposed method is essentially equivalent to a two-stage self-training approach. In the first stage, complementary labels or ground-truth labels are assigned to the samples. In the second stage, the model learns from the samples with complementary labels by utilizing the predicted posterior probabilities. This formulation aligns closely with conventional self-training frameworks, where pseudo-labels are iteratively generated and refined to improve model performance. The authors should clarify how their approach differs from or advances beyond this well-established paradigm, as the current presentation does not sufficiently highlight novel methodological contributions.
In summary, while the paper explores an interesting direction, the lack of significant novelty in the proposed methods and insufficient comparison to existing approaches reduce the overall impact of the work. The authors should address these limitations by providing a more thorough analysis of how their approach advances the state-of-the-art real-world VLMs fine tune.
[1] Zhang, Jiahan et al. Candidate Pseudolabel Learning: Enhancing Vision-Language Models by Prompt Tuning with Unlabeled Data. ICML (2024).
Theoretical Claims: Yes. All proofs for theoretical claims in this paper are correct.
Experimental Designs Or Analyses: Yes, I have carefully reviewed the experimental designs and analyses, and several issues need to be addressed to ensure the soundness and validity of the results:
1. **Unusual Weak Supervision Results**: The results of the weakly supervised learning methods in Table 2 and Table 3 appear to be quite unusual. Specifically, the fine-tuned results are significantly worse than the zero-shot results, which is counterintuitive and raises concerns about the validity of the experimental setup or implementation. The authors should provide more detailed training configurations and hyper parameters for these methods to ensure reproducibility. Additionally, they should carefully examine whether the poor performance is caused by suboptimal training details, such as learning rates, optimization strategies, or data pre-processing. Without a thorough investigation and clarification, the credibility of the reported results remains questionable.
2. **Insufficient Baseline Comparisons**: The current experimental design lacks comprehensive comparisons with relevant baselines, which limits the ability to assess the effectiveness of the proposed method. Specifically:
- For only using supervised samples, some state-of-the-art few-shot fine-tuning methods such as [1] should be included as baselines.
- For the semi-supervised setting, advanced semi-supervised methods specifically designed for fine-tuning models [2, 3] should be compared.
- For complementary labeling, more sophisticated methods utilizing complementary labels such as [4] should be incorporated
- Finally, self-training methods under other weak supervision settings [5] should also be added to the comparison.
By expanding the comparison to these relevant baselines, the authors can provide a more comprehensive evaluation of their method's performance and better highlight its potential advantages over existing approaches. Without such comparisons, it is difficult to assess whether the proposed method truly advances the state-of-the-art for VLMs fine-tuning.
In summary, while the paper explores an interesting direction, the experimental design and analysis need significant improvements to ensure the validity and soundness of the results. Addressing these issues would strengthen the paper's contributions and provide a more convincing evaluation of the proposed method.
Reference:
[1] Zhang, RenRui et al. Tip-Adapter: Training-free CLIP-Adapter for Better Vision-Language Modeling
[2] Gan, Kai ea al. Erasing the Bias: Fine-Tuning Foundation Models for Semi-Supervised Learning
[3] Wang, XuDong et al. Debiased learning from naturally imbalanced pseudo-labels
[4] Wang, Wei et al. Learning with Complementary Labels Revisited: The Selected-Completely-at-Random Setting Is More Practical
[5] Zhang, Jiahan et al. Candidate Pseudo label Learning: Enhancing Vision-Language Models by Prompt Tuning with Unlabeled Data
Supplementary Material: Yes. I check all supplementary material, i have no question for supplementary material of this paper.
Relation To Broader Scientific Literature: The key contributions of this paper are related to the broader scientific literature in the following ways:
1.**Novel Weak Supervision Annotation (True-False Labels)**: Compared to prior work, such as [1], this paper proposes a new weak supervision annotation scheme—True-False labels—specifically designed for fine-tuning foundation models. This annotation can be automatically generated by VLMs, which distinguishes it from traditional weak supervision approaches that often rely on manual or heuristic labeling. This contribution addresses a gap in the literature by providing a more scalable and efficient way to generate weak supervision signals for fine-tuning.
2. **Fine-Tuning Method for True-False Labels**: Building on the proposed annotation scheme, the authors design a fine-tuning method tailored to True-False labels, which demonstrates competitive performance. This methodological advancement extends the existing literature on weak supervision by showing how such annotations can be effectively utilized to improve model performance. The results suggest that True-False labels, despite their simplicity, can serve as a viable alternative to more complex weak supervision strategies.
Reference:
[1] Zhang, Jiahan et al. Candidate Pseudo label Learning: Enhancing Vision-Language Models by Prompt Tuning with Unlabeled Data. ICML (2024).
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Other Strengths:
1. **Novel Weak Supervision Annotation Scheme**: The introduction of True-False labels as a new weak supervision annotation method is a creative and potentially impactful contribution. This approach leverages the capabilities of Vision-Language Models (VLMs) to automate the annotation process, offering a scalable and efficient alternative to traditional weak supervision techniques.
2. **Competitive Performance**: The proposed fine-tuning method, tailored to the True-False labels, demonstrates competitive performance on the evaluated benchmarks. This suggests that the method is effective in leveraging weak supervision signals to improve model performance.
3. **Relevance to Broader Research Trends**: The work aligns with the growing interest in reducing reliance on expensive manual annotations while maintaining or improving model performance. The proposed method could inspire further research into automated weak supervision strategies for fine-tuning foundation models.
Other Comments Or Suggestions: I don't have any further comments of suggestions.
Questions For Authors: **Question 1: Comparison with Standard Self-Training and Prior Work**
The proposed method appears to be essentially equivalent to a two-stage self-training approach:
1. In the first stage, complementary labels or ground-truth labels are assigned to the samples.
2. In the second stage, the model learns from the samples with complementary labels by utilizing the predicted posterior probabilities.
While this formulation shares similarities with standard self-training and prior work such as [1], it is unclear why the proposed method is superior. Specifically:
- **Compared to standard self-training**: What are the advantages of using complementary labels (True-False labels) over traditional pseudo-labels? Is there a theoretical or empirical justification for why this approach leads to better performance or faster convergence?
- **Compared to [1]**: How does the proposed method improve upon or differ from the weak supervision strategies explored in [1]? For instance, does the use of True-False labels provide better robustnessor more efficient utilization of unlabeled data?
I hope authors provide a more in-depth discussion, supported by either theoretical analysis, empirical results, or intuitive insights, to clarify the unique benefits of their approach over existing methods.
**Question 2: Poor Performance of Previous Weak Supervision Methods**
The results of previous weak supervision methods, as shown in Table 2 and Table 3, are notably poor—even worse than the zero-shot performance. This raises significant concerns and requires further explanation:
1. **Potential Causes**: Could the poor performance be attributed to suboptimal training configurations, such as inappropriate learning rates, insufficient training epochs, or inadequate hyper parameter tuning? Alternatively, is there an inherent limitation in the design of these weak supervision methods that makes them unsuitable for the task or dataset at hand?
2. **Proposed Method's Advantages**: The authors should clearly explain how their training method differs from previous weak supervision approaches and why it avoids the pitfalls that led to the poor performance of those methods.
Reference:
[1] Zhang, Jiahan et al. Candidate Pseudolabel Learning: Enhancing Vision-Language Models by Prompt Tuning with Unlabeled Data. ICML (2024).
I appreciate the authors' efforts in this work. I would like to raise my score if my concerns were addressed.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your detailed and constructive comments. We will address each concern point by point:
**Q1&Method (Novelty & Comparison with Self-Training)**
R1: Our TFL framework introduces three key innovations that fundamentally address limitations in standard self-training and prior work like Candidate Pseudo-Label Learning[1]:
1. **Random Label Sampling as Implicit Regularization**:
TFL employs uniform random label sampling as implicit regularization to prevent model overfitting. This stochastic mechanism in TFL, analogous to established techniques like dropout, SGD and random data augmentation, promotes better generalization solutions [3]. Unlike confidence-based pseudo-labeling in self-training and [1], which amplifies CLIP's inherent biases through error propagation [1,2].
2. **High-Accuracy TFL for Bias Amplification Mitigation**
Prior studies [2,3] have demonstrated that pseudo-labeling noise can create a cumulative effect on classes with inaccurate pseudo-labels, amplifying the model's inherent bias toward certain classes. The experimental results in Table 6 of [1] further support this conclusion, showing that accuracy improvements tend to be more pronounced on datasets where Zero-shot CLIP initially performs better. This observation aligns with TFL’s design motivation to mitigate bias amplification through stochastic sampling. TFL achieves over 99% annotation accuracy compared to the 85% accuracy shown in [1] (Fig.2). This significant improvement in labeling precision (via confidence-ranking-independent label generation) substantially reduces noise propagation.
3. **Hybrid Supervision Mechanism**
TFL integrates strong supervision from retained true labels to provide semantic correction anchors, whereas [1] relies solely on candidate pseudo-label. The utilization of hybrid supervision is mathematically formalized through our risk-consistent estimator (Eq.5). This hybrid labels provide explicit optimization direction, enhances noise robustness and improves stability in complex scenarios.
**Q2&E1 (Poor Performance of Weak Supervision Baselines)**
R2: We rigorously verified our implementation (code available at [Anonymous GitHub](https://anonymous.4open.science/r/TMP-2D10)) under identical configurations (epochs, architecture, hyperparameters). The observed performance gap primarily arises from **Large Label Space Challenges**:
- **TFL data contains numerous unseen class for semi-supervised learning** (e.g., 196 classes with only 42 supervised samples in Stanford Cars). This results in most classes lacking supervised samples. Existing semi-supervised methods for unknown classes fail to handle such a high proportion of unseen categories, leading to catastrophic performance collapse.
- **The candidate sets generated by TFL are excessively large for partial/complementary-label learning**. Oversized candidate sets (>100 classes) severely degrade disambiguation capabilities. In the weak supervision community, Complementary-label learning[5] and partial-label learning[6] typically uses candidate sets containing ≤30 classes.
Our method overcomes this through:
- MPR(Sec. 3.4)
- CLIP prior integration(Eq. 9-10)
**E2&Method (Baseline Comparisons)**
R3: In our experiments, we have conducted comprehensive comparisons with relevant works (Tables 2&3), consistently demonstrating superior performance. We further expanded comparisons to include: Few-shot methods(Tip-Adapter), Semi-supervised VLM fine-tuning approaches(FineSSL), More sophisticated methods utilizing complementary labels(DebiasPL), Advanced complementary-label methods(SCARCE). As shown below:
| Method | CIFAR-100 | Caltech-101 |
|:---:|:---:|:---:|
| Tip-Adapter | 76.47 | 86.77 |
| Tip-Adapter-F | 77.40 | 86.81 |
| FineSSL | 67.28 | 28.72 |
| DebiasPL | - | 63.40 |
| SCARCE | 44.06 | 39.39 |
| CPL+LaFTer | 77.30 | 93.40 |
| TMP | 78.72 | 90.60 |
Notably, TMP achieves comparable performance to CPL+LaFTer on CIFAR-100 and Caltech-101, despite the fact that CPL+LaFTer requires more resources, such as leveraging additional LLM knowledge, and utilizes iterative pseudo-label refinement (T=10). These results conclusively validate TMP's effectiveness through its noise-robust probability estimation framework.
Reference:
[1] Zhang J, et al. Candidate Pseudolabel Learning: Enhancing Vision-Language Models by Prompt Tuning with Unlabeled Data, ICML, 2024.
[2] Menghini C, et al.. Enhancing clip with clip: Exploring pseudolabeling for limited-label prompt tuning, NeurIPS, 2023.
[3] Wang X., et al. Debiased Learning from Naturally Imbalanced Pseudo-Labels, CVPR, 2022.
[4] Ali A, et al. The implicit regularization of stochastic gradient flow for least squares, ICML, 2020.
[5] Wang, Wei et al. Learning with Complementary Labels Revisited: The Selected-Completely-at-Random Setting Is More Practical, ICML 2024.
[6] Xia S, et al. Towards effective visual representations for partial-label learning, CVPR 2023.
---
Rebuttal Comment 1.1:
Comment: Since most of my concerns have been resolved, I am inclined to recommend acceptance of the manuscript now. | Summary: This paper proposes a novel weakly supervised setting called True-False Labels (TFLs), leveraging VLM to reduce the difficulty of manual annotation. TFLs indicates whether a sample belongs to a label randomly and uniformly sampled from a candidate label set. In addition, this paper derives a risk-consistent estimator to explore and utilize the conditional probability distribution information of TFLs and introduces a Multimodal Prompt Retrieval to bridge the gap between VLM knowledge and the target learning task.
Claims And Evidence: The paper presents a relevant and important problem, and the claims made are generally supported by evidence. However, the motivation behind the proposed framework is not sufficiently clear, making it difficult to fully understand how the proposed approach can effectively address the problem. The lack of clarity regarding the framework's motivation undermines the convincingness of the presented evidence and solution.
Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria are suitable for the problem and application at hand.
Theoretical Claims: Yes. Assumption 1 does not account for the case where the TFL is incorrect. Other parts of the proofs seem to be correct with no issues.
Experimental Designs Or Analyses: The analysis of the hyperparameter $\lambda$ lacks convincing evidence, as the performance improves when its value is small. This raises concerns about the lack of support for the effectiveness of TMP.
Supplementary Material: No. There is no supplementary material uploaded.
Relation To Broader Scientific Literature: The paper builds upon two key areas of research: weakly supervised learning and vision-language models (VLMs). It primarily leverages the strong generalization ability of VLMs to address weak supervision and proposes a novel annotation strategy. In terms of problem formulation, the work is more closely related to prior research on pseudo-label learning.
Essential References Not Discussed: I have not identified any essential related works that are missing from the paper at this time.
Other Strengths And Weaknesses: **Strengths:**
1. This paper proposes a novel weakly supervised learning annotation method, expanding the scope of weak supervision approaches.
2. The integration of VLMs to address weak supervision is an effective combination and is a promising research direction.
**Weaknesses:**
1. The introduction spends a significant portion explaining label annotation, which makes the latter part of the introduction insufficient and difficult to understand the motivation. Additionally, there is a lack of smooth transition to the methodology, making the proposed method less understandable.
2. The overall organization of the paper could be improved in terms of readability. In addition, Table 1 lacks necessary explanations, making it difficult to understand the meaning of certain columns. Improving readability would enhance clarity.
Other Comments Or Suggestions: Please refer to the Questions For Authors.
Questions For Authors: 1. Assumption 1 appears problematic, as it seems to disregard cases where TFL are incorrect.
2. Why was prompt retrieving designed? The method lacks sufficient explanation, and I did not fully understand the motivation behind introducing the TMP framework. Could this be clarified by providing further details in the introduction and starting from Section 3.4?
3. In Equation (9), does $x$ not need to go through the image encoder?
4. When the hyperparameter $\lambda$ is small, the performance improves, but this seems to lack evidence supporting the effectiveness of TMP.
5. In right-hand side of line 372, the paper states that "our approach has achieved results approaching those of the fully supervised method". Does this refer to the first row of the table 3? If yes, there is still a noticeable gap, so it may be worth reconsidering whether this statement is appropriate.
6. In line 718 of the appendix, how is the hyperparameter $m$ determined? This is not explained in the paper.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your detailed and constructive comments. We will address each concern point by point:
**Q1: Assumption 1 appears problematic**
R1: In fact, Assumption 1 emphasizes that consistent classifiers can learn from pre-existing VLM-generated or human-annotated TFLs data, i.e., TFLs consistent learn assumption, rather than focus on the results of labeling the data must be perfect.
Furthermore, high-performance VLMs exhibit minimal annotation errors, with accuracy levels comparable to or surpassing human performance [1,2]. This suggests that they can satisfy the TFLs consistent learn assumption. Besides, in real-world scenarios, the goal of TFLs labeling is to mitigate noise introduced by VLMs-based annotations, ensuring that the labeled data adheres to the TFLs consistent learn assumption as closely as possible.
**Q2&W1: Motivation for MPR requires clarification**
R2: We have described the motivation for MPR in lines 104-109 and 290-294, among others, in the paper. More details are as follows:
1) **Weak Supervision Enhancement**: MPR supplements discriminative features from weakly supervised data through cross-modal retrieval, reducing learning complexity while improving model robustness.
2) **Task-Specific Adaptation** (Section 3.4): Our learnable convolutional network retrieves domain-aware embeddings (e.g., culinary textures in Food-101), addressing CLIP's generic prompt limitations.
3) **Modality Alignment** (Eq.7-9): By dynamically aggregating Top-K visual and textual prompts, MPR enhances modality consistency while preserving VLM knowledge.
Ablation studies (Table 4) confirm MPR contributes 1.07% average accuracy improvement across datasets. We will strengthen the methodological motivation in Section 1.
**Q3: Equation (9) image encoder clarification**
R3: The revised formulation is:
$P_{CLIP} = \text{Softmax}(\cos(g_I(x), \mathbb{Q}_T))$
where $g_I(\cdot)$ denotes CLIP's image encoder. We will update Eq.9 and ensure consistency in all algorithm descriptions.
**Q4: Effectiveness of TMP**
R4: As shown in Tables 2&3, TMP achieves optimal performance across all datasets. Notably, optimal performance never occurs at λ=0. This demonstrates that our balanced integration strategy (λ=0.1) optimally combines VLM knowledge with task-specific adaptation, thereby validating TMP's effectiveness.
**Q5: Line 372's "approaching fully supervised" claim may be overstated**
R5: Compared to baseline methods, our approach achieves performance approaching fully supervised levels on specific datasets. For example, on the Food-101 dataset (93.55% vs. 94.94% fully supervised). However, a 15% performance gap persists on the Stanford Cars dataset. We have revised the statement to: "Our method achieves performance comparable to fully supervised approaches on Food-101."
**Q6: Hyperparameter m determination**
R6: These hyperparameter $m$ were selected through experimentation to strike a tradeoff between precision and efficiency (when m=1, the accuracy is the highest, but the training cost is relatively high).
Reference:
[1] Street W, et al. LLMs achieve adult human performance on higher-order theory of mind tasks, arXiv, 2024.
[2] Kapania S, et al. "Because AI is 100% right and safe": User attitudes and sources of AI authority in India, CHI, 2022. | Summary: The paper proposes TFLs, a weakly supervised framework leveraging VLMs to generate high-accuracy labels efficiently. A risk-consistent estimator exploits TFLs’ conditional probabilities, and MPR aligns VLMs with target tasks. Experiments show significant gains over baselines.
Claims And Evidence: The claims are supported clearly in the current form.
Methods And Evaluation Criteria: They make sense to some extent.
Theoretical Claims: I check them.
Experimental Designs Or Analyses: I check them
Supplementary Material: I review the supplementary material.
Relation To Broader Scientific Literature: Refer to the detailed comments.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: **Strengths:**
1. The integration of VLMs with weakly supervised learning is a compelling contribution. The method achieves a high annotation accuracy of over 99.5% in experiments while significantly reducing human effort (69.9× speedup compared to traditional labeling).
2. The paper is well-structured, with a persuasive narrative that clearly motivates the problem and technical contributions. A detailed theoretical proof of the risk-consistent estimator is provided, solidifying the method’s foundation. Experimental results demonstrate the effectiveness of the proposed TFL setting and MRP learning method.
3. The MPR method represents the first attempt to fine-tune VLMs through prompt retrieval (as opposed to directly learning textual embeddings), offering a fresh perspective on prompt engineering.
**Weaknesses:**
1. Each TFL provides only a single binary judgment (True/False) per candidate label. While efficient, this may restrict the richness of supervision, particularly for ambiguous or fine-grained classes. Extending the framework to allow multiple judgments (e.g., sampling multiple candidates per instance) could enhance its robustness.
2. Although experiments include fine-grained datasets (e.g., Stanford Cars), the gains here are slightly smaller than those for coarse-grained tasks. Further analysis on how class granularity impacts performance would enhance understanding.
Other Comments Or Suggestions: N/A
Questions For Authors: 1. By design, the number of "False" labels in TFLs will vastly exceed "True" labels, especially as the candidate label set grows. How does the proposed method address potential class imbalance during training?
2. Extending TFLs to multi-label classification or open-vocabulary scenarios appears promising. Could the authors elaborate on potential extensions in future work?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your detailed and constructive comments. We will address each concern point by point:
**Q1: Class Imbalance in TFLs**
R1: We appreciate the reviewer's insightful observation regarding class imbalance. The imbalance between True and False labels is an inherent characteristic of the TFL framework. However, we have taken this into account in our design. The proposed risk-consistent estimator in the TMP is specifically designed to mitigate the effects of such imbalance. By estimating the probability distribution over categories, the estimator effectively reduces the bias introduced by the overrepresentation of False labels, ensuring balanced learning outcomes. The experimental results show that this approach enables the model to handle label imbalance without significant degradation in performance.
**Q2: Multi-Label/Open-Vocabulary Extensions**
R2: We thank the reviewer for identifying this impactful research direction.
1. For **multi-label classification**, we can extend the single positive label framework [1] by assigning one True label (indicating presence) or one False label (indicating absence) per candidate label for each instance.
2. For **open-vocabulary scenarios**, the samples that do not have an overwhelming advantage in the highest output confidence of the model can be selected as the new class samples, i.e., the samples have relatively high confidence in multiple categories. We conducted preliminary experiments in which the samples belonging to the new class were input into the already trained model. The experimental results indicated that the recognition accuracy for the new class was approximately 54%.
**W1: Single Judgment Limitation**
R3: We appreciate this thoughtful suggestion. We respectfully clarify that while multiple judgments could theoretically help, our experiments show:
1. **Diminishing Returns**: Dual judgments only improve accuracy by 0.3% on CIFAR-100/Stanford Cars.
2. **Efficiency Trade-off**: Each additional judgment linearly increases annotation costs, contradicting TFL's efficiency goals.
**W2: Fine-Grained Performance Gap**
R4: We gratefully acknowledge this valuable critique. We agree this warrants deeper investigation. Two key factors explain the gap:
1. **Semantic Overlap**: Fine-grained classes (e.g., car) share more visual features than coarse ones (e.g., animals), reducing CLIP's discriminative power.
2. **Prompt Sensitivity**: As shown in [2], class descriptions prompt from LLM may improve this gap.
Reference:
[1] Zhou D, et al. Acknowledging the unknown for multi-label learning with single positive labels, ECCV, 2022.
[2] Pratt S, et al. What does a platypus look like? generating customized prompts for zero-shot image classification, ICCV, 2023. | Summary: This paper introduces a weakly supervised learning framework that leverages True-False Labels (TFLs) to enhance annotation quality and efficiency. In this setting, each instance receives a binary label indicating whether it belongs to a randomly sampled candidate class, thereby mitigating errors common in conventional vision-language model outputs. The authors derive a risk-consistent estimator to fully exploit the conditional probability distribution of TFLs. Furthermore, a convolutional-based Multi-modal Prompt Retrieving (MPR) method is proposed to effectively align pretrained vision-language model knowledge with target learning tasks, addressing the inherent label noise issue.
Claims And Evidence: Overall, the submission's claims are clear and convinced.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria seem to be suitable for the problem at hand.
Theoretical Claims: I’ve glanced through the proofs for the theoretical claims, but I didn’t rigorously verify their accuracy.
Experimental Designs Or Analyses: Missing an ablation study on the hyperparameters K_T and K_I. Without this analysis, it remains unclear how sensitive the model’s performance is to the number of retrieved text and image embeddings.
Supplementary Material: I reviewed the supplementary material, focusing primarily on Section A.3 and the subsequent sections.
Relation To Broader Scientific Literature: The paper advances weakly supervised learning by introducing True-False Labels with multi-modal prompt retrieving, extending unbiased risk estimation and leveraging vision-language prompt techniques.
Essential References Not Discussed: N/A.
Other Strengths And Weaknesses: Strengths:
1. This paper introduces True-False Labels, reducing annotation cost and label noise by using binary decisions on randomly sampled candidate labels, thereby simplifying the labeling process.
2. Extensive experiments across diverse datasets demonstrate the effectiveness of the proposed method.
Weaknesses:
1. The random sampling of candidate labels assumes a uniform distribution, which may fail to reflect the natural class imbalance often present in real-world datasets. This could lead to underrepresentation of rare classes, potentially impacting the model’s performance on these less frequent categories.
2. The convolutional-based prompt retrieval approach, while efficient, might not capture complex semantic relationships between visual and textual modalities as effectively as transformer-based architectures, potentially limiting its expressive power.
3. In datasets with high inter-class similarity, the binary labeling scheme might lack the granularity required to distinguish subtle differences. This could lead to confusion between similar classes, resulting in an increased rate of misclassification.
Other Comments Or Suggestions: N/A.
Questions For Authors: The hyperparameter λ balances the contributions from the pre-trained model and the learned model. As illustrated in Fig. 3, this parameter is highly sensitive on Stanford Cars. I'm curious whether this instability and inconsistent performance across different datasets is a common phenomenon.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your detailed and constructive comments. We will address each concern point by point:
**Experimental Designs (Ablation study on the hyperparameters $K_T$ and $K_I$)**
R1: We conducted comprehensive ablation studies on the hyperparameters $K_T$ and $K_I$. Experimental results demonstrate that performance variation remains within 0.5% across all datasets when adjusting $K_T$ (text prompts) and $K_I$ (image prompts) between 5-20. This stability stems from the inherent design of our MPR ($Top-K(\cos(\cdot, \cdot))$), where retrieved embeddings exhibit high cosine similarity and contain domain-specific information relevant to downstream tasks.
**W1-1 (Underrepresentation of rare classes)**
R2-1: In real-world annotation scenarios, rare classes are generally unknown a priori. Deliberate selection of rare classes risks inducing additional imbalance, which could exacerbate annotation bias. Under these practical constraints, the uniform random sampling in TFL represents the most feasible solution, as it aligns with practical annotation workflows while reducing labeling complexity and noise.
**W1-2 (Model's performance on less frequent categories)**
R2-2: TMP addresses class imbalance through the risk-consistent estimator, which reweights less frequent categories using CLIP's prior knowledge preserves recognition capabilities for rare classes. Empirical validation across all five datasets—including naturally imbalanced ones like Caltech-101—confirms competitive performance (**90.60% accuracy**).
**W2 (Architecture Choice - Convolutional Limitations)**
R3: Our convolutional MPR design is motivated by two key considerations:
1) **Characteristics of Retrieval**: As MPR focuses on retrieval rather than capture complex semantic relationships, the extraction of task-relevant information benefits more from local feature matching than complex semantic relationships. For prompt retrieval, the experimental results show that demonstrates that local texture patterns (captured by CNNs) are more effective than global attention mechanisms (by 3%).
2) **Computational Efficiency**: CNNs achieve faster training speed compared to Transformers, which is crucial for weakly supervised scenarios with limited computational budgets.
**W3 (The binary labeling scheme might lack the granularity required to distinguish subtle differences)**
R4:In real-world annotation scenarios involving subtle differences, the binary labeling scheme proves simpler to implement than multi-class annotation. Annotators only need to determine whether a randomly provided candidate label is correct, rather than selecting one from many visually or semantically similar options. Compared to conventional labeling methods that require annotators to distinguish between nuanced categories, TFL significantly reduces the skill requirements for annotators while maintaining theoretical rigor. This approach is particularly advantageous for datasets with high inter-class similarity (e.g., Stanford Cars), where traditional labeling often struggles with ambiguity.
**Question (Hyperparameter Stability)**
R5: In our experiments, the observed sensitivity is not a universal phenomenon. We attribute λ's instability to CLIP's zero-shot capability: weaker zero-shot performance (e.g., Stanford Cars) introduces noisier conditional probability estimates $p(y|\bar{y}, s=0, x)$ forcing greater reliance on CLIP’s prior $P_{\text{CLIP}}$ and amplifying sensitivity. | null | null | null | null | null | null |
Empower Structure-Based Molecule Optimization with Gradient Guided Bayesian Flow Networks | Accept (poster) | Summary: In this paper, the authors propose a method that leverages gradient guidance in the context of structure-based drug design. In particular, they augment MolCraft (that uses Bayesian Flow Networks as generative model for structure-conditioned ligand design) to be compatible with gradient guidance according to some (learned) energy function. The guidance is applied on both continuous (atom coordinates) and discrete (atom types) variables. The authors also propose a a backward correction strategy for more effective optimization. They show good results on CrossDocked2020 benchmark and on sub-structure conditioned generation.
## Update after rebuttal
I thank the authors for their rebuttal. However my main concerns were not really addressed, mainly lack of novelty (yet another classifier guidance to a generative model very similar to diffusion) and irrelevant experiments (only trained on crossdocked on properties that are totally irrelevant). If the authors argue that this is a general framework for classifier guidance on BFN, results should have been shown on other datasets/tasks (molecular data or not) and other properties. One model trained in one dataset (known to be vey flawed) on irrelevant properties does not show empirically that this is a "general method".
Therefore, I will keep my rating.
Claims And Evidence: - The paper is not very well written an difficult to follow. It would be helpful to have better general overview of BFN/MolCraft models on the main paper for more clarity. The paper also misses a lot of experimental details on how guidance is done, making the understanding of experimental results a bit tricky.
- The main contribution of the paper is to show that it is possible to do gradient guidance with Bayesian Flow Networks. This is not particularly surprising, given the relation between BFNs and diffusion models/flow matching (Xue et al. ICML24).
- Moreover, the authors show results on only a single dataset, that has been highly overfitted in the last few years. It would be nice to see results of guidance in the context of BFN in other datasets/tasks to show that the propose model really works in practice (either molecular datasets or even other modalities like images, other molecules, language, etc).
- A lot of design choices are needed to be made (guidance temperature, the backward correction, the property predictors, training/sampling hyperparameters etc). It's not trivial to conclude if this approach only works because it has been overly finetuned for the dataset or if this approach would work in other settings.
- It seems to me that the hyperparmeters of the model (and there are many of them) have bee tuned on the test set of crossdocked.
Methods And Evaluation Criteria: - The proposed method (gradient guidance on top of MolCraft) makes sense.
- The authors show results on a single dataset (CrossDocked2020), which has been highly studied/overfitted in the last few years.
- It is well known that this dataset nor the metrics (eg docking score, QED, SA, etc) are very relevant for actual drug design. Some other metrics (like those from PoseCheck/PoseBuster papers) provide a bit more insight into quality of the molecules. I think the results from PoseCheck should be displayed on the main document instead of on the appendix. From Table 9, we can see that MolJO has similar Posecheck metrics as in MolCraft, which hints to the fact that the gradient guidance does not improve the quality of the generated conformations.
Theoretical Claims: The theoretical claims seem coherent, but I did not went through the details
Experimental Designs Or Analyses: - I feel that comparing the proposed method with other approaches that do not do guidance is not very informative. The most relevant comparison to do is between the proposed approach and MolCraft (since this is a version of the model w/o gradient guidance).
- It would also be nice if we would see the improvement of gradient guidance in tasks/datasets other than CrossDocked. This could be other molecule datasets (either conditioned on target pocket or not) or other modalities (proteins, images, or anything else where BFN has been applied).
- The properties optimized in this paper are not relevant for drug discovery and it is difficult to say if any contribution on the paper would actually reflect any improvement on real use-cases.
- With respect to inference time: What is the computational overload of the proposed gradient guidance? How does the inference time compares with MolCraft?
Supplementary Material: I quickly skimmed the supplementary material on the appendix of the manuscript. I did not go through details of the provided source code.
Relation To Broader Scientific Literature: This works proposes to do gradient guidance on top of Bayesian Flow Networks (BFN). In particular, they build on top of MolCraft, a BFN-based generative model for pocket-conditioned ligand generation. Structure-based drug design an important problem on drug discovery, however, the dataset used by ML practitioners---as well as the metrics used to measure performance on this dataset---are know to not be very useful in practice.
Essential References Not Discussed: N/a
Other Strengths And Weaknesses: See above.
Other Comments Or Suggestions: n/a
Questions For Authors: See above.
Ethical Review Concerns: n/a
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer's thorough reading and insightful feedback, which have helped us identify areas for improved clarity and presentation. We shall address each point in our responses below, and we welcome further questions.
**Q1: Explanation of BFN Fundamentals**
We thank the reviewer for highlighting the need for improved clarity. We will enhance the paper's readability by providing a better overview and more detailed explanations of our guidance approach.
For experimental details of guidance, our gradient consists of pretrained MolCRAFT and plug-and-play energy functions. We describe the guided sampling in Algorithm 1 and Appendix D.1, and we will make them clearer.
**Q2: Novelty**
Thanks for raising this important question. While the work of Xue et al. establishes connections between BFNs and SDEs, our contribution goes beyond merely showing the theoretical feasibility. We derive a principled approach to gradient guidance specifically within the BFN framework, rather than reducing BFNs to SDEs and applying existing techniques.
Our contributions lie in both the methodology (deriving gradient guidance within BFNs and proposing a generalized sampling strategy) and in implementing and evaluating SBMO applications. (1) Methodologically, we show how gradient works through guided Bayesian update, offering a unique perspective as distinct from discretizing an SDE, and we believe contextualizing the guidance within BFN is a novel contribution. (2) Practically, the empirical results in Table 3 also show MolJO's distinct advantage compared to simply reducing BFN to SDE.
**Q3: Limited Dataset Evaluation**
We appreciate this concern. We add the evaluation on PoseBusters V2 test set (180 out of 384 complexes, after excluding those with sequence identity > 30% or with non-standard residues) as a held-out test. We also report the PoseBusters passing rate (PB-Valid), showing that MolJO's improvements are not dataset-specific or the result of overfitting.
||PB-Valid|RMSD < 2|PB-Valid & RMSD < 2|Vina Score Avg|Vina Score Med|Vina Min Avg|Vina Min Med|Vina Dock Avg|Vina Dock Med|SA|QED|Connected|Success Rate|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
|MolCRAFT|56.4%|43.6%|26.5%|-6.93|-6.95|-7.18|-7.14|-7.77|-7.76|0.65|0.45|93.9%|20.6%|
|MolJO|68.0%|53.1%|36.3%|-7.74|-7.73|-8.16|-8.09|-8.66|-8.69|0.74|0.57|95.7%|43.5%|
**Q4: Hyperparameter Choices and Potential Overfitting**
As mentioned in Q3, the consistent performance on both datasets confirms that MolJO generalizes well beyond the specific choices for CrossDock. Moreover, they remain robust across reasonable ranges. For property predictors or training / sampling, we used the same architecture and the same BFN hyperparameters ($\beta_1, \sigma_1$) as MolCRAFT without specifically tuning for the task.
**Q5: PoseCheck Metrics in Main Paper**
We agree that the PoseCheck metrics should be presented in the main text rather than the Appendix for better accessibility, and we will revise our manuscript accordingly.
Though not directly incorporating strain energy as an objective, MolJO indeed improves the conformation quality as suggested by Figure 9, Appendix G, where our CDF is consistently above that of MolCRAFT. Furthermore, the evaluation on PoseBusters (see Q3) reveals notable improvements in PB-Valid.
**Q6: Baseline Comparisons**
We appreciate this excellent point. Our ablation studies in Section 5.4, Table 3 indeed provide direct comparisons, showing how each component contributes to performance improvements. Furthermore, we're expanding our comparison to include additional optimization baselines such as DecompDPO, where MolJO maintains competitive. Please refer to Q5 in Reviewer fSvs.
**Q7: Relevance of Optimized Properties**
We appreciate the reviewer's expertise in drug discovery. Our primary contribution is a general optimization framework whose efficacy can be validated through these in-silico metrics. Notably, we've observed that improvements correlate with enhanced key interactions and PB-Valid that are relevant to drug design.
**Q8: Computational Overhead**
Compared to MolCRAFT, MolJO requires approximately 2× or longer inference time in our updated study. The computational overhead depends on the complexity of the energy functions employed. MolCRAFT generally takes ~22s, previous MolJO with 9-layered energy proxies took ~146s, and we have experimented with 4-layered proxies that take ~45s. MolJO can be further accelerated with an efficient strategy where gradient is applied only at selected timesteps rather than at every step [2], which we leave for future work. We thank the reviewer for motivating this analysis, as it led to more efficient implementations.
[1] A Periodic Bayesian Flow for Material Generation.
[2] Applying Guidance in a Limited Interval Improves Sample and Distribution Quality in Diffusion Models. | Summary: This paper propose a gradient-based molecule optimization framework for the SBDD task, which in experiment achieves state-of-the-art performance on CrossDocked2020 benchmark. Besides, it extend MolJO to a wide range of optimization settings, including multi-objective optimization and challenging tasks in drug design such as R-group optimization and scaffold hopping, further underscoring its versatility.
Claims And Evidence: Yes, most of the claims are clear and supported by the evidence.
Methods And Evaluation Criteria: The method is mainly operated in the probability parameter space instead of the raw data space with the paradigm of BFN. The application is wide and appropriate for the task.
Theoretical Claims: The theory and proposition in the paper follows BFN and others such as GeoBFN and MolCRAFT. The formulation of update of parameters is correct.
Experimental Designs Or Analyses: The experiment is complete, while some of the methods are missing. We hope several baseline should be included for more comprehensive comparison, such as VoxBind [1], DiffBP [2] and D3FG [3], in which the metrics can be found in the recently proposed benchmark [4]. Besides, for the optimization methods, can DecompDPO be a competitor? If the comparison is hard to conduct, please explain why.
Finally, in some specific scenarios like fragment growing and scaffold hopping, CBGBench [4] also take these tasks into consideration, please discuss related work and appropriately include evaluated baselines in the benchmark to demonstrate the superiority of the proposed method.
[1] https://arxiv.org/abs/2405.03961
[2] https://pubs.rsc.org/en/content/articlelanding/2025/sc/d4sc05894a
[3] https://arxiv.org/abs/2306.13769
[4] https://arxiv.org/abs/2406.10840
Supplementary Material: Yes, I have reviewed most of them. Specifically, the experimental-related parts are reviewed in detail.
Relation To Broader Scientific Literature: The concept related to probabilistic models has been mentioned in BFN. The task related to SBDD has been partially adopted in MolCraft. However, I have not previously observed molecular optimization based on BFN in prior work.
Essential References Not Discussed: To enhance the completeness of the paper in the Pocket-Aware Molecule Generation section, it is essential to include VoxBind and DiffBP to provide a more comprehensive overview of recent advancements in the field. Additionally, in flexible SBDD, the recently proposed FlexSBDD[5] represents a state-of-the-art (SOTA) approach in SBDD-related drug design. It is recommended to include it in the Related Work section.
For the Gradient-Based Molecule Optimization section, additional optimization methods, such as DecompDPO and various 2D-based approaches, should be incorporated. This will facilitate a smoother introduction for readers unfamiliar with molecule optimization and help contextualize the proposed approach within the broader landscape.
In the experimental evaluation, particularly in the tasks of scaffold hopping and fragment growing, incorporating CBGBench for reference would be beneficial. This would not only provide a standardized benchmark for assessing performance but also help clarify the significance of these tasks in molecular optimization.
[5] https://arxiv.org/abs/2409.19645
Other Strengths And Weaknesses: All of them are listed and mentioned.
Other Comments Or Suggestions: In conclusion, I suggest that:
- Add relevant related work and baselines, such as DiffBP, D3FG, and VoxBind, as these methods have been evaluated on previous benchmarks with established metrics. This will enhance the completeness of the paper.
- Include a discussion and reference to CBGBench in the constraint optimization section.
Questions For Authors: - Although the optimization in this work is based on BFN, in the case of Gaussian BFN (where coordinates are treated as variables), the optimization process is structurally similar to guided diffusion. Could you explain the similarities and differences between the two in terms of their formulation and underlying principles?
- The guided term’s energy E —how is it obtained? It would be helpful to introduce this in the beginning of Methods for clarity.
- Moreover, this guidance mechanism bears some resemblance to energy-based compositional diffusion [6], as it can be seen as a superposition of sampling across two energy landscapes. However, since this work is BFN-based (modeling in parameter space), its physical interpretation is less explicit. Could this aspect be further discussed?
- In Claim 2 of Section 5.3: “Optimized molecules form more key interactions for binding,” while this supports the idea that interaction expansion allows the model to explore a broader chemical space, in many real-world drug optimization tasks, certain lead compounds possess specific key interactions (e.g., π-π stacking or hydrogen bonding). During optimization, it is often desirable to preserve these critical pharmacophoric interactions rather than modify them arbitrarily. For example, in cases like 1A2G and 2PC8, the optimized molecules retain the original interactions while expanding upon them, which enhances their practical applicability. How do the authors perceive this issue in the context of their model’s optimization strategy?
**If my suggestions can be adopted and the questions I raised can be clarified, I will consider appropriately increasing my rating.**
[6] https://arxiv.org/pdf/2206.01714
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer's thorough reading and insightful feedback, which have helped us improve clarity and presentation. We shall address each point in our responses below, and we welcome further questions.
## Questions
**Q1: Similarities and Differences between Gaussian BFN and Guided Diffusion**
Thank you for this insightful observation. We explain the fundamental differences and structural similarities between Gaussian BFN and guided diffusion as follows:
- Key Differences: Our approach guides in parameter space ($θ$) rather than in data space ($y$). In the continuous case, $θ$ exhibits lower input variance and therefore provides more informative signals for guiding the generative process toward the desired output. Guided diffusion typically steers the sampling process in the data space directly, while our BFN-based approach performs inference in the parameter space. Such parameter-space guidance arguably connects deeper to the final target properties due to lower input variance, which allows for more explicit Bayesian modeling of uncertainty and more reliable property optimization.
- Similarities: Both approaches use gradient information to steer the generative process toward desired outputs, operating for Gaussian distributed variables.
**Q2: Regarding the Guided Term's Energy Function E**
We thank the reviewer for suggesting this clarification. While we included this discussion in the Appendix, we agree it should be introduced earlier for clarity and will revise accordingly in revision.
The energy function forms part of a Boltzmann distribution over parameters $θ$. Although the target property is explicitly defined only at $t=0$, Bayesian inference defines the parameter space $θ$ for any timestep $i$. This allows us to associate property values with every $\theta_i$ throughout the generative process, enabling prediction of time-dependent properties during intermediate optimization stages.
We train a predictor $E(\theta_t, t)$ directly over the parameter space to estimate $\nabla_{\theta_t} p_E(\theta_t)$ given $\theta_t$ at different accuracy levels. This approach proves effective as it provides a consistent guidance signal throughout the sampling process.
**Q3: Connection to Energy-Based Compositional Diffusion**
We appreciate the connection to [6], which introduced how to interpret diffusion models from the perspective of energy-based model (EBM), and thus applies the additivity from EBM to diffusions.
In our approach, the energy function can be viewed as the negative log-likelihood estimated by a conditional model for given properties. In practice, our method resembles classifier guidance since we train property predictors to serve as the energy function. However, our framework doesn't require explicit "labels" as conditions for controllable generation.
**Q4: Preservation of Key Interactions**
We appreciate the reviewer's expertise in drug design and the important point raised about preserving key pharmacophoric interactions. We fully agree that in real-world drug optimization, certain critical molecular substructures forming intermolecular interactions (e.g., π-π stacking, hydrogen bonding) should be preserved rather than arbitrarily modified.
Our MolJO framework actually addresses this concern by design. The framework allows for precise control over which molecular substructures should be modified and which should be preserved, while our energy function can guide the redesign of remaining substructures. This better aligns with practical applications such as scaffold hopping.
## Other Suggestions
**Q5: Additional Related Work and Baselines**
We thank the reviewer for suggesting these important methods for comparison. We agree that including these baselines will enhance the completeness of our evaluation. We commit to including comprehensive comparisons with these methods in our revision.
We have cited the results from CBGBench (DiffBP, D3FG) and borrowed the samples to calculate the median Vina affinities and Success Rate based on the statistics available for VoxBind. For DecompDPO, we directly cite the numbers from their paper. It can be seen that our MolJO maintains superiority in optimizing the overall properties, reflected by its highest Success Rate (51.3%).
||Success Rate|Vina Score Avg|Vina Score Med|Vina Min Avg|Vina Min Med|Vina Dock Avg|Vina Dock Med|QED|SA|Div|
|---|---|---|---|---|---|---|---|---|---|---|
|DiffBP|-|-|-|-|-|-7.34|-|0.47|0.59|-|
|D3FG|-|-|-|-2.59|-|-6.78|-|0.49|0.66|-|
|VoxBind|21.4%|-6.16|-6.21|-6.82|-6.73|-7.68|-7.59|0.54|0.65|-|
|DecompDPO|36.2%|-6.10|-7.22|-7.93|-8.16|-9.26|-9.23|0.48|0.64|0.62|
**Q6: CBGBench for Constraint Optimization**
We appreciate the reviewer highlighting CBGBench. This is indeed an excellent work that sets up extensive experimental design for structure-based molecule optimization. We will add a thorough discussion of CBGBench, particularly in the context of constraint optimization tasks. | Summary: This paper proposes MolJO, a framework that jointly guides continuous 3d coordinates and discrete atom types of 3d molecules based on the geometry of the target protein pocket and one or more molecular property classifiers. The paper also proposes a backward correction strategy that corrects parameters of Bayesian update distribution based on the current optimized sample and parameters from previous timepoints. It is shown that MolJO outperforms existing methods for generating molecules with better docking scores and molecular properties such as QED, SA, notably methods that do not involve gradient-based guidance or do not do guidance over discrete atom types.
### update after rebuttal
Thank you to the authors for providing explanations and running additional experiments for the Top-of-N comparison and error bars. It would also be good to include a Top-of-N comparison for TagMol in future versions.
For Table 3, does w/o guidance mean no guidance over both atom types and coordinates? If so, authors should include an ablation row where guidance is done only over coordinates and not atom types (equivalent to TagMol if I understand correctly), so that we can observe the impact of only turning on guidance over atom types (current row 4 of table 3), since this is a key argument being made by authors (and also TagMol + BC). Also, it doesn't seem like there is a huge difference between the docking scores of rows 1 and 4, which measures the impact of guidance--a p-value would be helpful here. Do authors have any intuition for why BC and guidance work synergistically?
I think the method is interesting but the clarity of the work can be significantly improved, I will keep my score as-is.
Claims And Evidence: The paper claims that gradient guidance is needed over discrete atom types in the setting of SBMO, since optimizing for molecular properties requires knowledge of the molecular topology, hence existing methods such as TAGMol suffer because they only do guidance over atom coordinates. MolJO proposes two novelties: gradient-based guidance over discrete atom types and backward correction.
- One thing that is not clear to me: do the values in Tables 1-2 incorporate backward correction? The difference between TAGMol and MolJO in Table 1 (lines 13-14) is about the same or smaller than the difference between MolJO with BC vs MolJO without BC in Table 3 and Figure 8 in terms of docking score. Since the authors claim that their improvement over TAGMol is due to guidance over atom types, it would be helpful to clarify this.
- The backward correction section is a bit hard to follow -- what exactly do authors mean by "correcting the past" if parameters from past timesteps (n-k) are being used to update p_U at the current n-th timestep? Also, is the goal of the method essentially variance reduction like in [1]?
- The guidance component based on molecular properties is very important since this is one of the key problems authors are tackling but implementation of property-guidance is in the appendix. In my opinion this should be in the main paper.
[1] https://link.springer.com/article/10.1007/s10107-016-1030-6
Methods And Evaluation Criteria: Authors demonstrate their method to improve molecule generation on an established benchmark on both constrained and unconstrained optimization. Authors evaluate their method on three types of molecular properties (affinity, QED, SA) and their combinations. The proposal of gradient-based guidance over atom types for optimizing molecular properties is cool jointly with continuous coordinates is interesting.
Theoretical Claims: I have gone through the proofs in the main paper and they seem reasonable. Equations (11) and (12) are a bit hard to read, it might help to put the RHS all on one line instead of splitting it.
Experimental Designs Or Analyses: The experiments generally make sense and authors compare with a lot of baselines in Table 1 which is good. There are several things that I would like to clarify.
- Why don't the tables contain error bar?
- Why did authors only report top-10 for MolJO and not other methods?
- For Figure 3, how did authors restrict the size of generated molecules? The table seems to have different sizes for each method (also should this be called a Figure or a Table?)
- For the ablation in Table 3, should the values in the last row (row 6) match line 14 in Table 1? If not, what are the difference between Table 3, row 6 and Table 1, row 14?
- For Table 3, it's confusing to me why the SDE from Xue et al. is called "SDE with classifier guidance", but then authors do an ablation of methods without and without guidance. Can authors clarify what they mean by SDE classifier guidance?
Supplementary Material: I have gone over sections D and F in the supplementary material.
Relation To Broader Scientific Literature: The paper tackles an important problem in the scientific community, which is generation of molecules based on the structure of a protein target site, while also optimizing for specific molecular properties, since molecule often have to meet several properties.
Essential References Not Discussed: To my knowledge, the work is not missing essential references.
Other Strengths And Weaknesses: Strengths:
- The combination of gradient-based guidance over discrete and continuous data types is an interesting application of BFNs.
- The empirical gains from the proposed backward-correction are notable based on the ablation study.
- Authors compare with a large amount of baselines and show gains on several metrics.
Weaknesses:
- The novelty of the backward correction sampling is not clear since authors write that Qu et al. implement the sampling for when k=n, and in Fig 8 write that the "strategy is robust within the range k \in (50, 200]" and n=200, so the need for a variable window size is not obvious to me since (if I understand correctly), all samples are generated over 200 steps?
- Some experimental information is unclear or not described in the main paper (listed above; importantly, information on property guidance).
Other Comments Or Suggestions: - page 6: "fourthfold" --> "four-fold"?
Questions For Authors: [1] Why are the values different between Table 3 row 6 and Table 1 row 14? Based on the writing it seems like these should be the same settings; if they are different, then authors should clarify this in the writing.
[2] Can authors clarify experimental information like: how they restrict the size of generated molecules in Fig. 3; how classifier guidance SDE differs from "without and with guidance" settings in Table 3?
[3] What are "me-better" molecules? I couldn't find a definition in the paper.
[4] Are the gains in Table 1 rows 13 vs 14 only due to guidance over atom types, or also due to BC? Authors should revise the claims if the latter is the case, or ideally provide an ablation for this (run MolJO at k=1 or MolCRAFT with BC).
[5] Why don't authors do a top-of-N comparison for other methods?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the careful reading and insightful feedback that helps improve the clarity and completeness of our work.
## Questions
**Q1: Table Value Inconsistencies**
We apologize for the confusion. The values in Table 1 and 3 were obtained from different runs. For Table 3 (ablation studies), we sampled 10 molecules per pocket instead of 100 per pocket. We will clarify this in our revision.
**Q2: Fig (Table) 3 Clarification**
Thanks for bringing up this issue that helps improve our clarity. All models use the same size specifications from DecompDiff. The differences result from some methods having failed molecule generations (e.g., invalid, not connected). We apologize for the confusion and will correct this labeling (from "Figure" to "Table") in our revision.
For guided SDE, we followed Xue et al. and perform guidance by modifying the conditional score following common practices.
**Q3: "Me-better" Definition**
Thank you for noting this oversight. "Me-better" refers to molecules with improved specific properties over existing compounds—a term borrowed from traditional drug discovery [1]. We will add the definition in the revision.
[1] Me-Better Drug Design Based on Nevirapine and Mechanism of Molecular Interactions with Y188C Mutant HIV-1 Reverse Transcriptase. https://pubmed.ncbi.nlm.nih.gov/36364174/
**Q4: Effect of Joint Guidance and Backward Correction**
Thank you for this insightful question. The ablation study is actually presented in Table 3, which directly addresses this question by isolating the contributions of each component. We apologize for the confusion caused by our initial abbreviation "w/ (guidance)" (MolJO-based) and "w/o (guidance)" (MolCRAFT-based) that can be misinterpreted as w/ or w/o (BC). We will improve the clarity in our revision.
As the reviewer suggested, we evaluated "w/ guidance, Vanilla" (row 4) that corresponds to MolJO at k=1 (guidance without BC), and "w/o guidance, BC" (row 3) that corresponds to MolCRAFT with BC. Table 3 shows that both components contribute to performance gains. Notably, the combined improvement exceeds the sum of individual improvements, suggesting these components work synergistically.
**Q5: Top-of-N Comparison**
Thanks for the great suggestion. We will add top-of-N comparisons for all methods in our revision, which generally shows what the "concentrated space" for desirable drug-like candidates looks like for generative models. Our method shows the best Success Rate (70.3%), indicating better optimization efficiency.
|Method|Success Rate|Vina Score Avg|Vina Min Avg|Vina Dock Avg|QED|SA|Div|
|---|---|---|---|---|---|---|---|
|AR|19.1%|-6.71|-7.12|-7.81|0.64|0.7|0.6|
|Pocket2Mol|40.5%|-5.8|-7.18|-8.32|0.67|0.84|0.59|
|FLAG|9.6%|50.37|6.27|-6.57|0.74|0.78|0.71|
|TargetDiff|32.6%|-7.06|-8.1|-9.31|0.64|0.65|0.67|
|DecompDiff|32.1%|-5.78|-6.73|-8.07|0.61|0.74|0.61|
|MolCRAFT|55.0%|-7.54|-8.4|-9.36|0.65|0.77|0.63|
|IPDiff|34.6%|-8.15|-9.36|-10.65|0.6|0.62|0.69|
## Additional Clarifications
**Q6: Tables 1-2 Component**
Thank you for highlighting this potential source of confusion. Yes, Tables 1-2 results include backward correction. To clarify, our contribution over TAGMol is two-fold: (1) We derived joint guidance over atom types and coordinates within the BFN framework. (2) We proposed the BC sampling algorithm that further improves optimization performance. As described in Q4, the results in Table 3 demonstrate that both contributions are significant. In our revised manuscript, we will make these distinctions clearer to avoid confusion.
**Q7: Backward Correction Explanation & Novelty**
We acknowledge that "correcting the past" is potentially confusing terminology. The correction applies to timesteps [n-k, n), while preserving parameters from [0, n-k).
As for its novelty, we develop a more flexible sampling strategy, establishing a sliding window approach that generalizes previous methods and explores a more nuanced control of variance. We appreciate the reviewer connecting this to variance reduction techniques, and we will investigate this connection further in our future work.
**Q8: Error Bars**
Thanks for the advice! We report the error bars as 95% confidence intervals for our main result in Table 1, and will add it to the Appendix.
||Vina Score|Vina Min|Vina Dock|QED|SA|
|---|---|---|---|---|---|
|AR|0.066|0.049|0.082|0.004|0.003|
|Pocket2Mol|0.063|0.058|0.097|0.003|0.002|
|TargetDiff|0.172|0.102|0.075|0.004|0.003|
|FLAG|0.778|0.525|0.142|0.003|0.002|
|DecompDiff|0.060|0.048|0.073|0.004|0.003|
|IPDiff|0.141|0.088|0.072|0.004|0.003|
|MolCRAFT|0.122|0.070|0.097|0.004|0.003|
|DecompOpt|0.415|0.210|0.528|0.011|0.006|
|TAGMol|0.175|0.088|0.135|0.004|0.003|
|Ours|0.136|0.078|0.083|0.003|0.003|
**Q9: Presentation & Typos**
We thank the reviewer for pointing these out, and we will revise our manuscript as requested. | Summary: The paper proposes MoIJO, a gradient-guided framework for SBMO. The key contributions are:
Joint gradient guidance over both continuous (coordinates) and discrete (atom types) modalities via Bayesian Flow Networks, avoiding modality inconsistency.
Backward correction strategy with a sliding window to balance exploration-exploitation.
SE(3)-equivariance preservation through energy function design.
Experiments on CrossDocked2020 show SOTA results. The method also generalizes to multi-objective optimization and drug design tasks.
Claims And Evidence: 1 The experiments lack statistical significance tests (e.g., p-values), making the improvement questionable.
Methods And Evaluation Criteria: 1 The molecule size bias is not fully addressed. Larger molecules inherently have better Vina scores (Fig. 5), but MoIJO’s superiority on size-controlled subsets (Table 4) is only briefly discussed.
Theoretical Claims: 1 The Taylor expansion (Eq. 18) assumes E(θ,t) is locally linear, which may not hold for complex energy functions. The approximation error is unquantified.
2 The SE(3)-equivariance proof assumes protein CoM is zero, but how do you keep this for real-world pockets?
Experimental Designs Or Analyses: 1 The reported RMSD ratio is based on non-symmetry-corrected values (Page 21), which may overestimate pose consistency.
2 While MoIJO outperforms baselines in strain energy (Table 9), the absolute energy values (163 kcal/mol) are still higher than reference molecules (114 kcal/mol).
Supplementary Material: All the supplementary materials has been reviewed.
Relation To Broader Scientific Literature: The work builds on BFNs and gradient-guided diffusion.
Essential References Not Discussed: GraphVF[1] Combines SE(3)-flows and GNNs for joint coordinate-type optimization.
[1] Sun, F., Zhan, Z., Guo, H., Zhang, M., & Tang, J. (2023). GraphVF: Controllable Protein-Specific 3D Molecule Generation with Variational Flow. arXiv [q-Bio.BM]. Retrieved from http://arxiv.org/abs/2304.12825
Other Strengths And Weaknesses: The BFN derivation (Appendix A) is overly condensed; a step-by-step example would improve readability.
Other Comments Or Suggestions: Suggestion: Add a schematic diagram of the backward correction process
Questions For Authors: How does the Taylor expansion in Proposition 4.1 handle highly non-convex energy landscapes? Would higher-order terms significantly affect guidance?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer's thorough reading and insightful questions, which have helped us identify areas for improved clarity and presentation. We shall address each point in our responses below as well as our revised manuscript, and we welcome further questions.
**Q1: Smoothness of Energy Landscapes**
Thanks for raising the insightful question about the limitations of first-order Taylor expansion in our approach. We have followed established guided diffusion methodology [1] which assumes that with sufficiently small step sizes, the changes in the energy landscape between consecutive steps remain modest. We acknowledge that this approach inherently assumes the energy landscape is relatively smooth in the local region of expansion, which may not hold universally for complex energy functions.
As the reviewer correctly pointed out, higher-order terms could potentially provide more accurate gradient estimates, which is an active area of research in guided diffusion models [2]. However, higher-order methods introduce computational overhead and potential numerical instability during sampling, which we will leave for future work. While the empirical results demonstrate the practical utility of our approach despite this approximation, we plan to further develop adaptive guidance schemes that adjust based on estimated local curvature.
[1] Diffusion Models Beat GANs on Image Synthesis.
[2] Inner Classifier-Free Guidance and Its Taylor Expansion for Diffusion Models.
**Q2: BFN Derivation Clarity**
We will enhance our presentation in the revision with a more detailed, step-by-step explanation of the Bayesian Flow Network derivation.
**Q3: Schematic Diagram of Backward Correction**
Thank you for this suggestion. In Figure 1D, we have a schematic diagram illustrating the backward correction process, and we will continue to make it clearer in our revision.
**Q4: Statistical Significance**
We acknowledge the importance of statistical validation. We have conducted pairwise t-tests comparing our guided Backward Correction against both Vanilla and guided SDE. The results show statistically significant improvements (p<0.05), and we will add it to our revision.
||vs. Vanilla|vs. SDE|
|---|---|---|
|Vina Score|2.63E-13|2.55E-19|
|Vina Min|3.31E-31|6.48E-19|
|Vina Dock|2.79E-35|7.84E-4|
|SA|8.10E-115|2.10E-50|
|QED|1.98E-26|1.82E-12|
**Q5: Molecule Size Bias**
The reviewer makes an excellent point regarding molecule size bias, which is a crucial issue for SBDD. We apologize for the confusion caused by Table 4, and we would like to clarify that rather than solving the size bias issue directly, our primary goal in size-controlled experiments was to demonstrate that our guidance approach effectively improves molecules across the size ranges, and consistently outperforms baselines across different molecular sizes.
For Table 4, we will improve our presentation by calculating optimal scores separately for each size (Ref & Large), which will better highlight that our improvements stem from enhanced molecular quality rather than size bias.
**Q6: SE(3)-equivariance with Real-world Pockets**
The reviewer correctly identifies a limitation for unknown protein pockets, where reference positions are required to clip and obtain the pocket region. We acknowledge this as a limitation of current SBDD methods and certainly worth exploration. For known protein pockets, we simply subtract the centroid to ensure the center of mass is at the origin.
**Q7: RMSD Ratio Without Symmetry Correction**
Thank you for this important observation. Here we followed the calculation from MolCRAFT for fair comparison with existing methods. This approach calculates symmetry-corrected RMSD between Vina Docked PDBQT and generated molecules when possible, but falls back to non-symmetry-corrected values in cases where symmetry correction cannot be applied. We appreciate the reviewer highlighting this important consideration for accurate assessment of molecular pose consistency, and we will clarify the detail in our revision and investigate the issue further in the future.
**Q8: Strain Energy Higher Than Ref**
Thank you for this important observation. Although our results are generally within reasonable ranges for computational generation, we agree there is room for improvement. Future work could incorporate additional chemical prior knowledge to further reduce strain energy. We appreciate the reviewer highlighting this point, as it identifies an important direction for continued refinement of our approach.
**Q9: Missing Citation to GraphVF**
We sincerely thank the reviewer for bringing GraphVF to our attention. This work indeed makes valuable contributions by encoding different modalities in latent space for joint coordinate-type optimization. We agree that the unified space represents a promising direction. We will update our manuscript to include this important reference and add relevant discussion on how it relates to our work. | null | null | null | null | null | null |
Playmate: Flexible Control of Portrait Animation via 3D-Implicit Space Guided Diffusion | Accept (poster) | Summary: This paper proposed a novel audio-driven DiT-based portrait animation pipeline with customized emotion control and driving video control. The major contributions are 1) a motion-decoupled module with perceptual loss and adaptive normalization, 2) an emotion-control module with DiT blocks, and 3) an implicit 3D decoupled face representation. The experimental results of video visualization demonstrate the superiority of the Playmate framework.
Claims And Evidence: The claims are supported by qualitative visualizations such as videos and figures, as well as quantitative comparison and ablation analysis.
Methods And Evaluation Criteria: The two-stage framework is well-explained, and the evaluation provides a comprehensive comparison of SOTA methods in both the HDTF benchmark and the self-collected dataset. However, it's unclear how the head pose and facial dynamics transfer and the perceptual loss improve the disentanglement, is there any detailed explanation for this?
Theoretical Claims: I have checked the theoretical formula in the method section for both perceptual loss and Adaptive Normalization.
Experimental Designs Or Analyses: I appreciate the authors for providing both video visualizations and benchmark comparison tables. However, I still have a few minor concerns:
1) Alongside the ablations on CFG scales (Table 2) and the Adaptive Norm (Figure 7), it would be beneficial to include a quantitative analysis of each proposed module—specifically, the perceptual loss and the emotion-control module.
2) How does the method handle scenarios in which both the audio and the driving video provide conditions for localized lip movement? Is there a mechanism to balance or integrate these dual inputs?
3) Although the supplementary website offers visual comparisons, a user study could further substantiate the effectiveness of the proposed Playmate framework in comparison to baseline methods. (Optional)
Supplementary Material: I have reviewed the supplementary materials in both PDF and the anonymous website.
Relation To Broader Scientific Literature: The proposed method contributes to the portrait animation with more flexible control.
Essential References Not Discussed: I believe all essential references have been included already.
Other Strengths And Weaknesses: Please see the "Methods And Evaluation Criteria" and "Experimental Designs Or Analyses" sections.
Other Comments Or Suggestions: In general, I believe this is a well-written paper in good shape. The existing experimental result is convincing and demonstrates expressive facial motions and head movements. The method is novel and offers flexible control.
Questions For Authors: Please see the "Methods And Evaluation Criteria" and "Experimental Designs Or Analyses" sections.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your feedback. We appreciate your recognition of our method's innovation and applicability. Here are our responses to your comments.
**Q1:However, it's unclear how the head pose and facial dynamics transfer and the perceptual loss improve the disentanglement, is there any detailed explanation for this?**
Our transfer loss is primarily inspired by DPE[1] and VASA-1[2]. DPE introduces a bidirectional cyclic training strategy akin to CycleGAN's A2B2A' and B2A2B' pathways to achieve the disentanglement of pose and expression. Our approach is similar but constructs bidirectional, non-cyclic pathways. Specifically, we randomly sample two frames $I$ and $J$ from a video clip; these frames are characterized by their expression and pose attributes: $exp_i$, $pose_i$ for frame $I$, and $exp_j$, $pose_j$ for frame $J$. After applying attribute transfer, we obtain images $I_{exp_i,pose_j}$ and $J_{exp_j,pose_i}$. If the attribute transfer technique is perfect, we should obtain two portrait images with identical expressions and poses. To achieve this goal, we apply a perceptual loss to make the synthetic results appear more realistic.
\[1] Pang Y, Zhang Y, Quan W, et al. Dpe: Disentanglement of pose and expression for general video portrait editing
\[2] Xu S, Chen G, Guo Y X, et al. Vasa-1: Lifelike audio-driven talking faces generated in real time
**Q2:Alongside the ablations on CFG scales (Table 2) and the Adaptive Norm (Figure 7), it would be beneficial to include a quantitative analysis of each proposed module—specifically, the perceptual loss and the emotion-control module.**
Thank you very much for the reminder.
(1)About the transfer loss, the purpose of this loss is to achieve more precise disentanglement of facial attributes. Due to time constraints, we calculated the APD (Average Pose Distance) metric of Playmate on two datasets (HDTF and our dataset), as shown in the table below.
|Dataset|APD-jaw|APD-pitch|APD-roll|
|----|:----:|:----:|:----:|
|HDTF|${3.003}^\circ$|${1.308}^\circ$|${1.214}^\circ$|
|Our Dataset|${3.714}^\circ$|${1.751}^\circ$|${1.398}^\circ$|
(2)About the emotion-control module, we have compared our method with several approaches, as shown in the table below, demonstrating the effectiveness of our expression control and achieving superior generation quality in terms of emotional expression.
|Methods|FID$\downarrow$|FVD$\downarrow$|LPIPS$\downarrow$|Acc(Emo)$\uparrow$|
|----|:----:|:----:|:----:|:----:|
|EAMM|111.710|210.275|0.223|0.160|
|EAT|95.085|166.316|0.138|0.450|
|DreamTalk|119.032|199.962|0.246|0.350|
|EDTalk|135.215|221.897|0.289|0.460|
|Playmate|68.234|149.837|0.112|0.550|
**Q3:How does the method handle scenarios in which both the audio and the driving video provide conditions for localized lip movement? Is there a mechanism to balance or integrate these dual inputs?**
Since in most cases users expect to use audio to drive lip movements, when both audio and a driving video are available as inputs, we default to using audio for driving lip movements and expressions, while using the driving video to control the pose. In fact, we support multiple driving modes. For example, audio can drive lip movements, while the driving video controls the expression and pose, and vice versa. As shown in [video1](https://playmate111.github.io/videos/pose_control_demo/female.mp4) and [video2](https://playmate111.github.io/videos/pose_control_demo/male.mp4), the first row displays the reference image, the first column shows the pose control mode, and the remaining cells present the generated results. The lip sync is synchronized with the audio input, whereas the pose control is achieved through various driving mechanisms (e.g., driving video, preset mode, fixed pose).
**Q4:Although the supplementary website offers visual comparisons, a user study could further substantiate the effectiveness of the proposed Playmate framework in comparison to baseline methods.**
Thank you for your suggestion. Due to time constraints, we conducted a user study involving 10 participants who rated videos using the MOS (Mean Opinion Score) rating method, on a scale of 1 to 5, across four metrics: Lip Sync (LS), Video Definition (VD), Naturalness (N), and Visual Appeal (VA). As illustrated in the table below, Playmate has a notable advantage in the VD and VA metrics. While the LS and N metrics are slightly lower than Sonic's, they still outperform those of other methods, showcasing Playmate's strong competitiveness.
|Methods|LS$\uparrow$|VD$\uparrow$|N$\uparrow$|VA$\uparrow$|
|----|:----:|:----:|:----:|:----:|
|JoyVASA|2.500|2.286|1.714|1.929|
|Hallo|2.964|2.929|3.071|2.893|
|Hallo2|3.036|2.929|2.893|2.786|
|MEMO|3.321|3.036|3.179|3.143|
|Sonic|3.821|3.071|3.750|3.500|
|Playmate|3.750|3.857|3.464|3.643|
---
Rebuttal Comment 1.1:
Comment: After reading the other reviewers’ comments and the authors’ rebuttal, I sincerely appreciate the authors’ effort in providing additional visualizations for pose and emotion control videos, more comparison results, as well as a user study. The ablation study in Q2 has effectively resolved my concerns regarding the proposed modules.
While I find the user study useful, the sample size of only 10 participants limits its statistical significance. I encourage the authors to consider expanding the study to a larger scale in the final version.
Overall, my concerns regarding the experimental design and analysis have been sufficiently addressed. I will maintain my original rating of accept.
---
Reply to Comment 1.1.1:
Comment: Thank you for your recognition and encouragement! We are delighted to receive your suggestions and will implement them in our revision and future work. We have reorganized a user study involving 50 participants, but due to time constraints, we were unable to complete the experiment and compile the data before the response deadline for this submission. We will continue to carry out this work and update the results in the final version. Furthermore, we will work on this in our future endeavors and keep the community updated on our progress and milestones. | Summary: The work introduces Playmat, a diffusion transformer based talking face generation model. Playmate is able to generate talking heads (portrait animation) given reference image and audio signal, as well as an emotion signal. It splits training into two stages, first training the talking face generation model (diffusion transformer backbone), and then training an emotional control module while keeping the backbone fixed. Playmate makes use of the same 3D motion representation from face-vid2iid and Liveportrait. The results demonstrate competitive quantative and qualitative performance of the method, with good video generation quality.
Claims And Evidence: - **precise motion decoupling**: This line is used throughout the paper however I found its explanation (a) confusing and (b) lacking evidence. Namely, it is described to decouple expression and head pose, however section 4.3 does not explicitly investigate this. Section 4.3 performs an image qualitative analysis of certain failure cases with and without the adaptive norm, however I fail to see how these disjoint artifacts related to improved pose and expression decoupling. The lack of video in the supplementary material makes further investigation impossible.
- **state-of-the-art lip synchronization**/**superior video quality**: as shown both quantitatively and qualitatively, this is not the case, and the strength of this claim is not validated. Sonic achieves superior performance in this regard. Further, qualitative videos in the supplementary material demonstrate still that Sonic has better video quality given its overall expression and pose quality over Playmate, in my opinion.
Methods And Evaluation Criteria: In general, yes. The metrics and datasets follow related art, and the baselines are sound choices.
Theoretical Claims: The paper does not make any theoretical claims.
Experimental Designs Or Analyses: In general, the experimental designs are sound, with the exception of the following:
- It is common practice in this field to include a user study. While quantitative results are good, and performing good on benchmarks is important, it is not the full picture. A user study enables quantifying the qualitative results of the work in terms of user preference, which is the intended use case of this technology, and is critical to evaluating the quality of the method.
- While quantitative results show good performance on image/video metrics, they fall short in lip synchronization. This is in contrast to the repeated claim and analysis that Playmate achieves state-of-the-art performance in terms of lip synchronization (abstract/L50/col#2, conclusion/L414/col#2) - note however that caption in Figure 2 makes the alternative "competitive" claim. This conflicting language is misleading to the actual performance of the model.
- Further, analysis that video quality is superior to that of other methods, while shown quantitatively, is not supported qualitatively. Primarily, the method falls short in lip sync and expression realism compared mainly to Sonic. in talking head work like this, video quality is highly correlated to the animation quality, and I am not convinced of Playmates results. Playmate also suffers from odd artifacts and expressions, as shown in the videos on the supplementary material website, often have mouth ajar, or eyes bulging.
- These artifacts reduce the impact of analyses towards "precise motion decoupling" as well
- This further highlights the importance of a user study
Supplementary Material: Yes, all of it and all videos present. These are appreciated.
Relation To Broader Scientific Literature: The contributions of this work are primarily scoped to talking head generation. The techniques used in this work pull from the broader literature, however the contributions do not generalize beyond their scope. This statement is made objectively, and does not demote the work itself.
Essential References Not Discussed: Related art appears sound.
Other Strengths And Weaknesses: - **Strength**: The paper does introduce a novel orchestration of components into a novel training framework and architecture. This is a strength, and is a good combination of prior with a few new components.
- **Strength**: the emotion control does seem to work well.
- **Weakness*: The writing, particularly around claims and introduction of components is often embellished. The language used is often "gratuitously grandiose" - I use this language primarily to illustrate my meaning. This embellishment feeds the narrative of overselling the work's contributions. This is a minor concern and could be addressed in revision. For example,
- "meticulously designed": over embellishing language
- "specialized Proj modules": these are just linear layers it appears
- The authors often say they "introduce" a component, which is misleading terminology that would indicate this is a novel introduction of this technique, when it is in fact not
- **Weakness**: the strength of the novelty of the work however is not proven in experiments in my opinion, and the qualitative results do not convince me. While emotion control seems to work well, the overall expression quality is not great and a little uncanny still.
Other Comments Or Suggestions: see other.
Questions For Authors: - I am confused about you "Adaptive Normalization" - from the definition, using pre-computed global and private mead/std values is not adaptive, and is instead fixed. Adaptive Normalization, as the term is normally used for, involves learnable parameters in the network, as is commonly used for DiT models. Can you please explain this further, or make a distinction between the two concepts?
- Further, at inference, which private mean/std is used? How does this work for unseen speakers?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We are grateful for your review and valuable comments, and we hope our response fully resolves your concerns.
**Q1:precise motion decoupling... (a) confusing and (b) lacking evidence...**
About the motion decoupling. We emphasize motion decoupling because it is the foundation for Playmate to achieve various flexible driving combinations. When multiple control conditions exist, we can provide multiple driving modes. For example, a driving video can be used to control the pose, while audio is used to drive lip movements. This is also the most significant distinction between Playmate and methods like Sonic and EMO. Sonic only takes a reference image and audio as inputs, with the generated results (lip, pose, and expressions) all driven by the audio, without allowing users to specify different driving methods. We have uploaded multiple pose control videos and emotion control videos on our anonymous project website([video1](https://playmate111.github.io/videos/pose_control_demo/female.mp4) and [video2](https://playmate111.github.io/videos/pose_control_demo/male.mp4)). The lip sync is synchronized with the audio input, whereas the pose control is achieved through various driving mechanisms(e.g., driving video, preset mode, fixed pose).
**Q2:state-of-the-art lip synchronization/superior video quality...Sonic achieves superior performance in this regard...**
Thank you very much for the reminder. Although our quantitative metrics for lip synchronization on the two test sets are marginally lower than Sonic's, they still outperform other compared methods. Furthermore, across the remaining four comparison metrics, our method consistently exceeds the performance of all other competing approaches. We have uploaded more comparison videos with Sonic to our anonymous project website([video3](https://playmate111.github.io/videos/vs_sonic/0.mp4), [video4](https://playmate111.github.io/videos/vs_sonic/1.mp4), [video5](https://playmate111.github.io/videos/vs_sonic/2.mp4) and [video6](https://playmate111.github.io/videos/vs_sonic/3.mp4)). These videos show that, qualitatively, our lip synchronization is not far behind Sonic's. Moreover, in terms of video clarity, we are significantly better, especially in areas like the teeth. Regarding the differences from Sonic, let us briefly explain here. Sonic is a purely audio-driven algorithm that generates all features of the portrait, including lip movements, expressions, pose, etc., based on audio. This indicates that its driving flexibility is limited. In contrast, we achieve multiple controllable portrait driving methods by constructing a precise attribute disentanglement space, providing users with various flexible driving options. The implementation difficulty of this decoupling and then driving approach is higher than that of a simple audio-driven method. This is also the reason why we emphasize "Flexible Control" in our paper title.
**Q3:About the user study.**
Thank you for your suggestion. This question was also raised by Reviewer SMci. Please refer to our response to Reviewer SMci's question Q4.
**Q4:While quantitative results show good performance on image/video metrics...This conflicting language is misleading to the actual performance of the model.**
Thank you very much for the reminder. We will correct these issues in the revised version and will also carefully revise the full text.
**Q5:Further, analysis that video quality is superior to that of other methods, while...**
For details on the comparison with Sonic, see the response provided in Q2.
**Q6:the strength of the novelty of the work however is not proven...**
Thank you for your skepticism. In this response, we have added multiple tests(including responses to other reviewers), and uploaded more visual results on anonymous website, hoping that these results will dispel your doubts and concerns in this regard.
**Q7:About the Adaptive Normalization.**
Regarding Adaptive Normalization, our approach focuses on adapting to the dimensions of facial attributes. We apply distinct means and standard deviations for pose and expression, which provides additional prior information and reduces the learning complexity for the model. This facilitates more flexible control over the generated outputs. In the inference stage, we have the flexibility to derive these means and standard deviations from various available sources, thereby enabling more precise and controllable driving effects.
**Q8:At inference, which private mean/std is used...**
This question was also raised by Reviewer 54Zx. Please refer to our response to Reviewer 54Zx's question Q3.
**W1:The writing, particularly around claims and introduction of...**
We will correct these issues in the revised version and will also carefully revise the full text.
**W2:the strength of the novelty of the work however is not proven in experiments in my opinion, and...**
Due to the character limit on response, please refer to our response to Q6.
---
Rebuttal Comment 1.1:
Comment: The rebuttal sufficiently address some of my more pressing concerns of performance comparison to methods like Sonic. While falling short in certain areas, the argument *for* flexible control makes sense and I appreciate the clarification. Pending improvements to the language mentioned, I am raising to a weak accept.
---
Reply to Comment 1.1.1:
Comment: We are delighted to receive your response and suggestions. Thank you for raising your evaluation and for your support. Due to the limitations of the discussion strategy, we are unable to directly modify the submitted PDF file or submit a new PDF file in this stage to showcase our revised paper. However, we have thoroughly reviewed and made amendments to the paper. The key updates include:
- Revised the statements regarding lip synchronization performance(abstract/L50/col#2, conclusion/L414/col#2), corrected to "exhibiting strong competitiveness in lip synchronization".
- Revised "specialized Proj modules" in Section 3.2(L235/col#1) to "Proj modules".
- Revised "introduce face representation techniques" in Section 2.3(L113/col#2) to "utilize face representation techniques".
- Revised "introduce the pairwise head pose and facial dynamics transfer loss" in Section 3.1(L197/col#1) to "utilize the pairwise head pose and facial dynamics transfer loss".
- Revised "introduce a self-attention module" in Section 3.2(L230/col#1) to "utilize a self-attention module".
The aforementioned modifications will be reflected in the final version. Additionally, we will continue to strive to optimize the overall performance of Playmate in our future work and keep the community updated on our progress and milestones. | Summary: This work targets to generate lifelike talking videos for arbitrary identity, guided by a speech clip. Emotional and pose conditions are carefully devised to control the talking status. Specifically, a motion-decoupled module and emotion-control module are designed to enhance the performance.
Claims And Evidence: Authors claim superior talking head performance and compare it with state-of-art approaches. However, for pose control and emotion control, the comparison and experiment validation are missing.
Methods And Evaluation Criteria: The overall approach is composed with two components. One is the enhanced disentanglement in latent space while the other is audio-emotion conditioned diffusion transformer. Both are adaptively modified to accomplish current function. The talking video evaluation criteria follows previous approaches and is plausible.
Theoretical Claims: There are no theoretical claims involved in this paper.
Experimental Designs Or Analyses: 1. Authors design emotional control strategies but it seems to lack comparison with emotion-condition based state-of-art approaches.
Supplementary Material: Authors include an appendix and an anonymous website, which provide valuable information.
Relation To Broader Scientific Literature: This approach introduces a diffusion transformer to the talking head generation field. It is interesting for the community to know such a design can enhance the talking video generation performance.
Essential References Not Discussed: N.A
Other Strengths And Weaknesses: 1. The attached website does not include head pose control videos. It is hard to evaluate the pose-control performance.
2. Line 215 includes some spelling mistakes. “to enhance”.
Other Comments Or Suggestions: N.A
Questions For Authors: 1. How is the training stability of applying head pose and facial dynamics transfer loss in equation 4? Any strategy to evaluate its effectiveness?
2. Is there any possibility to evaluate the accuracy of pose conditional control?
For instance, facial reconstruction algorithms for the head pose angles.
3. For the motion-decoupled module, authors introduce adaptive normalization. In the inference stage, how does denormalization operate?
4. About the collected dataset, it seems the overall performance becomes worse for most approaches. What is the difference between the collected dataset and the HDTF dataset? How many videos are involved in the collected datasets? Will this dataset be released?
5. The presented videos show high image quality, but the poses do not showcase too much variability.
6. It works surprisingly well with only two DiT blocks inserted before the MLP head for emotional control, any explanations?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: First, we would like to thank the reviewer for your careful reading and providing numerous constructive comments! Below we address the concerns mentioned in the review.
**W1:The attached website does not include head pose control videos.**
Thank you for pointing this out. We have uploaded multiple pose control videos and emotion control videos on our anonymous project website. For the pose control videos([video1](https://playmate111.github.io/videos/pose_control_demo/female.mp4) and [video2](https://playmate111.github.io/videos/pose_control_demo/male.mp4)), the lip sync is synchronized with the audio input, whereas the pose control is achieved through various driving mechanisms(e.g., driving video, preset mode, fixed pose). Regarding the emotion control videos, we have compared our method with several approaches([video3](https://playmate111.github.io/videos/emotion_vs/male_10s.mp4) and [video4](https://playmate111.github.io/videos/emotion_vs/female.mp4)). Additionally, we conducted quantitative comparisons on emotion control accuracy and video quality, please refer to our response to Reviewer SMci's Q2.
**W2:Line 215 includes some spelling mistakes. “to enhance”.**
Thank you very much for the reminder. We will correct these issues in the revised version and also carefully revise the full text.
**Q1:The training stability of the transfer loss.**
Regarding the training stability, since we use $\mathcal{M}$ from LivePortrait as pre-trained model, which inherently possesses face attribute disentanglement capabilities, fine-tuning with transfer loss ensures both training stability and rapid convergence. We have uploaded sample image([image1](https://playmate111.github.io/src/Disentanglement.png)) on website to demonstrate disentanglement between head pose and facial dynamics.
**Q2:Evaluate the accuracy of pose conditional control.**
In the field of image animation, Average Keypoint Distance (AKD) and Average Pose Distance (APD) are commonly used to evaluate pose control performance. We calculated the APD metric of Playmate on two datasets (HDTF and our dataset), as shown in the tables below.
|Dataset|APD-jaw|APD-pitch|APD-roll|
|----|:----:|:----:|:----:|
|HDTF|${3.003}^\circ$|${1.308}^\circ$|${1.214}^\circ$|
|Our Dataset|${3.714}^\circ$|${1.751}^\circ$|${1.398}^\circ$|
Additionally, we have uploaded sample pose visualization videos on our anonymous project website([video5](https://playmate111.github.io/videos/poseControl/32.mp4) and [video6](https://playmate111.github.io/videos/poseControl/2.mp4)), demonstrating that Playmate achieves good generation quality even in pure pose-driven scenarios.
**Q3:In the inference stage, how does denormalization operate?**
In the inference stage, for expression, the mean and standard deviation are the same used during training, calculated from all training data. For head pose, the mean and standard deviation are optional and can be computed from user-provided driving videos. If not provided, they can either be set to default parameters(e.g., computed from forward-looking videos) or calculated from randomly selected videos in the dataset.
**Q4:About the collected dataset.**
Our own dataset involved about 39k clips, featuring more complex scenes, diverse styles, higher clarity, and greater challenges, which leads to worse overall performance for most approaches. We'll try our best to release the collected dataset, but we hope you understand that both we and the community need to exercise caution when releasing rich datasets. This caution stems from concerns regarding potential risks, particularly those related to individual privacy and likeness rights.
**Q5:The poses do not showcase too much variability.**
Thank you for your interest in this section. In fact, we support multiple pose driving methods. As shown in our newly uploaded sample videos, under the same audio, different pose control conditions result in different poses generated by Playmate. We support various methods to enhance pose variability, such as using videos with significant pose variations as driving videos, or manually adjusting the rotation value of the pose.
**Q6:It works surprisingly well with only two DiT blocks inserted before the MLP head for emotional control, any explanations?**
Thank you for your affirmation of our emotion control module. We believe its effectiveness is due to two main reasons: (1) Precise latent space construction and attribute disentanglement, enabling the model to effectively correlate emotion control with emotion features after decoupling attributes like pose and expression, leading to effective learning; (2) A two-stage training approach, where we first stabilize audio-driven training, then train the emotion control module separately while keeping most weights fixed, reducing training complexity. Training all parameters simultaneously might prioritize other labels, rendering the emotion control signals ineffective. | null | null | null | null | null | null | null | null |
From Black Boxes to Transparent Minds: Evaluating and Enhancing the Theory of Mind in Multimodal Large Language Models | Accept (poster) | Summary: This paper studies MLLMs’ ability on theory of mind. The authors first construct a benchmark testing MLLMs’ first-order and second-order theory of mind based on grid world setting, and then probe MLLMs’ understanding of beliefs with linear probing. Experiments show that some attention heads show distinguishment of true and false beliefs. Finally, the authors propose improving MLLM theory of mind through attention calibration.
In summary, the authors study a valuable topic, but the proposed benchmarks is flawed. The analysis and proposed method is not novel and efficient enough. A revision of the benchmark, further analysis, and a method with more novelty can improve the paper.
Claims And Evidence: The authors compare the performance of different MLLMs and humans on the proposed benchmarks and report all the results. However, there are some concerns.
How do the authors define “robustness”? does humans’ high accuracy really imply the robustness of the proposed benchmarks?
What is the definition of “error-transfer” nature?
Methods And Evaluation Criteria: The benchmark can test MLLMs’ theory of mind under proper settings.
The authors proposed a method to improve MLLM theory of mind. However, the proposed methods is too straightforward and lacking in novelty. The figure in Suppl. Sec. E shows that the method relies heavily on hyperparameter tuning and can not show improvement on TB and FB tasks at the same time.
Theoretical Claims: There are no theoretical claims in this paper.
Experimental Designs Or Analyses: The setting on video-only evaluation is flawed. If the testees are not told that agents can not know what is happening outside when the door is closed, they will make mistakes intrinsically. Such errors do not mean that they do not have good theory of mind. Instead, they are misled by the authors.
In Suppl. Fig. 11 and Fig. 12, the authors give two different captions to one video snapshot, which is insensible. The same video can not represent the situations where there is a timing and where there is no timing. This also shows the intrinsic flaws in the video-only setting.
In Suppl. Fig. 12 and Fig. 13, the authors refer to “green agent”, but there is no green agent in the setting. The “Question” asks about the white agent’s belief, but the “belief true” and “belief false” show the yellow agent’s belief. Therefore, there are typos or maybe even wrong annotations in the presented data example.
In Suppl. Sec. B, the authors do not explain the meaning of 𝑦𝑝𝑟𝑜𝑡𝑎𝑔𝑜𝑛𝑖𝑠𝑡, 𝑦𝑝𝑎𝑟𝑡𝑖𝑐𝑖𝑝𝑎𝑛𝑡, and 𝑦𝑜𝑚𝑛𝑖𝑠𝑖𝑒𝑛𝑡, making it hard for readers to understand the details. The content in the “Belief” row may reflect the authors’ misunderstanding of first-order and second-order belief. The authors should further clarify.
Supplementary Material: I have viewed all parts of the supplementary material.
Relation To Broader Scientific Literature: The key contributions are related to MLLM benchmarking, MLLM representation analysis, and MLLM hallucination mitigation.
Essential References Not Discussed: The authors do not point out previous grid-based benchmarks (L149-L153), making it hard for readers to understand related work. If the authors are the first to propose grid-based benchmarks, they should claim confidently. If the authors are not, they should explicitly cite previous grid-based benchmarks.
The authors did not discuss works on general MLLM benchmarks such as MMBench [1], SEED-Bench [2], MV-Bench [3], MM-Vet [4], R-Bench [5], etc.
[1] Liu, Yuan, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan et al. “Mmbench: Is your multi-modal model an all-around player?.” In Computer Vision – ECCV 2024: 18th European Conference, Milan, Italy, September 29–October 4, 2024, Proceedings, Part VI, pp. 216-233. Cham: Springer Nature Switzerland, 2024.
[2] Li, Bohao, Yuying Ge, Yixiao Ge, Guangzhi Wang, Rui Wang, Ruimao Zhang, and Ying Shan. “Seed-bench: Benchmarking multimodal large language models.” In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13299-13308. 2024.
[3] Li, Kunchang, Yali Wang, Yinan He, Yizhuo Li, Yi Wang, Yi Liu, Zun Wang et al. “Mvbench: A comprehensive multi-modal video understanding benchmark.” In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 22195-22206. 2024.
[4] Yu, Weihao, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, and Lijuan Wang. “MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities.” In International Conference on Machine Learning, pp. 57730-57754. PMLR, 2024.
[5] Wu, Mingrui, Jiayi Ji, Oucheng Huang, Jiale Li, Yuhang Wu, Xiaoshuai Sun, and Rongrong Ji. “Evaluating and Analyzing Relationship Hallucinations in Large Vision-Language Models.” In International Conference on Machine Learning, pp. 53553-53570. PMLR, 2024.
Other Strengths And Weaknesses: See the reviews above.
Other Comments Or Suggestions: Some cited papers have published versions such as [1, 2]. The authors should cite published versions to prove that their research field is well-recognized by the community.
[1] Shapira, Natalie, Mosh Levy, Seyed Hossein Alavi, Xuhui Zhou, Yejin Choi, Yoav Goldberg, Maarten Sap, and Vered Shwartz. “Clever Hans or Neural Theory of Mind? Stress Testing Social Reasoning in Large Language Models.” In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 2257-2273. 2024.
[2] Sap, Maarten, Ronan Le Bras, Daniel Fried, and Yejin Choi. “Neural Theory-of-Mind? On the Limits of Social Intelligence in Large LMs.” In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pp. 3762-3780. 2022.
Questions For Authors: See above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your helpful and constructive reviews. We respond to your concerns and questions below.
# Claims And Evidence
The term "robustness" refers to the internal consistency of our task design. The GridToM dataset is generated via a fully automated pipeline with systematically controlled scenarios.
Our human experiments are intended to confirm that the auto-generated scenarios and reasoning chains are intuitive and interpretable to humans—thus indirectly validating the method's reliability.
To avoid ambiguity, we will revise "robustness" to "consistency" for clarity.
In L331, our intention was to reference the psychological "unexpected transfer task", where false beliefs arise from unobserved changes. The weaker TB performance may be due to the model overgeneralizing from false-belief patterns and ignoring key multimodal cues.
We will revise the terminology to avoid ambiguity.
# Methods And Evaluation Criteria
To our knowledge, we are the first to enhance MLLMs' ToM abilities through targeted attention direction detection and intervention. Our method is not highly sensitive to the choice of hyperparameters. The large range of the hyperparameter Alpha was selected to test the model's limits (interference beyond this range could lead to the failure of all responses).
Our method improves VLMs' ToM performance through interpretable interventions based on internal representations and input perturbations, not hyperparameter tuning. It requires no training or architecture changes, making it generalizable to other reasoning tasks.
Appendix E explores a wide hyperparameter range. We found that the effect is limited to a valid interval—outside of which responses fail. Alpha remains effective within the range of [-50, 50], while the choice of K depends on the number of hidden heads in the model and shows consistent effectiveness across all intervention tasks. Within the valid range, both hyperparameters affect model performance by no more than 10% on average.
In the revision, we will emphasize this stability and avoid confusion about performance fluctuations from extreme values.
We also demonstrate gains on both TB and FB tasks, as shown by the BOTH metric in Table 1.
# Experimental Designs Or Analyses
Testees were not misled. In the video-only condition, core rules—including that closed doors block perception—were clearly explained. The removed text was narrative only and did not affect task understanding. We will clarify this in the revision.
We acknowledge a caption error in Fig. 11. It depicts a first-order belief task for both TB and FB, as correctly stated in the main text (L767–L769). Reviewer ummn also noted this.
Fig. 12 includes two captions for two second-order TB videos, as shown in Fig. 7. All videos include temporal annotations.
In L656, the timing setup refers to the presence of a timeline and corresponding timestamps for events in all scenarios. For example, whether the yellow agent's door is closed at the moment the white agent enters the red room determines whether the white agent can correctly infer the yellow agent's belief.
Fig. 12 and Fig. 13 contain textual errors in the appendix. The second question in Fig. 12 and the first in Fig. 13 should read: "Where does the white character think the yellow character thinks the white character should be?"
The mention of a green character is a typographical error limited to the appendix. The main text and experimental setup are correct. We will fix this in the final version.
Regarding "Question": Fig. 12 and Fig. 13 illustrate second-order belief reasoning, which involves inferring what one agent believes about another agent's belief. The content of the second-order belief is, by definition, the first-order belief itself. This is not an error.
As for Suppl. Sec. B, we acknowledge that some symbols were not explicitly defined. However, their meanings are clearly explained in Section 4.2:
- $y_{protagonist}$: protagonist's belief label
- $y_{participant}$: participant's belief label
- $y_{omniscient}$: omniscient's belief label
The "Belief" row shows true/false belief combinations across perspectives, essential for second-order reasoning. These tasks involve nested beliefs (e.g., what the participant believes about the protagonist's belief), forming a natural 2×2 structure, as shown in Fig. 7 and Suppl. Sec. B.2.
# References Not Discussed
Benchmarks like MMBench focus on general MLLM capabilities, while our work targets ToM reasoning—nested beliefs, perspective-taking, and multi-agent cognition—which these benchmarks do not cover. Our task-specific, cognitively grounded framework includes structured annotations for probing reasoning depth. We will mention these benchmarks to highlight the distinction and complementarity of our approach.
# Other Comments
Thank you for pointing out relevant related works. We will update Shapira et al. to its EACL 2024 version and add Sap et al. (EMNLP 2022) to the related work section.
---
Rebuttal Comment 1.1:
Comment: ## Claims and Evidence
Thanks for the detailed explanation.
## Methods and Evaluation Criteria
Thanks for the detailed explanation. I do notice that in Fig. 17 most settings can help improve Qwen2's performance on TB task. However, I also noticed some other phenomena in the figures. For example, in Fig. 15, "+protagonist,K=16,$\alpha$=-40" achieves best performance on First-order TB task. However, the same setting undermines LLaVA-NeXT-Video-7B-hf's performance on first-order FB task, as shown in Fig. 16. The same for many other settings for LLaVA-Next-Video-7B-hf. Similarly, in Fig. 18, most settings undermine the performance on First-order FB task. How do the authors explain these phenomena? It seems that these phenomena are in conflict with Tab. 1.
## Experimental Designs Or Analyses
Thanks for the detailed explanation. Most of my concerns have been addressed. But I am still concerned whether the two videos for second order FB tasks in Fig. 7 are substantially different so that the models and testees will not be misled. Can the videos explicitly show the timing? 4 frames selected from the source videos are inadequate. A figure like Fig. 8 and Fig. 9 will be better.
The correspodance between videos and captions is in chaos. Only after the authors' explanation can I understand. I suggest that the authors should correct all the typos and explicitly claim the correspondance between video clips and captions, so that the readers can understand without difficulty.
Thanks for the following response. All my concerns has been solved, I increased my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your prompt response. In regard to the questions and concerns you raised, we provide the following reply.
# Methods and Evaluation Criteria
We believe that the phenomenon you mentioned actually corroborates the conclusions presented in Table 1. Regarding the identical settings in Figures 15 and 16, for instance, "+protagonist, K=16, $\alpha$=-40," the LLAVA-NEXT-VIDEO-7B-HF model exhibits a "symmetric" performance following interference. We interpret this as an indication of the "contradictory nature" between the TB and FB tasks, which is consistent with the results of other models listed in Table 1 (e.g., ChatGPT 4o, DOUBAO-1.5-VISION-PRO, QWEN2-VL). These models face difficulties when simultaneously evaluating both TB and FB tasks. Specifically, these models show high sensitivity to the FB task, and this sensitivity impacts the model’s judgment on the TB task, leading the model to exhibit beliefs that should only appear in the FB task when faced with the TB task. Our benchmark tests intuitively highlight this challenge.
Furthermore, Figure 18 demonstrates the effect of our method on the interference for the QWEN2-VL model in the FB task. Since this model is more sensitive to the FB task (FB task accuracy: 97.0%, TB task accuracy: 26.6%), although our method provides limited improvement on the FB task, it successfully maintains relatively stable performance within a reasonable range (0.92–0.98), ensuring a limited decrease in accuracy. Meanwhile, QWEN2-VL also exhibits the aforementioned "contradictory nature," even though its performance is below the baseline. The newly scaled figure with the axes can be viewed here: https://anonymous.4open.science/r/icml25-CE4F.
Based on the above discussion, we attempted to use the same interference settings with the opposite Alpha hyperparameter (For the LLAVA-NEXT-VIDEO-7B-HF model, we used the following settings: TB task: +protagonist, K=16, $\alpha$=-40; FB task: +protagonist, K=16, $\alpha$=40). As a result, we obtained a BOTH metric of 34.4% (with TB accuracy at 63.8% and FB accuracy at 51.6%), and the QWEN2-VL model improved to 55.6%. However, we maintain that the original results (with the same Alpha hyperparameter settings) serve as an objective discussion point, as this intriguing phenomenon emerged from the experiments and lacks theoretical support at this stage. Nonetheless, your question has made us realize that this phenomenon warrants further discussion to advance the community’s understanding, and we will include this in the revised manuscript.
# Experimental Designs Or Analyses
Yes, the videos in Figure 7 are all different; in fact, Figure 7 contains four distinct videos. By observing the states of the room switches in the green and red rooms at frames 0, 11, 19, and 36, it is evident that they are all different. A key event influencing the beliefs of both agents is whether the green room door, where the yellow agent is located, is closed when the white agent first enters the red room, and whether the white agent observes the closing of the green room door. This directly determines the type of second-order ToM task.
In our experiment, we actually used 7 frames (L139-142), including 4 key frames and 3 intermediate frames. The table in Figure 7 is designed to show the state of the room switches for the four key frames to highlight the crucial events, while the 3 intermediate frames maintain the visual story coherence for the MLLMs. We selected 7 frames for two main reasons: first, to ensure that the input token count does not exceed the maximum token limit of any MLLM to avoid automatic truncation of information; second, to keep the input information both concise and complete. Through experiments on Initial Belief, First-order Belief, and Second-order Belief (Tables 3 and 1), we verified that MLLMs can capture complete visual information with the 7 frames.
Additionally, we will correct all typographical errors and provide clearer explanations of the dataset.
**Once again, thank you for all your valuable suggestions.**
Thank you for the updated score and for taking the time to consider our rebuttal. We sincerely appreciate your recognition. We will make sure to address all the issues you previously raised in the final version of the paper. | Summary: This paper develops a new approach to evaluate Theory of Mind (ToM) abilities of Large Language Models. Taking as a starting point the potential limitations of previous ToM experiments (difficulties in capturing an agent’s perception, ToM tasks not addressing internal representations), it designs a specific test environment (GridToM), in which context and cognitive perspective information can be controlled. This benchmark is used to test five state-of-the-art multimodal LLM with human raters as a baseline. Subsequently, the authors apply an activation inference strategy to two of the LLMs (LLAVA-NEXT-VIDEO-7B-HF, QWEN2-VL-7B-INSTRUCT) resulting on improved ToM performance across tasks.
Claims And Evidence: The main objective is benchmark development, for which claims are validated through LLM performance comparison and human users ground truth.
Methods And Evaluation Criteria: The paper is itself on an evaluation method and in my view follows best practice, in particular in view of context generation through the 'dynamics' of the environment, which produced multiple instances of ToM problems, from a consistent set of principles.
Results are clearly and transparently reported for both first-order and second-order aspects.
There is a good sample of LLM being tested across parameter size and instruction-tuned types of models.
One limitation could be that the binary options in the benchmark may contribute in part to the high performance observed.
Theoretical Claims: N/A
Experimental Designs Or Analyses: GridToM implements the unexpected transfer task in an ‘ARC’ benchmark esthetic. This design supports a context in which to articulate different agent’s viewpoints (perspective separation) and belief inference.
Making second-order (meta-cognitive) aspects accessible to the benchmark is definitely advancing the state-of-the-art of LLM ToM investigation.
The use of attention heads to distinguish information across perspectives is intellectually compelling, and provides demonstrable enhancements. It may be worthy of its own investigation but I would still suggest to keep the early results, even on a subset of LLMs, in the final version of the paper.
Supplementary Material: All, with a special interest in B.2 and C.2
Relation To Broader Scientific Literature: ToM abilities for LLM is a debated topic since the 'Sparks of AGI' paper [Bubeck et al., 2023]. This paper is relevant to most papers to date in offering alternatives to Sally-Anne type testing and experiments based on tests such as [Strachan et al., 2024]. It is less connected to ToM robotics papers, though.
Essential References Not Discussed: Verma, M., Bhambri, S. and Kambhampati, S., 2024, March. Theory of mind abilities of large language models in human-robot interaction: An illusion?. In Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction (pp. 36-45).
(secondary):
Mao, Y., Liu, S., Ni, Q., Lin, X. and He, L., 2024. A review on machine theory of mind. IEEE Transactions on Computational Social Systems, vol 11, n.6, Dec 2024.
Other Strengths And Weaknesses: I would not identify major weaknesses in the paper. When limitations are considered, it could be appropriate to relate this approach to the work of Verma et al. [2024] in particular on the perceived behavior recognition issue: to which extent the benchmark addresses a subset of it or might scale up to more complex behaviors.
Other Comments Or Suggestions: The paper shows an appropriate balance between the core discussion and supplementary material, which is useful considering the experimental design and its multimodal content. It is generally well illustrated considering the sophistication of the experiments.
Still on presentation, I was less convinced about the TARS example; I recognize the pedagogic value, although the ToM benchmark issue is complex enough to deserve additional explanations rather than analogies. Whether it should be modified is however left to the authors’ discretion.
Post-rebuttal: the authors have answered my questions and responded satisfactorily; I remain positive about this paper.
Questions For Authors: Is the attention-based enhancement dependent somehow on modalities?
How could you extend the narrative scenarios beyond binary choices for beliefs?
Ethical Review Concerns: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely appreciate your recognition of our work and thank you for the thoughtful suggestions. We respond to your concerns and questions below.
# Methods And Evaluation Criteria
Our benchmark is designed as a foundational framework, using binary labels to establish clear distinctions between correct and incorrect beliefs. Introducing a wider range of options and more comprehensive evaluation schemes is indeed a valuable direction. We believe that the evaluation of multiple options could potentially be extended using a binary classification approach, where multiple negative samples are categorized as generalized erroneous beliefs. This will be part of our future research plans.
# Experimental Designs
Thank you very much for recognizing the value of using attention heads to distinguish different perspectives and enable second-order belief reasoning. We agree that this approach is highly worthy of further investigation and have planned to conduct broader model validation and more detailed analysis in our future work. At the same time, we will adopt your suggestion to retain the early results and include these preliminary experimental conclusions in the final version of the paper.
# Relation To Broader Scientific Literature
We understand your point about the current paper’s insufficient connection to embodied intelligence and the field of robotics. In fact, we are also deeply interested in embodied intelligence. This work represents our initial effort to explore the Theory of Mind (ToM) capabilities of MLLMs. We plan to extend this approach to more interactive and visual scenarios in future work, aiming to build a closer bridge between ToM in MLLMs and embodied intelligence in robotics.
# Essential References Not Discussed & Other Strengths And Weaknesses
Thank you for pointing out relevant related works. Although we have not discussed Verma et al. (2024) and Mao et al. (2024) in the current version, we recognize their relevance and value. We will include discussion of these papers in the revised manuscript to better position our work within the broader context of ToM research in LLMs, particularly as it relates to human-machine interaction and embodied intelligence.
# Other Comments Or Suggestions
Thank you for the feedback. Our original intention was to use a vivid example to illustrate higher-order beliefs. We acknowledge that ToM benchmarks involve complexities beyond simple analogies. Following your suggestion, we will add a more detailed explanation of ToM in the revision.
# Questions for Authors
**1. Modality Independence:**
We believe that attention-based enhancement is not strictly dependent on a specific modality but rather on carefully designed experiments. Prior work and our own results show the effectiveness of attention enhancement across both text-only and multimodal settings. Our experiments (Sup. G and H) demonstrate generalization across GridToM and MMTOM, suggesting the potential for extending this method to other modalities such as audio. Additionally, We believe attention-based enhancement depends on carefully designed experiments to reduce the impact of noise in attention signals.
**2. Binary Testing in ToM Research:**
ToM studies often rely on structured binary choices (e.g., true/false) to assess belief reasoning, as seen in classic tasks like the “unexpected location” and “unexpected contents” paradigms. Existing benchmarks for LLMs, such as ADV-CSFB (text-based) and MMTOM-QA (multimodal), also follow this approach. Our study continues this tradition. That said, exploring more open-ended or multi-choice formats could test reasoning flexibility in more complex settings, though this would introduce higher demands in task design and annotation. We see this as a promising direction for future research.
Once again, thank you for your kind support and constructive feedback.
---
Rebuttal Comment 1.1:
Comment: Thank you for provided detailed follow-up comments, which confirm my positive appreciation of the paper.
You answered both questions to my satisfaction, and the reference to other ToM benchmarks has allayed any remaining concerns on binary testing.
---
Reply to Comment 1.1.1:
Comment: Thank you for your positive evaluation and thoughtful review of our work. We truly appreciate your constructive comments and will make sure to address all the points you raised in the final version of the paper. | Summary: This paper aims to explore the Theory of Mind (ToM) capabilities for multimodal large language models (MLLMs).
To this end, it proposes a new dataset, GridToM, which is designed to evaluate MLLM Tom reasoning from multiple perspectives.
Based on GridToM, they then conduct experiments using different tech to detect the ToM in MLLMs in a zero-shot setting.
Experimental results and analyses show the attention heads in MLLMs can be capable of distinguishing such mental states.
Claims And Evidence: Most claims are well supported.
Methods And Evaluation Criteria: The major contribution of this paper is the GridToM dataset, which provides manipulable multimodal visual-linguistic causal stories. However, I fail to find any relevant information about how this dataset is built, e.g, do the authors collect data from existing datasets with additional annotation? How do the authors conduct the annotation process? What is the annotators' background? What is the quality control process? Such information is very important to a dataset/benchmark paper, which is not provided in the main text nor the Appendix. Also, ToM is a difficult task even for humans - it makes me doubtful about data quality.
Theoretical Claims: No theoretical claims.
Experimental Designs Or Analyses: The paper only experiments with the zero-shot setting, which makes their findings and conclusions less generalizable.
Additionally, as the authors suggested in Sec 5.2, the selection of models is indeed somewhat limited.
Also, from a perspective of dataset construction, there is no validation of the data quality.
Supplementary Material: Yes, all, in particular Appendix B and C for feature extraction and data sample.
Relation To Broader Scientific Literature: This paper is very relevant to ToM research, and can be important to MLLM reasoning.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: Please see above
Other Comments Or Suggestions: The figure 3 is difficult to understand without the detailed description of the Appendix, while I believe a good figure should be easy to follow with just the caption information.
Also, for 1st order: why the labels for both TB and FB of omniscient are "Purple" ( which seems contradictory to Figure 11)? and why "Red" for both TB and FB of protagonist?
Questions For Authors: Please see above
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your helpful and constructive reviews. We respond to your concerns and questions below.
# Methods And Evaluation Criteria
We agree that the construction of the GridToM dataset should be described in greater detail. We provide further clarification here and will include the full pipeline both in the main text and the appendix of the revised version. Our dataset is built primarily through automated generation and verification, with minimal manual annotation and thoughtful quality control. Despite the intrinsic complexity of ToM reasoning, our structured, script-driven approach ensures consistency and reliability across visual input, actions, and narratives.
1. Dataset Construction and Annotation
- Map Design: We manually created 27 distinct 10×7 maps in Excel, each with 3 rooms and unique layouts.
- Automated Checking and Rendering: map validity was verified with Python scripts (e.g., enclosed rooms, door placement). Then, using the MultiGrid library, we rendered maps with:
- Colors: Assigned from 6 highly distinguishable colors (red, green, blue, yellow, purple, white).
- Agent Placement: Two groups of agents were randomly placed in hallways with colors distinct from rooms; initial orientations were randomized.
- Path Planning: Agent trajectories were generated using Breadth-first search to ensure valid, logical movement without dead ends.
- Task Generation: The combination of different variables results in 648 basic samples. For each sample, we generate both “door open” (TB) and “door closed” (FB) conditions, totaling 1296 samples (see L122–127). Second-order belief tasks follow the same structure with minor narrative adjustments.
2. Annotation and Data Quality
- Automation First: Key elements (layout, paths, doors, task type) were generated and verified via script, minimizing subjective error.
- Human Review: We manually reviewed samples for layout issues, trajectory logic, and narrative coherence.
- Staged Execution: Tasks were divided into three stages with controlled timing to ensure logical, coherent event flow.
- Controlled Variables: We used unified logic for all visual and script elements, systematically varying only key factors (room order, agent orientation, colors, door state).
3. On ToM Difficulty and Dataset Validity
- Controlled Scenarios: Carefully constrained scenes reduce noise, allowing clearer focus on ToM and multimodal reasoning.
- Scalability: Current difficulty is moderate and sufficient for analyzing belief reasoning. We plan to expand with more complex scenarios in future releases.
# Experimental Designs Or Analyses
We focus on zero-shot evaluation because Theory of Mind research emphasizes testing a model’s ability to reason about others’ mental states without task-specific demonstrations. Zero-shot settings help assess whether such capabilities naturally emerge during pretraining, without overfitting to specific prompts or memorized patterns. In other words, we believe that the abilities revealed by few-shot learning pertain more to the model’s other capabilities rather than ToM abilities themselves. On the other hand, zero-shot remains a widely adopted and informative setting for assessing underlying reasoning abilities in this domain. This approach follows precedent in prior work such as [Shapira et al., 2023; Jin et al., 2024], all of which adopt zero-shot setups to evaluate ToM reasoning in LLMs or MLLMs.
As for model selection, due to the limited availability of MLLMs supporting multi-image inputs at the time, we selected two strong-performing models: LLaVA-Next-Video and Qwen2-VL. We are actively working on extending our experiments to more models and tasks.
# Other Comments Or Suggestions
In the first-order tasks shown in Fig. 3, the "Omniscient" label represents the objective ground truth. Since the omniscient perspective observes the entire event sequence, the belief remains "Purple" for both TB and FB conditions.
Second-order belief reflects whether one agent correctly estimates another’s belief. Due to space limitations, Fig. 3 only presents the case where the first-order belief is false, which is why both TB and FB under the protagonist’s view are labeled as "Red." The complete set of cases, including those where the protagonist’s belief is labeled as "Purple," is detailed in Fig. 7.
We acknowledge that there is a mistake in the caption of Fig. 11. Fig. 11 illustrates a first-order belief task to both TB and FB conditions, as correctly explained in the main text (L767-769). Reviewer r21H also pointed this out. We will address this issue in the revised manuscript. | null | null | null | null | null | null | null | null |
Bellman Unbiasedness: Toward Provably Efficient Distributional Reinforcement Learning with General Value Function Approximation | Accept (poster) | Summary: This paper studies distributional RL, in particular the statistical functional formulation of it. They begin by introducing the concept of Bellman-unbiasedness, obtain results on which equivalent conditions lead to this, and exactly characterize which sets of statistics are both Bellman-unbiased and Bellman-closed, which are the set of the first $N$ moments for any $N>0$. They then introduce an algorithm SF-LSVI which learns these moments, and they prove regret bounds for this algorithm.
Claims And Evidence: I would distill the claims of this paper as (i) introducing and studying the stochastic equivalent of Bellman closedness, and (ii) introducing a distributional RL algorithm SF-LSVI and analyzing its performance.
I believe (i) is well supported, however (ii) is not, for reasons I highlight below.
I think the regret analysis of Section 5 is not what one would want to do in general, and may be answering the wrong question. SF-LSVI is a method for learning a sketch of the first $N$ moments $(s_1,\dots, s_N)$, but the regret bounds in terms of $Reg(K)$ only measure accuracy in terms of the value function, i.e. the accuracy of $s_1$. Theorem 5.5 says nothing about how well $s_2,\dots, s_N$ are learnt, and the bound would hold even if they were learnt arbitrarily poorly. It also appears that additionally learning $s_2,\dots,s_N$ does not improve the regret compared to learning $s_1$ directly. Due to this, I don't believe that the analysis of Section 5 supports the claim that SF-LSVI is an efficient distributional RL algorithm.
Methods And Evaluation Criteria: There is no evaluation nor datasets used as this is a work of theoretical nature.
Theoretical Claims: I carefully checked most proofs. I think there are a couple of minor inaccuracies/issues with some results:
- Lemma C.2. states that the quantile sketch is not mixture-consistent for any quantile level $\alpha \in [0,1]$, while Example 4 states that the maximum and minimum functionals are both mixture-consistent. Since these are exactly the quantile sketches for $\alpha=\{0,1\}$ respectively, these results contradict each other.
- There are 2 issues in the proof of Lemma C.2., or at least in my understanding of it. The final step of the proof follows by setting $p_{z_0} = 2\alpha - \sum_{n'=0}^n p_{z_{y_n'}}$, however I think this could be problematic for two reasons. Firstly we must have $p_{z_0}\geq 0$ as it is a probability measure, however this construction may validate that. Similarly we must also have $p_{z_0} < \alpha$ as this is an assumption introduced earlier in the proof, but this assumption can also be broken by the given construction.
- Section C.2. claims to prove that the statistical function $\psi_{\text max}$ is Bellman-closed. To do this, they define its corresponding Bellman operator as $T_{\psi_{\text max}(\eta(s))} = \max_{s' \sim P(s,a)} \psi_{\text max} ( {(B_r) \sharp} \eta(s')) $. This is not a valid operator however, as they are applying a nonlinear function to the inside of the statistic $\psi_{\text max}$ on the right-hand side, which is **not** valid for the definition of Bellman-closedness. Instead to do this, writing $u= \psi_{\text max}(\eta(s))$, they should introduce a Bellman operator $T_{\psi_{\text max}}$ such that $T_{\psi_{\text max}(\eta(s))}$ can be written as a function of $u$.
Experimental Designs Or Analyses: N/A (no experiments done)
Supplementary Material: I reviewed the entirety of the supplementary material.
Relation To Broader Scientific Literature: The introduction and analysis of Bellman unbiasedness naturally follows from the analysis of Bellman closedness in literature. The regret analysis is similar to previous regret analysis for distributional RL algorithms in literature, as illustrated in Table 1.
Essential References Not Discussed: I don't believe there are any essential references not discussed, at least to my knowledge.
Other Strengths And Weaknesses: **Strengths**
- The notion of Bellman unbiasedness is a natural extension of Bellman closedness, and is a valuable step towards understanding and designing sample-based distributional RL algorithms.
- The SF-LSVI algorithm is a nice distributional RL algorithm to study, as in some ways it is the "most similar" to standard RL, and understanding it deeply seems to be a natural step in progress.
**Weaknesses**
- Theorem 3.6. potentially lacks novelty as in light of Lemma 3.5. it essentially reduces to the exact same result of Theorem 4.9. of Rowland (2019) (if I understand correctly).
- The choice of regret analysis is questionable and perhaps not the best, as I discussed above.
- There are a couple of theoretical issues as I discussed above.
- Many parts of the paper can be improved by a careful re-reading to improve the overall flow, at the moment there are a number of minor grammar mistakes and ill-formed sentences.
Other Comments Or Suggestions: - There should be a bit more care in the subsection "Distributional Bellman Optimality Equation" of Section 2: $\eta^\star_h$ is *not* uniquely defined, as opposed to the optimal value function.
- In Lemma C.2. the notation $\psi_{q_{\alpha}}$ is used but never defined.
- I believe the authors define a functional to be linear if there exists a function $g$ such that $F(\mu) = \int g d\mu$, however this is never defined in-text. This also overlaps with the definition of homogeneity introduced in Lemma 3.5., I would suggest unifying this notation and cleaning this presentation.
- Section 3 is currently a mix of existing concepts and new concepts/results. I would suggest moving the existing results to a prior section (Section 2 perhaps), so that Section 3 is entirely novel which would make the contribution more clear.
- After Theorem 3.6., it is stated "we extend beyond linear statistical functionals to include nonlinear statistical functionals, showing the uniqueness of the moment functional." This is perhaps misleading since the Bellman unbiased assumption exactly limits to "linear statistical functionals", and this is required in the proof.
- In Example 1 of Appendix Section C, $\lambda$ is overloaded as both the argument of the exponential functional and the mixture coefficient.
Questions For Authors: - Rowland et al. (2019) introduce the notion of approximate Bellman closedness for the setting that statistics cannot be learnt exactly (such as quantile RL or categorical RL). What results can be obtained for approximate Bellman unbiasedness?
- Is unbiasedness the "best case" that we'd hope for? What if we have two estimators for a sketch, one which is unbiased and one which is biased but with a lower variance and potentially lower MSE? The bias will likely compound over applications of the operator, but I would imagine if the variance reduction is large enough it could be preferable?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewers for their time and thorough evaluation of our paper. We have organized our responses to your comments below. **Due to character limits, we have focused on addressing what we considered to be the most important comments. We ask for your understanding that we could not provide responses to all questions.**
---
### 1. Theorem 5.5 says nothing about how well $n$-th moment are learnt.
First, we note that our paper investigates the necessary conditions for achieving provable efficiency and **learning all moments simultaneously in distRL** by using finite-dimensional sketch-based updates.
The reviewer’s concern seems to stem from the observation that standard regret is not well-suited for evaluating the effectiveness of distRL, as it fails to capture discrepancies in higher-order moments (order 2 and above). This may have led to a misunderstanding that SF-LSVI does not achieve moment matching.
However, Lemmas 5.3 and 5.4 theoretically guarantee that all sketches are learned exactly in finite-dimensional spaces. The reason we use the standard regret in our quantitative evaluation is to follow the conventional regret analysis framework in RL literature.
----
### 2. Minor issues on theoretical claims
Thank you for the careful review of these points. It can be assured that the issues you raised do not change the theoretical results.
- (A2-1) In Lemma 3.2, since we derive a contradiction when $p_{y_0}>\alpha$ and $p_{z_0}<\alpha$ , we cannot include cases where $\alpha=0 \text{ or } 1$. Therefore, it is correct to modify it to $\alpha \in (0,1)$, which allows us to draw conclusions consistent with existing max and min functionals.
- (A2-2) If we understood your second point correctly, we believe the above response addresses it. If not, please let us know and we’d be happy to clarify further.
- (A2-3) Are you referring to $(\mathcal{B}\_r)$? For the max functional, since $\max_{z \in Z}(Z+r)=\max_{z \in Z}(Z)+r$ (and similarly for the min functional), $T_{\psi_\text{max}}$ and $T_{\psi_\text{min}}$ are valid operators. For clarity, we’ll revise Line 703 as:
- $T_{\psi_\text{max}}\Big(\psi_{\text{max}}(\bar{\eta}(s))\Big)= \max_{s' \sim \mathbb{P}(\cdot|s,a)}\Big(r + \psi_{\text{max}}(\bar{\eta}(s'))\Big)$
---
### 3. Theorem 3.6 essentially reduces to the exact same result of [Rowland et al 2019]
Theorem 3.6 extends the results of [Rowland et al. 2019] by encompassing a broader class of statistical functionals.
Since [Rowland et al. 2019] only focuses on **linear functionals** (i.e., $s(\mu)=\mathbb{E}_{Z \sim \mu}[h(Z)]$), they cannot demonstrate whether variance is a Bellman closed sketch.
However, since $$\text{Var}(\mu) = \mathbb{E}\_{Z\_1 \sim \mu}[(Z\_1-\mathbb{E}\_{Z\_2 \sim \mu}[Z\_2])^2] = \mathbb{E}\_{Z\_1 ,Z\_2 \sim \mu}[Z\_1 ^2 -2Z\_1 Z\_2+Z\_2^2]=\mathbb{E}\_{Z\_1 ,Z\_2 \sim \mu }[h(Z\_1, Z\_2)],$$ it is a functional that is homogeneous of degree 2. By Lemma 3.5, such functional is Bellman unbiased, and our result, Theorem 3.6 leads to the property that variance is also Bellman closed.
That is to say, Theorem 3.6 can be summarized as a generalized conclusion that determines whether unbiasedly estimatable statistical functionals, which represent a broader domain, are Bellman closed.
---
### 4. Response to [Other Comments and Suggestions]
(A4-1) While $\eta^{\star}_h$ is generally not uniquely defined, this occurs in cases where $\pi^{\star}$ is not uniquely defined due to the lack of total ordering on distributions in a control case. We will mention this in [Bellemare et al 2017] along with a statement excluding such situations.
(A4-3, 5) We will add text defining linear functionals. As demonstrated earlier with the variance example, homogeneity is a separate concept that does not overlap with linear functionals. Therefore, the statement following Theorem 3.6 is valid.
---
### 5. Response to [Questions For Authors]
While the questions are beyond our current scope, they are insightful, and we would be happy to explore them further. Briefly speaking, we believe that the algorithm cannot achieve the tight regret bound if the bias does not converge to 0 during the learning process.
However, due to the character limit and the possibility that these points are not central to the reviewer’s evaluation, we have chosen not to focus on them here. If the reviewer’s main concerns have been addressed, we will aim to revisit these topics in the next response round, as space permits.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for responding to my questions/concerns, and for clearing up my misunderstanding of homogeneous vs linear functionals (please add this distinction to the text!). I'm also happy with their rewriting of the max operator so that readers can clearly tell that it is Bellman-closed, and with the added restriction to Lemma C.2. that $\alpha\in(0,1)$ I think the proof should go through.
I acknowledge that Lemmas 5.3 and 5.4 provide guarantees on how well all moments are learnt, but my point still stands that Theorem 5.5. only concerns the first moment, and it feels a bit disappointing that the main result of this paper says nothing about what the algorithm is learning (the first m moments). I think that it wouldn't be a difficult change to modify the notion of regret used and the proof to take this into account, which I would recommend the authors to do, either for this paper or for future work they may do in this area. I also can appreciate that the style of regret result of Theorem 5 has been used in previous analyses of similar distributional RL algorithms, so this is in line with the literature, although for the reasons listed above I don't believe this to be the right choice of analysis.
With these comments though I'll raise my score to a weak accept.
---
Reply to Comment 1.1.1:
Comment: We are glad that our responses have addressed your questions and concerns, and we deeply appreciate your decision to increase the score. For clarity, we will revise the text to include variance as an example to distinguish between the definition of linear functionals and homogeneous.
---
## Difficulties in Redesigning Regret to Reflect Theorem 5.5 and Higher-Order Moments
We share your concern that, despite proving the consistency of higher-order moments in Lemmas 5.3 and 5.4, Theorem 5.5 only addresses first-order moments.
We attempted to reconstruct a new regret that reflects higher-order moment evaluations but faced technical difficulties in generalizing the proof. While it's possible to take a simple approach of defining regret as the sum of differences between moments, utilizing optimistic estimates like in line 9 of the pseudocode becomes challenging for higher-order moments. Since the optimistic algorithm operates greedily only for first-order moments (i.e., $a^k_h = \arg \max_a Q^k_h(s^k_h,a)$), the relationship $V^k_h(s^k_h)=Q^k_h (s^k_h ,a^k_h)$ holds, but this relationship doesn't hold for higher-order moments, making proof generalization difficult.
We believe that developing a new regret formulation that circumvents these limitations is necessary. However, we found this to be a quite non-trivial challenge, both conceptually and technically. To maintain clarity and focus in the current paper, we chose not to address these complexities and instead restricted our analysis to conventional regret. We are considering the definition of generalized regret as a future research topic building on this paper.
---
## Response to [Questions For Authors]
We are happy to address the points we were unable to include in the original rebuttal due to space constraints.
**(A5-1)**
Defining *Approximate Bellman Unbiasedness* (ABU) is indeed an interesting direction. First, Approximate Bellman Closedness (ABC) is a concept that allows for an average approximation error of sketches up to $\epsilon$.
$\sup_{(x,a)}\frac{1}{N}\sum_{n=1}^N| \psi_n(\eta_{\pi}(x,a))- \hat{\psi}_n(x,a)| \leq \epsilon$
Here, $\hat{\psi}_n (x,a)$ represents the value obtained while learning the statistical functional $\psi_n$. Since Bellman closedness is defined in cases where the transition kernel is given, $\hat{\psi}_n$ in ABC refers to the value when the transition kernel is provided.
On the other hand, Bellman unbiasedness differs from Bellman closedness in that it is defined for cases where unbiased estimation is done through sampling without a transition kernel.
Therefore, when considering ABU, $\hat{\psi}_n$ in the above equation should be interpreted as being estimated by a finite number of samples $\hat{\psi}_n^{(k)}$, and the definition should include the estimation process.
$\sup_{(x,a)}\min\_{\phi\_{\psi}}\frac{1}{N}\sum\_{n=1}^N \Big| \psi_n(\eta\_{\pi}(x,a))- \mathbb{E}\Big[\phi\_{\psi}\Big(\hat{\psi}\_{n}^{(1)}(x,a), \cdots , \hat{\psi}\_{n}^{(K)}(x,a)\Big)\Big] \Big| \leq \epsilon$
**(A5-2)** The fundamental issue with having bias lies in the difficulty of analyzing the size of the confidence region. In the case of Bellman unbiased sketches, we can analyze the size of the confidence region through concentration inequality by making the sequence of sketches a martingale. However, when using sketches that are not Bellman unbiased or setting up biased estimators, theoretical development becomes challenging because the applicable concentration results are not clear.
We expect that to ensure convergence, the estimator must at least be consistent ($\text{Bias}(k) \rightarrow 0$), and to achieve near-optimal regret, it must be asymptotically efficient ($\sqrt{k}\ \text{Bias}(k) \rightarrow 0$). As you mentioned, if some estimators have slow asymptotic convergence rates, We expect they will have proportionally suboptimal regret. | Summary: This paper considers learnability and provable efficiency of distributional RL (distRL). The proposed notion of *Bellman unbiasedness* extends *Bellman closedness* in the literature to address the estimation errors stemming from finite samples. They show that moment functionals are the only finite statistical functionals that are both Bellman unbiased and closed. Built on this result, they introduce SF-LSVI for distRL with general function approximation, which enables estimating the distribution unbiasedly in finite-dimensional embedding spaces without misspecification error.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: N/A
Theoretical Claims: See Questions 3 & 4.
Experimental Designs Or Analyses: N/A
Supplementary Material: N/A
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: No.
Other Strengths And Weaknesses: **Strengths:**
1. The proposed *Bellman unbiasedness* extends *Bellman closeness* to finite sample setting, which is meaningful and important for online/offline RL.
2. The SF-LSVI enables estimating distributions unbiasedly in finite-dimensional embedding spaces, addressing the intractability of implementation in infinite-dimensional space in previous work.
**Weakness:** See Questions.
Other Comments Or Suggestions: No.
Questions For Authors: 1. Is it possible to show the advantage of SF-LSVI over standard expectation-based learning algorithms? For example, in a cost minimization setting, can it also achieve a small-loss bound as in (Wang et al., 2023)?
2. How would parameter $N$ affect learning and sample complexity?
3. Can you elaborate on the relationship between Bellman unbiasedness and closeness? For example, Figure 1 shows that categorical representation is Bellman unbiased but not closed, but I am unable to find the proof for the argument.
4. In Definition 3.4, $\phi_\psi$ maps $k$ sampled sketches to an estimated one. But I feel like there is an alternative approach where we form an estimation directly on the mixture distributions, i.e., $\\{ (\mathcal{B}\_r)_{\\#} \bar\eta(s_i') \\}\_{i=1}^k$.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewers for their time and thorough evaluation of our paper. We have organized our responses to your comments below. If any of our responses fail to address the intent of your questions or if you have remaining concerns, please let us know.
---
### 1. The advantage of SF-LSVI over standard expectation-based learning algorithm
SF-LSVI is an algorithm that has the advantage of accurately learning distribution information beyond expectation while maintaining a tight upper bound in terms of standard regret. Therefore, it has the advantage of being able to accurately obtain not only the mean but also various moment information.
However, the standard regret measure currently used to evaluate online RL algorithms cannot measure discrepancy in moments other than the first-order moment (expectation). Due to this inherent limitation of the measure, we do not believe it is suitable for distinguishing between the performance of expectation RL and distRL. As we wrote in the Conclusion section, we believe a generalized definition of regret that also evaluates discrepancies occurring in second and higher-order moments is needed, and we are pursuing this in our follow-up research.
---
### 2. How would parameter N affect learning and sample complexity?
Through Lemma 5.3, since the size of the confidence region increases as $\tilde{O}(\sqrt{N})$, the regret also reflects a factor of $\tilde{O}(\sqrt{N})$. Learning $N$ moments can be viewed as increasing the feature dimension by $N$ times, so space complexity adds a factor of $O(N)$, and according to [Jin et al 2020]'s results, (per-step) computational complexity adds a factor of $O(N^2)$. We will add this explanation to Theorem 5.5 for a deeper understanding of the results.
---
### 3. Relationship between Bellman unbiasedness and closeness
Bellman closedness refers to a property of sketches that allows accurate updating of distribution information when the transition kernel is given. However, in sample-based updates, since the transition kernel is not given, additional sketch properties beyond Bellman closedness are needed. Bellman unbiasedness refers to a complementary property that ensures unbiased learning of sketch updates through finite samples, and through this property, we can guarantee tight upper bounds in regret.
The position of categorical sketch in Figure 1 is based on the results from [Rowland et al 2019]. They showed that categorical sketch is a linear functional (Lemma 3.2 of [Rowland et al 2019]), and by our Lemma 3.5, since all linear functionals are homogeneous over degree 1, categorical sketch is Bellman unbiased.
The fact that categorical sketch is not Bellman closed was proven in Lemma 4.4 of [Rowland et al 2019], so combining these two results leads to the representation in Figure 1. We will clarify this process more explicitly in Figure 1 and Appendix C.
---
### 4. Alternative approach to estimate directly on the mixture distributions
A key distinction between "Bellman unbiasedness" and "Bellman closedness" is that we can learn exact information about the return distribution **without the knowledge of the pre-defined transition kernel**. This means SF-LSVI needs only a finite number of sampled sketches to learn the return distribution unbiasedly, rather than requiring knowledge of the transition kernel.
As noted in Line 324, this unbiasedness property allows us to transform the learning process into a martingale, construct confidence regions through concentration inequality, and ultimately develop an algorithm that achieves tight upper bounds.
While we could introduce new definitions for sketches (such as max, min) that can be estimated consistently but with bias, constructing confidence regions for non-martingale processes remains largely unexplored without distribution priors. The theoretical analysis of such an approach would likely be extremely challenging.
---
### References
- [Jin et al 2021] : Jin, Chi, Qinghua Liu, and Sobhan Miryoosefi. "Bellman eluder dimension: New rich classes of rl problems, and sample-efficient algorithms." *Advances in neural information processing systems* 34 (2021): 13406-13418.
- [Rowland et al 2019] : Rowland, Mark, et al. "Statistics and samples in distributional reinforcement learning." International Conference on Machine Learning. PMLR, 2019.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed explanation. The response addresses most of my questions. Regarding the relationship between Bellman unbiasedness and closeness, I am curious why Bellman unbiasedness is not a subset of Bellman closeness. As you mentioned in the response, "in sample-based updates ... additional sketch properties beyond Bellman closedness are needed", so it seems to me that Bellman unbiasedness could have been more restrictive.
---
Reply to Comment 1.1.1:
Comment: We are truly grateful that our responses have helped address your questions and concerns.
The simplest reason why Bellman unbiasedness (BU) is not a subset of Bellman closedness (BC) is that there exist sketches, like categorical sketches, that are BU but not BC.
To explain the subtle difference, BU means that there exists an unbiased estimator of the ground truth sketch **when given a finite number of sampled sketches.**
Here, BC plays a complementary role by maintaining the condition of **when given a finite number of sampled sketches** during the update process.
Since there is no Categorical Bellman operator that exactly preserves the meaning of categorical sketches during the update process, we cannot obtain a finite number of sampled sketches for the target.
Therefore, while a sketch can be BU without being BC, dynamic programming becomes infeasible in such cases. | Summary: The paper aims to design provably efficient and exactly learnable distributional reinforcement learning algorithm in an online setting, especially under general value function approximation.
For the main findings, they introduce two key properties for statistical functionals:
(1). Bellman Closedness: The sketch (compressed representation) remains consistent under Bellman updates.
(2). Bellman Unbiasedness: The sketch can be unbiasedly estimated using sampled next states.
They find out that only moment functionals (e.g., mean, variance, higher-order moments) satisfy both properties and prove that quantile-based functionals (like those used in QR-DQN) are neither closed nor unbiased.
This work proposes Statistical Functional Least Squares Value Iteration (SF-LSVI) that focuses on matching a finite number of moments of the distribution instead of fitting the full distribution using a learnable and unbiased moment-based Bellman update.
Rather than estimating the full return distribution (which is infinite-dimensional), the paper learns a finite-dimensional sketch composed of moment functionals. These sketches are provably closed under Bellman updates and can be unbiasedly estimated from samples, making them ideal for regret analysis and online learning.
Claims And Evidence: Yes, the claims made in the submission are supported by clear and convincing evidence.
Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria make sense and are well-aligned with the problem the paper addresses.
Theoretical Claims: (1) Theorem 3.3 shows quantile functional cannot be Bellman closed. It aligns with existing theory.
(2) Definition of Bellman Unbiasedness and Lemma 3.5 claim: only statistical functionals that are homogeneous of finite degree can be unbiasedly estimated under sketch-based Bellman updates.
The logic follows known properties of estimators; similar arguments are used in kernel mean embedding and U-statistics.
(3) Theorem 3.6 shows only moment functionals are both Bellman closed and unbiased, it's consistent with Rowland et al. (2019) and standard function approximation theory.
(4) Lemma 5.4 Confidence bound uses martingale concentration inequalities and normalized moment scaling.
(5) SF-LSVI achieves a regret bound expressed in Theorem 5.5 based on Lemma 5.4, Eluder dimension theory and Martingale-based sketch estimation. The derivation path is well aligned with previous work, with careful adjustments for statistical functional setting.
Experimental Designs Or Analyses: The main contribution of this work is theoretical, aimed at addressing foundational issues in distributional RL. Instead of providing empirical experiments, the paper provides non-trivial regret bounds and tight complexity analysis, which is already valuable.
The proposed method (SF-LSVI) fills a known theoretical gap: regret-optimal DistRL under general function approximation.
Supplementary Material: Review all the supplementary materials including notation statement, pseudocode, related work and also the proof.
Relation To Broader Scientific Literature: This paper extends prior work on distributional reinforcement learning and general value function approximation by building on Bellman closedness (Rowland et al., 2019) and eluder dimension-based regret analysis (Wang et al., 2020).
Unlike previous approaches relying on quantile or full-distribution representations, the authors show that only moment functionals satisfy both Bellman closedness and unbiasedness. They redefine Bellman completeness through a moment-based lens, addressing model misspecification issues found in works like Chen et al. (2024).
This work leads to SF-LSVI, a distributional RL algorithm with provable regret guarantees and a strengthened theoretical foundation.
Essential References Not Discussed: Related works are well discussed.
Other Strengths And Weaknesses: The paper demonstrates strong originality by introducing the novel concept of "Bellman unbiasedness" and showing that moment functionals are uniquely suited for provably efficient distributional RL.
Its theoretical contributions are significant, addressing long-standing challenges in DistRL such as the intractability of full-distribution learning and model misspecification.
The work clearly builds on and advances the literature in a well-structured and technically rigorous way.
However, one notable weakness is the lack of empirical validation to support the theoretical findings.
Overall, the paper is clear and well-motivated, with impactful insights for the theory of distributional reinforcement learning.
Other Comments Or Suggestions: Empirical validation can be considered, so that we can compare proposed method with quantile based approaches such as QRDQN and IQN in different tasks.
Questions For Authors: I am curious about the potential applicability of this approach in deep reinforcement learning scenarios.
Although quantile-based methods have known theoretical limitations, algorithms such as QR-DQN and IQN have demonstrated strong empirical performance across a range of risk-sensitive tasks.
It would be interesting to explore whether representations based on moment functionals can be effectively integrated with neural networks, and whether such integration could lead to performance improvements over existing quantile-based approaches like QR-DQN.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewers for their time and thorough evaluation of our paper. We have organized our responses to your comments below. If any of our responses fail to address the intent of your questions or if you have remaining concerns, please let us know.
----
### 1. Lack of empirical validation
While we acknowledge the value of experimental results, our primary aim is to establish theoretical connections between distributional RL and General Value Function Approximation (GVFA). Our main contribution lies in developing theoretical foundations and deepening understanding of these fields. We note that many GVFA papers with similar theoretical objectives [Jin et al 2021, Li et al 2024] likewise focus on theoretical advances without experimental validation.
---
### 2. Potential applicability in deep reinforcement learning scenarios
Thank you for your interest in deep RL applications. Our SF-LSVI algorithm, which learns distributions via moment matching, can be related to MMDQN [Nguyen et al., 2021] when adapted to deep RL. Notably, MMDQN has already demonstrated superior performance compared to C51, QRDQN, and IQN.
While MMDQN uses particle-based moment matching, SF-LSVI explicitly constructs and updates predefined moment functionals. Since it is well known that, in truncated moment problems, 10-20 moments typically suffice for reconstructing distributions, we could learn the sketches of distribution (moments) more efficiently than existing distRL methods that require 50-200 statistical functionals.
---
### References
- [Jin et al 2021] : Jin, Chi, Qinghua Liu, and Sobhan Miryoosefi. "Bellman eluder dimension: New rich classes of rl problems, and sample-efficient algorithms." *Advances in neural information processing systems* 34 (2021): 13406-13418.
- [Li et al 2024] : Li, Yunfan, and Lin Yang. "On the model-misspecification in reinforcement learning." *International Conference on Artificial Intelligence and Statistics*. PMLR, 2024.
- [Nguyen et al 2021] : Nguyen-Tang, Thanh, Sunil Gupta, and Svetha Venkatesh. "Distributional reinforcement learning via moment matching." *Proceedings of the AAAI Conference on Artificial Intelligence*. Vol. 35. No. 10. 2021. | Summary: The paper proposes a distributional RL algorithm in the finite horizon episodic MDP setting. They propose bellman unbiasedness, a notion complementary to bellman closeness in prior work. They analyze the regret bound of the algorithm and compare against prior work in the space, showcasing theoretical improvements.
Claims And Evidence: The theoretical claims made in the paper are fairly clear and backed up by proof.
Methods And Evaluation Criteria: There is no empirical evaluation of the theoretical results in this work, which arguably is a place for improving the current paper.
Theoretical Claims: I have skimmed through certain theoretical arguments in the paper and they generally are sensible to me.
Experimental Designs Or Analyses: No empirical designs.
Supplementary Material: No.
Relation To Broader Scientific Literature: The paper is generally related to distributional RL and theoretical RL on regret bound for learning efficiency.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: The paper is fairly solidly grounded in theoretical discussions and it has made solid theoretical contributions connecting the learning efficiency of distributional RL as measured in regret. I think the paper can use a bit more improvement in presenting concrete examples of statistical functionals to make the results more accessible to algorithmic minded readers and add a section for empirical evaluation.
Other Comments Or Suggestions: NA
Questions For Authors: === *concrete examples of statistical functionals* ===
I think the paper will be made more accessible to readers less versed in theoretical discussions, to provide a bit more concrete examples on statistical functionals that fall into different categories. For example, in the appendix maybe discuss why max and min functionals are bellman closed (it is easier to see why they cannot be estimated in an unbiased way) and why categorical functionals are not bellman closed.
=== *bellman unbiasedness vs. closeness* ===
I would love to understand better the contribution this work makes in relation to results in Rowland et al 2019. Rowland showed that the only finite bellman closed statistical functions are spanned by finite moments, where as in here the illustration shows that min and max functionals are also bellman closed?
One implication of bellman closeness is that the statistical functionals can be learned via a recursive bellman backup, does that mean in principle max and min statistical functionals can also be obtained by bellman backup and computed as a fixed point for dynamic programming (though not computable in finite samples due to the lack of unbiasedness).
=== *translating results to discount case* ===
The discussions are limited to finite horizon episodic MDP - I wonder what happens if we consider infinite horizon discounted MDP with discount $\gamma$, how should we translate the regret bound in table 1 as a function of $\gamma$ or is this feasible at all?
=== *empirical validation* ===
I think the paper will benefit greatly from even a simple empirical validation of the results - simulating the regret bound as you would compute in theory, with a tabular mdp environment and see how theoretical insights might be validated. This will be valuable to more empirically minded readers and make a better case for the theoretical results in this work.
Ethical Review Concerns: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank you for your time and effort in reviewing our paper. We have organized our responses to your comments below. If any of our reconstructed responses miss the intent of your questions or if there are remaining concerns, please let us know so we can address them.
-------
### 1. Concrete examples of statistical functionals
In our paper, we indicated in Figure 1 that max and min are Bellman closed but not unbiased statistical functionals, and the proof for this is provided in Appendix C.2. To make it easier to follow, we will add a note in the main text stating that 'the proof is included in the appendix.' For the categorical sketch, since Lemma 4.4 of [Rowland et al 2019] proves that it is not Bellman closed, we did not include a separate proof. We will update Figure 1 in the main text to include a reference to their proven result.
-----
### 2. Bellman unbiasedness vs Closedness
First, Theorem 4.3 from [Rowland et al 2019] states:
> The only finite sets of statistics of the form $s(\mu)=\mathbb{E}_{Z\sim\mu}[h(Z)]$ that are Bellman closed are ...
>
In other words, they provide theoretical results for sets of "linear" statistical functionals that satisfy Bellman closedness. Since max and min are non-linear statistical functionals that fall outside the scope of their theory, we cannot determine whether they are Bellman closed.
Similarly, since variance is nonlinear, we cannot determine its Bellman closedness using their results. However, since we already know that first and second moments are Bellman closed, we can naturally derive that variance is Bellman closed. Therefore, this indicates that their theory does not sufficiently cover various commonly used statistical functionals. Keeping this in mind and comparing with Rowland's theory, our theory can be interpreted as a generalized result that is helpful to test Bellman closedness for a broader category of unbiasedly estimatable statistical functionals.
Although the second question falls outside the scope of our paper, it raises important points to address by breaking it into two parts.
> "For a given transition kernel, does a nonlinear Bellman closed sketch always have a fixed point?"
Since Lemma 3 in [Bellemare et al 2017] proves that the distributional update is a contraction, we can see that statistical functionals with bounded values also converge to a fixed point.
> "In sample-based updates without a given transition kernel, does a nonlinear Bellman closed sketch always have a fixed point?"
Our paper proves convergence for Bellman unbiased sketches in a scenario with "only finite sampling allowed without a given transition kernel," but does not examine convergence for other Bellman closed sketches. Since max and min are Bellman closed but nonlinear, we cannot use existing linearity-based contraction proofs. A separate proof approach would be needed, making this an interesting direction for future work.
-----
### 3. Translating results to discount case
While regret analysis for the infinite horizon discounted case falls outside the scope of our current paper, we believe it presents an interesting problem. In our results, when performing $N$ sketch-based updates, there is no additional cost beyond the confidence region increasing by a factor of $\sqrt{N}$, so we expect that in the discount case, there would also be an additional factor of $\sqrt{N}$ involved.
----
### 4. Lack of empirical validation
While we acknowledge the importance of experimental results, our paper aims to theoretically connect two fields - distRL and General Value Function Approximation (GVFA) - so our main contribution lies in establishing theoretical foundations and broadening understanding. We kindly request that you consider this in your evaluation, as many GVFA papers with similar objectives [Jin et al 2021, Li et al 2024] also do not necessarily include experimental validation.
----
### References
- [Rowland et al 2019] : Rowland, Mark, et al. "Statistics and samples in distributional reinforcement learning." *International Conference on Machine Learning*. PMLR, 2019.
- [Bellemare et al 2017] : Bellemare, Marc G., Will Dabney, and Rémi Munos. "A distributional perspective on reinforcement learning." *International conference on machine learning*. PMLR, 2017.
- [Jin et al 2021] : Jin, Chi, Qinghua Liu, and Sobhan Miryoosefi. "Bellman eluder dimension: New rich classes of rl problems, and sample-efficient algorithms." *Advances in neural information processing systems* 34 (2021): 13406-13418.
- [Li et al 2024 ] : Li, Yunfan, and Lin Yang. "On the model-misspecification in reinforcement learning." *International Conference on Artificial Intelligence and Statistics*. PMLR, 2024. | null | null | null | null | null | null |
The Illusion of Role Separation: Hidden Shortcuts in LLM Role Learning (and How to Fix Them) | Accept (poster) | Summary: The paper studies issues in LM role-learning for security purposes (e.g., following instructions in system prompts over user instructions), and identifies two key issues: task type exploitation (the model following user tasks that are similar to the system prompts) and proximity to beginning of text (the model follows instructions close to the start of the input). The authors propose two methods for addressing these, finetuning on swapped user and system roles (for task type exploitation) and “PFT” (position-enhanced finetuning; for proximity to beginning of text). Evaluations show both methods help reduce the success of attacks on models.
Claims And Evidence: The authors perform a number of fairly controlled experiments to back up and test their hypotheses around what is causing issues in model role-learning, and they evaluate their proposed fixes across a few different attack strategies. This makes me reasonably confident that their claims are valid.
Methods And Evaluation Criteria: The benchmarks evaluated on make sense to use, and the techniques are evaluated on both Llama and Gemma models. The closed-domain setting used (where user tokens cannot include instructions) is definitely a large simplification from real-world settings, but this is not unreasonable for the studies performed in the paper. It would be interesting to see if the methods propose harm or help performance in such open-domain settings.
Theoretical Claims: This is a primarily empirical work, and the mathematical explanations where present seem correct.
Experimental Designs Or Analyses: The experiments and results shown in figures 2 and 3 isolating the effect of inserting sequences, instructions, or shifting tokens are fairly convincing and show fairly clear trends. The experiments generally are well-controlled and fairly targeted in identifying and testing the weaknesses and fixes proposed in the paper.
The evaluation for checking if PFT reduces general performance does not seem particularly comprehensive: it would be more useful to see if general evaluations such as GSM8k, AlpacaEval, etc. are reduced by performing this style of position-id shifted training, as the password task may be quite easy to fit to.
Supplementary Material: I read the supplementary material (appendices) where relevant to further investigating my questions and concerns above.
Relation To Broader Scientific Literature: This present a fairly clear identification and fix for two issues with LM safety, being novel in both identifying them and then proposing fixes.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Overall, I think this paper is fairly strong, with a solid methodological setup that clearly establishes the proposed issues, and strong results that the fixes proposed do indeed improve these issues. Its largest weakness is the fact that only the closed-domain setting is examined, which feels like a fairly restrictive setup (and limits the scope of the insights in this paper) – I feel that in many cases users will be allowed to put in their own instructions and we wish for the model to follow these (to some reasonable extent). However, the paper is aware of the limitation and still makes novel and interesting insights.
Other Comments Or Suggestions: - Figure 3 caption typo: “privildged” -> “privileged”
- 6.1 typo “ x②” -> “ ②”
Questions For Authors: See my questions and concerns above, in particular with regards to open-domain performance.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We are glad you like this work!
**Open-domain performance** Thank you for your positive feedback! We choose a close-domain setting as it is easy to validate and evaluate the model role-separation capability. This is also a common choice in many security-related works, such as [StruQ](https://arxiv.org/pdf/2402.06363) . We agree with you that evaluating in open-domain would have a more comprehensive evaluation, and we would like to discuss it in future works. Thanks again for your support!
**Evaluation of PFT impacts on other tasks** we show PFT doesn’t incur extra performance cost compared to SFT on Alpaca dataset (sec 6.2).
**Typos** Thanks for your careful reading! We will correct those typos and acknowledge your feedback in the next version.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response! I've read it and the other reviews and am keeping my score. | Summary: The paper studies how well LLMs are able to distinguish between different input roles like system, user etc. The authors motivate their work by claiming that existing fine tuning approaches do not teach the LLM genuine role differentiation but rely on spurious patterns picked up by the model during training. To this end, they propose an experimental framework where they use benign training data and adversarial test data to detect if the model really learns to distinguish roles or just memorizes patterns. They also discover shortcuts that arise from model behavior and show that minor perturbations can be used to cause model failure. They propose PFT, a fine tuning method that manipulates postion IDs to create numerical gap between system and user tokens. They show through experimental results that the PFT method is able to perform better than standard SFT models and other comparison models on Adversarial as well as Ordinary datasets.
## update after rebuttal
The authors discuss all questions and addressed the points raised by me in their rebuttal, so changing my score. Good work and good Luck!
Claims And Evidence: The overall paper was a good read and many claims made were supported by experimental exploration and evidence. It would be great if following points are strengthened by the authors,
1. Generalization of PFT to different prompt structures might need to be further tested beyond adversarial test examples, examples include longer prompts, mixed role messages etc.
2. Explain why delimiter tokens under perform, more detailed comparison across variants or exploration of the embedding space might strengthen this claim
3. Comparison and analysis of PFT performance to more related methods that perform similar encoding related fine tuning might help the readers understand the reasons for improvement and better understand the novelty
Methods And Evaluation Criteria: The paper discusses an important problem of role separation vs. pattern memorization which is very insightful and important. The controlled evaluation setup to train on benign data and test on adversarial data are methodologically sound and will help isolate the phenomenon. The following points are my feedback:
1. Could you please add more information about the datasets being used for training and testing, like size and how it was constructed? As 2K examples for the training set seems to be small and its not clear how different the initial and symm versions are.
2. The PFT introduces a numerical gap between the different role contents and is motivated empirically and intuitively by observing failure models, it would be good if there can be formal theoretical or ablation analysis as to why the proposed algorithm might result in better role separation understanding
Theoretical Claims: The paper is majorly empirical and experimental with intuitive hypothesis and good experiments. As such, no issues with proof correctness arise. However, PFT introduces a numerical gap between the different role contents, it would be good if there can be formal theoretical analysis as to why the proposed algorithm might result in better role separation understanding
Experimental Designs Or Analyses: The paper has good experimental design and studies. The controlled evaluation framework, short cut diagnosis and the quantitative evaluations are good. The paper might benefit from broader generalization testing and human centered evaluation to fully validate some of the robustness claims, but the overall experiment settings and empirical results are good.
Supplementary Material: Yes the results on Gemma PFT version are presented along side additional details about evaluation data and model settings.
Relation To Broader Scientific Literature: The paper discusses role separation in LLMs, and is well situated within the broader literature on prompt injection, role conditioning and positional encoding in LLMs. It extends the work on role-specific embeddings (Wu et. al., 2024) by offering an alternative method by modifying position IDs. It also discusses about prompt injection attacks (Willson et. al, 2022; Yu et. al., 2023) and shits the focus to robust role separation rather than focussing on performance against known attacks.
References
Yu, J., Wu, Y., Shu, D., Jin, M., and Xing, X. Assessing prompt injection risks in 200+ custom gpts. arXiv
preprint arXiv:2311.11538, 2023.
Wu, T., Zhang, S., Song, K., Xu, S., Zhao, S., Agrawal, R., Indurthi, S. R., Xiang, C., Mittal, P., and Zhou,
W. Instructional segment embedding: Improving llm safety with instruction hierarchy. arXiv preprint
arXiv:2410.09102, 2024.
Willison, S. Prompt injection attacks against GPT-3, 2022. URL https://simonwillison.net/2022/Sep/
12/prompt-injection/.
Essential References Not Discussed: The paper has cited most of the related recent works, the following few recent papers might be of interest to the reviewers to consider citing:
ALIS: Aligned LLM Instruction Security Strategy for Unsafe Input Prompt(https://aclanthology.org/2025.coling-main.613/) (Song et al., COLING 2025)
Zverev, Egor et al. “ASIDE: Architectural Separation of Instructions and Data in Language Models.” (2025).
Other Strengths And Weaknesses: Strengths:
1. Clear problem framing - paper articulates the problem of role separation in LLMs in a focused and compelling way
2. Strong Empirical studies and analysis - authors design a thoughtful experimental framework that isolates shortcut learning from genuine role understanding
3. Well written and clear structure - paper is generally well-written and clearly structured
Weakness:
1. Limited Motivation for PFT - paper does not provide theoretical justification for the PFT method, nor does it deeply analyze why positional encodings work better than delimiter tokens for role separation
2. Limited Novelty - The PFT which shifts position IDs to encode roles is an adaptation of existing ideas from long-context learning and positional interpolation
Other Comments Or Suggestions: None
Questions For Authors: 1. Could you please add more information about the datasets being used for training and testing, like size and how it was constructed? As 2K examples for the training set seems to be small and its not clear how different the initial and symm versions are.
2. The PFT introduces a numerical gap between the different role contents and is motivated empirically and intuitively by observing failure models, it would be good if there can be formal theoretical analysis as to why the proposed algorithm might result in better role separation understanding
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review and suggestions! We are glad that you find the paper well written, and the empirical evidence strong.
Many of your questions about theoretical results and other embedding-based methods are great suggestions! We will study them in the future projects. More specifically,
**Theoretical analysis** Intuitively PFT introduces invariant signals to the SFT data and helps curb models learn various shortcuts. It would be interesting to study what property of the transformer architecture or the chat format explains the empirical phenomenon. We will leave it as future work, and will add more discussions in the next version.
**Applying PFT to generalized prompt structures and embedding based methods** In related work, we acknowledged that the PFT in the current form doesn’t directly apply to generalized prompt structures, and embedding based methods (the one we cited is concurrent to our work) have the same motivations and could be used to enhance role-separation (in fact, our methods can be understood as an embedding-based methods — it changes the positional encoding and thus effectively changes the embeddings at each layer, while not requiring explicit embedding tuning). However, the main contribution in this paper is the clear definition of the role-separation problem, and controlled experiments for evaluation. It’s a natural next step to systematically study how to best incorporate role information at token level. We also thank you for adding more related work. They are concurrent to this work, and we will definitely include it in the next version.
**Why enhancing delimiter token doesn’t work well** We suspect it’s because the differentiating signal is still not strong enough, and a more robust approach is to manipulate tokenwise signatures (like PFT, or embedding-based approaches). We agree a deep theoretical analysis could formalize these intuitions.
**Relationship to long-context learning** We briefly discussed the long-context learning works in line 427. In fact, the similar methods in long-context learning have completely different motivations (change positional encoding to simulate longer contexts in training), and we are glad that methods with completely different motivations could work! This suggests the big potential for more advanced techniques of manipulating positional encoding. We thank the reviewer for bringing this to our attention. We already briefly discuss them in the related work, but will add more discussion in the next version
**Ablation studies** since we do controlled experiments (changing one component at a time), we are effectively doing ablation studies.
**Dataset details** For dataset_initial, we discuss the main design and leave the details in Appendix C. We also include actual training data in the supplementary material. We introduce dataset_symm in Sec 4.1 line 206 as a way to combat short 0 by data augmentation. It’s also in the supplementary material. We agree that the different pieces of dataset info are introduced with the development of the paper, and thus are scattered around. In the next version we will add a more concentrated paragraph for data setup.
**Whether the size of training data is too small** we think it really depends on the task, and empirically we do find this is more than enough for the model to learn role-separation (we run for one epoch and perform early stopping) | Summary: The paper introduces the concept of role-separation learning, which reflects the LLM capability to distinguish the system instructions and user queries. The authors evaluate the role-separation capability of LLMs through a controlled experimental framework and conclude current fine-tuned models use task type exploitation and proximity to begin-of-text for role identification, which is considered as relying on superficial proxies. The authors also propose the data-augmentation and Position-enhanced fine-tuning method, which is based on modifying position IDs, to achieve robust actual role-separation capabilities.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: There is no theoretical claim demonstrated in the paper.
Experimental Designs Or Analyses: I checked three main experimental designs and all designs make sense to me:
1. Using accuracy on constructed tasks to present the model role-separation capability and compare the performance of different fine-tuning methods. The input question consists of a clear system instruction and ambiguous user query, which could mislead the LLM to generate an undesired response. When the LLM provides the answer following the provided system instructions under some misleading attacks, the LLM presents role-separation capability. The paper designs different fine-tuning datasets and compares their performance through Table 1 and Table 2.
2. Evaluating the impact of non-essential information like "You are an AI assistant" with different numbers and insert positions. The non-essential instruction could be inserted before or after the key instruction. When the non-essential instruction is inserted after the key instruction, increasing the number of non-essential instructions shifts the position of the key instruction backward.
3. Evaluating the impact of positional distance of the key instruction. Adjust the distance by inserting empty tokens before the key instruction or shifting the position IDs of the key instruction. This experiment does not provide general instructions, therefore isolating the impact of the key instruction positions.
Supplementary Material: I reviewed the appendix, which provides experiment details containing prompt design, model and training details, and PFT performance on mitigating the effect of Proximity to Begin-of-Text shortcut
Relation To Broader Scientific Literature: Existing works evaluating role separation capability do not decompose the influence of role separation capability and other confounding factors. This work demonstrates that LLM performance also depends on pattern matching and superficial shortcuts. This work designs a controlled experiment framework to isolate the role separation capability from pattern memorization. And design experiments to estimate the influence of task-type association shortcuts and proximities to begin-of-text shortcuts. This work tries to evaluate the actual role separation capability which precisely reflects the LLM capability to distinguish systems instructions and user queries.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Strengths:
1. The paper investigates an important concept of role separation since recent real-world applications of LLM increasingly incorporate instructions and information from multiple sources.
2. The paper evaluates and improves the actual role-separation capability in isolation from pattern-matching behavior and potential shortcuts. The paper evaluates the performance against multiple types of attacks.
3. The author proposed the Position-enhanced fine-tuning method which has statistically better performance in improving role-separation capability compared to vanilla SFT.
Weaknesses:
1. Evaluation of the framework and method using more models could be helpful. Models with different scales and types present different capabilities of comprehending, reasoning, and instruction-following. Experiments on different models could provide a more robust and comprehensive view of the proposed method. Additionally, for the PFT method, as shown in Sec 6.1, the optimal distance d may depend on the model type. Conducting experiments on a wider range of models and providing corresponding hyperparameter d will facilitate the further application of the proposed method.
2. It’s unclear whether “Accuracy” is a precise metric to evaluate the role-separation learning capability. The role-separation capability requires the LLM to correctly identify the task, corresponding solving strategy, and problem. However, accuracy needs the LLM to correctly understand the task and solve it, thus is also influenced by the model problem-solving capability.
3. Some related work section needs more details. For instance, in the Prompt Injection Attacks section, how the prompt injection attacks are designed and related to this work. It is also unclear why the mentioned online game datasets are suitable for this work.
Other Comments Or Suggestions: Some examples from the selected datasets would help the readers understand the problem setup and challenges more easily.
Questions For Authors: 1. I’m curious about whether providing some similar in-context examples will improve role-separation capability.
2. In Sec 4.1, the authors mention the task-type association shortcut influences the accuracy, does it mean the second and third columns of numbers shown in Table 1 also include the contributions of this shortcut to the task success? If so, is there any better way to distinguish the contribution of this shortcut and actual role-separation capability?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your review! We are glad you find the problem of role-separation important, and the experiment designs make sense.
For your questions about whether other factors (such as model capability or evaluation metrics) confound the conclusions, we discussed in the paper that controlled experiments help remove them. But we will add more clarifications to the next version. To your questions, more specifically:
**Regarding evaluation with more models** Yes, we agree and include results on both Llama and Gemma models, and find common trends. We want to emphasize that the two models are very different. In particular, Llama 3 models use a large RoPE base frequency hyperparameter (500k) to support long contexts, whereas Gemma uses only 10k. This could partially explain the different choices of $d$ in PFT. But more importantly, we use controlled experiments to further remove the confounding of other model capabilities (comprehending, reasoning, etc): in each experiment, we change only the targeted aspect while holding other aspects the same; then we use the performance difference/trend as evidence for each claim.
**On use of accuracy as metric for role-separation capabilities** First, the tasks are chosen to be simple so that the model's problem solving skill is not a limiting factor (the model performs well in ordinary data). Second, even if model capabilities might still affect the absolute metric scores, we can remove this confounding effect by looking at differences in accuracy. This is what we did: from side by side comparison in table 1, to trend analysis in Fig 5, we use the accuracy difference to prove our claims.
For other questions:
**On why prompt injection attacks relate to role-sep problem** we used prompt injection attacks as motivation for the role-separation problem (section 2), and discussed that we used these adversarial datasets (many are collected from online games) to test the model OOD performance as true measure of role-sep capability (Sec 3). We discussed those prompt injection datasets in the paragraph of line 148, and used Fig 1 as illustration of their designs. We acknowledged that descriptions of the prompt injection attacks are scattered (because of page limits). In the next version, we will add a more concentrated paragraph discussing adversarial datasets.
**On in-context learning** We had early experiments with ICL, and found it suffers the same task-type association problem as SFT. Because ICL and SFT share similar characteristics, we stick with SFT for this paper. It would be an interesting future direction to explore how our results generalize with ICL.
**On question about shortcut and Table 1** Yes, the “good” result of the second column is an illusion caused by task-type association, and we find it by swapping the contents of the system and the user (line 195). Then we remove the task-type spurious correlation by training on data augmented with the swapped examples, and observe extra performance jumps in the 3rd column of table 1 (line 206). | null | null | null | null | null | null | null | null |
Sable: a Performant, Efficient and Scalable Sequence Model for MARL | Accept (poster) | Summary: This paper proposes to use retentive networks to process multiple agents' observations and actions in MARL. The proposed framework can scale to a large number of agents. Extensive experiments and analysis are conducted. As a result, the proposed framework shows performance improvements in 34/45 of tasks and also achieves memory efficiency compared to MAT and IPPO.
Claims And Evidence: Regarding the memory used, I'm wondering if agents share parameteres, espicially on Q, V, K matrices, which may lead to linearly increased memory with respect to agents and time steps.
Methods And Evaluation Criteria: Equation 6 is unclear and confusing. Please read Questions For Authors
Theoretical Claims: No theoretical claims provided
Experimental Designs Or Analyses: The experiments are cndense. I'm wondering what would be the effect when the chunk size and the number of agents increase jointly.
Supplementary Material: Yes
Relation To Broader Scientific Literature: Due to the auto-regressive way used in generating actions, the paper may relate to extensive-form games or MARL which considers communication among agents.
Essential References Not Discussed: no
Other Strengths And Weaknesses: This paper provides extensive details about the algorithms, implementatio, and experiments with figures and tables.
Other Comments Or Suggestions: - it is a bit confusing to concurrently use subscript for agent index and time step. besides, it is confusing to use i refers to both agent index and a chunk.
Questions For Authors: - In euqation 5, do you share K dan V for different agents and different time steps?
- in page 4: which decomposition theorem are you referring to? In the auto-regressive way, agents' actions are dependent in a order. How would \hat{h}_0 be?
- it is unclear about the \tau and \tau_prev. The author needs to explain how does \tau and \tau_prev differ from the subscript i used in Equation 3.
- it is unclear whether you refer to a chunk as a set of agents or a set of observations over time step. In line 205, you are talking about processing agents in parallel. However, in equation 6, L is actually analogous to B in euqation 3 so a chunk is a set of observations over time step. Based on this, using \tau to refer to a chunk can be confused since \tau refers to a trajectory. Moreover, in equation 6, i <= Nt_do is also confusing since i is agent index so only when t_do=0 you will get i > Nt_do. And, I'm not sure what j stands for. Is it an agent or a time step?
- it is confusing that in section 2 you are using continuing tasks while in equation 4 you consider epsiodes. If you are using episodic tasks, Equation 6 will never achieve a second terminal time step (therefore t_do is the terminal time step rather than the first terminal timestep).
- it is not clear how's the decay matrix updated according to your equation (where equation 3 explicitly incorporate exponentially updates)
- since the GPU memories with chunk size 128 is below 3.3 and the performance of Sable seems to be insensitive to the size of chunk. Would it be possible to use a smaller chunk size, e..g, 8, for all tasks? What if the number of agents increase as well?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for the feedback. Your comments on the retention equations and other aspects of our work have helped us identify areas for improvement. We address your questions and comments below.
>wondering if agents share parameteres, espicially on Q, V, K matrices
Sable uses a single network for all agents. The QVK matrices are not unique per agent, but shared in the sense that each agent is treated as another element in the sequence. While longer sequences incur higher memory usage, RetNets address this by constraining computational memory to a fixed chunk size.
>the effect when the chunk size and the number of agents increase jointly
The chunk size can be increased as the number of agents increases, but this will affect memory requirements. The optimal setting is to process the entire training sequence at once, but if this isn't feasible, Sable allows the chunk size to be tuned to maximize hardware utilization and enable training on arbitrarily large sequences.
>which decomposition theorem are you referring to
Please see our reply to D22g marked (**).
>agents' actions are dependent in a order. How would \hat{h}_0 be?
As shown in Line 177 Column 2, $\hat{h}\_{0}$ is $h\_{t-1}^{dec}$, where $h_{t-1}^{dec}$ is the decayed hidden state from the previous timestep. $\hat{h}$ is an intermediary variable that accumulates the hidden state over agents within a single timestep. This is decayed once per timestep to produce $h_{t}^{dec}$. Regarding the agents’ actions order dependency, we mitigate any potential bias by shuffling agent order during training.
> it is unclear about the \tau and \tau_prev
> using \tau to refer to a chunk can be confused
(***) Thank you for going through our paper in such detail, we will make the following changes to the paper to help with the clarity of our method.
For the chunkwise representation, we split a trajectory $\tau$ consisting of $L$ timesteps, with $N$ agents gathered during inference into smaller chunks each of length $C$ such that the retention for chunk $i$ can be given as:
$\begin{equation}\begin{aligned}Q_{[ \tau_{i} ]} &= Q_{C(i-1):Ci}, K_{[\tau_i]} = K_{C(i-1):Ci}, V_{[\tau_i]} = V_{C(i-1):Ci} \\\\h_i &= K^T_{[\tau_i]} \left( V_{[\tau_i]} \odot \zeta \right) + \delta \kappa^{ \lfloor L/C \rfloor } h_{i-1}, \zeta = D_{N \cdot \lfloor L/C \rfloor, 1:N \cdot \lfloor L/C \rfloor} \\\\\text{Ret}(\boldsymbol x_{[\tau_i]}) &= \left( Q_{[\tau_i]} K^T_{[\tau_i]} \odot D \right) V_{[\tau_i]} + \left( Q_{[\tau_i]} h_i \right) \odot \xi \\\\\text{where } \ & \xi_{j} = \begin{cases} \kappa^{\left\lfloor j / N \right\rfloor + 1}, & \text{if } j\leq Nt_{d_0} \\\\
0, & \text{if } j > Nt_{d_0} \\\\\end{cases}.\end{aligned}\end{equation}$
Here $h$ is an intermediary variable carrying information from one chunk to the next, $h_0$ is the hidden state at the beginning of $\tau$ that will be used for training, $\zeta$ is the last row of the decay matrix that is created from the data of the chunked trajectory for chunk $i$ and $\xi$ is a duplicated column vector. Please see our answer to (C1) of reviewer `qBu1`.
This removes the confusing $\tau$ and $\tau_{prev}$ notation from the text and should link Equations 3 and 6 more clearly. It also removes $i$ from the definition of $\xi$ giving clarity around $Nt_{d_0}$, as $j$ is now an index in $\xi$.
>it is unclear whether you refer to a chunk as a set of agents or a set of observations over time step
Sable's flexibility allows for treating either entire rollouts with multiple agents at each timestep, or just the number of agents, as the training sequence length. With E environments, L timesteps, N agents, and C chunks, the default training batch shape is $(E, NL)$, divisible into $(E, N [L / C] )$ size chunks. When using only the number of agents as the sequence length, the shape is $(EL, N)$, divisible into $(EL, N/C)$ size chunks.
>in section 2 you are using continuing tasks while in equation 4 you consider epsiodes
All tasks we consider have fixed length time horizons and termination conditions and we allow for environments to automatically reset once an episode terminates. Thus, for a fixed rollout length it is possible for there to be multiple terminal timesteps.
>not clear how's the decay matrix updated according to your equation
In our case, the decay matrix is blockwise lower diagonal of size $(LN, LN)$ with block size $(N, N)$ where $N$ is the number of agents. Each element is exponentially decayed given its position in time for a given trajectory which follows Equation 2. We discuss our adaptations to the decay matrix after Equation 6 in Lines 213-240 and give an example in Appendix D.
>Would it be possible to use a smaller chunk size, e..g, 8, for all tasks?
This is possible, but not practical. Smaller chunks during training use less memory at the cost of wall clock time. In practice, we train with a chunk size that is as large as our computational memory permits for the fastest training wall clock time.
---
Rebuttal Comment 1.1:
Comment: I confirm that I have read the author response to my review and will update my review in light of this response as necessary. | Summary: The paper proposes to replace the attention mechanism in Multi-Agent Transformers with Retentive Networks and shows that such tweak (called Sable method by the paper) leads to improvement in the following three dimensions: strong performance, memory efficiency and scalability. The paper evaluates the Sable method on multiple multi-agent benchmarks to show its performance against the baseline methods spanned from independent learning, centralized training with decentralized execution, to centralized learning.
Claims And Evidence: * The claim on the scalability may be a bit untenable. My understanding of RetNet is that it was proposed to reduce the memory cost in inference time, assuming that the model is well trained and everything else would remain the same as the transformers. So, in this sense, **Sable is essentially the same as MAT and should be a centralized method**. So why Centralized Learning is deemed as not memory efficient or scalable. But Sable, as one of such kind, is scalable? Further, if the memory constraint is the main issue in MAT when scaling up to large numbers of agents, we can then resort to some memory-efficient optimizations of transformers (E.g., SGD instead of Adam) or memory-efficient transformers? Can the authors explain why RetNet is required specifically here?
* The claim of “a new sequence model for MARL” is also a bit debatable. The Sable is no different than MAT in the sequence modelling perspective. They are both centralized methods that take the whole sequence from all agents as input and output the joint actions autoregressively.
Methods And Evaluation Criteria: **The method part is rather vague in general and there is not much info on this in the main text**. The paper, possibly intentionally, puts the method details (implementation details) into the appendix, which I guess might imply that the algo details of Sable are pretty like MAT: they both use the PPO-like training for the actor and critic updates, as specified in Algorithm 1, and the observation sequence encodings from the encoder are fed into the decoder to produce actions.
1. However, it’s unclear what $o_b$ is. What does $b$ mean here?
2. In the main text, “The decoder takes a similar sequence but of actions instead of observations as input”, which implies the decoder only takes the actions as input (different from the algorithm).
3. Again, “we use MAT-style single-timestep sequences to optimise memory usage and reserve chunking to be applied across agents.”, which implies that Sable does not take the trajectories as input (different from algorithm).
4. Furthermore, “this change to the encoder makes it unable to perform full self-retention across agents, as it cannot be applied across chunks” what does it mean here? The Sable does not rely on the retention then?
Theoretical Claims: There are no theoretical proofs and claims in this paper.
The paper mentions "It is this autoregressive action selection which leverages the advantage decomposition theorem to give Sable theoretically grounded convergence guarantees." But **there is no such analysis throughout the paper, including appendix**. It's even **unsure what advantage decomposition theorem it refers to (no references provided)**.
Experimental Designs Or Analyses: In section 4.2, the degradation of IPPO performance looks a bit suspicious: any reasons why it happened? Can it be addressed by normalizing the returns or learning rate decay? Regarding the memory usage, is it in the training or in the inference?
Supplementary Material: Checked the appendix in detail, especially D. Sable implementation details, and C. Hyperparameters
Relation To Broader Scientific Literature: n/a
Essential References Not Discussed: there are quite a lot of papers on reducing the memory cost in training transformers, e.g., *Memory-efficient Transformers via Top-k Attention*, *Memory Efficient Continual Learning with Transformers*, and in transformer optimizations, e.g., *ZeRO: Memory Optimizations Toward Training Trillion Parameter Models*, *Full Parameter Fine-tuning for Large Language Models with Limited Resources*. The paper should discuss how these are related and should consider some of them as the baseline improvements for transformer architectures used in MAT
Other Strengths And Weaknesses: * Strengths: the empirical results are promising
* Weakness: the paper in its current version does not present the method in a clear and convincing way; the paper also misses quite a large chunk of related work on memory-efficient transformers and the optimizations.
Other Comments Or Suggestions: There is an error in Line 20 in algorithm 1: should be gradient descent for $\phi$.
Questions For Authors: The paper provides the hyperparameter search space for Sable, MAT etc. but what are the final configs used to produce the reported results?
Ethical Review Concerns: n/a
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for your clarifying questions and feedback. We provide detailed responses below.
> The claim on the scalability may be a bit untenable
RetNets' chunkwise formulation allows us to process long sequences in small chunks, scaling to arbitrarily long sequences regardless of whether they consist of many agents at a single timestep or over multiple timesteps. This scalability difference is evident in Figure 4, where MAT scales much worse than Sable.
> Sable’s details are vague, it is essentially the same as MAT, both are centralized methods, the claim of "a new sequence model for MARL" is debatable
We emphasize that Sable introduces a fundamentally different sequence modeling approach compared to MAT. While both Sable and MAT are CL methods, Sable can reason temporally, which allows it to model sequences of agents over time and capture long-term dependencies, unlike MAT, which is limited to reasoning within a single timestep.
Due to the page limit, we did not have enough space to add all the method details in our initial submission but they can all be found in the appendix. We have an extra page for the camera-ready version and will move these method details into the main text to improve clarity.
> What does b mean
The b subscript denotes a batch of trajectories from the buffer. We acknowledge that this notation could be made more clear, and we will update Algorithm 1 and the notation accordingly to avoid confusion.
> the decoder only takes the actions as input
We are only referring to the first block of the decoder, the second block performs cross retention with the output of the first block and the encoded observations as can be seen in Figure 13.
> Sable does not take the trajectories as input
This statement referred to is specifically to the scaling strategy described in Section 3 (Scaling the number of agents) for handling thousands of agents. It represents a variant of Sable optimized for extremely large agent counts, not the default implementation used in most experiments. The algorithm in Appendix D shows the full version of Sable that conditions on trajectories.
> Sable does not rely on the retention
Sable still relies on retention. This statement referred to a limitation when using the agent-chunking scaling strategy described in Section 3. When chunking across a large number of agents, self-retention can only be applied per chunk, which means that agents in different chunks do not “retend” to each other. When the number of agents all fit into a single chunk, it is possible to perform full self-retention across all agents as they all fit within the same chunk.
This provides a design trade-off to the user when considering the scale of the problem at hand and the memory available. Furthermore, we would like to point out that in Figure 4, we show that even though Sable cannot perform full self-retention across all agents (only per chunk) at large scales, it still achieves the best performance, whereas MAT is not even able to fit into memory at the extreme end.
> unsure what advantage decomposition theorem it refers to
(**) We are referring to the advantage decomposition theorem (ADT) originally derived in [[Kuba (2022)](https://bit.ly/4hTYYQU)] (there called Lemma 1). This theorem underpins the Fundamental Theorem of Heterogeneous-Agent Mirror Learning (HAML) as mentioned on Line 58 Column 2 in the introduction. We will amend the text to make the link between HAML and the ADT clear.
> degradation of IPPO performance looks a bit suspicious
Please refer to our answer to a similar question from reviewer `pmav` marked (*).
> memory usage, is it in the training or in the inference?
The memory usage reported in Figure 5 is measured during training as this is when the bulk of the memory is used.
> On the usage of memory efficient transformers
Memory efficient transformers without a dual form do not maintain a hidden state, thus would need to keep a cache of per-agent/timestep observations during inference. The two most important aspects of retention in Sable for scaling is low memory requirements and the dual form for efficient inference. For a further discussion on the differences between RetNets and Transformers, we refer the reviewer to Section 2.4 in [[Sun (2023)](http://bit.ly/4ldsu7i)]. We are not against adding some of these related works into the text if the reviewer thinks it would be useful.
> should be gradient descent for ϕ
Thank you for pointing this out, we will update it.
> final configs used to produce the reported results?
The final configs, source code, and raw experimental data are available at the link at the end of Section 3. You can download the data by pressing "Experiment Data" at the top of the page and find optimal hyperparameters at All experiments data/Benchmark/optimal-hyperparams/… to reproduce our results. We chose not to add the configs directly to the appendix as it would add a large number of extra pages. | Summary: This paper presents a novel sequence modeling approach for MARL. It adopts the retention mechanism instead of the attention mechanism in MAT to achieve computational efficiency, memory efficiency, and scalability.
## update after rebuttal
During the rebuttal, the authors adequately addressed most of my concerns. Although the problem settings are not clearly presented in the current version of the manuscript, the authors showed their willingness and plan to address this in the modified manuscript. In this regard, I will maintain my score toward acceptance.
Claims And Evidence: In general, yes. The authors elaborate the reasoning mathematically and prove their argument experimentally. (e.g. Memory usage comparison with baseline methods)
Methods And Evaluation Criteria: Yes. They compared the proposed methods in various MARL benchmark problems.
Theoretical Claims: N/A
Experimental Designs Or Analyses: Yes, their experiments mostly seem valid. However, to my understanding, some of the baselines utilize partial information (in the default setting), unlike the proposed method or MAT. This information gap makes direct comparisons unfair. An explicit acknowledgment of this information gap (if it exists) may be necessary to avoid misleading readers unfamiliar with the baselines and experiments. If the authors have modified their implementation to address this gap, it should be properly mentioned in the manuscript.
Supplementary Material: Most of them, including additional experimental results, task settings, and the structure of the proposed method.
Relation To Broader Scientific Literature: The paper presents a novel approach that adopts RetNet (Retentive Networks) for MARL, achieving scalable methods applicable to very large-scale multi-agent tasks, including scenarios with thousands of agents.
Essential References Not Discussed: As the paper covers various MARL settings, this version reasonably includes essential literature, although it omits some state-of-the-art (SOTA) algorithms in specific test settings. For example, the paper introduces and compares somewhat outdated literature on value-based methods.
Other Strengths And Weaknesses: Strength
- The paper explored the multi-agent problems in various perspectives, such as IL, CTDE, and CL.
- The paper conducted extensive experiments to evaluate the proposed model in diverse benchmark problems.
- The authors open-sourced their code.
- The proposed methods are applicable to very large-scale multi-agent problems.
Weakness
- Although the paper evaluated the proposed method in diverse MARL tasks, its major contribution is replacing the attention mechanism in MAT with the retention mechanism.
Other Comments Or Suggestions: (C1) In Eq. (3), $\nu_{ij}$ and $\zeta_{ij}$ contain index $j$ but $j$ does not appear in their expressions. Perhaps, it could be expressed differently to avoid any confusion.
(C2) In training part, the corresponding loss functions and algorithm presented in Appendix should be mentioned for readers to refer to them.
(C3) Some pictorial illustration would be helpful for readers to understand Neom, a newly introduced task, if possible.
Questions For Authors: (Q1) Do all baseline methods are trained in via centralized training?
(Q2) The authors mentioned that their approach is classified as centralized training. Then, what is the formal formulation for the main problem setting? DecMDP? MMDP? or what? The proper formulation rather than just general problem formulation for cooperative MARL tasks should be mentioned somewhere in the manuscript.
(Q3) How critical is it for performance to conduct a random permutation of the order of agents within a timestep?
(Q4) Do QMIX and MAPPO still leverage partial information during decision-making, while Sable and MAT utilize global information?
(Q5) It would be helpful for readers to better understand the content if the dimensions of each matrix were explicitly defined somewhere in the manuscript. For example, $\zeta=D_{NL,1:NL}$ is a confusing. Is it different from $\zeta=D_{NL,NL}$?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your feedback, especially the close attention paid to our equations/notation. We provide detailed responses below.
>(Q1) Do all baseline methods are trained in via centralized training?
In addition to answering the above question, we also wish to clarify a misunderstanding evidenced by the following comment:
> “However, to my understanding, some of the baselines utilize partial information (in the default setting), unlike the proposed method or MAT. This information gap makes direct comparisons unfair.”
While MAT and Sable process information from all agents using a single network, they only use local observations during training and not the global state, unlike CTDE methods, e.g. MAPPO, QMIX. All methods use the same observations at inference time. We will clarify the definition of CL in the introduction. Not all baselines belong to the CL paradigm; we include baselines from IL, CTDE and CL.
> (Q2) Formal problem setting
In Section 2, we define the problem setting as a decentralised-POMDP.
> (Q3) Random permutation of agents and performance
We didn't investigate this, believing it's more principled to randomly permute agent order each timestep. This prevents the model from relying on specific orderings, avoiding bias [[Kuba (2022)](https://bit.ly/4hTYYQU)].
> (Q4) Do QMIX and MAPPO still leverage partial information during decision-making, while Sable and MAT utilize global information?
Please refer to our answer in Q1.
> (Q5) Dimensions of matrices
Please refer to our answer to (C1) below and to how we intend to rewrite Equation 6 in our answer to reviewer `wrvH` marked (***).
> outdated literature/baselines on value-based methods.
A well-tuned QMIX has been shown to outperform various extensions [[Hu (2023)](https://bit.ly/3QUdGwx)]. For this reason, we feel that QMIX represents a sufficiently strong value-based baseline. We will also clarify this in the experiments section.
>(C1) In Eq. (3), $\nu_{ij}$ and $\zeta_{ij}$ contain index $j$ but $j$ does not appear in their expressions. Perhaps, it could be expressed differently to avoid any confusion.
Since Equation 3 does not have $\xi$ or $\nu$, we assume the reviewer is referring to $\xi$ and $\zeta$ in Equation 6. We acknowledge that our presentation of Equation 6 was imprecise. This misrepresentation was inadvertently transferred from the original RetNet Paper. Both $\zeta$ and $\xi$ are matrices of the same shape as the decay matrix $D$, with dimensions $C \times C$, where $C$ is the chunk size. These matrices contain values that are constant across columns but vary across rows. We will revise Equation 6 to eliminate the ambiguity caused by the overloaded use of the index $i$, and we will explicitly define the role of $j$ to avoid confusion. We refer the reviewer to our response to reviewer `wrvH` marked (***) for an overview of how we intend to update Equation 6.
> (C2) In training part, the corresponding loss functions and algorithm presented in Appendix should be mentioned for readers to refer to them.
Due to the page limit, we did not have enough space to add this in our initial submission (moving it to the appendix), but since we have an extra page for the camera-ready version, we will add the loss function back to the main text.
> (C3) Some pictorial illustration would be helpful for readers to understand Neom, a newly introduced task, if possible.
Thank you for the suggestion, we will add a render of a step in Neom.
> major contribution is replacing the attention mechanism in MAT with the retention.
Indeed, the reviewer is correct that this is our main contribution. However, we do not see it as a weakness of our work. The use of retention in Sable goes beyond a straightforward replacement of attention in MAT. To get it to work, we had to change several aspects of the original retention mechanism including:
* Introducing a reset mechanism within the decay matrix to ensure that memory is retained within episodes and not across their termination boundaries.
* Carefully control the decay over timesteps, which, unlike the original RetNet’s decay that operates only over token positions, has to handle multiple tokens/observations in each timestep.
* Developing a cross-retention mechanism, a retentive encoder and an encoder-decoder RetNet, none of which are part of the original RetNet design and are also not straightforward implementations.
Therefore, Sable as a working retention-based sequence model for RL, is a highly non-trivial algorithmic implementation. This should also be clear when comparing our code with the original implementation of RetNets and/or MAT. Additionally, retention enables Sable to attend over entire trajectories, which is impossible in MAT and is the main reason for Sable's impressive performance. We are excited about what Sable is capable of, with extensive empirical evidence giving such a strong signal for its potential use in applications.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. I need some more clarifications on some points regarding Q1 and Q2.
In general cooperative MARL settings, the problem is considered as DecPOMDP, as each agent executes based on its own partial observation, not including others. In CL and 3.Method.Execution, the model utilizes aggregations of observations from all agents and "iteratively" generates action via the "centralized decision" maker. How is this viewed as DecPOMDP?
Do authors view partial observability as there are some states (perhaps part of the global state) affecting transitions but not being included in the aggregation of observations?
If the decision maker utilizes the aggregated observations from all agents during execution, this additional information can lead to improved performance compared to general MARL settings based on Dec-POMDPs, which rely on partial information during execution.
Please clarify if I’m mistaken; otherwise, I hope these differences are clearly addressed in the problem formulation and experimental settings.
---
Reply to Comment 1.1.1:
Comment: Thank you for engaging with us in discussion, we sincerely appreciate it.
*“Do authors view partial observability as there are some states (perhaps part of the global state) affecting transitions but not being included in the aggregation of observations?”*
This is exactly correct, the general problem setting we consider is a cooperative task with shared rewards where the global state is not factorised across individual agent observations. That is, even if at execution the agents can condition on other agents’ observations through attention/retention for CL, this does not reconstruct the full state, and therefore remains a partial (but aggregated) observation. We do acknowledge this provides more information per agent compared to CTDE and IL methods at execution time, but it also comes with increased inference costs, which is exactly what we are addressing with Sable.
As a concrete example, consider a two-agent grid world where agents receive a joint reward when they simultaneously reach a goal `G`.
```
|-----|-----|-----|-----|-----|-----|-----|
| # | # | # | # | # | # | # |
| # | . | A2 | . | . | G | # |
| # | . | # | . | . | . | # |
| # | . | . | . | . | . | # |
| # | A1 | . | . | # | . | # |
| # | # | # | # | # | # | # |
|-----|-----|-----|-----|-----|-----|-----|
```
`A1` then has a partial observation of the grid, which can be given as
```
# | . | .
# | A1 | .
# | # | #
```
while `A2` has partial observation
```
# | # | #
. | A2 | .
. | # | .
```
An aggregation over these observations won’t reconstruct the true global state which implies that the problem remains 1) partially observable and 2) cooperative due to the shared reward.
We do however notice that our current notation concerning what agents condition on (in section 2) does not capture this as precisely as it should. We will update our definition of a Dec-POMDP to include an observation function (which is quite standard, e.g. Oliehoek and Amato, 2016). In our case, the observation function maps from the underlying global state and agent id to the agent’s probability distribution over the power set of concatenated observations. For IL, the probabilities are only non-zero over singleton sets (i.e. single observations) and for CL it has full support (i.e. includes probability mass on all possible combinations). We note, still in both these cases, the emitted observation remains partial with respect to the full state. We will also make this more clear in our experiment section to highlight the differences and that this could influence performance.
Finally, we note that the MAT paper (Wen, et al., 2022) considers the Markov game formulation of the problem. We do not feel this is the best setting given the environments considered. Most, if not all the environments in MAT, and those we consider in our work (as well as the practical applications we ultimately care about) do not have full state observability at execution. Therefore, we remain convinced that the Dec-POMDP formulation is the most well-suited to describe our problem setting. That said, we remain open to any counter arguments to this view, and would happily update our definition if an improved formulation is proposed.
**References**
* Wen, M., Kuba, J., Lin, R., Zhang, W., Wen, Y., Wang, J. and Yang, Y., 2022. Multi-agent reinforcement learning is a sequence modeling problem. Advances in Neural Information Processing Systems, 35, pp.16509-16521.
* Oliehoek, F.A. and Amato, C., 2016. A concise introduction to decentralized POMDPs (Vol. 1). Cham, Switzerland: Springer International Publishing | Summary: The work proposes a novel sequence model architecture for multi-agent reinforcement learning (MARL) and conducts a large empirical evaluation to validate the efficacy of the new approach. The architecture is based on retention networks and optimises the sequence model architecture similar to the prior multi-agent transformer (MAT) in a central learning fashion. However, in evaluations across 45 tasks, the novel architecture is found to outperform standard MARL baselines and MAT by significant margins, and to be significantly more efficient in terms of memory requirements, model inference speed, and to be more scalable to tasks with many agents. To verify the importance of different components of the proposed approach, ablation studies are provided in two tasks that verify the importance of each novel component.
Claims And Evidence: Overall, I find the claims made in this work to be clearly presented and well supported by clarifications and empirical evidence.
The only minor point that I found confusing is in the introduction, in which the authors contrast their approach to three categories of MARL: independent learning, centralised training with decentralised execution, and centralised learning. However, as I understand, Sable represents a centralised learning in the same way as the multi-agent transformer is centralised learning. This makes the contrast to the categories somewhat confusing, and I would suggest to clarify the relationship of Sable to such prior work to avoid confusion.
Methods And Evaluation Criteria: The methodology appears sound and is largely well presented in Section 3 of the work. However, the work omits several details that are only mentioned in the Appendix and should at least be briefly stated in the main part of the work:
1. The training objective is only defined in Appendix D and should be included in Section 3.
2. The network architecture is somewhat hard to follow from Section 3 without visualisation. Figure 13 is very well presented but unfortunately only shown in the Appendix of this work.
3. Experiments include tasks with continuous and discrete action spaces but the work does not clarify how the Sable network is adapted to adjust for these differences. Would the authors be able to clarify how the policy is adjusted for these settings and whether the optimisation objective differs across these settings?
Theoretical Claims: The work does not provide any theoretical claims and proofs.
Experimental Designs Or Analyses: I verified the details provided about the conducted experiments. I find the evaluation to be well presented and very detailed. I commend the authors for following suggestions of recent work on evaluation practises in RL, and for providing plenty of details in the supplementary material.
Below are some clarification questions and further comparison points that have not been presented in this work and would benefit the contextualisation of this work:
1. The work compares heavily to the multi-agent transformer approach and states that MAT is "not able to condition on observation histories". Would the authors be able to elaborate on this statement? Given the MAT architecture is based on a transformer, it would seem plausible to add longer context based on the observation history of agents, similar to in Sable, even if this has not been done in the original work.
2. In Figure 4, the performance of IPPO is shown to degrade throughout training for a LBF task with 128 agents which is unexpected. Would it be possible that this degradation is the result of suboptimal hyperparameter tuning? How do the authors explain that performance of IPPO becomes worse as the algorithm continues to train?
3. The work discusses the achieved returns and memory efficiency of Sable and MARL baselines but does not discuss their training cost. Would the authors be able to provide the cost of training Sable in comparison to MAT, IPPO and MAPPO? Related, Figure 5 (b) shows how the memory cost can be reduced by using smaller chunk sizes without deteriorating performance. Would I be correct in assuming that reduced chunk size comes at a cost of reduced training speed?
4. Figure 1 compares the throughput of Sable to MAT in terms of steps per second. Would the authors be able to provide a similar comparison to IPPO and MAPPO and clarify how exactly these numbers were obtained?
Supplementary Material: I reviewed supplementary material B, C and D.
Relation To Broader Scientific Literature: The authors state that their work takes inspiration from recent work in linear recurrent models that considered e.g. the application of state space models in RL [1]. Would the authors be able to elaborate in what way the retentive network architecture applied in Sable differs from this work?
[1] Lu, Chris, Yannick Schroecker, Albert Gu, Emilio Parisotto, Jakob Foerster, Satinder Singh, and Feryal Behbahani. "Structured state space models for in-context reinforcement learning." Advances in Neural Information Processing Systems 36 (2023): 47016-47031.
Essential References Not Discussed: I am not aware of any essential references that are not discussed.
Other Strengths And Weaknesses: I would like to commend the authors on a strong empirical evaluation that provides depth and breadth and answers focused questions. I further would like to emphasise that the work releases all experimental data and code.
Other Comments Or Suggestions: No further suggestions.
Questions For Authors: 1. Would the authors be able to elaborate why the multi-agent transformer architecture is stated to not be able to use observation history as inputs?
2. In Figure 4, the performance of IPPO is shown to degrade throughout training for a LBF task with 128 agents which is unexpected. Would the authors be able to elaborate on why this might occur?
3. Would the authors be able to provide information on the training and inference cost of Sable in comparison to MAT, IPPO and MAPPO?
4. Figure 5 illustrates how a reduced chunk size can reduce the GPU memory cost without any cost to algorithm performance. Would I be correct in understanding that a smaller chunk size comes at a cost of reduced training speed due to lower degrees of parallelisation?
**In response to the author rebuttal, I increased my score**
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: Thank you for your feedback, especially your comments on our positioning within the context of different MARL algorithms, and questions on implementation and experimental details, which helped us improve the paper. We provide detailed responses below.
> Confusion from the introduction's contrast between Sable and prior MARL approaches.
We acknowledge that the introduction to Sable can be improved and will make it clear that Sable is a CL method. Our narrative was that Sable breaks the typical CL mold by being both performant, memory efficient and scalable, unlike other CL methods. We will update the introduction to convey this more clearly.
> Include the training objective in Section 3 and add Figure 13 to Section 3 to visualize the network architecture.
Due to the page limit constraint, we moved some additional details and the architecture figure to the appendix. We will add these back into the main text of the updated version.
> How is the Sable network and policy adjusted for continuous vs discrete action spaces, and does the optimization objective differ?
Sable's policy network uses different output heads for discrete and continuous actions, but the architecture and PPO optimisation objective remain the same. For discrete actions, the policy head outputs action logits per agent, which are used to sample actions and train the policy. For continuous actions, the policy outputs mean values and a shared log standard deviation parameter, to sample actions from a Gaussian distribution. We will add this and more details to the appendix.
> in what way the retentive network architecture applied in Sable differs from Structured state space models for in-context reinforcement learning?
While both approaches share the core idea of replacing attention with a more memory-efficient mechanism to enable scalable sequence processing, they differ in their underlying architecture, and research goals. Sable relies on the cross-retention mechanism which is an extension we added to RetNets. There is no obvious analogue for this in S5. Additionally, the focus of S5 is on single-agent RL, in-context learning and meta-learning, while our work focusses on computationally efficient long context memory in MARL. If interested, we refer the reviewer to Section 2.4 of the RetNet paper [[Sun (2023)](http://bit.ly/4ldsu7i)] where the difference between S4 and RetNets are discussed in detail.
> Why can't the multi-agent transformer architecture use observation history as inputs?
MAT’s architecture lacks a recurrent formulation and handling temporal memory with transformers is challenging [[Parisotto (2019)](https://bit.ly/449KLfB), [Meng (2022)](https://bit.ly/4lbz6TJ)]. Although it is possible to maintain a cache at inference time for memory over the sequence, it is less scalable due to high memory requirements from maintaining a cache. Our RetNet for RL is advantageous in that it only requires a hidden state and constant memory at inference time.
> Why does IPPO's performance degrade during training in the 128-agent LBF task in Figure 4?
(*) Sharing parameters makes distinguishing agents harder, and partial observability leads to non-stationarity. Sable and MAT overcome this with auto-regressive action selection, which aids coordination and non-stationarity. Hyperparameters shouldn't be the issue, as they were tuned. For additional information, we refer the reviewer to Appendix A3, Lines 808-824.
> Would the authors be able to provide information on the training and inference cost of Sable in comparison to MAT, IPPO and MAPPO?
Below we show training and inference SPS in Neom.
Table 1: _Training SPS_
| Task Name | IPPO | Sable | MAT |
|--------------------- |-----------|-----------|-------------|
| Neom-512-ag | ~24k | 410 | 63 |
| Neom-128-ag | ~59k | 2542| 1505 |
| Neom-32-ag | ~180k | 11111 | 10391 |
Table 2: _Inference SPS_
| Task Name | IPPO | Sable | MAT |
|--------------------- |-----------|-----------|-------------|
| Neom-512-ag | 4600 | 759 | 234 |
| Neom-128-ag | 3735 | 1590| 1503 |
| Neom-32-ag | 5022 | 3229 | 3198 |
Sable is significantly faster than MAT but slower than IPPO. This is expected as Sable and MAT both use larger transformer style networks, while IPPO uses a smaller MLP. Additionally, we believe this is a more fair comparison as MAT is also a centralised learning method and the previous SOTA in MARL, while IPPO is independent and has significantly worse performance than Sable.
> Would I be correct in understanding that a smaller chunk size comes at a cost of reduced training speed due to lower degrees of parallelisation?
Yes, exactly. Decreasing chunk size reduces training speed. Larger chunk sizes allow more parallel computation and faster training.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their clarifications that address most of my comments. I remain convinced that this is an excellent submission that should be accepted. I decided to increase my score to **strong accept**.
That being said, in line with reviewer qBU1 and my prior comments, I hope the authors will be able to make the assumptions made by Sable and its central learning setting more clear and include a nuanced discussion of it with respect to other algorithms. Similarly, as stated by the authors in their response, I hope to see additional details and Figures (e.g. Figure 13) in the main text of the work.
---
Reply to Comment 1.1.1:
Comment: **Thank you** for taking our reply into consideration and increasing your score. We truly appreciate your constructive feedback and we are happy to hear that our clarifications helped to address your comments.
In our updated manuscript, we will make sure to include what you, and reviewer `qBU1`, have asked for. Specifically, we will:
* Clarify the assumptions made by Sable with a more nuanced discussion of its positioning as a CL method with respect to the other algorithms.
* Include additional details and figures in the main text, in particular, the pseudocode (optimisation objectives and Algorithm 1), architecture diagram (Figure 13), visualisation of Neom and an improved Equation 6, to further aid in clarity and understanding.
* Update the problem formulation of the Dec-POMDP to include an observation function to make it more clear what agents condition on during execution and mention in the experiment section how this could influence performance when comparing IL, CTDE and CL. | null | null | null | null | null | null |
Optimal Sensor Scheduling and Selection for Continuous-Discrete Kalman Filtering with Auxiliary Dynamics | Accept (poster) | Summary: The paper addresses the problem of optimal sensor scheduling and selection in Continuous-Discrete Kalman Filtering (CD-KF) for Bayesian State-Space Models (SSMs), where continuous-time processes are observed through multiple sensors with discrete, irregularly timed measurements. The novelty of the work lies in the incorporation of *auxiliary state dynamics*, which influence the measurement process (e.g., sensor energy constraints, environmental conditions). The authors model sensor measurements as *inhomogeneous Poisson processes* and derive an upper bound on the mean posterior covariance matrix, which is continuously differentiable in sensor measurement rates, allowing *gradient-based optimization*. The main contributions are:
1. A differentiable upper bound on the mean posterior covariance of CD-KF.
2. A finite-horizon *optimal control framework* that jointly optimizes measurement rates, auxiliary dynamics, and covariance constraints.
3. A *deterministic scheduling method* for selecting actual measurement times using optimal quantization, minimizing Wasserstein distance from the Poisson distribution.
4. Empirical results in *state-space filtering and dynamic Gaussian process regression*, demonstrating improved trade-offs between accuracy and resource usage.
Claims And Evidence: ### 1. **Upper Bound on Posterior Covariance**
#### The authors derive an upper bound on the mean posterior covariance matrix of the Continuous-Discrete Kalman Filter (CD-KF) in scenarios where sensor measurements follow inhomogeneous Poisson processes.
#### This bound is shown to be continuously differentiable with respect to sensor measurement rates, making it amenable to gradient-based optimization (Proposition 5.1).
#### The derivation relies on Jensen’s inequality and the monotonicity properties of the Kalman update function to ensure that the bound holds in expectation.
#### The correctness of the bound is theoretically proven and further validated through numerical simulations (e.g., tracking the true covariance in experimental results).
### 2. **Finite-Horizon Optimal Control Formulation**
#### The problem is formulated as an optimal control problem over a finite horizon, where the goal is to jointly optimize:
- Sensor measurement rates (modeled as control variables).
- Auxiliary-state dynamics (e.g., energy constraints, environmental interactions).
- Constraints on posterior covariance to balance accuracy and resource constraints.
#### The control framework is mathematically well-posed, ensuring feasibility through a continuously differentiable cost function and constraints (Equations 13a–13h).
#### The authors provide conditions (Assumption 6.1) ensuring that at least one admissible solution exists.
Empirical validation in a robotic sensing task demonstrates that the optimized sensor scheduling effectively balances accuracy and energy constraints.
### 3. **Deterministic Measurement Scheduling**
#### After optimizing sensor rates, a deterministic scheduling method is proposed to convert Poisson-based measurement rates into actual measurement times.
#### The scheduling method is based on optimal quantization, minimizing the Wasserstein-2 distance between the optimized Poisson rate distribution and a deterministic empirical schedule (Proposition 7.1).
#### This ensures that measurements are distributed optimally in time, avoiding the risk of stochastic fluctuations that could degrade performance in real-world applications.
#### The closed-form solution for quantization-based scheduling makes it computationally efficient, allowing practical deployment in real-time systems.
### 4. **Empirical Validation**
#### The proposed approach is validated through two robotic sensing experiments:
- A standard environment where a robot optimizes its trajectory and measurement schedule to minimize state estimation uncertainty.
- A radiation exposure scenario, where sensor performance degrades due to environmental conditions, requiring adaptive scheduling to maintain accuracy.
#### The Kalman filter's posterior covariance estimates obtained through the proposed approach closely track the true values, demonstrating the validity of the upper bound.
#### The optimized measurement schedule leads to improved trade-offs between estimation accuracy and resource usage compared to naive scheduling strategies.
#### The empirical results confirm that the deterministic selection of measurement times closely matches the expected Poisson distribution while reducing variance.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are well-aligned with the problem, offering a rigorous and practical approach to sensor scheduling in Continuous-Discrete Kalman Filtering (CD-KF).
### 1. **Modeling Sensor Measurements with Poisson Processes**
#### Using inhomogeneous Poisson processes is appropriate, as real-world sensors often collect data at irregular intervals due to resource constraints.
#### The differentiability of the posterior covariance bound enables efficient gradient-based optimization, making the approach computationally feasible.
### 2. **Optimal Control Formulation**
#### The authors formulate a finite-horizon optimal control problem that optimizes:
- Sensor measurement rates to balance estimation accuracy and resource constraints.
- Auxiliary state dynamics (e.g., energy usage, environmental effects).
- Covariance constraints to maintain estimation quality.
#### The formulation generalizes existing Kalman filter-based sensor scheduling by incorporating dynamic constraints and auxiliary state interactions.
### 3. **Deterministic Scheduling via Optimal Quantization**
#### Instead of randomly sampling from a Poisson process, the authors propose a deterministic scheduling strategy based on Wasserstein distance minimization, ensuring that measurement times closely match the optimized rates while reducing variability.
### 4. **Evaluation through Robotic Sensing Experiments**
#### The experiments validate the approach in two scenarios:
- Standard sensor scheduling, optimizing a robot’s trajectory and measurement plan.
- Radiation-damage scenario, where sensor degradation requires adaptive scheduling.
#### Results show that the method improves estimation accuracy and reduces resource usage, confirming the effectiveness of the proposed optimization framework.
### 5. **Potential Areas for Improvement**
#### Comparative baselines (e.g., reinforcement learning, heuristic policies) would provide stronger empirical validation.
#### Scalability analysis for larger sensor networks is not extensively discussed.
#### Sensitivity analysis on how different auxiliary constraints affect scheduling decisions would enhance generalizability.
Theoretical Claims: ## Theoretical Claims
### 1. Proposition 5.1 (Covariance Matrix Bound)
### This proposition establishes an upper bound on the covariance matrix of the system state under specific assumptions about the dynamics and noise characteristics.
### The derivation relies on standard results in stochastic process theory and Lyapunov analysis, ensuring that the covariance remains bounded given certain stability conditions.
### Correctness Check: Upon careful review, the proof appears rigorous, leveraging a decomposition of the state transition matrix and spectral properties of the covariance evolution. However, it would be beneficial to validate this bound numerically against empirical estimates to confirm that the theoretical bound is not overly conservative.
### 2. Proposition 5.2 (Auxiliary State Bound)
### This result provides an upper bound on the auxiliary state variable, which is introduced to facilitate the analysis of system evolution.
### The proof depends on a key assumption: the concavity of the auxiliary dynamics function, which is explicitly stated in Section 5.2.
### Correctness Check: The derivation correctly follows from Jensen’s inequality and properties of concave functions, ensuring that the bound holds under the given assumptions. The proof structure is sound, but a sensitivity analysis could further strengthen confidence in the result by assessing its robustness to variations in model parameters.
### 3. Proposition 7.1 (Optimal Quantization Points for Deterministic Scheduling)
### This proposition addresses the selection of quantization points that minimize a given distortion metric in the context of deterministic scheduling.
### The proof constructs an optimization problem based on a distortion-cost function and derives conditions for optimality.
### Correctness Check: The derivation is well-structured, employing Lagrange multipliers and Karush-Kuhn-Tucker (KKT) conditions to find the optimal quantization points. The reasoning follows standard optimization techniques, and the proof is logically sound. Nonetheless, a comparison with numerical optimization results would help validate the theoretical predictions.
## Assumptions and Justifications
### The paper explicitly states several key assumptions, such as the concavity of auxiliary dynamics and boundedness of noise processes.
### These assumptions are reasonable and well-motivated, as they align with standard conditions in stochastic control and optimization literature.
### The authors provide sufficient theoretical justifications for these assumptions, discussing their necessity in establishing key results.
## Areas for Further Validation
### While the theoretical derivations appear correct upon inspection, certain results (especially Proposition 5.1) could benefit from numerical validation to ensure that theoretical bounds align well with empirical observations.
### Additionally, sensitivity analyses on the concavity assumption and parameter variations would further establish the robustness of the results.
Experimental Designs Or Analyses: ### The experimental design and analyses presented in the paper are generally sound, effectively demonstrating the feasibility and practical implications of the proposed approach. Below, we assess key aspects of the experimental setup, evaluation metrics, and areas for improvement.
### 1. Robotic Sensing Experiments (Standard and Radiation-Damage Scenarios)
### The experiments are designed to test the performance of the proposed scheduling method in realistic sensing environments, including a standard scenario and a radiation-damage scenario, where sensor degradation over time is considered.
### The standard scenario provides a baseline where all sensors function optimally, allowing for a controlled evaluation of scheduling effectiveness.
### The radiation-damage scenario introduces progressive sensor failures, testing the method’s adaptability under real-world constraints.
### Assessment: The experiments successfully demonstrate how the algorithm adjusts to sensor degradation and limited resources, making the setup practically relevant and well-motivated. However, additional robustness tests under more severe failure models (e.g., abrupt sensor loss) could further strengthen the analysis.
### 2. Evaluation Metric: Trace of Covariance Matrix (Σ) in Kalman Filtering
### The paper employs the trace of the covariance matrix Tr(Σ)\text{Tr}(\Sigma) as the primary metric for evaluating estimation accuracy.
### This metric is well-justified in Kalman filtering applications, as it provides a measure of overall uncertainty in the state estimate: Tr(Σ)=∑iλi\text{Tr}(\Sigma) = \sum_{i} \lambda_i where λi\lambda_i are the eigenvalues of Σ\Sigma, representing state estimation uncertainty.
### Assessment: This choice is appropriate and aligns with standard filtering performance criteria. However, supplementing it with alternative uncertainty measures (e.g., determinant of Σ\Sigma, worst-case eigenvalue analysis) could provide a more comprehensive evaluation.
### 3. Validation of Trade-Off Between Accuracy and Resource Constraints
### The experiments illustrate the trade-off between estimation accuracy and resource constraints, demonstrating how sensor scheduling affects estimation performance.
### By evaluating different scheduling strategies, the paper effectively highlights the impact of limited sensor availability on overall system performance.
### Assessment: The experimental results clearly support the theoretical claims. However, the inclusion of additional baselines would enhance the comparative analysis, such as:
### - Randomized sensor activation, to assess whether the proposed method significantly outperforms naive random selection.
### - Heuristic-based scheduling, which could provide a lower-complexity alternative for practical use.
Supplementary Material: 1. The appendix provides useful theoretical proofs and numerical details.
2. The supplementary materials (e.g., animations of robot trajectories) enhance the understanding of the proposed approach.
Relation To Broader Scientific Literature: 1. The work extends classical sensor scheduling in Kalman filtering (Le Ny et al., 2009; Marelli et al., 2019) by integrating continuous-discrete modeling and auxiliary state constraints.
2. Connections to active sensing and reinforcement learning-based sensor selection (Yoon et al., 2018; Qin et al., 2024) are briefly mentioned but could be expanded.
3. The paper aligns with recent trends in Bayesian optimization for experimental design (Snoek et al., 2012; Kleinegesse & Gutmann, 2020), but a direct comparison with Bayesian optimization-based approaches is missing.
Essential References Not Discussed: 1. The authors should consider discussing recent works on RL-based sensor scheduling (e.g., deep RL for adaptive sensing in sequential decision-making problems).
2. If there exist alternative stochastic optimal control methods for sensor scheduling, citing those would strengthen the discussion.
Other Strengths And Weaknesses: ### **Strengths**:
1. Well-motivated and practical problem – The paper addresses a real-world challenge in sensor scheduling, with applications in robotics, healthcare, and environmental monitoring.
2. Theoretically rigorous approach – The differentiable upper bound on the posterior covariance and the optimal control formulation provide a solid mathematical foundation for optimization.
3. Effective deterministic scheduling method – The Wasserstein quantization-based approach ensures that actual measurement times closely match optimized rates, improving reliability in planning tasks.
### **Weaknesses**:
1. Lack of comparative baselines – The paper does not compare its approach with reinforcement learning-based or heuristic sensor scheduling methods, making it harder to assess its relative advantages.
2. Limited sensitivity analysis – The impact of different auxiliary state dynamics (e.g., non-convex constraints, stochastic effects) on sensor scheduling decisions is not extensively explored.
3. Potential generalization limitations – The concavity assumptions for auxiliary state dynamics may restrict the applicability of the method to nonlinear or highly dynamic real-world environments.
### **Minor Typos & Formatting Issues**
### **1. Notation Consistency Issues**
- **Equation (9) (Randomized Covariance Matrix Evolution)**
- The notation for **Kalman gain** \( K_s(\Sigma, \xi, t) \) varies slightly across equations. Ensure consistency in using subscripts and argument ordering.
- **Equation (11) (Upper Bound on Covariance Matrix)**
- The function \( \hat{\Sigma}(t) \) is introduced as a bound, but in some places, it is written without the hat (\(\Sigma(t)\)), which may cause confusion.
- **Equation (12) (Auxiliary State Evolution Bound)**
- The auxiliary state update function **\( f_p(\xi, u, t) \)** and **\( g_s(\xi, u, t) \)** use different orderings in some parts of the text—ensure consistency.
### **2. Typographical Errors & Formatting Issues**
- **Page 2, Line 35:** `"axillary states of an SSM"` → should be `"auxiliary states of an SSM"`.
- **Page 3, Line 80:** `"measurements can in- crease energy consumption"` → should be `"measurements can **increase** energy consumption"` (remove extra hyphen).
- **Page 4, Line 128:** `"dynamics are inhomoge- neous Poisson processes"` → should be `"dynamics are **inhomogeneous** Poisson processes"` (remove hyphen).
- **Page 6, Line 192:** `"togheter with ncT continuously differentiable terminal constraints"` → should be `"together with \( n_c^T \) continuously differentiable terminal constraints."`
Other Comments Or Suggestions: 1. Comparative Baselines – Including reinforcement learning-based or heuristic scheduling methods as baselines would strengthen the empirical validation. This would help demonstrate the advantages of the proposed optimal control formulation over alternative approaches.
2. Sensitivity Analysis – Conducting an analysis on how different auxiliary state dynamics (e.g., non-convex constraints, stochastic transitions) affect scheduling decisions would improve the generalizability of the approach. This is particularly important for real-world applications where sensor conditions may change unpredictably.
3. Assumption Justification – The concavity assumption for auxiliary state dynamics is reasonable in some cases but may not always hold in practical scenarios. Discussing potential relaxations of this assumption or alternative formulations for non-convex cases would improve the paper’s robustness.
4. Scalability Considerations – While the current experiments demonstrate feasibility, additional discussion on scalability to larger sensor networks or more complex environments would be beneficial. How does the method scale with an increasing number of sensors or constraints?
5. Clarifications on Deterministic Scheduling – The Wasserstein quantization-based deterministic scheduling is an interesting contribution. However, additional discussion on its limitations and trade-offs (e.g., impact on computational efficiency, adaptability to dynamic sensor failures) would provide more insight into its practical deployment.
6. Minor Typos & Formatting – The paper is generally well-written, but a careful proofreading pass would help eliminate minor typos or inconsistencies in notation (if any). Specific sections, such as theoretical derivations, could benefit from additional explanations to improve clarity for a broader audience.
Questions For Authors: ### Q1. How does your approach fundamentally differ from prior work on sensor scheduling in Kalman filtering (e.g., [Le Ny et al., 2009], [Marelli et al., 2019])? Beyond incorporating auxiliary dynamics, what unique advantages does your upper-bound formulation offer over existing stochastic control or active sensing approaches?
### Q2: How does the method scale to large-scale sensor networks with multiple interacting sensors? Can it handle non-convex auxiliary state dynamics, or does the concavity assumption significantly restrict its applicability?
### Q3: Why is Kalman Filtering chosen over Particle Filtering? Would your approach still be effective for *nonlinear or non-Gaussian*state-space models where Kalman filtering is suboptimal? Could *Particle Filtering (PF)* or *Extended Kalman Filtering (EKF)* be viable alternatives?
### Q4: What contributes to the high-dimensionality of the problem? Is the complexity mainly due to *the number of sensors, control variables, or auxiliary state interactions*? How would the method extend to *nonlinear* systems?
Ethical Review Concerns: No ethical concerns were identified.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Q1: Besides the auxiliary dynamics, our method considers the continuous-discrete setup and the flexibility of not requiring an upper bound on the number of measurements. The method by [Le Ny et al., 2009] is for a continuous-continuous setup, while the method in [Marelli et al., 2019] is for a discrete-discrete setup. We will make sure to clarify this point further in the paper.
Q2: Scalability:
For each new sensor we consider, we will have an intensity rate, which we will need to optimize for. This is typical for optimization-based scheduling procedures.
Considering efficient optimization schemes for large-scale problems will require investigating techniques such as distributed optimization, parralization, and sparsity considerations. Another challenge for scalability is the dimension of the states of the SSM we want to estimate. If we have $n_x$ states, then the covariance matrix of the CD-KF will be in the order of $n_x^2$. However, for large dimensional SSM, there exist research for efficient approximations and methods to deal with the covariance matrix problem. One solution, for example, is to consider a diagonal approximation for the covariance matrix or a low-rank approximation (see [4]). These approximations can be easily integrated into our framework. We will include a remark discussing this in the final paper.
Concavity Assumption: We emphasize that even under the concavity assumption, our approach spans a wide range of dynamical systems (it covers all linear parameter varying systems). This work is the first to explore sensor scheduling with auxiliary dynamics. Additionally, our method can be implemented within a Receding Horizon (RH) framework (refer to Reviewer nw2r's response for a detailed description), where each optimization iteration is addressed by either linearizing the non-convex/non-concave auxiliary state or employing a convex/concave approximation. If we use nonconvex/nonconcave dynamics for $\xi_p$, then we will lose the theoretical guarantees of the method. However, we may still obtain satisfactory results depending on how good the approximation $\frac{d{\mathbb{E}[\xi_p]}}{dt}\approx f_p(\mathbb{E}[\xi],u,t)+\sum^{N_s}_{s=1} \lambda_s(t)g_s(\mathbb{E}[\xi],u,t))$ (or an upper bound approximate). This approximation is similar to the approximation used in the EKF.
We conducted an experiment with non-concave dynamics for the sensor degradation states $\zeta_1$ and $\zeta_2$ of the example in the paper. The experiment demonstrated that our method remains effective (link: postimg.cc/jWcFJ4f3) (as allowed by ICML). We will include this discussion with the experimental results in the paper.
Q3:
The KF is usually chosen in the literature of sensor scheduling instead of the PF for the fact that the covariance matrix dynamics are independent of the actual measurements and because it scales better with large dynamical systems. However, we can still extend our approach with the EKF by utilizing an RH setup where, for each short horizon, we use the linearized dynamics around the current estimate. We will include a remark about this point in the paper.
Q4: We apologize to the reviewer as we did not understand what the reviewer meant by "high-dimensionality of the problem" in connection with our paper. If this refers to scalability, then we have addressed it above. The nonlinear dynamics of the problem were also addressed above.
Comparisons: We acknowledge and agree with the reviewer's suggestion to provide comparisons. We have conducted comparisons with heuristic approaches for scheduling measurements (greedy approach and random sampling of measurement times) that we will provide in the paper (Table link: postimg.cc/qz2Q3GVc). The results suggest that our method ("Optimized") outperforms the greedy and random scheduling approaches for our example. To assess the deterministic scheduling computed based on the optimized measurement rates (denoted "Optimized" ), we compared it with a method (denoted as "M-Optimized" in the table) that is based on sampling $M_c$ realizations of measurement times according to the corresponding Poisson process with the optimized rates. Afterwards, we pick the measurement times corresponding to the realization with the minimal cost. The results show that our deterministic quantization provides similar results to "M-Optimized" without having to sample multiple realizations, which can be computationally expensive and unrealistic since we do not have the real measurements to compute the cost for each realization.
To the authors' best knowledge, no reinforcement learning methods applicable to this setup have been found for comparison (i.e., _multiple_ sensor scheduling in a _continuous-discrete_ setting that _does not require training on pre-obtained data with uniform sampling_).
[4] Chang, Peter G., et al. "Low-rank extended Kalman filtering for online learning of neural networks from streaming data." Conference on Lifelong Learning Agents. PMLR, 2023. | Summary: In this paper, the authors are concerned with optimizing temporal event sequences of measurements for minimizing the uncertainty of continuous-discrete Kalman filter (CD-KF). In particular, they consider a general case where the measurements may affect the underlying states of sensors themselves as well as the measurement target through differential equations.
The proposed method works in three steps. First, the discrete event sequences of measurements are substituted with the intensity functions of the corresponding time-inhomogeneous Poisson processes, which is continuous and more optimization-friendly, and the time evolution of relevant parameters (such as the target uncertainty and the sensor states) is approximately given in terms of the intensity functions.
Second, the intensity functions are optimized in terms of a user-defined objective and constraints.
Finally, the temporal event sequences are recovered by quantizing the intensity functions.
The authors also demonstrated the feasibility of the proposed method in illustrative examples of energy-limited measurement robot.
Claims And Evidence: They claim that
1. the proposed method captures practical scenarios, and
1. the proposed method has some potential and feasibility.
These points are generally well supported, but it would be nice to discuss more around
* how to operate the proposed method under unknown system dynamics,
* performance characteristics such as computation time vs discretization width of differential equation, and
* benefits of penalizing/constraining $\lambda$ indirectly through the auxiliary state $\xi_p$ rather than doing it directly.
Methods And Evaluation Criteria: Mostly yes.
However, a key component of the proposed method is not well described.
In particular, how do you compute the gradient of the intensity functions under numerical solution of (13)?
More specifically,
* how do you handle these constraints (13g,h) with gradient-based optimization?
* is it tractable even if the step size in time of the numerical solver is very small?
Theoretical Claims: I only follow the reasoning in the main text, but it seems mostly reasonable.
One thing I have noticed is that there is no theoretical justification on the substitution $\xi^*\gets \hat{\xi}$ in (13).
These two auxiliary states are not necessarily close to each other because $\hat{\xi}$ is smooth while $\xi^*$ is jaggy.
Experimental Designs Or Analyses: - For the experimental design, I find it a bit unusual that the measurement noise depends on the distance of the robot and a fixed reference point ("location of process") $||p\_r-p\_p||$. It should ideally depend on the $||p\_r-x||$, where $x$ is the target location that is moving randomly over time.
This I think also reveals a limitation of the proposed method, that is, the scale of the measurement noise cannot depend on the target state $x$.
- Another thing is that it is unclear how the state-space representation of the Matern kernel for GP is used in the experiment.
- For the analyses, Figure 1 can be more reader friendly.
For example, what is RTS? How did you draw it?
Supplementary Material: No.
Relation To Broader Scientific Literature: The key contribution is making the Kalman filter applicable to more practical scenario involving irregularly-timed measurements and auxiliary states, which is novel as far as I can tell.
Essential References Not Discussed: None that I am aware of.
Other Strengths And Weaknesses: None.
Other Comments Or Suggestions: - L144 left: $t_i\to t_i^-$
- L153 left: "optimal" in what sense?
- L212 left: $\bar{\Sigma}(0;\xi^*)=\Sigma\_0$?
- L175 right: Define $\le_e$
- L219 right: $\ge_0\to \ge_e$?
- L253 left: Proposition 5.2?
- L323 left: wrong signs
- L281 right: ambiguous usage of inequalities
- L297 right: why constraining only with $t\le 1/2$?
- L325 right: $\gamma_s$?
Questions For Authors: Please comment on the points I raised in Claims And Evidence, Methods And Evaluation Criteria, and Experimental Designs Or Analyses.
This may affect my score.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Unknown dynamics:
Our formulation as an Optimal Control Problem (OCP) with a differentiable cost function and constraints opens avenues for extension to uncertain dynamics using robust/stochastic OCP methods [2,3]. For completely unknown dynamics, planning is challenging since we must schedule measurements based on dynamics we do not yet know. We believe our approach paves the way for future research in this direction.
One approach to handling unknown parameters is a Receding Horizon (RH) setup. We solve a finite-horizon OCP, assuming fixed system dynamics and parameters over the prediction horizon based on current estimates, then apply the control until the first measurement. The subsequent measurement updates the parameters, and the OCP is resolved for the next horizon. This iterative process adapts as more information becomes available. In the revised version, we will include successful results using RH for a moving target (link: postimg.cc/S2t1gQDW (as allowed by ICML)).
Performance characteristics:
Performance, particularly regarding discretization steps, depends on the auxiliary dynamics (e.g., stiffness, fast/slow dynamics, stability) and the KF’s covariance dynamics. Different OCP methods offer trade-offs between accuracy and computational performance (see Appendix D). Our work employed Euler discretization with variable steps while keeping a fixed number of discretization points. In the revision, we will include a figure demonstrating the trade-off between computation time and discretization points of a specific example focused on the KF dynamics (link: postimg.cc/ykL46hPv).
Penalizing auxiliary states:
Often, the auxiliary state carries physical meaning, making its penalization more intuitive. For instance, in our examples, the energy state depends on measurement rates, velocities (actions), and the robot’s position. Penalizing measurement rates is less straightforward than penalizing energy consumption when energy is the limiting factor.
Constraints handling:
Both direct and indirect methods for constrained OCP lack exact guarantees of constraint satisfaction due to numerical errors inherent in the OCP solution and ODE integration. Some methods (e.g., direct collocation) can achieve high accuracy but at increased computational cost (See Appendix D). Ultimately, the choice of method and integration scheme depends on the specific dynamics, much like the selection of ODE solvers. We will include this important remark in the paper.
For gradient computations and optimization in the example, we used an interior-point method with JuMP—a Julia package that employs automatic differentiation to compute the gradient and Hessian of the Lagrangian.
Regarding $\xi$:
We acknowledge the reviewer's concern. To clarify, $\xi^*$ from Proposition 5.1 can be any curve (ensuring (10) is well-defined, though not stated explicitly), so $\hat{\xi}$ and $\xi^*$ need not be close. Rather, $\xi^*$ serves as a placeholder, allowing substitution with $\hat{\xi}$. Proposition 5.1 states that for any curve $\xi^*$, the mean covariance $\bar{\Sigma}(\xi^*)$ is bounded by $\hat{\Sigma}(\xi^*)$. This result is applied by substituting $\xi^*$ with the curve $\hat{\xi}$ obtained from (12) and (6b) with initial condition $\xi_0$, which, per Proposition 5.2, bounds the mean curve $\bar{\xi}$ of $\xi=(\xi_p,\xi_u)$ from the SDE (10) and ODE (6b). The deterministic quantities $\bar{\xi}$ (and $\bar{\Sigma}$) _serve as computationally tangible approximations of the stochastic_ $\xi$ (and $\Sigma$) (_approximating mean behaviour_). Alternatively, sampling methods can be used to compute statistical representations for the trajectories in the OCP. However, this will introduce non-differentiability and will be computationally intensive. We will note this important remark in the revision. We apologize for the initial lack of clarity.
Experimental design questions:
1) As mentioned, the RH approach can handle moving targets (we have implemented an example for it). Nonetheless, many scenarios involve a fixed process location (e.g., gas leak, specific object temperature) or applications independent of process location, such as underwater measurements with sensor biofouling. A detailed example of the latter will be provided in the paper.
2) The process assumes a SSM representation of the Matern kernel, with output $x_p$. The parameters $A$ and $\sigma$ in equation (13.b) are based on this representation. We will update the figure to a more user-friendly version. See also the reply to reviewer 6UyF.
[2] Leeman, Antoine P., et al. "Robust optimal control for nonlinear systems with parametric uncertainties via system level synthesis." the 62nd IEEE Conference on Decision and Control (CDC). IEEE, 2023.
[3] Bemporad, Alberto, and Manfred Morari. "Robust model predictive control: A survey." Robustness in identification and control. London: Springer London, 2007. 207-226.
---
Rebuttal Comment 1.1:
Comment: Thank you for detailed explanations.
In particular, your point on penalizing auxiliary states makes sense.
-------
$\xi^*$ and $\hat{\xi}$ still confuse me: I understood that Proposition 5.1 is used for justifying the use of $\hat{\Sigma}$ as an upper bound on $\Sigma$. However, what's shown by Proposition 5.1 is $\bar{\Sigma}(\xi^*)\preceq \hat{\Sigma}$, not $\Sigma\preceq \hat{\Sigma}$. Then, the question is whether $\bar{\Sigma}(\xi^*)$ dominates $\Sigma$ or not, which depends on the choice of $\xi^*$. I think $\xi^*=\xi$ can be justified in terms of "dominance in expectation". What is your justification for taking $\xi^*=\hat{\xi}$ in (13a-h)?
----------
P. S. I am afraid that I cannot see the figures you have posted.
---
Reply to Comment 1.1.1:
Comment: Thank you for taking the time to review our response so thoroughly and for your prompt feedback.
To clarify, Proposition 5.1 is not used for justifying the use of $\hat{\Sigma}$ as an upper bound on $\Sigma$. It is exactly as the reviewer points out: $\hat{\Sigma}$ is an upper bound on $\bar{\Sigma}$. $\bar{\Sigma}$ does _not_ dominate $\Sigma$. Indeed, $\Sigma$ is stochastic (for any choice of $\xi$); hence, in general, it can take any value, thus making it impossible to bound it by a deterministic quantity. What we can do is, e.g., attempt to bound its expectation or bound it in probability. If we try to upper bound the expectation $\mathbb{E}[\Sigma]$ (as we believe the reviewer suggests), the nonlinear dependence of $\Sigma$ on $A,C,\sigma$ and $R$ in (9) makes it a difficult task (this approach would merit a paper). Instead, in this paper, we aimed to obtain a bound on the conditional expectation $\mathbb{E}[\Sigma \mid \xi=\xi^*]:=\bar{\Sigma}(t;\xi^*)$ which then avoids the dependence mentioned above (note that this approach still gives very good results when applied). We will make sure that this point is clear in the revised version and modify the introduction of the paper according to it. We chose $\xi^*=\hat{\xi}$ as $\hat{\xi}$ can be found deterministically and through differentiable dynamics. The quantities $\hat{\Sigma}$ and $\hat{\xi}$ are related to the mean behavior, which we aim for by our deterministic measurement scheduling method in Proposition 7.1.
For the figures' links:
It seems like some locations do not have access to the host website we used for the images. We uploaded the figures to a different hosting site (github with an anonymous account) just to be sure if this was the problem. Here are the links for all of the figures for all the reviewers:
[Target tracking](https://github.com/ICML-anon25/ICML25_ANON_figs/blob/main/traj_track_ICML_rev_f.png) or [here](https://postimg.cc/S2t1gQDW)
[Computation and Disc. points](https://github.com/ICML-anon25/ICML25_ANON_figs/blob/main/disc_points_ICML_rev.png) or [here](https://postimg.cc/ykL46hPv)
[Modified figure for the example](https://github.com/ICML-anon25/ICML25_ANON_figs/blob/main/New_fig_ICML_rev.png) or [here](https://postimg.cc/hJZ0ByTn) (also for reviewer UHHX)
[Table for comparison](https://github.com/ICML-anon25/ICML25_ANON_figs/blob/main/table_ICML_rev.png) or [here](https://postimg.cc/qz2Q3GVc) (reviewer 6UyF rebuttal)
[Nonconcave aux. state](https://github.com/ICML-anon25/ICML25_ANON_figs/blob/main/nonconcave_ICML_rev.png) or [here](https://postimg.cc/jWcFJ4f3) (reviewer 6UyF rebuttal) | Summary: This work considers continuous-time state-space models in which observations are taken at discrete and potentially irregular time intervals from a finite collection of different kinds of sensors, each with a potentially different accuracy and potentially different cost incurred per measurement.
In this context (for a finite time horizon), the authors propose methodology for optimising the (Poisson-process) rates at which measurements are taken by the different sensors subject to constraints on the cost and estimation accuracy. They also show how the (random) Poisson-process-generated measurement times can be approximated by a deterministic schedule.
The results are illustrated on two robot models with synthetic data.
Claims And Evidence: The claims are supported by mathematical proof.
Methods And Evaluation Criteria: yes.
Theoretical Claims: I did not find any issues. But I did not check the proofs in the appendix.
Experimental Designs Or Analyses: The numerical illustrations seem fine. But I did not check any code.
Supplementary Material: I did not review the supplementary material.
Relation To Broader Scientific Literature: As the authors discuss in Section 2, similar optimal control problems have been analysed previously in slightly different settings, e.g., assuming that the latent states evolve in discrete or that the cost of taking measurements is independent of the latent state. The authors also point out further connections to existing literature on Bayesian optimisation.
Essential References Not Discussed: None that I'm aware of.
Other Strengths And Weaknesses: I think this manuscript is overall well written, well structured and thus quite clear. The contributions seem sufficiently novel and the authors mention a few real-world areas in which such problems arise
Other Comments Or Suggestions: - L280: "Wasserstien" -> "Wasserstein"
- L288: "devide" -> "divide"
- L175: "$\leq_e$" denotes an elementwise inequality? Perhaps define this.
- L194: "togheter"
- Remark 5.3: I think there is a "$|$"-symbol missing in the penultimate line, as well as a redundant "." inside the expectation.
- Section 4: In the first paragraph, maybe add a sentence explaining the role of the functions $g_s$.
- P2: In the "Active Sensing" paragraph, a few of the \citep citations should be \citep.
- L398: "true simulated true"
- Bibliography: Inconsistencies in capitalisation/abbreviation of journal/conference names. Missing capital letters in some names, e.g. "kalman" or "gaussian".
Questions For Authors: 1. Can you explain the connection with / use of Gaussian process regression in the last paragraph of Section 8.1? I could not follow this aspect.
2. Can you add more motivation linking this work to machine learning? It is not fully clear to me why the topic is appropriate for a machine-learning conference.
3. Is Eq. 15 correct? I am confused by some of the signs.
4. What is the need for/role of $Y_s(t)$ defined in Eq. 8? Maybe I've missed it but I don't think it is ever used.
5. In Fig. 1, there seem to be several (different?) uses of the symbol $x_p$. The caption states that it represents output of the RTS smoother but according to the legend also seems to represent output of the filter. More generally, is "$x_p$" actually meant to be "$\xi_p$"?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Q1:
In the last paragraph of Section 8.1, we leverage the fact that Gaussian process (GP) regression (with many common stationary covariance kernels) is equivalent to Kalman smoothing of a specific linear state-space model (see ref. Sarkka \& Hartikainen, 2012 in the paper). This equivalence allows us to perform GP regression efficiently on the temporal process $x_p$ via Kalman smoothing. This example demonstrates how our approach can be used for sensor scheduling in connection with GP regression. We will provide an appendix section in the paper clarifying this point.
Q2:
Our work focuses on Bayesian inference in SSMs, a topic that has been explored in previous papers at ICML and other ML-related conferences (e.g., [1]). Additionally, our work can be applied to sensor scheduling in GP regression under dynamic environments. We believe that GP regression continues to be a subject of significant interest in the machine learning community.
Q3:
We thank the reviewer for spotting the sign error. The right-hand side of the equation should be $c_e \exp\bigl(r_e \|p_r - p_e\|^2\bigr) - c_u v - c_u \omega - \sum_{s=1}^2 \sum_{i=1}^{N_s} c_s \,\delta_{t^s_i}$.
Q4:
We see the reviewer's point. We wrote it with the intention of providing more clarity for the reader on the total measurement process. But as the reviewer points out, it is not used; it may therefore contribute to more confusion for the reader than clarity. We will remove it from the paper.
Q5:
We apologize for the ambiguity of the figure. This point has also been mentioned by the second reviewer. We will fix the figure to be more reader-friendly for the final submission. The symbol $x_p$ represents the process we want to measure. The filter estimate is now denoted as $\hat{x}_p$, and the RTS smoother estimate is denoted as $\hat{x}^s_p$ (representing the GP regression output). We have adjusted the legends and description of the figure with the new notation for the filter estimate and the smoother estimate (link: postimg.cc/hJZ0ByTn (as allowed by ICML)).
[1]: Duran-Martin, Gerardo, et al. "Outlier-robust Kalman Filtering through Generalised Bayes." International Conference on Machine Learning. PMLR, 2024. | null | null | null | null | null | null | null | null |
Best Subset Selection: Optimal Pursuit for Feature Selection and Elimination | Accept (poster) | Summary: This paper introduces optimal pursuit strategies for feature selection and elimination in best subset selection problems. It challenges classical feature selection methods by offering new selection and elimination criteria, which focus on feature interactions as opposed to individual significance alone. The authors revisit the classic greedy algorithms, such as Matching Pursuit and Orthogonal Matching Pursuit, and propose enhanced algorithms by substituting their classical feature importance criteria with the new optimal pursuit criteria. The results demonstrate that these new methods outperform traditional approaches.
Claims And Evidence: Please refer to **Questions For Authors**.
Methods And Evaluation Criteria: Please refer to **Questions For Authors**.
Theoretical Claims: Please refer to **Questions For Authors**.
Experimental Designs Or Analyses: Please refer to **Questions For Authors**.
Supplementary Material: Yes, I reviewed some proofs and L, M in supplementary material.
Relation To Broader Scientific Literature: Please refer to **Other Strengths And Weaknesses**.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: **Strengths:**
1. The paper challenges existing greedy algorithms by proposing a more holistic method to evaluate feature importance, accounting for interactions between features, leading to more optimal feature selection.
2. The paper is mathematically rigorous and builds a strong theoretical foundation for the proposed optimal selection and elimination criteria.
3. The algorithms are experimentally validated across tasks like compressed sensing and sparse regression, showing clear improvements in performance metrics such as recovery rates and computational time efficiency.
4. Despite the added complexity of the new criteria, the algorithms maintain the computational efficiency of the classical greedy methods.
**Weaknesses:**
Please refer to **Questions For Authors**.
Other Comments Or Suggestions: Please refer to **Questions For Authors**.
Questions For Authors: **Weaknesses:**
1. The proposed optimal pursuit strategy involves solving optimization subproblems for feature selection and elimination in a more detailed manner, incorporating interactions between features. While mathematically elegant, this additional complexity can make the algorithms harder to implement and computationally expensive for practitioners. The algorithms also involve matrix inversions, which could potentially increase the time complexity and make them unsuitable for high-dimensional problems where feature selection is crucial.
2. Although the paper demonstrates the effectiveness of the proposed algorithms in compressed sensing and sparse regression tasks, these are relatively specific domains. The algorithms are not evaluated on a wider variety of machine learning tasks or diverse datasets.
3. While the authors present important optimizations to classical feature selection algorithms, the core contribution seems to build upon existing algorithms. Many of the proposed algorithms can be seen as modifications of existing approaches rather than a completely novel class of algorithms. This lack of a breakthrough contribution might limit the paper’s impact and novelty compared to other works in the field.
4. While the paper is mathematically thorough, it lacks detailed practical insights or step-by-step implementation guidelines for those who might want to apply these new algorithms. There is minimal discussion on potential limitations or challenges in applying these methods in real-world machine learning systems.
5. The paper uses standard metrics like NMSE and $R^2$ for evaluation, which are common in compressed sensing and sparse regression tasks. However, the paper does not explore a wider variety of evaluation criteria, such as cross-validation performance or other task-specific metrics.
**Conclusion:**
The weaknesses primarily stem from the increased complexity and limited generalization of the proposed methods, along with an incremental contribution that builds upon existing techniques. While the proposed algorithms are a meaningful enhancement for specific tasks like compressed sensing, they may not significantly advance the field of feature selection in a broad sense. Furthermore, the lack of a deep dive into practical implementation challenges and a broader experimental validation means that the paper may not be as impactful or widely applicable in its current form. I hope the author can address my concerns and change my expected rating.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We sincerely appreciate your feedback and constructive suggestions on our paper, which will help enrich the original content. In this rebuttal, we address the concerns raised in the reviews. For references [1-8] in the rebuttal, please refer to Reviewer uuYf.
**Q1: (Complexity)** Thank you for your excellent question, which has driven us to further our advancements.
First, the algorithmic complexity of our proposed criteria-enhanced algorithm remains at the same order as the original methods. Both rely on solving the least squares problem over a given subset $S$, which involves a linear system solved via Cholesky decomposition. As noted in Remark 3.7 and 3.15 of our paper, given the Cholesky decomposition, computing the matrix inverse requires only $O(K^2)$ complexity. Thus, despite the presence of inverse terms in our new metric, its computational complexity remains fundamentally equivalent to that of the original algorithm.
However, in ultra-high-dimensional settings, solving least squares over a subset can be prohibitive, affecting both the original and enhanced algorithms. Updating coefficients via least squares is equivalent to Newton’s method. [4] proposed Gradient Pursuit, which follows the same correlation-based selection strategy but replaces Newton’s method with gradient-based updates, significantly reducing computational overhead.
Our optimal pursuit idea extends to Gradient Pursuit, forming Optimal Gradient Pursuit (OGP), which simultaneously considers support set updates and coefficient updates while maintaining the same computational complexity as Gradient Pursuit.
In this rebuttal, we explicitly derive the selection criterion for OGP:
\begin{equation}\arg\max_{j} \begin{cases}
\frac{\underline{||X_S^Tr^k||^2}+({r^k}^TX_j)^2}{||\underline{X_SX_S^Tr^k} +X_jX_j^Tr^k||}, & j \in S^c \\\\
\frac{\underline{||X_S^Tr^k||^2}}{||\underline{X_SX_S^Tr^k}||}, & j \in S
\end{cases}\end{equation}
where the underlined part only needs to be computed once, keeping the overall complexity comparable to correlation-based selection in Gradient Pursuit. We establish OGP’s convergence theory and validate it with numerical experiments, demonstrating superior performance both theoretically and empirically.
[FigOGP](https://drive.google.com/file/d/1kx7exKHUYToPS3wYuG4yTsVowCtkv92e/view?usp=sharing)
We compared GP and OGP runtime on numerical examples from our paper:
[FigTime](https://drive.google.com/file/d/1RJ8yreuaGvReU2svoT-OrXCY4ILEZlMv/view?usp=sharing)
Both methods achieve an order-of-magnitude speedup over least squares-based subset selection. OGP provides an efficient acceleration scheme for the optimal pursuit strategy, extending its applicability to general objective functions in future research.
**Q2 (Diverse datasets, machine learning tasks, and metrics evaluation)** Thank you for your suggestion. We have conducted additional tests across five tasks, ten more datasets, and six metrics. Due to space constraints, please refer to Reviewer sDws (Q2) for details.
**Q3 (Contributions)** Thanks for your question. As stated in [2] (Section 3.3.2, pp. 59–60), best subset selection is built on forward feature selection and backward elimination. Greedy algorithms combine these criteria, with classical criteria based on correlation and T-statistics.
Our contribution lies in re-examining these foundational criteria from an optimization perspective. By modeling feature significance and interaction through a block coordinate descent framework, we clarified the optimization essence of classical criteria and proposed new selection and elimination models. Using forward and backward matrix inversion techniques, we derived explicit new criteria, providing a foundation for future best subset selection algorithms.
Additionally, we further analyzed:
(1) Theoretical behavior under high feature correlations (Reviewer aZmj, Theorem 1\&2).
(2) Complexity and algorithmic convergence.
(3) Empirical performance gains.
(4) Performance across various machine learning tasks and metrics.
Our work has significant potential:
(1) Theoretically, our convergence results suggest the possibility of breaking existing RIP assumptions, advancing algorithmic study in this NP-hard problem.
(2) Practically, our new criteria significantly improve performance across machine learning tasks, datasets, and evaluation metrics, laying a foundation for future algorithm design.
Furthermore, our optimal pursuit idea extends to other greedy methods, such as Optimal Gradient Pursuit.
**Q4 (Implementation)** Thank you for your suggestion. We recognize the importance of practical implementation and plan to open-source all code, covering the original algorithm, extensions, and accelerated optimal gradient pursuit, along with detailed workflow guidance.
We will also provide tutorials on applying the algorithm to best subset selection, column subset selection, line spectrum estimation, and other machine learning applications. | Summary: This paper proposes two criteria for feature selection and feature elimination in the context of solving the best subset selection problem. The authors approach these criteria from an optimization perspective. These criteria can be incorporated into various heuristic subset selection algorithms. Additionally, the authors establish convergence guarantees for one such algorithm, CoSaOP. Numerical experiments are reported to demonstrate the effectiveness of new criteria against classical feature selection and elimination approaches. In particular, Sections 2 and 3 develop an optimization-based framework for feature selection and elimination. The authors highlight the limitations of classical criteria, which partially capture variations in the objective function due to feature addition or removal, and subsequently propose refined criteria to address these shortcomings. Furthermore, the two sub-problems are reformulated to enhance computational efficiency. Section 4 provides convergence guarantees for the CoSaOP algorithm. Section 5 presents numerical experiments on synthetic and real-world datasets, demonstrating the practical advantages of the proposed approach over existing heuristic methods.
Claims And Evidence: The claims made in the submission are clear and also supported by their numerical experiments.
Methods And Evaluation Criteria: The proposed methods, i.e., CoSaOP, partially make sense under the assumptions provided in Appendix F.
The benchmark instances used in Section 5.1.1 are okay. However, the instances used in Sections 5.1.2 & 5.2 might not ensure the assumptions provided in Appendix F.
Theoretical Claims: The proofs for theoretical results look correct on my side. However, the theoretical results provided in Section 4 need further assumptions and conditions, which may weaken the main contributions.
Experimental Designs Or Analyses: Experimental designs are sound.
The experimental setup for the synthetic datasets would benefit from additional details. In Section 5.1.1, the generation procedures for the sparse vector $\beta$ and the random Gaussian matrix $X$ are not clearly specified. Furthermore, the definition of the signal-to-noise ratio (SNR) is absent from the main text. To enhance the rigor of the study, the authors are encouraged to conduct further experiments with higher-dimensional settings and under conditions where the SNR values are low.
Supplementary Material: Appendix B & C & D are okay on my side.
If I do not make a mistake, there is a computation mistake on the right-hand side of eq(19) based on the previous bound for $D_{jj}$.
Relation To Broader Scientific Literature: NA
Essential References Not Discussed: NA
Other Strengths And Weaknesses: Weakness:
The paper is well-organized. However, the clarity of writing could be improved, and some key assumptions are omitted from the main content, which may significantly impact or weaken the theoretical results.
The originality is incremental. The proposed selection and elimination method follows a similar idea from vanilla local search, where the term "optimal" is based on maximizing over all possible single selection/elimination index. It is better to demonstrate whether the convergence results for the proposed optimal criteria would also hold for other greedy criteria.
Other Comments Or Suggestions: Here are some minor comments for the authors to consider:
1. Following the introduction, it would be beneficial to include a notation convention section to clarify the terminology used throughout the main text.
2. In Remark 3.7, the authors assert that the proposed criteria do not introduce significant additional computational costs. An algorithmic complexity analysis should be provided to substantiate this claim.
Questions For Authors: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We sincerely appreciate your insightful question, which has driven us to further theoretical advancements. In this rebuttal, we address concerns raised in the reviews. For references [1-8], please refer to Reviewer uuYf.
**Q1 (Theoretical Assumptions):**
Thank you for your question. The theoretical assumptions about algorithmic convergence in our work align with Section 2.3 of the CoSaMP paper [1]. Other best subset selection algorithms also rely on assumptions like the Restricted Isometry Property (RIP) due to the NP-hard nature of the problem. Without such assumptions, proving convergence for any polynomial-time algorithm would be intractable unless P = NP.
However, your question led us to reflect further. Empirical observations reveal that even when theoretical assumptions are violated (e.g., correlated features), our proposed algorithm family performs well. This stems from our criteria’s explicit focus on feature interaction, motivating new theoretical frameworks under violated RIP conditions.
**Theorem 1** Suppose the true subset $S^*$ contains indices $(i, j)$, where feature correlation is $\rho$. Assuming $S$ includes $i$, then for the classical criterion:
\begin{equation}
\frac{|{r^k}^T X_j|}{||X_j||_2} \le \sqrt{1-\rho^2}||r^k||_2,\tag{C1}
\end{equation}
while our objective-based criterion (8) satisfies
\begin{equation}
\frac{({r^k}^T X_j)^2}{X_j^T(I-X_S(X_S^TX_S)^{-1}X_S^T)X_j} \ge \frac{1}{1-\rho^2}\left(\frac{{r^k}^TX_j}{||X_j||_2}\right)^2.\tag{C2}
\end{equation}
Theorem 1 shows that under strong feature correlation, traditional criteria struggle to identify true features, while our criterion (8) provides a stable lower bound, mitigating correlation effects. If equality holds in (C1), substituting into (C2) eliminates dependence on $\rho$, fully removing correlation influence. Additionally, (C2) connects our criterion (8) to classical ones.
Pseudo-correlation may arise in best subset selection. Pseudo-correlated features $X_p$ are highly correlated with important features $X_i$ in $S^*$ but do not belong to $S^*$. Classical T-statistics-based criteria struggle to remove such features, while our criterion (10) reliably identifies them.
**Theorem 2**
1) In noiseless cases, if $S^* \subset S$, then for all $j_m \in S \setminus S^*$,
\begin{equation*}
j_m \in \arg\max_{j \in S}~\text{objective-based criterion (10)},
\end{equation*}
whereas classical criterion (4) lacks this guarantee.
2) If $X_p \in S$ is pseudo-correlated with $X_i \in S^*$ (correlation $1 - \epsilon$), when $\epsilon$ is small, classical criteria (4) may discard true features, while our criterion (10) correctly removes $X_p$.
These theorems show our criterion (8) effectively identifies key features under strong correlation, and criterion (10) removes pseudo-correlated features. Experimental results in Q2 confirm superior performance. Theorems 1 and 2 pave the way for research on algorithm convergence under weakened RIP conditions. The detailed proof is provided in [The detailed proof](https://drive.google.com/file/d/1r7Y-2DZ07TD7ORxU72hwJi73dOFtIo29/view?usp=sharing)
**Q2 (Experimental Design)**
Thank you for your suggestions. Due to character limits, we will include details on sparse vector and random Gaussian matrix generation, along with SNR definitions, in the revised paper. Additional comparisons were conducted under extreme conditions:
1. Small sample rate, high-dimensional vectors: $p = 2000$, with $n/p$ varying from 0.05 to 0.1.
2. High noise: SNR from 5 to 15.
3. Highly correlated features (RIP violated): The covariance matrix of $X$ follows a Toeplitz structure, where $\text{corr}_{ij} = \rho^{|i-j|}$ with $\rho = 0.7$.
Sparse vectors with sparsity level $K = 10$ were used. Phase transition diagrams illustrate how varying sampling rates and SNR impact performance. Larger blue areas indicate stronger performance.
[Phase Transition](https://drive.google.com/file/d/1HY6-6XzeVTq-LUtehvQS1hZgjVvNSu7k/view)
Results show that all algorithms enhanced with our criteria exhibit major improvements in phase transition capabilities, validating both theoretical insights from Q1 and advantages in high-dimensional, low-SNR cases.
Additional tests included sparse regression datasets, cross-validation, column subset selection, line spectral estimation, and other machine learning problems where feature correlation is prevalent. Enhanced algorithms consistently outperformed others.
**Q3 (RHS of Eq.(19))**:
Thank you for your question. The matrix 2-norm used is: $\text{norm}(D,2)= \\sqrt{\\lambda_{\\max}(D^TD)}.$
Since $D$ is diagonal, $\text{norm}(D,2) = \\max |D_{jj}|$. This justifies why Eq.(19) holds.
**Q4 (Contributions and Other Greedy Criteria)**
Thank you for your question. Due to space constraints: For the contribution of this paper, see Reviewer g7tW (Q3). For other greedy criteria, we extend the optimal pursuit idea to Gradient Pursuit (Reviewer g7tW, Q1) and Column Subset Selection (Reviewer aDws, Q2, 4).
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. While I found clarification helpful, I have to maintain my score based on the following concerns.
**Theoretical Assumptions.**
I appreciate the effort in providing additional theoretical results (Theorem 1 and Theorem 2). However, Theorem 1 only shows that the proposed criteria are better than the classical one, which meets our expectations due to a finer selecting \& removing step with a greater computational complexity. Additionally, the dependency on $\rho$ works for any choices within interval $(0,1)$, I cannot see something like "phase-transition" for strong correlation case.
For Theorem 2, when $\epsilon$ is small, it is better to compare with existing theoretical/statistical results in (robust/perturbed) sparse regression, which ensures similar guarantees or bounds.
**Experimental Design.**
Usually, for high noise or low SNR setting, we set SNR < 1.
Toeplitz structure is commonly used for input sample generation. However, the correlation between different features are highly-inconsistent, I do not think the resulting instances satisfies the assumed high-correlated condition/assumption.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate your response on our rebuttal. Below, we address your concerns and provide further clarifications:
**Q1 (Theorem 1):** Thanks for your question. Theorem 1 demonstrates that if features $i$ and $j$ in the true subset are highly correlated (larger $\rho$ means higher correlation), inequality (C1) implies that as their correlation $\rho$ approaches 1, feature $j$ becomes increasingly unidentifiable under traditional correlation-based criterion once feature $i$ has been selected, since the coefficient term $\sqrt{1-\rho^2}$ in the RHS converges to 0. In contrast, (C2) shows that when $\rho$ is large, since the first term’s coefficient in RHS $1/(1-\rho^2)$ is large (approaches $+\infty$ as $\rho$ approaches 1), the proposed criterion (8) exists a stable lower bound. **The larger the correlation $\rho$, the stronger discrepancy between traditional correlation-based criterion and proposed criterion.** This significantly mitigates the impact of feature correlations on identifying $j$, making true features more reliably detectable.
We would like to clarify that the influence of feature correlations cannot be entirely eliminated—doing so would reduce the problem to RIP scenario, which would imply a solution of the NP-hard problem BSS in polynomial time. This is clearly unrealistic. The contribution of our proposed criteria lies in minimizing the impact of correlations through minimal modifications, as demonstrated above.
**Q2 ((Robust/perturbed) sparse regression):** Thank you for your question. To the best of our knowledge, (robust/perturbed) sparse regression is primarily implemented through the following approaches:
1. **Adopting more robust loss functions**, such as the Huber loss.
2. **Introducing perturbation variables** and imposing robust regularization on the objective function, for example, via a total least squares objective.
3. **Resampling or perturbing data** to fit the model multiple times and selecting features that appear most frequently.
However, all these methods necessitate **modification of the objective function (model)**. In other words, to achieve other goals, the target problem **no longer aligns with the Best Subset Selection (BSS) problem (2)** discussed in our work.
**Theorem 2 in our study focuses specifically on the NP-hard BSS problem**. By leveraging our proposed criterion (10), we effectively identify pseudo features outside the true subset, thereby achieving more accurate solutions for BSS in high-correlation scenarios. This improvement constitutes an **enhancement to the solving algorithm for this NP-hard problem**, which is distinct from the objectives addressed by (robust/perturbed) sparse regression.
Certainly, we can also consider best subset selection for more general objective functions, such as those incorporating robustness in (robust/perturbed) sparse regression. In Reviewer g7tW Q1, we proposed the Optimal Gradient Pursuit (OGP) scheme, which extends the Optimal Pursuit (OP) framework to general objective functions using gradient-based methods. The new metric OGP, developed under the guidance of the OP framework, remains more effective than traditional gradient pursuit approaches (see Reviewer g7tW Q1). These are indeed promising avenues for future research, but they fall beyond the scope of this paper.
**Q3 (Experiment)** Thank you for your suggestion. Our definition of SNR = $20log_{10} {||X\beta||/||noise||}$ follows [1]. For the conventional SNR (calculated using $10log_{10}$), the SNR values in our rebuttal actually range from 2.5 to 7.5. We further tested scenarios with SNR values as low as 0.2–1, as shown in the figure. The algorithm using our proposed metric still demonstrates a clear phase transition advantage.
[Fig: SNR Low](https://drive.google.com/file/d/1Ag3T6aTktWiaEWD7Bau5u-u4gF0Boe4i/view?usp=sharing)
Due to space constraints in rebuttal, we could not elaborate in detail. In our experiments, the sparse signal is block-sparse, comprising two blocks of five adjacent non-zero entries each. Combined with the **Toeplitz covariance structure** (where **features closer in position exhibit higher correlations**), this configuration ensures:
1. High correlation features within the true subset (as stated in Theorem 1).
2. Many pseudo-features outside the true subset highly correlated with those in the true subset (as stated in Theorem 2).
The observed **phase transition behavior** in the experimental results validates the theoretical superiority of our proposed criteria.
We further considered more extreme high feature correlation scenarios: $corr_{ij} = \rho^{I\\{i \neq j\\}}$ with $\rho = 0.7$. As shown in the figure, the algorithm with our proposed criteria still demonstrates **significant advantages in phase transition capability**.
[Fig: Corr High](https://drive.google.com/file/d/1EWsgxJ50ksYP9wdG3ad4tNuB1ggABk5Y/view?usp=sharing)
[1] Block Sparse Bayesian Learning: A Diversified Scheme. NeurIPS, 2024. | Summary: The paper proposes a new criterion for selecting and rejecting features in the context of the best subset selection problem. While previous methods primarily focused on the significance of individual features, the proposed approach offers the flexibility to capture interactions between features.
## update after rebuttal
Thank you for addressing my concerns. However, in light of the comments made by other reviewers, I have decided to maintain my score.
Claims And Evidence: The proposed method offers the flexibility to account for the significance of individual features and the interactions among feature sets. Authors attribute the performance gains of their approach to the new modification made in the training objective. Experimental results demonstrate a significant improvement in model performance. However, I am curious about the choice of housing and superconductivity datasets for the experiments. A more comprehensive evaluation on a wider range of datasets is needed to fully validate the paper’s claims.
Methods And Evaluation Criteria: While the modification to the overall objective appears minor, the flexibility provided by the proposed approach is interesting. However, I am curious whether methods like ABESS represent the current state-of-the-art in the field. Additionally, many of the baselines used by the authors seem outdated. A comparison with more recent methods would strengthen the evaluation.
Theoretical Claims: N/A
Experimental Designs Or Analyses: As mentioned previously, a more comprehensive comparison across a larger set of datasets and recent baselines such as [1, 2] would provide better clarity on the usability and effectiveness of the proposed method.
[1] Cherepanova, Valeriia, et al. "A performance-driven benchmark for feature selection in tabular deep learning." Advances in Neural Information Processing Systems 36 (2023): 41956-41979.
[2] Cohen, David, et al. "Few-sample feature selection via feature manifold learning." International Conference on Machine Learning. PMLR, 2023.
Supplementary Material: N/A
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strength :
1. The proposed method is well-grounded in theory and offers a strong intuitive justification for the flexibility it provides over previous methods.
2. Paper is well written.
3. A simple modification in the objective function clearly leads to substantial improvements over the baselines
Weakness :
1. I am concerned with the relevance of the proposed method with the current literature around the optimal feature selection. The author should clarify the benefits of using the proposed method over the existing state of the art or compare the performance of their method with them.
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your insightful feedback on our theoretical foundation and flexibility. We have incorporated further explanations and experiments accordingly, and this rebuttal will be integrated into revised paper. For the references [1-8] in the rebuttal, please refer to **Reviewer uuYf**.
---
**Q1 (SOTA in BSS):**
We appreciate your question, which prompted us to review the literature further. The state-of-the-art algorithm in Best Subset Selection (BSS) is ABESS, as confirmed by recent works [11]–[14], recognizing it as a leading or benchmark method.
While BSS and Feature Selection (FS) are related, they are not synonymous. BSS is a subset of FS, typically in a linear framework, while FS methods like neural networks and random forests employ nonlinear models. BSS addresses the NP-hard problem (equation (2) in the original paper) with efficient polynomial-time solutions.
To highlight BSS’s contributions in FS, we conducted additional experiments on larger datasets and FS tasks.
---
**Q2 (Larger dataset, various tasks, and task-specific metrics):**
We expanded our tests to five tasks, ten more datasets, and six task-specific metrics.
1. **Phase Transition in Extreme Scenarios:**
Experimental setting is detailed in Reviewer aZmj Q2, with results in: [Phase Transition](https://drive.google.com/file/d/1HY6-6XzeVTq-LUtehvQS1hZgjVvNSu7k/view).
In cases with small samples, high-dimensional features, and high noise, BSS excels in identifying the true subset, whereas other FS methods struggle.
2. **Sparse Regression Tasks on Diverse Datasets:**
We added three widely used BSS datasets: (1) House 16H [5], (2) Prostate.v8.egens [6-7], (3) Spectra [8]. The $R^2$ curves as a function of selected features are available here: [Fig: R2](https://drive.google.com/file/d/1v5Vz0lyVADuo2KVaUOr2ernVcSncC_t0/view?usp=sharing).
Across all datasets, the enhanced algorithms outperform the original, achieving gains equivalent to selecting ten additional features.
3. **Cross-Validation in Prediction:**
We evaluated BSS on six datasets using 5-fold cross-validation, where 4 folds were for training and 1 for validation. Prediction error is defined as: $error_{pred} = \frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2$. Cross-validation scores are shown here: [Fig:CV](https://drive.google.com/file/d/1xQvgSCXXpgq4nqOlkJRxJw5dUgcCbrEV/view?usp=sharing).
Enhanced algorithms exhibit superior generalization, validating the new metric’s effectiveness in predictive tasks.
4. **Column Subset Selection (CSS) in Unsupervised Learning:**
CSS and PCA are key dimensionality reduction methods. While PCA forms linear combinations, reducing interpretability, CSS selects important features while better preserving the dataset’s structure.
We tested eight 256×256 image datasets, using $||X - X(:,S)C||_F/||X||_F$ as the evaluation metric, with the leverage score method as a baseline. Optimal selection and deletion criteria, along with results, are here: [Table:CCS](https://drive.google.com/file/d/1L3w-GHO9elAk4LKlhEjiDiWtR9vypUYA/view?usp=sharing).
The enhanced algorithm outperforms the original, which surpasses SVD-128, while the enhanced version consistently beats SVD-256. OP-(A)BESS achieves SOTA performance, nearing the optimal SVD bound.
5. **Line Spectrum Estimation (Complex Signal Processing):**
Our methods extend naturally to the complex domain. A key example is line spectrum estimation, a structured feature selection problem where features are continuous in the frequency domain: $v(f) = [ 1, e^{-j 2\pi f}, e^{-j 2\pi 2f}, \dots, e^{-j 2\pi (N-1)f}]^T.$ This problem, crucial in modern wireless communications, involves decomposing a complex signal into its frequency components.
We tested a 128-dimensional complex signal with 20 frequency components, applying BSS on an oversampled Fourier domain. Evaluation metrics included CCDF (lower is better) for frequency estimation error and cosine similarity (higher is better) for amplitude recovery. Frequency domain visualization, radar plot, and metric performance are available here: [Fig:LSE](https://drive.google.com/file/d/1zr1tc-udPEttYkyViwom7zZy5WbvYp4j/view?usp=sharing).
The enhanced algorithm significantly reduces frequency estimation errors and improves correlation. OP-(A)BESS and CoSaOP achieve perfect estimation, reinforcing our metric’s advantage in highly correlated feature settings.
---
**References:**
[11] Wang, Zezhi, et al. "skscope: Fast Sparsity-Constrained Optimization in Python." JMLR (2024).
[12] Roy, Saptarshi, et al. "On the Computational Complexity of Private High-dimensional Model Selection." NeurIPS (2024).
[13] Lin, Zhaotong, et al. "A robust cis-Mendelian randomization method with application to drug target discovery." Nature Communications (2024).
[14] Gregory C. Reinsel, et al. Multivariate Reduced-Rank Regression: Theory, Methods and Applications. Springer (2022). | Summary: The paper presents two novel criteria for feature selection, which are refinements on well studied approaches for identifying features which maximally improve (or reduce) prediction accuracy. By more rigorously considering the impact of features selected as a subset, rather than just individually, similar efficiency guarantees remain while improvements and learning are made.
Claims And Evidence: Yes, all claims are supported with theoretical proofs and experimental results.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: I verified the proofs of Theorem 3.3 and 3.11 in the appendix.
Experimental Designs Or Analyses: I reviewed the experimental results. I think a more thorough discussion of CoSaMP's failure is needed, both in terms of why the failure occurs and more experimental validation that this result is not in error.
Supplementary Material: I review the appendix proofs.
Relation To Broader Scientific Literature: Yes, the authors well situate their results as they compare to prior feature selection methods.
Essential References Not Discussed: na
Other Strengths And Weaknesses: The paper is very well written, with helpful illustrations to hammer home the nuance in the idea for improving prior formalizations of the problem. Performance gains are further made abundantly clear by the provided experimental results. Moreover, the generality of these results is what's most intriguing to me--the methods take a fundamentally new approach to well studied solutions.
Other Comments Or Suggestions: na
Questions For Authors: How does this paper differ from Tohidi et al 2025? The submodularity and matching pursuit approach their seem to overlap significantly with this work and she be discussed in more depth.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely appreciate your deep understanding and kind words on our work. We have carefully addressed your comments below and will incorporate this rebuttal into the revised version of the paper.
---
**Q1 (Why CoSaMP Fails):**
Thank you for your question. We implemented Algorithm 1 of CoSaMP [1] strictly and tested it alongside several example codes from MathWorks, obtaining consistent results. We will now provide a deeper analysis of the reasons behind CoSaMP's inefficiency in sparse regression tasks.
As discussed in Appendix M, CoSaMP iteratively (1) selects $2K$ features, (2) solves a least squares problem on a large subset, and (3) prunes to $K$ coefficients. However, high feature correlation can cause significant errors in the final estimate from steps (2) and (3). We visualize the impact of feature correlation on CoSaMP's iterative process here: [Fig: CoSaMP Visualization 1](https://drive.google.com/file/d/1Zs1C0Bp3NVnu0RqriajYSfJkcHyeWNYI/view?usp=sharing).
As shown, on a regression dataset with highly correlated features, the pruned support set’s direct coefficients (column 2) differ significantly from those after least squares estimation (column 3), with substantial residuals. CoSaMP, lacking least squares refinement, fails as the residuals grow with each iteration due to high feature correlation. This is evident in the residual curve evolution in (a): [Fig: Residual Curve](https://drive.google.com/file/d/1xp34dLCafqFV3tSvEHjkvsqEFTx_ehoM/view?usp=sharing).
In contrast, when features are weakly correlated (as in the Audioset [Fig: Audioset Example](https://drive.google.com/file/d/1vJiWa-jqo4NNSPAbRU2Ew6XXSBmDHyY7/view?usp=share_link)), the coefficients and residuals after pruning the large support set (column 2) and performing least squares (column 3) are nearly identical, leading to algorithm convergence, as shown in curve (b): [Fig: Residual Curve](https://drive.google.com/file/d/1xp34dLCafqFV3tSvEHjkvsqEFTx_ehoM/view?usp=sharing).
In summary, as noted in [1], CoSaMP's theoretical guarantees rely on weak feature correlation, leading to failure when this assumption is violated. In contrast, our CoSaOP algorithm remains effective. We further establish its theoretical foundation (**see Reviewer aZmj, Q1, Theorems 1, 2**), demonstrating how the new criteria enable algorithms that overcome high feature correlation challenges.
---
**Q2 (Difference from Tohidi et al 2025):**
Thank you for your question.
Tohidi et al. (2025) use submodularity and preconditioning, applicable only to the selection process. Our contribution, however, re-examines the foundational criteria in best subset selection from an optimization perspective. By modeling feature independence and interactions through block coordinate descent, we clarified the optimization essence of classical criteria and proposed a unified feature selection and elimination model. Using forward and backward matrix inversion, we derived new explicit criteria, providing a foundation for future algorithm design in best subset selection.
In this paper and rebuttal, we analyzed: (1) the criteria's theory under high feature correlations, (2) complexity and convergence, (3) empirical performance gains, and (4) performance across various tasks and metrics. These findings highlight the potential to relax theoretical assumptions and enable future engineering applications.
Moreover, the optimal pursuit idea can be extended to other greedy metrics and algorithms, opening new directions for further research (**see Reviewer g7tw Q1**).
---
**References:**
[1] Needell D, Tropp J A. CoSaMP: Iterative signal recovery from incomplete and inaccurate samples[J]. *Applied and computational harmonic analysis*, 2009, 26(3): 301-321.
[2] Hastie, T., Tibshirani, R. and Friedman, J. (2017) *The Elements of Statistical Learning: Data Mining, Inference, and Prediction*. 2nd Edition, Springer, Berlin.
[3] Belhadji A, Bardenet R, Chainais P. A determinantal point process for column subset selection[J]. *Journal of machine learning research*, 2020, 21(197): 1-62.
[4] Blumensath T, Davies M E. Gradient pursuits[J]. *IEEE Transactions on Signal Processing*, 2008, 56(6): 2370-2382.
[5] [OpenML Data](https://www.openml.org/search?type=data&sort=runs&id=574&status=active)
[6] Lin Z, Pan W. A robust cis-Mendelian randomization method with application to drug target discovery[J]. *Nature communications*, 2024, 15(1): 6072.
[7] Hastie, T., Tibshirani, R. and Friedman, J. (2017) *The Elements of Statistical Learning: Data Mining, Inference, and Prediction*. 2nd Edition, Springer, Berlin.
[8] MATLAB and Statistics and Machine Learning Toolbox, "Spectra Data," The MathWorks, Inc. | null | null | null | null | null | null |
Devil is in the Details: Density Guidance for Detail-Aware Generation with Flow Models | Accept (poster) | Summary: This paper introduces a collection of methods for controlling likelihood of samples generated by a flow/diffusion model. Authors provide a comprehensive review of prior work on density control, in particular providing a more formal analysis of latent scaling [Song 2021]. They further introduce density guidance - a method for sampling with explicit likelihood control through an alternative ODE formulation ensuring the sample stays in a pre-defined quantile over time. They further introduce a stochastic variant of density guidance.
Claims And Evidence: The paper does deliver on the theoretical claims, but the experimental claims are not very extensively validated (e.g. empirical evaluation of prior vs density vs stochastic density guidance are a handful of qualitative examples).
Methods And Evaluation Criteria: There is almost no coherent evaluation.
Theoretical Claims: I checked the claims in the main paper leading to Eq.(24), assuming that proves in supp are correct, they do seem to be meaningful and consistent with existing work [Karczewski'ICLR2025]
Experimental Designs Or Analyses: N/A
Supplementary Material: Reviewed in more detail section E. In particular the empirical validation of typical evolution of log-density behavior seems meaningful.
Relation To Broader Scientific Literature: This paper explores an interesting property of the flow/diffusion models that has been noticed recently [Karczewski'25] and explains ad-hoc techniques commonly used in score-based generative models [Song'25].
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: + Overall the paper introduces a significant contribution to understanding the properties of generative diffusion models, both in terms of providing theoretical insight to existing sampling techniques, and in terms of novel methods for density control during generation.
- The practical utility of proposed method is a bit unclear due to lack of evaluation.
Other Comments Or Suggestions: Although this is mostly theory-focused work, it will be beneficial to get minimal quant/qual validation, even if on toy data.
Questions For Authors: Apart from several images, experimental validation is missing, would be great to understand whether the theoretical claims could be properly validated? E.g. why not follow methodology from [Karczewski'25] for quantiative/qualitative analysis?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We would like to thank the Reviewer for their time and efforts to scrutinize our submission. We address the raised concern below.
**File with new figures:** https://anonymous.4open.science/r/DensityGuidance-20E6/Density_guided_sampling___Rebuttal.pdf
>The practical utility of the proposed method is a bit unclear due to lack of evaluation. It would be great to understand whether the theoretical claims could be properly validated? E.g. why not follow methodology from [Karczewski'25] for quantiative/qualitative analysis?
Thank you for this suggestion, we have now included an extensive evaluation of the proposed methods building on the methodology of [Karczewski’25]. Specifically:
### Explicit Quantile Matching (EQM)
We estimated the quantile function for the CIFAR model as described in line (311 left). We tested $K = [16, 32, 64, 128, 256, 512, 1024]$ and found that using $K=128$ is enough to ensure a correlation between the desired value of log-density and the obtained one is above 99%. Based on the estimated quantile function $\phi_t$, we estimate $b_t = \frac{d}{dt}\phi_t$ with a moving average of the finite difference estimates.
Furthermore, we found that the difference between the desired values of log-density and the obtained ones goes to zero as we decrease the discretization error (increase the number of sampling steps). Interestingly, for lower number of sampling steps, even though we do not obtain exact desired values of likelihood, the correlation between the desired values and the obtained ones remains above 99%, even for as few as 32 Euler sampling steps. This means that for all values of the number of sampling steps, we saw a monotonic relationship between the target $\log p_0$ and the amount of detail (PNG size). Please see Figure 17.
Finally, we also show that **we can obtain exact values of likelihoods even when sampling stochastically** by using results from Appendix F, and the Euler–Maruyama algorithm. We tested different amounts of added noise: $\varphi(t)= r g(t)$ for $r=[0.1, 0.5, 0.9]$. As expected, as the amount of noise increases, the required number of steps to take to achieve exact likelihoods also increases. Please see Figure 18.
### Prior Guidance vs Density Guidance vs Stochastic Density Guidance
We quantitatively compared Density Guidance (DG), and Prior Guidance (PG) using the EDM2 model. We measures the correlation between the hyperparameter ($q$ for Density guidance and $||x_T||$ for prior guidance) and the obtained $\log p_0$. We found 66% for DG and 68% for PG.
Furthermore, we compared DG and PG with stochastic sampling. I.e.using Eq 25 for Stochastic Density Guidance (SDG), and Eq 7 for “Stochastic Prior Guidance” SPG (i.e. regular stochastic sampling after rescaling the latent code). We tested two scenarios:
Adding noise early: $\varphi(t)=0.2g(t)$ for $\log SNR(t) < -4.03$, and $\varphi(t)=0$ otherwise;
Adding noise late $\varphi(t)=0.3g(t)$ for $\log SNR(t) > -3$, and $\varphi(t)=0$ otherwise.
We found the correlation between the hyperparameter and the obtained $\log p_0$ to be 50% for SDG and 25% for SPG. We summarize all correlations in the table below.
||Density Guidance|Prior Guidance|
|-|-|-|
|Deterministic Sampling|66%|68%|
|Stochastic Sampling|50%|25%|
For DG the drop in correlation from deterministic to stochastic sampling can be explained by the same reasoning as for the EQM, i.e. stochastic sampling requires significantly more sampling steps to achieve the desired levels of $\log p_0$ (Figure 18).
For PG, stochastic sampling is not principled, i.e. the more noise we add during sampling, the less information is contained in the starting point $x_T$. For example, if $\varphi(t)=g(t)$ for all $t$, then the process is the Reverse SDE, and $p(x_0|x_T)$ does not depend on $x_T$, and thus scaling the latent code has no effect on the final sample. Hence the need for Density Guidance for stochastic sampling.
Please see Fig 20 for details on the evaluation of log-densities and corresponding PNG files sizes, and Fig 21 for the visualization of the stochastic samples.
### Additional Experiments
We have also added the following:
* Analysis of the impact of Guidance on perceptual metric NIQE (Fig 14, more details in the response to Reviewer 8ij3)
* More samples and quantitative results with Stable Diffusion (Fig 19)
* Results with models using Classifier-Free Guidance (Fig 22, more details in the response to Reviewer reGE, Q9)
* A new State-of-the-art model FLUX (Fig 23-25, more details in the response to Reviewer reGE, under SOTA models)
* A rigorous proof of the hypothesis we posed in Appendix D about the asymptotic behaviour of $h(x)$ for the Gaussian Mixture (Theorem 1 in the uploaded file)
We thank the Reviewer again for their constructive feedback, which strengthened our claims and improved the quality of our submission. We hope that we have adequately addressed the concerns, and you will consider raising your score. | Summary: This paper studies the control of the amount of details in samples from diffusion models. The authors first established a theoretical framework (Score alignment) to explain a trick to increase sample details in prior literature. Then, the authors explore a suite of methods that can be used to control the exact scale of amount of details in the generated samples, both in the deterministic case (Density guidance) and in stochastic case. Theoretical claims are proved on simplified cases and some qualititave anlaysis is performed. Experiments on SD2.1 and EDM are performed to show the real-world use cases.
Claims And Evidence: **Major claims made in Sec. 3**:
- The trick of "scaling latent code" will work because (1) it decreases the likelihood of $x_T$, (2) decreasing likelihood of $x_T$ correlates to the decrease of likelihood of $x_0$, and (3) the likelihood of $x_0$ correlates to the amount of details. (1) is supported by the prior Gaussian distribution. (2) is supported by the Score Alignment condition, which is partially and qualitatively shown for selected models. (3) is supported by a previous literature.
- **(Q1)** The major complaint here is on (2). Only two models (VP-SDE and EDM2) are analyzed qualitatively. It would be great if the authors could analyze state-of-the-art models (e.g., Stable diffusion XL, FLUX). Moreover there is no theoretical guarantee for this property.
- Likelihood correlates well with amount of details in the image, which is shown through the correlation between the image compression size and the sample likelihood.
**Major claims made in Sec. 4**
- Explicit quantile matching enables sampling images with an exact likelihood of $c$. This is mostly supported with theoretcial proof in Appendix D, and a claim from prior literature that sampling from typical regions will produce accurate predictions.
- **(Q2)** The major complaint here is that this claim is not supported empirically. Practically, the authors propose to sample $K$ times to estimate the quantile function. However it is not clear on how large $K$ should be in practice. Moreover, there is no empirical results on whether using this method indeed produce samples with a likelihood of the exact value. It would be great if we can see results of this algorithm to be applied to real-world unconditional generators, and whether the produced samples indeed have an altered amount of detail and whether those samples indeed have an exact likelihood expected.
- Implicit quantile matching works similarly for conditional generators. This is supported by some sparse theoretical results in Appendix D. and qualitative results in Fig. 9.
- **(Q3)** I could not fully understand why using $b_t$ as defined in Eq. 21 would guarantee the condition in Eq. 18. I can get some intuitions from the fact in Eq. 20, yet is there a rigorous proof on this?
- **(Q4)** The results in Fig. 9 are interesting. However, the result for stable diffusion is sparse: there are only two levels shown, so we could not really tell whether the method is really achieving a fine-grained control over amount of detail. Moreover in SD2.1 it can be seen that there are changes in semantic contents, especially in the train example.
- **(Q5)** There are no quantitative analysis on the generated samples' amount of details. It would be beneficial if we can see a plot or a table measuring both the image compression size (in PNG as in Fig.4) and the likelihood of $K$ samples in different levels.
**Major claims made in Sec. 5**
- Eq 25. extends the above sampling procedure to stochastic process. This is supported by the proof in Appendix F and by experiments in Fig. 10.
- **(Q6)** There are only two levels of details in Fig. 10. Similarly, we could not tell whether the proposed method works to control specific level or details. (Will a simple prior guidance do very similar thing?)
Methods And Evaluation Criteria: The methods proposed in this paper are sound from the description. The evaluation mostly makes sense except for some minor issues, as discussed in Q2, Q4, Q5, and Q6.
Theoretical Claims: I didn't fully check the proofs in the appendix.
Experimental Designs Or Analyses: Yes, I checked most experimental designs and analyses. They are mostly adequate except for some minor issues as discussed in Q2, Q4, Q5 and Q6.
Supplementary Material: I briefly skimmed the proofs in the supplementary material.
Relation To Broader Scientific Literature: This paper studies models from recent studies in diffusion models (Song et al. 2021) and flow models (Lipman et al. 2023, Liu et al. 2023) and leverages some insights from prior literature (Song et al. 2021b). This paper is strongly based on the findings in (Karczewski el al. 2024).
Essential References Not Discussed: Relevant literatures are adequately discussed to the best of the reviewer's knowledge.
Other Strengths And Weaknesses: Beyond the strengths and weaknesses already discussed (Q1-Q6):
Strengths:
- The paper is well-written and well-organized. Detailed discussions and proofs are presented for most claims.
- Extensive studies are conducted for the research question proposed. The contribution is solid and extensive, and the paper explores the proposed framework in various settings including conditional/unconditional generation and stochastic generation.
- The problem studied in this paper is interesting. It may potentially be insightful to inspire reseach in related fields.
Weaknesses:
- There are some issues with the validation and experiment design, as in Q1-Q6.
- **(Q7)** What would be a practical application scenario of the proposed technique? When would a user be interested in a fine-detailed control over the amount of details of the generated image?
- **(Q8)** There is no discussion on when the method will fails to achieve the desired properties. There are some approximations in the theoretical proofs of the method, so it would be great if the authors could analyze the scenarios where the method will fail.
- **(Q9)** How would the method be used together with Classifier-free guidance, which is the de facto methods to perform conditional generation for diffusion models?
Other Comments Or Suggestions: Minor issues and comments:
- In L175, it seems that "$v_t$" is not discussed in Eq. 11.
- There should be reference for Eq. 3.
Questions For Authors: Minor questions:
- L140(right), I actually didn't find such claims explicitly stated in Song et al. 2021b. Could you point out the exact location of such a claim?
- Will the proposed method be able to work for state-of-the-art models like Stable Diffusion XL or FLUX?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the Reviewer for their thorough evaluation of our work and very insightful questions! We address the raised points below.
**File with new figures:** https://anonymous.4open.science/r/DensityGuidance-20E6/Density_guided_sampling___Rebuttal.pdf
Glossary:
* Density Guidance - DG
* Prior Guidance - PG
* Score Alignment - SA
* Classifier-free guidance - CFG
> **Q1:** Only two models analyzed for SA, and no theoretical guarantee.
This is true that there is no theoretical guarantee for SA. There cannot be such a guarantee as we show:
* In Fig 3, right - for the CIFAR model, SA does not hold for 3% of the latent codes (line 209)
* In Appendix C.3, we provide an example of a Gaussian mixture (exact scores known), where SA does not hold.
This emphasizes the point from the paper: **SA does not always hold.** This in part motivates the DG approach, because PG is not always guaranteed to work.
We discuss FLUX at the end of the response.
> **Q2:** no quantitative evidence of Explicit Quantile Matching.
Please see our response to Reviewer ppbq.
> **Q3:** Can you prove that Eq 21 implies Eq 18?
We actually do not make that claim. Eq 21 is based on results in Appendix D, which **we have now extended by a rigorous proof in the Gaussian Mixture case** (Theorem 1). The motivation is to keep the samples in the typical regions of $p_t$, but we do not guarantee exact quantiles.
> **Q4:** Few StableDiffusion samples. Also, semantic changes visible.
We have included more levels for Stable Diffusion, with PNG sizes (Fig 19). Regarding semantic changes - this is true, and can be even more drastic as in the train example with PG in Fig 19. We do not guarantee that only the low-level features change in all cases. In the extremes semantic changes can happen as well. However, it is consistent with the amount of detail as measured by PNG.
> **Q5:** Quantitative analysis of DG
Please see our response to Reviewer ppbq.
> **Q6:** Only two levels in Fig 10. Does stochastic guidance differ from PG?
We added more samples and levels in Fig 21 for both DG and PG. Perceptually, both seem to be monotonically controlling the detail, but we argue in the response to Q5: DG is more accurate.
> **Q7:** What are practical application scenarios?
Due to character limit, please refer to our response to Reviewer J9Xk, where we discuss potential applications.
> **Q8:** When might the method fail?
A potential issue is applying DG in low dimensions. We use the fact that $h(x)$ is approximately Gaussian (Appendix D). This only holds when the dimensionality is large.
Another approximation we make is discussed in lines 1147-1161. We justify it on two datasets. If one wants to increase the accuracy further, Eq 119 can be used instead, which makes no approximations. It comes at a cost of one additional Jacobian-Vector-Product.
> **Q9:** Would the method work with CFG?
As explained in the CFG paper, CFG can be interpreted as classifier guidance with an implicit classifier. This means that CFG is just a regular diffusion process, but with a different base distribution, favouring a certain class. Thus, all our results apply without changes, just with a redefined target distribution.
We sampled with an EDM2 model with CFG and found consistent behaviour with other models. Please see Fig 22.
>L175, "$v_t$" is not discussed
We denote by $v_t$ the score pushed forward from $T$ to $t$. We refer to it later in the text, e.g. in Eq 12. We will make it more explicit in the final revision.
> No reference for Eq. 3.
The reference is Chen et al. 2018. We will make it more explicit.
>I didn't find such claims in Song et al. 2021b
It can be seen in the Appendix in Figure 6 - it is not referenced in the main text. Authors call it "temperature rescaling" (reducing norm of embedding).
> Will the methods work for SOTA models like FLUX?
We have included new results with FLUX.1[dev]. Fig 23 shows samples with PG and DG, and Fig 24 shows a DG image with PNG and TIFF filesize comparison. Fig 25 shows the coupling of PNG and TIFF filesizes over guidance.
FLUX shows different behavior to other models. Images are richer, but detail variations from guidance are milder. The weaker DG effect can be attributed to the FLUX model coupling a latent diffusion on 16x64x64 space with a strong decoder to 3x768x768, whose effect on logp is unknown. We only control the latent portion of the model. FLUX is undocumented and unpublished.
The effect of DG on PNG filesizes also becomes inconsistent: adding more semantic detail doesn’t necessarily increase filesize, possibly due to the images already being highly realistic and rich in patterns. Furthermore, PNG is only an approximation of the true information content of the image. We include comparison to TIFFs, which shows more consistent coupling between filesize and detail.
We thank the Reviewer again for their high-quality review that significantly contributed to improving our submission.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal! My initial concerns have been addressed.
---
Reply to Comment 1.1.1:
Comment: We would like to thank the reviewer again for their high-quality and thorough review, as well as the constructive feedback provided. We are glad to hear that the concerns have been addressed to the reviewer's satisfaction and appreciate the raised score. | Summary: This work introduce a method to control the sampling density in diffusion models. The main contribution is using score alignment to scale and control the sampling guidance, which works for both deterministic and stochastic sampling.
The experiments demonstrate that density guidance and its stochastic extension provide fine-grained control over image details.
Claims And Evidence: yes
Methods And Evaluation Criteria: yes
Theoretical Claims: yes
Experimental Designs Or Analyses: yes.
However, the experiment only demonstrates the side effects of density sampling. I have not observed any positive impact of density sampling on sampling quality.
Supplementary Material: yes. I have reviewed most of the content in supplementary material.
Relation To Broader Scientific Literature: This paper may enhance the community’s understanding of the sampling process, and some of the perspectives presented are quite interesting.
Essential References Not Discussed: N.A
Other Strengths And Weaknesses: This paper introduces a density guidance for both deterministic and stochastic processes, enabling precise control over the likelihood (log-density) during the sampling process. Additionally, this work provides solid theoretical foundations that can enhance the community’s understanding of the sampling process. Furthermore, the study uncovers some interesting phenomena: high-density generated results may appear relatively blurry, while lower-likelihood samples introduce more intricate details.
One concern I have is that I have not observed any positive impact of the authors’ proposed solution on existing sampling techniques. While I acknowledge the authors’ contribution and the effectiveness of the density guidance, I would appreciate it if the authors could provide practical advice on how the proposed density guidance might improve current sampling methods.
However, I must still express that I am inclined to accept this work.
Other Comments Or Suggestions: n.a
Questions For Authors: I hope the authors can offer suggestions on how density guidance could be used to optimize the current sampling process.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We would like to thank the Reviewer for their support of our work. Below we address the raised concern.
**File with new figures:** https://anonymous.4open.science/r/DensityGuidance-20E6/Density_guided_sampling___Rebuttal.pdf
> I would appreciate it if the authors could provide practical advice on how the proposed density guidance might improve current sampling methods.
The Density Guidance works on any diffusion model without retraining, or finetuning and requires no extra cost during sampling. We now ran a new experiment and measured the predictions of the generated samples using NIQE [1], which is a metric for image quality assessment reported to correlate strongly with human judgement.
### Potential applications of detail control
We believe that potential applications of the presented methods are image editing, where the user might want to control the amont of detail in the image. From [2] we also know that highest densities contain cartoon-like images, and thus user can have fine grained control over the spectrum between realistic images and cartoons describing the same scene. People also acknowledge that image generation can be used for “aiding designers in producing striking scenes for video games” [article](https://news.mit.edu/2025/ai-tool-generates-high-quality-images-faster-0321) and we believe that detail control can become an addition to that toolkit.
We also note that there has been interest among the practitioners in explicitly controlling the amount of detail in image generation:
* Modifying Stable Diffusion to generate less detail [Thread](https://www.reddit.com/r/StableDiffusion/comments/1e4spmo/how_to_generate_images_with_less_detail/)
* Modifying Stable Diffusion to generate more detail [Thread](https://www.reddit.com/r/comfyui/comments/1866y53/how_do_i_get_more_detail/)
Finally, we would also like to point out that the density-guidance is derived to control the log-density of the generated samples. We know from prior literature [2], that for image data, this correlates with image detail. Perhaps in domains different than images, controlling log-density may be desirable for other purposes. Density Guidance can be used for that as well. Investigation of domains other than image data is out of scope for this work but certainly an interesting direction that we hope this work could pave way for.
We hope our response has addressed the Reviewer's concerns, and that the additional experiments provided in the uploaded file further strengthen your support for our submission, which we hope will be reflected in an updated score.
---
[1] Mittal et al. "Making a “completely blind” image quality analyzer" (IEEE Signal processing 2012)
[2] Karczewski et al. "Diffusion Models as Cartoonists: The Curious Case of High Density Regions" (ICLR 2025)
---
Rebuttal Comment 1.1:
Comment: Thank the author for the reply. While my concerns still seem to exist, I believe that the work's contribution in terms of analysis is still worthy of acceptance.
---
Reply to Comment 1.1.1:
Comment: Thank you for your follow-up.
In your initial review, you raised the following concern:
"I would appreciate it if the authors could provide practical advice on how the proposed density guidance might improve current sampling methods."
In your most recent comment, you mention that "concerns still seem to exist," but it's unclear to us which concerns you're referring to, or why our rebuttal may have fallen short in addressing them. We would genuinely appreciate more clarity, as this would help us better understand how the work could be improved.
Since the ICML policy this year does not allow us to respond to future comments, we'd like to take this opportunity to clarify and emphasize how we believe we address the concern you raised in your review:
### Ease of application to existing models/sampling methods
In the paper, (line 311 right) we explain how density guidance can easily be implemented as simply as an appropriate rescaling of the score function. This means that for any trained model, **we can apply guidance without any retraining or finetuning, simply by performing regular sampling with a rescaled score function**, regardless of what noise schedule the sampler uses or whether the solver is 1st or 2nd order. We have demonstrated it for
1. EDM2, which uses PF-ODE with 2nd order Heun solver.
2. Stable Diffusion, which uses the DDIM solver.
3. **Now we also added FLUX.1-dev, which is a Flow Matching model** (which is known to be equivalent to diffusion: https://diffusionflow.github.io/), which uses the Euler Solver for sampling.
This demonstrates that our methods can easily be applied on top of various flow-based models, regardless of how they were trained or what sampling methods they use.
### Potential application
We have also explained how our methods can be useful for applications such as image editing, and also highlighted that we demonstrate how to control log-density. We know that this corresponds to detail control in case of images, but this opens up possibilities of controlling log-density, which may prove useful for other purposes in data other than images.
We hope that this helped address any remaining concerns you may have and will appreciate if you could reconsider your score. | Summary: The paper proposes a novel method, Density Guidance, to control the level of detail in generated images of flow models. It addresses the observsed mismatch between image likelihood and perceptual quality. The samples of high-likelihood are usually overly smooth, while the low-likelihood ones are more detailed. The author analyze the Prior Guidance adn introduce score alignment. They then propose Density Guidance to enable explicit log-density control by modifying the generative ODE. The model is further extended to stochastic sampling, enabling precise log-density control while allowing controlled variation in structure or fine details. The experimental results demonsrate taht the proposed method can adjust image detail while maintaining the image quality.
### update after rebuttal
The rebuttal has addressed my concerns regarding the perceptual evaluation and ablation study. The authors have provided satisfactory answers about the relationship between perceptual metrics and score alignment, as well as clarified the distinctions between ODE sampling and stochastic sampling approaches. Based on these clarifications, I have decided to increase my score to 4.
Claims And Evidence: The paper make several claims
1. Density Guidance enables explicit log-dencity control. It is justified by a derivation modifying the generative ODE.
2. Score Alignment explains prior guidance. It is supported by a theoretical analysis.
Methods And Evaluation Criteria: The proposed method is well-justified for controlling image detail in flow models. The use of score alignment to explain prior guidance provide a good insight. The evaluation contrains comparison of generated images, analyses between log-density and perceptual metrics and quantitative evaluation of proposed method, which are appropriate.
Theoretical Claims: The theoretical analysis about score alignment and density guidance looks sound. The authors provide detailed derivations in the appendix.
Experimental Designs Or Analyses: The experiments effectively validate the proposed method. It conducts the experiments on CIFAR-10 and ImageNet datasets. The proposed method is compared to stable diffusion and edm2. The paper also evaluates the relationship between log-density and perceptual detail.
Supplementary Material: The supplementary material includes extensive derivations and verification of score alignment. It also provide more qualitative results.
Relation To Broader Scientific Literature: The work is related to the literature on diffusion models and normalizing flows. It connects well to prior fingdings on the relationship between likelihood and image detail.
Essential References Not Discussed: It may be better to discuss some papers about perceptual quality metrics and detail-preservation.
Other Strengths And Weaknesses: Strengths:
1. Good theoretical contribution. The paper introduces score alignment, which explains the relationship between prior guidance and image detail. This provides a solid theorecial insight for the observation in prior work.
2. The method is well-motivated. The paper proposes density guidance, which enables log-density control insteand of heuristic modification. It can be used in ODE framework of diffusion models and extended to stochastic sampling, which allows controoled variation. The method does not need any additional training.
3. Comprehensive expriments. The paper validate the method on CIFAR-10 and ImageNet. The method is compared to Prior Guidance and demonstrate better control over image detail.
Weaknesses:
1. The user study or perceptual evaluation is missing. LPIPS, FID, SSIM or user study could be added to strengthen the claims.
2. Lack of a comprehensive ablation study. The paper introduces multiple modifications, including score alignment, density guidance, and stochastic density guidance, but it does not conduct a ablation study to evaluate the contributions of each component.
Other Comments Or Suggestions: N/A
Questions For Authors: 1. How does density guidance compare to methods that use explicit perceptual loss functions, such as LIPIS for controlling detail?
2. What is the relationship between preceptual metrics and score alignment?
3. How different does the model perform in ODE sampling the stochastic sampling with Density Guidance?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the Reviewer for their positive comments and constructive feedback. We address the raised points below.
**File with new figures:** https://anonymous.4open.science/r/DensityGuidance-20E6/Density_guided_sampling___Rebuttal.pdf
> Discuss some papers about perceptual quality metrics and detail-preservation.
Certainly. In the camera-ready revision we will include a discussion on perceptual quality including the reference-based metrics such as the ones suggested (LPIPS, FID, SSIM), as well as no-reference-based, such as NIQE [3].
> The user study or perceptual evaluation is missing. LPIPS, FID, SSIM or user study could be added to strengthen the claims.
Thank you for this suggestion. The metrics proposed are "Reference-based" metrics, which compare the generated images to reference ones. LPIPS and SSIM score a single image against a single reference image, while FID compares the set of generated images to the set of "real" images. The issue with LPIPS and SSIM is that, for a given generated image, we do not have a corresponding "ground truth" image to compare to. FID would have been more suitable for our use-case, however it is computationally expensive, requiring generating tens of thousands images [1], which was beyond our coomputational budget. It has also been reported to not always agree with human judgement [2].
We thus propose to use NIQE [3], which is a "no-reference" image quality evaluation metric, which is reported to correlate strongly with human judgement. It provides a single number per image, which indicates whether an image has been distorted (a lower number - higher quality). It was used e.g. by [4] to evaluate super-resolution diffusion models.
We evaluated Density and Prior Guidance for EDM2 model, and the now included State-of-the-art model FLUX. In Figure 14 you can see that:
* For EDM2 model: guided samples can obtained better (lower) NIQE scores than regular samples (gray area);
* For FLUX model: regular samples already score optimally.
After a visual inspection of optimally scoring guided samples (as measured by NIQE) in Figure 15, we noticed that NIQE actually prefers images with significantly less detail than regular samples. For the FLUX model (Figure 16), there were no perceptual differences between regular samples and best NIQE scoring ones.
> The paper introduces [...] score alignment, density guidance, and stochastic density guidance, but it does not conduct a ablation study to evaluate the contributions of each component.
Thank you for this question. We take this opportunity to clarify:
* Score alignment (SA) is a novel framework to verify whether a known procedure (Prior Guidance) will be effective in practice
* (Stochastic) Density Guidance is a novel algorithm proposed by us, which is principled and can be used with any diffusion model (regardless of whether SA holds)
That said, we now performed an extensive evaluation of Prior and Density Guidance, both quantitative and qualitative, including novel models. Please see our response to Reviewer ppbq for more details.
> How does density guidance compare to methods that use explicit perceptual loss functions, such as LIPIS for controlling detail?
The difference between Density Guidance (DG) and models trained with perceptual loss functions is two-fold. First, DG can be used, without any finetuning or retraining, and for no extra cost, on models which were trained without any perceptual losses. Second, it provides a way to control the generations. One can generate images with either high or low level of detail, depending on the use-case. Models trained with perceptual losses do not have that capability.
> What is the relationship between preceptual metrics and score alignment?
Score Alignment guarantees that Prior Guidance effectively changes log-density in deterministic sampling. We show in Figure 14 how that can impact the perceptual metrics.
> How different does the model perform in ODE sampling the stochastic sampling with Density Guidance?
Please see our response to Reviewer ppbq for the details on the comparison on different modes of sampling.
We thank the Reviewer again for their useful suggestions that helped improve our work. We hope that our clarifications and additional experiments addressed all concerns and ask for a reconsideration of the score.
---
[1] Heusel et al. "GANs trained by a two time-scale update rule converge to a local Nash equilibrium." (NeurIPS 2017)
[2] Liu et al. "An improved evaluation framework for generative adversarial networks." (arXiv 2018)
[3] Mittal et al. "Making a “completely blind” image quality analyzer" (IEEE Signal processing 2012)
[4] Sami et al. "HF-Diff: High-Frequency Perceptual Loss and Distribution Matching for One-Step Diffusion-Based Image Super-Resolution." (arXiv 2024) | null | null | null | null | null | null |
Clients Collaborate: Flexible Differentially Private Federated Learning with Guaranteed Improvement of Utility-Privacy Trade-off | Accept (poster) | Summary: The paper proposes a novel federated learning framework, FedCEO, aimed at balancing model utility and user privacy through collaboration among clients. The authors introduce a compelling case study to illustrate the potential of semantic collaboration among clients in enhancing the utility of the global model, and they construct the update process as a high-order tensor low-rank optimization. Furthermore, the authors theoretically prove that their model achieves a $\sqrt{d}$-order improvement in the utility-privacy trade-off bound. Extensive experiments, covering both utility and privacy aspects, demonstrate the effectiveness of FedCEO.
Claims And Evidence: The claims made in the submission are well-supported by clear and convincing evidence. The authors provide thorough theoretical analysis and experimental validation to substantiate the significant improvements in the utility-privacy trade-off achieved by the FedCEO. Firstly, the authors derive the utility-privacy trade-off bound for FedCEO in the theoretical section and prove its improvement by a factor of $\sqrt{d}$ over existing techniques. Secondly, extensive experiments on multiple representative datasets validate the performance enhancements and privacy-preserving capabilities of FedCEO under various privacy settings.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are highly appropriate and well-suited for the problem at hand. The experiments utilize representative datasets such as CIFAR-10, EMNIST and Sent140, along with various model architectures (e.g., MLP, LeNet and AlexNet), thoroughly validating the method's effectiveness and generality. Additionally, the authors further validate privacy protection through gradient inversion attack experiments, making the evaluation criteria comprehensive and convincing.
Theoretical Claims: The theoretical proofs in the paper have been carefully reviewed and are generally correct and rigorous.
Experimental Designs Or Analyses: The experimental designs and analyses in the paper have been carefully reviewed and are generally sound and valid. The experimental section covers a comprehensive evaluation of model utility, privacy protection, and the utility-privacy trade-off, with a well-designed and convincing approach. Particularly, the authors further validate the robustness of FedCEO in privacy protection through gradient inversion attack experiments, with clear and reproducible results. Overall, the experimental designs and analyses are rigorous and support the main conclusions of the paper.
Supplementary Material: Yes, I reviewed the supplementary material. The supplementary material provides detailed code implementation, which is highly beneficial for reproducing the experimental results and conducting further research. The code section clearly demonstrates the implementation details of the FedCEO, including the application of differential privacy mechanisms and the specific steps of tensor low-rank optimization. Additionally, the supplementary material includes extra experimental details and parameter settings, further enhancing the credibility and reproducibility of the paper.
Relation To Broader Scientific Literature: The key contributions of the paper are closely related to the broader scientific literature, particularly in the fields of DPFL. Firstly, the FedCEO addresses the utility-privacy trade-off in differentially private federated learning through semantic collaboration among clients, which differs from existing approaches that focus on regularization or personalization to improve model utility. For example, *Cheng et al.* proposed local update regularization and sparsification techniques, while *PPSGD* improved model utility through a personalized privacy-preserving stochastic gradient optimization algorithm. However, these methods primarily focus on constraining local updates and do not fully leverage the semantic complementarity among clients. FedCEO further enhances model utility through high-order tensor low-rank optimization and theoretically proves its improvement in the utility-privacy trade-off bound. Additionally, the experimental results of this paper demonstrate significant performance improvements and strict privacy guarantees compared to existing literature (e.g., *CENTAUR*). Overall, the paper introduces an innovative approach based on existing research and provides a new solution to the utility-privacy trade-off problem in federated learning.
Essential References Not Discussed: The references in the paper already cover most of the essential related works
Other Strengths And Weaknesses: Strengths:
The strengths of the paper lie in its originality and practical applicability. Firstly, the proposed framework addresses the utility-privacy trade-off in differentially private federated learning through semantic collaboration among clients, which is highly innovative. Secondly, the paper not only theoretically proves the improvement in the utility-privacy trade-off bound but also validates its effectiveness on real-world datasets through extensive experiments, demonstrating its potential in practical scenarios.
Weaknesses:
The weaknesses of the paper can be outlined as follows:
1. Lack of Depth in Technical Details: The detailed discussions on parameter selection are somewhat brief in the main text. More technical details and discussions on parameter tuning could further enhance the reproducibility and practicality of the paper.
2. Unclear Descriptions of Concepts: Some concepts or keywords in the paper, such as "semantic collaboration" and "global semantic space smoothing," could be described more clearly and in greater detail. More precise definitions and explanations would help readers better understand these concepts.
3. Breadth of Comparative Experiments: Although the paper compares with some existing methods, the scope of comparative experiments could be expanded to include more recent federated learning and differential privacy methods. This would provide a more comprehensive demonstration of FedCEO's advantages.
Other Comments Or Suggestions: Please refer to the weaknesses.
Questions For Authors: 1. Could the authors further explain why FedCEO is referred to as a "flexible" DPFL framework, as indicated in the title?
2. What is the definition of "global semantic space" in the paper?
3. How exactly is the non-iid data distribution set up in Table 5?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer’s positive feedback on our theoretical and empirical contributions. Below are our point-to-point responses to the raised concerns and suggestions.
> **W1**: Lack of Depth in Technical Details: The detailed discussions on parameter selection are somewhat brief in the main text. More technical details and discussions on parameter tuning could further enhance the reproducibility and practicality of the paper.
The reviewer noted brevity in parameter selection discussions. We have detailed hyperparameter search ranges and final configurations in **Appendix C.1 (Table 2)**. For example, on CIFAR-10 with σ_g=1.5, we set λ=10, ϑ=1.07, and I=10. These parameters are selected via grid search and adaptively adjusted based on privacy requirements (e.g., smaller λ for stricter privacy). Please refer to the response to the **reviewer RVpv (Experimental Designs/Analyses) for additional experiments**.
> **W2 & Q2**: Unclear Descriptions of Concepts: Some concepts or keywords in the paper, such as "semantic collaboration" and "global semantic space smoothing," could be described more clearly and in greater detail. More precise definitions and explanations would help readers better understand these concepts.
- **Global Semantic Space Smoothing**: Refers to the low-rank representation of client parameters in the spectral space after tensor decomposition, where high-frequency components are truncated to preserve semantic correlations
- **Semantic Collaboration**: Achieved by integrating complementary semantic information across clients via low-rank optimization (Section 3.2). Please refer to the responses to **Reviewer kpz4 (Q2) for specific examples**.
> **W3**: Breadth of Comparative Experiments: Although the paper compares with some existing methods, the scope of comparative experiments could be expanded to include more recent federated learning and differential privacy methods. This would provide a more comprehensive demonstration of FedCEO's advantages.
We appreciate your suggestion and will incorporate additional baseline comparisons in the revised manuscript to provide a more comprehensive validation of our findings.
> **Q1**: Could the authors further explain why FedCEO is referred to as a "flexible" DPFL framework, as indicated in the title?
As illustrated in Figure 2, our FedCEO can flexibly adapt to different privacy settings and noise accumulation during continuous training by setting different initial values and employing an adaptive thresholding rule based on a geometric series. The specific advantages of this "flexibility" are demonstrated in the experimental results in Table 1.
> **Q3**: How exactly is the non-iid data distribution set up in Table 5?
Apologies for the lack of details on the degree of non-iid data. For non-iid data, we used a common method to construct data heterogeneity.
Taking CIFAR-10 as an example:
- First, we sorted all images in the dataset by their labels.
- Then, for each client, we randomly selected 250 consecutive images twice. Since CIFAR-10 has ten categories, each client has at least one category and at most four categories, verifying that our framework works even in highly heterogeneous situations. | Summary: The authors propose a novel federated learning framework with the differential privacy mechanism, focusing on improving the trade-off between model utility and user privacy. By leveraging tensor decomposition techniques, the proposed method can model the dynamic semantic relationships among different clients, thereby enhancing the performance of the global model. The authors provide comprehensive mathematical analysis and empirical evidence, demonstrating that their framework achieves a better privacy-utility trade-off.
Claims And Evidence: Yes, the main claims in the paper are well supported by both experimental results and theoretical analysis. The visualization experiment in Figure 1 and Theorem 3.1 support the motivation of this work. Corollary 4.6 and the experiments in Figure 4 provide strong evidence demonstrating the effectiveness of the proposed framework in improving the privacy-utility trade-off in federated learning.
Methods And Evaluation Criteria: The proposed method is appropriate, as Proposition 3.2 in the paper proves that the classic federated learning algorithm FedAvg is a special case of it. The chosen datasets are suitable and comprehensive, covering general datasets of different modalities and adopting common federated learning data partitioning methods.
Theoretical Claims: Yes, I mainly focus on the proof details of Theorem 4.3. The overall proof approach is sound and successfully demonstrates the effect of low-rank (semantic collaboration).
Experimental Designs Or Analyses: The experiments in this work are rigorous and comprehensive, systematically designed to evaluate utility and privacy. Additionally, the study includes experiments under heterogeneous federated settings, efficiency comparisons, and tests on textual datasets.
Supplementary Material: The supplementary materials include the complete implementation code for this work, covering the FedCEO framework, gradient attack code, and several baseline methods for comparison.
Relation To Broader Scientific Literature: Previous works have considered spectral decomposition at the local client level, while this method further incorporates inter-client relationships and models them using high-order tensor singular value decomposition. Compared to (Jain et al., 2021) and CENTAUR (Shen et al., 2023), the theoretical bounds have been further optimized.
Essential References Not Discussed: The references in this work are generally comprehensive. However, it would be beneficial to include a discussion on recent related works, such as Provable Mutual Benefits from Federated Learning in Privacy-Sensitive Domains (AISTATS 2024).
Other Strengths And Weaknesses: **Strengths:**
1. This paper primarily focuses on achieving the utility-privacy trade-off in differentially private federated learning framework. This is a crucial research topic with significant implications for the practical deployment of federated learning algorithms in industrial applications.
2. This work follows a rigorous logical structure, where the authors naturally introduce their framework through preliminary experiments and insights from previous works.
3. The proposed approach is novel, as it connects tensor decomposition algorithms with federated model updates and provides an equivalence proof. This offers new perspectives for designing global model update paradigms in federated learning.
4. Theoretical analysis is thorough, and the final conclusions demonstrate the effectiveness of low-rank modeling, providing readers with a deeper understanding of the fundamental principles behind the proposed approach.
**Weaknesses:**
1. There is still room for improvement in the paper’s writing, particularly in terms of the proportion of content across different sections.
2. The paper lacks a discussion of some recent related works, such as Provable Mutual Benefits from Federated Learning in Privacy-Sensitive Domains (AISTATS 2024).
Other Comments Or Suggestions: 1. It is recommended to compress the length of the background knowledge in Section 2 and place more emphasis on the description of the method.
2. The description of the experiments in the abstract needs to be revised, as the paper considers both image and text datasets.
3. In Section 1.1 (Related Work), it is suggested to discuss the differences between the proposed method and the latest techniques in terms of mechanism design and theoretical analysis. Additionally, it might be useful to add comparative baselines.
Questions For Authors: 1. How should "semantic complementarity" in the introduction be understood?
2. Can you provide a specific example of how semantic collaboration between clients is implemented?
3. Apart from the T-tSVD algorithm used in the paper, can other more efficient tensor optimization methods be applied?
4. Can the model optimization in Equation 2 be understood as a denoising process of the noisy model tensor? Would this compromise the privacy of the overall differential privacy federated framework? I'm slightly concerned about this.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer’s constructive feedback and recognition of our work. Below are our point-by-point responses:
> **W1 & Comments1, 2**
Condensing Background Knowledge: We agree with the suggestion. In the revised manuscript, we will streamline Section 2 (Preliminaries) by moving detailed definitions to the appendix, allowing greater emphasis on the core methodology (e.g., the optimization objective of FedCEO in Section 3 and the motivation for low-rank modeling).
Abstract Experiment Description: Thank you for the correction. We will update the abstract to include results on text datasets (e.g., Sentiment140): "...with experiments on representative image and text datasets…".
> **W2 & Comments3**
We will add a discussion of 'Provable Mutual Benefits from Federated Learning in Privacy-Sensitive Domains' in the Related Work (Section 1.1). This work proposes a theoretical framework for designing personalized privacy-preserving protocols that provably benefit all participants in privacy-sensitive domains. Compared to our approach, it does not explicitly provide quantitative utility-privacy bounds but focuses on existence proofs. Both works involve inter-client collaboration, representing two complementary approaches: **protocol design controlling client-side noise** versus **post-processing algorithms correcting inter-client semantics**.
> **Q1**
"Semantic complementarity" refers to the fact that local DP noise disrupts semantic information differently across clients. In DPFL, due to the randomness of noise introduced by each client, specific semantic information might be disrupted in some clients while remaining relatively intact in others, leading to semantic complementarity among different clients. By integrating the commonalities of parameters via tensor low-rank optimization, the server recovers global semantic smoothness. For example, in Figure 1 (CIFAR-10), the semantic space of the 10th class is corrupted (red row), but FedCEO leverages intact semantic features from other clients (e.g., client 7, 9) to restore smoothness (blue row), improving classification accuracy.
> **Q2**
Consider a DPFL framework with only two clients performing animal image classification. This collaboration mechanism enables semantic complementarity between the two clients' models, thereby improving the global model's performance in DPFL.
For one client, the noise might severely corrupt parameters responsible for recognizing facial features, while for the other client, the noise might primarily distort parameters for limb recognition. However, by performing a Fourier transform, each slice will contain information from all clients. Subsequently, applying truncated SVD to all slices can be viewed as projecting the perturbed parameters back into a smooth semantic space, leveraging knowledge from all clients.
Finally, after performing the inverse Fourier transform, each slice (representing a client's parameters) will have adaptively incorporated information from other clients while retaining some of its original knowledge.
> **Q3**
T-tSVD is chosen for its efficiency in dynamic thresholding via Fourier-domain matrix factorization (Table 4). Future work may explore other methods (e.g.,deep tensor learning), provided they align with privacy constraints. Existing attempts can be seen in our response to **Reviewer RVpv (W3 & Other Comments/Suggestions)**.
> **Q4**
Equation (2) is not a denoising process, as we do not utilize any prior information about the DP noise in our modeling. Essentially, it is a controlled fusion operation, and Proposition 3.2 demonstrates that FedAvg is in fact a special case of our FedCEO method—yet FedAvg is not considered to possess denoising capabilities. Furthermore, both Theorem 4.5 and the experiments in Section 5.3 confirm that the server-side operations in Algorithm 2 do not compromise the privacy guarantees of the DPFL framework.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. I still have some questions as follow:
1. You mentioned in your response that “due to the randomness of noise introduced by each client, specific semantic information might be disrupted in some clients while remaining relatively intact in others, leading to semantic complementarity among different clients”. Does this imply that the method assumes a certain degree of IID data, and its effectiveness would decline when client data distributions are highly heterogeneous?
2. In Q3, you mentioned exploring alternative tensor optimization methods (e.g., deep tensor learning). Could these methods potentially leak client privacy due to additional parameters or nonlinear operations? How would you ensure compatibility with existing privacy constraints?
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate your thoughtful questions and the opportunity to clarify these important aspects of our work.
> You mentioned in your response that “due to the randomness of noise introduced by each client, specific semantic information might be disrupted in some clients while remaining relatively intact in others, leading to semantic complementarity among different clients”. Does this imply that the method assumes a certain degree of IID data, and its effectiveness would decline when client data distributions are highly heterogeneous?
The FedCEO framework does **not** inherently assume IID data distribution. Our paper explicitly evaluates both IID and non-IID scenarios (see **Table 5 in Appendix C2.2**). Even under **highly heterogeneous** data distributions (specific experimental settings are detailed in our response to Reviewer vBZu's Q3), FedCEO consistently outperforms all baseline methods across different privacy configurations. Furthermore, as noted in our paper, the **parameter $\lambda$** can be adjusted to a relatively large value in heterogeneous settings to help each client **retain more personalized information** while still benefiting from collaborative learning.
The core mechanism of semantic complementarity relies on intrinsic correlations within clients' semantic spaces, which persist even in non-IID scenarios. For example, clients specializing in different animal categories (e.g., cats vs. dogs) still share **low-level features** (edges, textures) that can be collaboratively enhanced through low-rank tensor optimization.
> In Q3, you mentioned exploring alternative tensor optimization methods (e.g., deep tensor learning). Could these methods potentially leak client privacy due to additional parameters or nonlinear operations? How would you ensure compatibility with existing privacy constraints?
Any alternative tensor optimization methods (e.g., deep tensor learning) must comply with the post-processing theorem of differential privacy (Theorem 4.5).
For instance, if local DP employs Gaussian mechanisms, the tensor optimization method must satisfy:
- **The network only processes already noised parameters (post-DP data)**.
- **The tensor optimization model does not utilize Gaussian distribution priors during its formulation**.
Additionally, we can leverage existing **gradient inversion attacks** (see Appendix C2.4) to empirically verify the privacy guarantees of the tensor-optimized model parameters, providing further validation of the framework's security.
Thank you once again for your valuable feedback. Please feel free to let us know if you have any further questions. | Summary: The paper introduces FedCEO, a federated learning framework that aims to balance model utility and differential privacy by applying tensor low-rank proximal optimization (via T-tSVD) on noisy client parameters. The idea of smoothing the global semantic space is interesting, though its novelty is not entirely clear compared to existing spectral methods.
Claims And Evidence: The authors claim an improved utility-privacy trade-off (O(√d)) and effective adaptation to different privacy settings.
Methods And Evaluation Criteria: The approach leverages tensor low-rank approximation to mitigate the impact of differential privacy noise. This method is sound.
Theoretical Claims: The paper includes comprehensive proofs for its main theoretical claims.
Experimental Designs Or Analyses: The experimental section demonstrates that FedCEO can achieve competitive performance compared to existing methods. That said, the hyperparameter choices (such as λ and ϑ) are not thoroughly justified, leaving some questions about the robustness of the method in practical scenarios.
Supplementary Material: The supplementary material provides additional details and codes.
Relation To Broader Scientific Literature: The discussion does not fully differentiate FedCEO from related methods, leaving some ambiguity regarding its incremental contribution.
Essential References Not Discussed: There is room for a more comprehensive discussion on scalability in federated learning under differential privacy.
Other Strengths And Weaknesses: Strengths:
1.The paper addresses a significant issue by proposing a way to mitigate DP noise effects using tensor methods.
2.The combination of theoretical analysis and experimental results is a strong point.
Weaknesses:
1.The paper lacks intuitive visualizations that could better demonstrate its noise robustness.
2. Experimental evaluation is limited to simple datasets (only two datasets), which may not fully reflect performance in real-world scenarios.
3. FedCEO seems to be a shallow method. What are its advantages and disadvantages compared to deep methods? It would be ideal to compare it with newer deep methods to highlight the value and significance of the research in this paper.
Other Comments Or Suggestions: Traditional tensor methods have been extensively studied, and it would be more convincing to try using them to validate the effectiveness of this method within deep learning approaches.
Questions For Authors: See weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely appreciate your constructive feedback and the opportunity to improve our paper. Below are our point-by-point responses to your comments:
> **W1**
**Figure 4 in the paper** visualizes the robustness of our method to different noise levels on CIFAR-10. Compared to other methods, our FedCEO (orange curve) exhibits less utility degradation as noise (privacy) intensity increases, demonstrating stronger utility-privacy trade-off and robustness to privacy noise (as reflected by the **minimal slope of the orange curve**, second only to non-private FedAvg (blue curve)). In the revised version, we plan to include visualizations on additional datasets to further strengthen this conclusion.
> **W2**
In addition to EMNIST and CIFAR-10 used in the main text, we have also evaluated FedCEO on **text data (Sent140)** with LSTM (see **Appendix C2.3 Table 6**), where our method consistently demonstrates superior performance.
> **W3 & Other Comments/Suggestions**
FedCEO is a novel federated learning parameter update framework that is **architecture-agnostic** (operating on parameter tensors) and compatible with deep architectures (e.g., AlexNet, see Table 6). Its key advantage lies in efficiency—it avoids the high computational costs of deep personalized methods like PPSGD and CENTAUR (see efficiency experiments in Appendix C2.1 Table 4).
Furthermore, we have integrated the **deep tensor method CoSTCo [1]** (*modified for low-rank approximation task*) into our framework. Experimental results show that our approach (via T-tSVD) outperforms CoSTCo on CIFAR-10, likely because CoSTCo is more suited for sparse tensors and less compatible with noisy parameter tensors. Future work will explore deep tensor methods better aligned with FedCEO.
| Dataset | Model | Setting ($\sigma_g$) | FedCEO w/ CoSTCo | FedCEO w/ T-tSVD |
|----------|---------------|---------------|------------------|-----------------|
| | |1.0 | 48.31% ± 0.9% | **54.16% ± 0.2%** |
| CIFAR-10 | LeNet-5 | 1.5 | 44.70% ± 1.0% | **50.00% ± 0.5%** |
| | | 2.0 | 32.25% ± 0.3% | **45.35% ± 0.9%** |
[1] CoSTCo: A Neural Tensor Completion Model for Sparse Tensors (KDD 2019).
> **Experimental Designs/Analyses**
The initialization coefficient λ and geometric ratio ϑ are designed to **adaptively adjust the truncation threshold** $\frac{1}{2\lambda} \vartheta^{\frac{t}{I}}$ based on noise levels (Sec. 3.2). For stronger privacy guarantees (larger σg), smaller λ enhances smoothness (see Table 2), while ϑ > 1 accounts for accumulating DP noise during training. Detailed parameter selection guidelines are provided in Appendix C.1.
Beyond the robustness analysis of parameter I in Table 3, we have added the following analysis for ϑ:
| Dataset | Model | σ_g | ϑ=1.00 | ϑ=1.01 | ϑ=1.02 | ϑ=1.03 | ϑ=1.04 | ϑ=1.05 | ϑ=1.06 | ϑ=1.07 | ϑ=1.08 | ϑ=1.09 | ϑ=1.10 |
|-----------|-----------|------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|
| | | 1.0 | 50.09% | 53.62% | 53.85% | **54.16%** | 53.92% | 53.90% | 54.09% | 53.20% | 53.76% | 52.47% | 52.02% |
| CIFAR-10 | LeNet-5 | 1.5 | 48.89% | 49.33% | 49.85% | 48.71% | 49.30% | 49.10% | 48.92% | **50.00%** | 49.62% | 48.94% | 48.28% |
| | | 2.0 | 37.39% | 40.32% | 43.19% | 44.87% | **45.35%** | 44.50% | 44.81% | 41.32% | 43.90% | 39.75% | 40.27% |
It can be observed that when ϑ > 1, the model performance shows significant improvement compared to ϑ = 1, while demonstrating **strong robustness to this parameter when ϑ > 1**. We will also include a **visual analysis of mixed λ-ϑ robustness** in the revised manuscript.
> **Relation to Broader Literature**
FedCEO introduces a **client-collaborative tensor low-rank optimization framework** for global model updates in DPFL, explicitly leveraging inter-client semantic complementarity in spectral space. Unlike prior spectral methods (e.g., CENTAUR [Shen et al., 2023] and [Jain et al., 2021]) that apply singular value decomposition (SVD) independently to client matrices, FedCEO stacks client parameters into a **higher-order tensor** and performs truncated tensor-SVD (T-tSVD). This enables adaptive truncation of high-frequency components across clients while preserving low-rank structures, thereby improving global model utility through inter-client information fusion.
---
Rebuttal Comment 1.1:
Comment: After reviewing the authors' detailed responses, I am satisfied that my concerns have been addressed. The clarifications on the robustness of FedCEO (particularly the extensive evaluations on CIFAR-10, EMNIST, and Sent140) and the integration of additional experiments (such as the efficiency comparisons with methods like PPSGD and CENTAUR) convincingly demonstrate the method’s strengths. The added visualizations and discussions around the deep tensor method further solidify the paper’s contributions. So I am raising my score.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer,
We are pleased to hear that your concerns have been addressed. Thank you for acknowledging our efforts! Your valuable suggestions are of great importance in improving the quality of our paper. If you have any further concerns, please feel free to let us know. We are more than happy to answer them for you. | Summary: The paper introduces a new method for FL training with DP guarantees. The authors argue that the method improves on existing work in terms of the utility-privacy trade-off, supporting their algorithm with a theoretical analysis and experiments. The method is based on a tensor low-rank proximal optimization of the (stacked) local parameters at the server and intuition is provided about why this truncates high-frequency components in spectral space.
Claims And Evidence: The paper presents DP upper bounds on the epsilon, which are a usual in this field to evaluate the privacy guarantees of an algorithm. Experiments on EMNIST and CIFAR do present improvements in terms of accuracy, for a fixed noise level.
Some concerns are as follows:
- The theoretical improvement of the privacy-utility trade-off is stated for a joint objective which is a product of the utility improvement and the privacy guarantee. However, the paper does not discuss why this is a good way to merge the objectives, nor does it cite another work where such a discussion can be found. Why and when may the suggested product be preferable to alternative ways to mix the objectives, e.g. weighted average, or to a Pareto treatment, under which both objectives need to be improved for a method to be considered better?
- In the experiments, privacy is measured via the noise level $\sigma$ that is used. In general, the same $\sigma$ may lead to different levels of privacy for different learning algorithms. Why not use the guarantees provided by Theorems 4.4 and 4.5 (and similar) instead?
- Related to the last point, how do the guarantees in Lemma 4.4 and Theorem 4.5 compare?
Methods And Evaluation Criteria: The way of evaluating the proposed methods: by theoretical upper bounds and experiments with varying noise, are to my awareness standard in this literature.
Theoretical Claims: I have not checked the proofs, but the assumptions and type of bounds are sensible.
Experimental Designs Or Analyses: The structure of the experiments make sense. However, on some of the experiments in Table 1, the gains compared to prior methods are relatively small. Estimates of standard deviation (under different random seeds) will be helpful here, in order to evaluate the gains from the proposed methods better.
Supplementary Material: I have checked it to see the assumptions that are made for the theoretical analysis. If extra space is allowed in future versions of this manuscript, I definitely recommend pushing the assumptions to the main body.
Relation To Broader Scientific Literature: The paper is well-positioned with respect to prior work, as it claims a clear contribution and discusses related work in detail.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: NA
Other Comments Or Suggestions: - Algorithm 2 pseudo code contains return commands. The current way I read this is that the algorithm will terminate after this, which is not the intended interpretation. I suggest the authors double-check that.
Questions For Authors: See above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback and for taking the time to provide thorough reviews! Below are our point-by-point responses to your comments:
> **Claims And Evidence (Concern 1)**
We appreciate the reviewer's suggestion.
- The product form was chosen because it directly reflects the **joint tightness of the utility-privacy boundary**. This form highlights the intrinsic trade-off: improving one objective often requires relaxing the other.
- Secondly, we maintain the **same** joint objective formulation ($\epsilon_u \cdot \epsilon_p$) as the prior SOTA method CENTAUR [1], enabling direct theoretical bound comparisons in our paper. Moreover, as shown in Theorems 4.3, 4.5, and Corollary 4.6, the product form eliminates the influence of the stochastic factor K.
- Compared to weighted averaging [2] or Pareto optimization [3], the product form is more **concise and interpretable**. *We will add relevant discussions and citations in the revised version*.
- **Product vs. Weighted Average**: Weighted averaging (e.g., $\alpha \epsilon_u + (1-\alpha)\epsilon_{p}$) requires manual weight selection ($\alpha$), which may introduce subjectivity. The product form avoids weights and directly captures the coupling between objectives, making it more suitable for theoretical analysis. Additionally, weighted averaging is more susceptible to scale differences.
- **Product vs. Pareto Optimization**: Pareto requires simultaneous optimization of both objectives, but in practice, privacy and utility are often conflicting. The product form allows controlled compromise on one objective to achieve significant gains in the other, better aligning with practical needs.
[1] "Share Your Representation Only: Guaranteed Improvement of the Privacy-Utility Tradeoff in Federated Learning" (ICLR2023)
[2] "No free lunch theorem for security and utility in federated learning" (TIST2022)
[3] "Optimizing privacy, utility, and efficiency in a constrained multi-objective federated learning framework" (TIST2024)
> **Claims And Evidence (Concern 2)**
For all compared methods in experiments, we adopted the **same DP mechanism** (user-level DP) and **unified implementation** (based on Opacus). Thus, their privacy guarantees are identical (i.e., same $\epsilon_{p}$). In the revision, we will supplement the quantitative relationship between $\sigma_g$ and $\epsilon_{p}$ (e.g., adding an $\epsilon_{p}$ column in Table 1) based on Theorem 4.5, demonstrating our method's superior utility under identical $(\epsilon_{p}, \delta)$.
> **Claims And Evidence (Concern 3)**
Lemma 4.4 provides the privacy guarantee for the baseline algorithm UDP-FedAvg, while Theorem 4.5 proves that FedCEO's low-rank optimization (a deterministic operation without noise-dependent priors) preserves privacy (post-processing immunity). Thus, their bounds align. We will explicitly emphasize this in the revision.
> **Experimental Designs Or Analyses**
Thank you for the suggestion. We will add standard deviations in the revision (e.g., FedCEO 78.05%±0.2 vs. CENTAUR 77.26%±0.3 for EMNIST at $\sigma_g=1.0$), confirming statistically significant improvements.
| Dataset | Model | Setting ($\sigma_g$) | UDP-FedAvg | PPSGD | CENTAUR | FedCEO | FedCEO ($\vartheta>1$) |
|----------|---------------|---------------|------------------|-----------------|-----------------|-----------------|------------------|
| | | 1.0| 76.59% ± 0.8% | 77.01% ± 0.4% | 77.26% ± 0.3% | 77.14% ± 0.5% | **78.05% ± 0.2%** |
| EMNIST | MLP-2-Layers | 1.5 | 69.91% ± 0.8% | 70.78% ± 0.6% | 71.86% ± 0.2% | 71.56% ± 1.0% | **72.44% ± 0.8%** |
| | | 2.0 | 60.32% ± 1.2% | 61.51% ± 1.1% | 62.12% ± 0.9% | 63.38% ± 0.7% | **64.20% ± 0.6%** |
| | |1.0 | 43.87% ± 2.1% | 49.24% ± 0.9% | 50.14% ± 1.4% | 50.09% ± 0.5% | **54.16% ± 0.2%** |
| CIFAR-10 | LeNet-5 | 1.5 | 34.34% ± 0.7% | 47.56% ± 1.6% | 46.90% ± 0.9% | 48.89% ± 0.6% | **50.00% ± 0.5%** |
| | | 2.0 | 26.88% ± 2.8% | 34.61% ± 0.7% | 36.70% ± 2.4% | 37.39% ± 1.1% | **45.35% ± 0.9%** |
> **Others**
We will correct the usage of `return` in Algorithm 2 (e.g., replacing it with `yield` or annotations) and move key assumptions to the main text in future versions.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response, which clarifies most of my questions.
However, I am still unsure why the product $\epsilon_u \epsilon_p$ is a better measure of performance compared to any other metric. The author say that the product avoids a hyperparameter, however one can equally consider $\epsilon_u^{\alpha} \epsilon_p^{\beta}$ and ask about how to select $\alpha$ and $\beta$.
Has prior work argued why this is a good metric to look at?
Can something be said about the improvements on the individual bounds on the utility and privacy? How do these individual bounds compare to prior work in the field?
---
Reply to Comment 1.1.1:
Comment: Thank you for your feedback. Regarding your questions about the trade-off objective formulation, we respond as follows:
> However, I am still unsure why the product $\epsilon_u \epsilon_p$ is a better measure of performance compared to any other metric. The author say that the product avoids a hyperparameter, however one can equally consider $\epsilon_u^{\alpha} \epsilon_p^{\beta}$ and ask about how to select $\alpha$ and ask about how to select $\alpha$ and $\beta$.
**Without introducing artificial weight priors** ($\alpha$ or $\beta$), the product form $\epsilon_u \cdot \epsilon_p$ as a trade-off objective demonstrates **greater robustness to the scale differences** of both utility and privacy metrics compared to the additive form $\epsilon_u + \epsilon_{p}$. $\epsilon_u \cdot \epsilon_p$ more **fairly** reflects their dynamic variations. The additive form primarily captures changes in the larger-valued metric while being insensitive to the other, thus requiring careful manual weight tuning. For example, when $\epsilon_{p} = 10$, if $\epsilon_u$ (utility loss) changes from 0.01 to 0.1 (a severe 10× utility degradation), the additive trade-off value would change by **less than 1%**. This makes it difficult to detect when the overall trade-off is disrupted.
Furthermore, the product-form objective offers stronger interpretability that **aligns with practical optimization goals**. As illustrated in Figure 4, it corresponds geometrically to the area of rectangles formed by each $(\epsilon_p, \epsilon_u)$ data point. We will incorporate this analysis in the paper's subsequent version.
We acknowledge that introducing weight parameters (to either additive or product forms) could help manually adjust the relative importance of different metrics (model utility versus user privacy). We sincerely appreciate this suggestion and will explore it in future work.
> Has prior work argued why this is a good metric to look at?
Prior work have thoroughly validated the effectiveness of the product metric, such as in Corollary 5.1 of CENTAUR [2]. For fair comparison, we followed their objective formulation in our paper and achieved at least a $\sqrt{d}$-order improvement over their utility-privacy trade-off bounds.
> Can something be said about the improvements on the individual bounds on the utility and privacy? How do these individual bounds compare to prior work in the field?
Certainly. For **utility**: Compared to [1] (Theorem 5.2), we (Theorem 4.3) achieve an **$O(d)$** improvement. Compared to [2] (Theorem 5.1), we achieve an **$O(\sqrt{d})$** improvement. This benefits from our method's flexible collaboration of semantic information across clients. For **privacy**: Our Theorem 4.5 guarantees $(\epsilon, \delta)$-user-level DP that is no weaker than previous methods.
[1] "Differentially private model personalization" (NeurIPS 2021)
[2] "Share Your Representation Only: Guaranteed Improvement of the Privacy-Utility Tradeoff in Federated Learning" (ICLR 2023)
Thank you once again for your valuable insights, which will undoubtedly help us refine and strengthen our work. If you have any further concerns, we would be happy to discuss them with you! | null | null | null | null | null | null |
Which Attention Heads Matter for In-Context Learning? | Accept (poster) | Summary: The authors investigate the nechanisms behind in-context learning (ICL). Specifically, they study about two special types of attention heads called "induction heads" and "function vector heads" (FV heads). By detecting these two heads, they find that: 1) The induction heads and FV heads are distinct; 2) FV heads are mainly response to the ICL; 3) Some FV heads evolve from induction heads.
## Update after rebuttal
I think the most important concerns have been solved, if all the mentioned modifications are finally applied. Now I think the paper is OK to be accepted but I feel a litter unconfident because the large difference between these two version (and I cannot see the complete changing log due to the ICML policy --- Honestly, I feel somewhat disappointed about restricting the depth and format of rebuttals, especially when both authors and reviewers are willing to engage in more thorough discussions.) I will raise my score to 3.
Claims And Evidence: These paper focus on the roles and relations between FV heads and induction heads. The claims are well expressed in Table 1 and these claims are interesting. However, I am not sure whether these claims are well supported:
1. Although the experiments are operated on 14 models, the selection of these models are strange. 1) The Pythia models (released in 2023) and GPT-2 models (released in 2019) are somehow not most newest models and there is only one model in Llama family. 2) There are only Pythia-6.9B and Llama-2-7b models are large models. A common problem in interpretability is that phenomena that originally seemed correct may disappear as the model size increases.
2. The conclusion that "induction heads and FV heads are mostly distinct" are based on the chosen threshold 2%. I wonder if the results are robust to the threshold. I hope the authors can provide an analysis on the threshold.
3. In figure 19, the locations of FV heads and induction heads are significantly distinct only for the small models. For, GPT-2 Large, GPT-2 XL, Pythia 6.9B, and Llama 2-7B, the p-value is large and cannot get the conclusion that the FV heads and induction heads are distinct. The authors should explain why they choose these models and how the results can be generalized to other models.
4. Not a drawback but a suggestion: The current section 5 is only about the heads. I believe the conclusions can be more clear if the ICL accuracy is also tested during the training. I wonder if the ICL accuracy is also related to the occurrence of FV heads. These experiments may be also helpful to verify some of your conjecture in section 6.
Additionally, In section 4.2, the authors state that the token-loss difference and few-shot ICL are different things. I am curious about what in fact the token-loss difference is, and why the induction heads are most responsible for the token-loss difference. Even an intuitive explanation would be helpful.
Methods And Evaluation Criteria: The main methods in the paper is locating the FV heads and induction heads, where the probing metrics are proposed in previous works. They also use mean ablation of heads, which is also a typical method to test the function of heads. These methods are not innovative but reliable. One of their contributions is to find that the difference between the previous metrics "token-loss difference" and "ICL accuracy". I believe this is a good contribution but I still hope the authors can provide more explanation about what the token-loss difference actually reflect.
Theoretical Claims: No theoretical claims
Experimental Designs Or Analyses: The experimental is well designed and the logic is clear. First, the authors detect these two heads and then analyze their roles and finally talk about the evolution. Some concerns about the analysis are mentioned in the "Claims and Evidence" section.
Supplementary Material: I read about A.1, A.2, A.3 and A.9. The supplementary can help understand the paper better.
Relation To Broader Scientific Literature: I have no idea about the relation to broader scientific literature. It is a work about interpretability of ICL ability of LLMs. The explanatory work may shed some light on related research, but I am not currently aware of any direct applications of its conclusions.
I think that work about explaining mechanism has its own value, and it may not be directly applied. But some people may worry about the application value, especially considering that the capabilities of few-shot ICL have been replaced to some extent by instruction tuning, zero-shot ICL, or RLHF. For me, the lack of direct application value is not a disadvantage, but if some more practical applications can be pointed out (out of the interpretability field), it may be more inspiring for other readers.
Essential References Not Discussed: Essential references are discussed. But the discussion is highly mixed with their own methods. It is the most shortcoming of this article.
There could be some relatively related works that can be included in the related work. For example, there are some works explaining the ICL ability from the perspective of Bayesian inference [1] or gradient descent [2,3]. These works are better to be included in the related work.
[1] S. Xie, et. al., An Explanation of In-context Learning as Implicit Bayesian Inference
[2] R. Ren and Y. Liu, Towards Understanding How Transformers Learn In-context Through a Representation Learning Lens
[3] H. Sun, et. al., In-Context Learning of Polynomial Kernel Regression in Transformers with GLU Layers
Other Strengths And Weaknesses: 1. These paper further investigate the training dynamics of the models. The conclusion about how the functions of heads evolve (or change) during training is very interesting and important.
2. Their discussion in section 6 is very interesting and inspiring. I hope the authors can further investigate them in the future.
Other Comments Or Suggestions: I believe the organization of the paper needs to be improved. The current article is poorly readable.
1. There are many repeated expressions:
1. In the beginning of section 4, the authors have state that "we control for the correlation between induction and FV heads by only ablating induction heads that do not have high FV scores, and vice versa". However they repeat this point in line 263-267: "we take the top n heads by FV score that do not appear in the top 2% heads". If the specific statement has been appear, the upper one can be removed.
2. The figure 1 is copied from the figure 3 and figure 4. Although the author may think that this helps the reader understand the content of the article more quickly, it is still a form of repetition that should be avoided.
3. The figure 5 is just the average value of figure 6. The figure 6 is enough to express the conclusion.
2. I recommend the section 2 background & related work should be splitted into two sections. The "related work" should introduce the ideas of related work and also the relation between theirs and the work. Too specific discussion should not be included in the section. On the other hand, some preliminary like how they define their detectors can be stated in a preliminary subsection. Further, all the statement about your methods (even if these methods are not proposed in these paper) should be placed into a method section. For example, the statement in line 125-150 about how the authors use the causal mediation analysis to identify FV heads, the statement in line 100-114 about how to compute the induction score should be moved to your method section. As for the section 2.3, it could be in introduction or after your experiments.
3. The table 2 can be moved to the appendix. The authors can keep only the "parameters" (and maybe the "|L|" column) in the main text.
4. Now, the page 7 only have three figures and the text explaining them are far from them. Please arrange the size and distribution of the images reasonably so that they appear in the most reasonable position possible to ensure readability. For example, you can together the three models in one figure in Figure 5 and use different color or marker to distinguish them.
5. Please add a limitation section.
6. The figures could be further improved. For example, the markers should be added in line chart. The upper and right border of the figures can be removed. Some grids can be added. In figure 4, some sub caption could be helpful to understand the figures in each row.
7. The figure 10 should add more explanation in its caption.
8. The figure 11 has an empty image.
Questions For Authors: 1. My main concern is the poor structure of the article, especially the repetitive expressions, the confusion of related work, background introduction and methods, which seriously affected my understanding of the content of the article. I approve of the content of the article but the organization it is currently presented makes it not ready for publication.
2. How can the results be generalized to newer and larger models? At least some Llama-3 and qwen-2.5 models should be included. It is better to include some MoE models or models with GQA. As for the param size, I believe the location of heads can be operated on a 70B model on one A100 GPU. (If I am wrong, please correct me.)
3. How does the conclusion can be applied, especially considering the few-shot ICL has been replaced by some other methods?
If the question 1 has been solved, I will raise my score to 2 or 3. And if the question 2 has been solved, I will raise my score to 3 or 4. If the question 3 can be explained well, I will raise my score to 4 or 5.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for the detailed feedback, and we especially appreciate the clear questions to resolve reviewer concerns!
We first address reviewer concerns on the claims:
1. We used Pythia models mostly to facilitate the analysis over training dynamics, which other open-source models do not enable since they do not release intermediate checkpoints. We will add Llama 3 8B and Qwen 2.5 7B to our paper - as these models were not yet released during the development of this paper.
2. In many cases, we had to decide on a threshold to differentiate meaningful FV/induction heads from the long tail of other heads. The 2% threshold was carefully chosen following previous work by Todd et al. (2024), and after we verified this threshold does seem to meaningfully separate important heads (Figure 7 in appendix shows the 2% threshold does select heads outside the cluster of other low-scoring heads - taking a higher percentage would select heads with FV scores close to 0). If we take the top 5% of heads, there would be between a 20-40% overlap (except for our smallest model with a 67% overlap) - which is still a small enough overlap to motivate our subsequent experiments in comparing the two sets of heads. The main goal of the claim on overlap is to establish that the two sets of heads we're studying are not identical, and furthermore, our ablation and training experiments later that shows different trends for FV vs. induction heads reinforces that the two are distinct types. Beyond a 5% threshold, we would be selecting too many heads with very low FV and induction scores that it would not be meaningful to consider them FV/induction heads.
3. We agree that the observation on layer depths is not statistically backed up - we will clarify that this observation is speculative.
4. Great suggestion! We have computed and added the evolution of ICL accuracy to Figure 5 of section 5: we observe that in all models, few-shot ICL accuracy begins to improve around the same time as when induction heads appear, and continues to gradually increase throughout training until the end. Since ICL accuracy continues to improve even after the formation of induction heads, we speculate that this suggests the sharp emergence of induction heads contributes to an initial rise in ICL performance, but the emergence of the FV mechanisms contributes to further improvements in ICL (which reinforces our conjecture 1).
We also address the questions at the end:
1. Thank you very much for pointing this out and for the thorough, actionable feedback under "Other comments or suggestions"! We have incorporated your feedback to remove repetitions; reorganize the structure of related work / background / methods; added suggested references to related work; edited Table 2 in the main body to show the model, parameters and |L| only (while putting the rest in the Appendix); moved figures in page 7 to be closer to their text; added a limitation section (where we discuss limitations on the complexity of ICL tasks studied and the scale of the models in our analysis); added additional markers, grids, subcaptions to figures; removed unnecessary borders and empty images.
2. We are currently repeating our experiments on Llama 3 8B and Qwen 2.5 7B. We will also try to include models in the 12-13B parameter range that might fit in A100s. However, we cannot perform experiments on larger models since, to compute the FV score, we need to store several caches of all the attention head activations - we will suggest experiments on larger models for future work.
3. Our conclusion mainly serves to clarify misunderstandings in the current interpretability literature, that attributes few-shot ICL to induction heads, as well as providing general lessons for interpretability such as how interpretability conclusions change with model scale, and the definition of the metric used to measure ICL can also affect conclusions. Since few-shot ICL is still an active field of study in interpretability, we believe our conclusions are very important to share with the field, and we hope that our general lessons for interpretability will also apply to future interpretabiltiy research on other methods!
Thank you again for your feedback, please let us know if any questions or concerns remain unsolved!
---
Rebuttal Comment 1.1:
Comment: I am very grateful for the author's careful response, and I believe that the author can improve this work based on it. I would like to emphasize again that I highly recognize the impact and value that this work may have if some of the problems (which I think are easy to solve, such as adding newer models and improving the article structure) are solved. However, given that the new experiments have not been completed and the updated article structure has not been confirmed, I will temporarily keep my score unchanged. If the author can show the results of their improvements to this work as much as possible in the remaining time, I will increase my score. (Considering that the PDF cannot be updated in this rebuttal, I think the author can partially show the results of their modifications, such as listing the modified related work chapter in the form of an outline or summary, and the specific content can be temporarily omitted. At the same time, modifications to some images and tables can be provided in the form of anonymous links.)
p.s. In my understanding, the process of calculating FV can be completed by recording the activation values on the disk. There is no need to keep the complete activation values on the GPU because it only involves forward propagation. At the same time, model quantization may help reduce GPU memory. Although this may bring some errors, it is better to have a certain approximate experiment than not.
---
Reply to Comment 1.1.1:
Comment: Thank you for acknowledging our rebuttal and for your additional comments requesting the results of our improvements! We also appreciate you emphasizing that our work is impactful and valuable, once we added new models and improved the article structure! We apologize for not sharing the results in our original rebuttal, and for the delay (since we had to wait to get results on Llama 3 8B and Qwen 2.5 7B).
---
First, we provide the plot for the result we described in point #4 of our previous rebuttal comment [here](https://drive.google.com/file/d/1Zz-HZX1jz3I2TZmUnxidRjg0D8crLcpS/view?usp=drive_link). This plot shows the evolution of ICL accuracy during training, as suggested by the reviewer in point 4 under “claims and evidence”.
---
Second, we outline the structure of the background and related work sections we have adopted in the current revision following suggestion #2 by the reviewer [here](https://drive.google.com/file/d/1Y70aiA901Qwu_jeGLxAaWcH9SXWJkb2q/view?usp=drive_link).
We have also moved the previous section 2.3 on reconciling divergent findings into section 4.3 after our experiments, and revised the text in this subsection to remove repetitions.
---
Third, we show further results for other suggestions on the paper presentation the reviewer has proposed under “Other comments or suggestions”:
- Suggestion #3: updated Table 2 (screenshot [here](https://drive.google.com/file/d/1BThqIpyDtnMt7xx9B6nPG2GR-ce_Sl4g/view?usp=drive_link))
- Suggestion #5: we added a limitations section. Outline: limitation on the complexity of ICL tasks studied, limitation on the scale of models studied, enumeration of observations made in the paper that are not yet empirically verified with statistical significance.
- Suggestion #6: we have added additional markers and grids for all line plots. For example, the figures [here](https://drive.google.com/file/d/1Zz-HZX1jz3I2TZmUnxidRjg0D8crLcpS/view?usp=drive_link) or [here](https://drive.google.com/file/d/1RQ0OZGFS4LPMoCbh26A2niLAtFvN45JR/view?usp=sharing) shows the markers and grids we have added.
- Suggestion #7: we modified the caption of figure 10 to “Few-shot ICL accuracy after ablating induction and FV heads, using either random ablation method (rows 1 and 3) or zero ablation method (rows 2 and 4). Overall, the observation that ablating FV heads decreases ICL accuracy more than induction heads, is robust against different methods of ablation.”
- Suggestion #8: we removed the empty image in Figure 11 [here](https://drive.google.com/file/d/1RQ0OZGFS4LPMoCbh26A2niLAtFvN45JR/view?usp=sharing).
As for the remaining suggestions, we have removed repetitions in suggestion #1 and rearranged figure positions as suggested in #4 (difficult to show without the context of the full paper).
---
Fourth, we provide plots for the ablation experiments with exclusion for Llama 3 8B [here](https://drive.google.com/file/d/1_m78eZ8XQlmv5hQ3QRMWjS7oVmXUXE0P/view?usp=drive_link) and Qwen 2.5 7B [here](https://drive.google.com/file/d/1wo6e1c1wSsTQSk2n1dbNtZ0UfsbTgFNT/view?usp=drive_link). Both plots show similar trends as the other models we have studied of similar parameter size, and strengthens our claim that FV heads contribute more to few-shot ICL accuracy than induction heads.
---
Finally, in our previous rebuttal, we missed your question at the end of “claims and evidence” on the intuitive explanation for token-loss difference, apologies for that! Intuitively, TL difference measures how much more the model is accurate at predicting the 500th token, relative to predicting the 50th token. This has been used as a proxy for measuring the model’s context utilization (since if the model is leveraging context, it would become better at predicting tokens in later positions than earlier). Induction heads may be important for TL difference since by definition, induction heads retrieve information from the context using the pattern-matching and copy mechanisms.
---
Thank you once again for your careful review and suggestions! We hope that the additional results have addressed your questions 1 and 2 at the end of your review, and we have also responded to question 3 at the end of our initial rebuttal comment! | Summary: This paper studies the functionality of attention heads in in-context learning. Specifically, two functionalities are investigated, namely induction heads and function vector heads. By using the established metrics to locate induction heads and function vector heads across layers, several observations have been made. For example, induction heads and FV heads are distinct but correlated; FV heads are more influential for ICL performance compared to induction heads; FV heads appear in higher layers while induction heads appear in lower layers; FV heads evolve from induction heads. Experiments on language models of different scales across a number of ICL tasks reveal these patterns, together with detailed analysis.
Claims And Evidence: The paper proposes several claims. Most of them are revealed from empirical results, but some of them are not strongly supported.
- The authors claim that "induction heads appear in early-middle leayers and FV heads appear in slightly deeper layers". However, it is not so evident from Figure 2. The average layer may not sufficiently inform the distribution of heads. It is also not clear to me why the authors choose 2% as the selection criteria.
- The authors claim "The induction heads and FV heads are distinct", but "they are correlated". Figure 3 shows that FV (induction) heads contain 90-95 percentile of induction (FV) heads. That means, whether induction heads and FV heads are overlapping or not depends on how you set the threshold (in this paper, the authors set it as top 2%). In that sense, the above claim seems not rigorous to me.
- The conjecture that "FV heads implement more complex or abstract computations than induction heads" is not empirically validated or analyzed.
- The conjecture about the evolution of induction heads to FV heads does not seem rigorous to me.
Methods And Evaluation Criteria: The method and evaluation criteria make sense.
Theoretical Claims: There is no theoretical claim.
Experimental Designs Or Analyses: I have checked the validity of the experimental designs and analyses.
Supplementary Material: I have reviewed the supplementary material.
Relation To Broader Scientific Literature: The relationship between induction heads and FV heads and their distributions are relevant to LLM interpretability and the broad area of ICL behavior analysis. The findings could be useful for future research in this area.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths:
- The investigation between induction heads and FV heads is interesting and meaningful to future research.
- Extensive controlled experiments are made to uncover the characteristics of the two heads and their relationships.
- A number of interesting findings are provided which could potentially guide ICL design and LLM training.
Weaknesses:
- The ICL tasks being tested are a bit synthetic, such as "Capitalize first letter", "choose first of list". Incorporating more realistic tasks could strengthen the impact of the analysis.
- Some claims are not very strongly supported. Please refer to the "Claims And Evidence" section for more details.
- The discussion of token loss is not clear to me. More explanations on why token loss can be used as the metric and why it presents opposite pattern compared to ICL accuracy should be given.
- The evaluated LLMs are mostly Pythia and GPT-2. More advanced and update-to-date models could be incorporated to make the conclusion general and appropriate in the current development.
- The paper lacks insightful suggestions for future development of ICL or LLMs. Given the findings in this paper, what are critical takeaways when developing future ICL or LLM training techniques?
Other Comments Or Suggestions: NA
Questions For Authors: Refer to the above.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for your feedback! We will first address the concerns around claims and evidence:
* On the layer distribution, we will clarify that the claim on the layer depth difference is more of a speculative observation than a claim sufficiently backed up statistically.
* On the selection of 2%: the 2% threshold was carefully chosen following previous work by Todd et al. (2024), and after we verified this threshold does seem to meaningfully separate important heads. Figure 7 in appendix shows the 2% threshold does select heads outside the cluster of other low-scoring heads - taking a higher percentage would select heads with FV and induction scores close to 0, which would prevent meaningful analysis. We also repeated ablation experiments taking 5% and 10% as thresholds, and find that our claims are robust to the thresholds.
* On head overlap: the main goal of the claim on overlap is to establish that the two sets of heads we're studying are not identical, to motivate subsequent experiments comparing the two sets. Our ablation and training experiments later that shows different trends for FV vs. induction heads further reinforces that the two are distinct types. If we take the top 5% or top 10% of heads, there would be between a 20-40% overlap (except for our smallest model with a 67% overlap for top 5% and 80% overlap for top 10%) - while taking 5% and 10% as the threshold does not make as much sense since it includes heads with FV and induction scores close to 0, it still shows that the overlap is low enough to justify treating the two sets of heads as distinct heads for study.
* On the rigor of conjectures: we would like to clarify that these are mere conjectures, inspired from observations in our study, but that remain to be rigorously proven and we are not positing them as empirically verified claims. We leave these conjectures as discussions and interpretations of our empirical findings, and to inspire future analyses.
We also address weaknesses raised by the reviewer:
* On ICL tasks: our study combines ICL tasks that are synthetic but require reasoning that is achievable by the models we study, with realistic tasks (such as translation, commonsense question answering, news topic classification, etc.) so we believe that the selection of ICL tasks is sufficient for our analysis. We do appreciate the suggestion for realistic tasks, and will consider adding further tasks that is feasible by our models (such as sentiment classification).
* On token loss: the difference between TL difference and few-shot ICL accuracy was first described in Olsson et al. (2022), but we will include a discussion in our revision as well. In short, TL difference approximates how much the model is using context by measuring how much the model's loss decreases on tokens further in context than earlier. When we observe high ICL accuracy with low TL difference (which is what we observe when we ablate random heads and induction heads for example), this means the model preserves its ICL abilities but loses other abilities associated with learning important signals from context. These other signals could capture non-ICL signals such as remembering entities, tracking style from context, etc. It would be great for future work to analyze what is being lost when we see these drops in TL difference!
* On newer models: we are currently reproducing all experiments except section 5 on training dynamics on Llama 3 8b and Qwen 2.5 7B. Since our conclusions have applied to three families (GPT-2, Pythia, Llama 2) that were released in different years (2019, 2022, 2023), we would expect our conclusions to generalize to the models released in 2024 as well. If surprisingly they don't, we would still like to share the interesting negative results!
* On suggestions for future development: while papers in the interpretability literature are often not expected to provide this (e.g. the Olsson et al. (2022) and Todd et al. (2024) papers that we compare the mechanisms for do not) as the interpretability contributions are standalone from model development, we will include a discussion on how evidence of the higher influence of FV heads may suggest that this is a more effective mechanism than induction heads. This suggests that ICL can be optimized in small models by using training methods that promote the formation of FV heads.
Thank you again for your feedback, please let us know if any questions or concerns remain unsolved!
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' efforts in responding to my questions. I have a better understanding about the difference between TL and ICL accuracy. Given that there are still some claims without statistical or empirical evidence (more like speculation and observation), I will keep my original score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your acknowledgement of our rebuttal, and we are glad to hear that our explanation helps clarify the difference between TL and ICL accuracy!
We would also like to clarify that the main claims of the paper (findings 3 and 4 in Table 1) are all rigorously backed up with empirical evidence.
On the other hand, the two first claims you brought up under “Claims and Evidence” serve to set the context for our subsequent experiments (findings 1 and 2 in Table 1). Then, the last two points you brought up under “Claims and Evidence” are, as you mentioned, conjectures - they are speculative ideas that our results may suggest, but not empirically proven yet, and we include these conjectures in our paper for a high-level discussion of our work and implications for future research in interpretability.
We hope this clarifies that the claims we do make are all sufficiently backed up by evidence, and the others are made clear in our paper as being either context-setting or speculative. Please let us know if there is wording in our paper that misleadingly suggests otherwise - we will adjust the wording accordingly! | Summary: This paper presents a comprehensive study on attention heads and their impact on in-context learning. Specifically, it examines two types of attention heads: induction heads and function vector heads. Through ablation studies, the authors found that function vector heads play a more significant role in in-context learning performance, while induction heads are more crucial to the token-loss difference. Interestingly, the two types of heads also seem to be correlated.
## update after rebuttal
Given that the authors provided additional results to address my concerns, I increase my score to 3.
Claims And Evidence: Most of the claims are well supported by the evidence. However, I have some concerns regarding the choice of percentile for identifying different types of attention heads. It appears that you use the top 2% as a threshold to classify heads as induction heads, function vector heads, or neither. How is this percentile determined? As shown in Figure 3, while most of the top 2% induction heads do not have top 2% FV scores, they still exhibit relatively high FV scores. Consequently, the ablation study in Section 4.1 is not entirely convincing to me. The heads being ablated, though not in the top 2% of the other category, may still be in the top 5% or 10%, which is still very high to me. The experiments with “exclusion” seems not so exclusive.
Methods And Evaluation Criteria: They make sense.
Theoretical Claims: No theoretical claims.
Experimental Designs Or Analyses: I check all the experiments. One issue that I have pointed out in "Claims And Evidence" section.
Supplementary Material: No supplementary material provided.
Relation To Broader Scientific Literature: It helps us to understand the underlying rationale of large language models.
Essential References Not Discussed: None.
Other Strengths And Weaknesses: Strengths
- Comprehensive empirical studies on induction heads and function vector heads and their impact on in-context learning.
- Interesting finding that function vector heads contribute more to performance than induction heads.
- Several models including different sizes are studied.
Weaknesses
- The selected model are relatively old. I understand part of the reason is the need to train the models. However, for those experiments do not involve training, I would like to see some more advanced models to know if the conclusion can be applied to them as well.
- As mentioned in the "Claims And Evidence" section, the way to decide the threshold needs more discussion. For example, the current conclusion is there is no significant overlap among the top 2% heads. How about top 5% or top 10%? How strong is the correlation between the two types of heads? Especially, you argue that previous work wrongly interprets the impact of induction heads because some induction heads that also behaved like FV heads. However, in the following section, you argue that there is no significant overlap, which sounds a bit contradictory to me. I suggest more discussions, perhaps along with more analysis, are needed.
- Similarly, when you ablate one type of head, it would be much helpful if the average score/percentile of the other type can be reported as well. I do not think the current design well separates the effect of the two types of heads.
Other Comments Or Suggestions: - It would be great if the performance of 11 out of 37 tasks and the new 8 tasks can be separately displayed. The performance gap helps to understand if the conclusion can be really generalized to new tasks.
Questions For Authors: Please see the weaknesses above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your feedback!
We will first respond to the questions about claims and evidence:
* Choice of percentile: In many cases, we had to decide on a threshold to differentiate meaningful FV/induction heads from the long tail of other heads. The 2% threshold was carefully chosen following previous work by Todd et al. (2024), and after we verified this threshold does seem to meaningfully separate important heads (Figure 7 in appendix shows the 2% threshold does select heads outside the cluster of other low-scoring heads - taking a higher percentage would select heads with FV scores close to 0).
* Ablations with exclusion: we repeated our ablation with exclusion experiments on Pythia-6.9B taking 5% and 10% as the threshold. Overall, we observe the same trends as with 2%, where ablating induction heads does not hurt ICL much more than random, while ablating FV heads quickly significantly degrades ICL performance. Our ablations are robust to the threshold chosen for exclusion. We will repeat these experiments with other thresholds for our other models, and add them to the Appendix.
We will also address weaknesses:
* Newer models: we are currently reproducing all experiments except section 5 on training dynamics on Llama 3 8b and Qwen 2.5 7B. Since our conclusions have applied to three families (GPT-2, Pythia, Llama 2) that were released in different years (2019, 2022, 2023), we would expect our conclusions to generalize to the models released in 2024 as well. If surprisingly they don't, we would still like to share the interesting negative results!
* Threshold selection: we will elaborate in our revision the rationale behind choosing 2% - that in addition to following prior work, we also empirically verified in Figure 7 that selecting a higher threshold such as top 5% or 10% would end up selecting attention heads with FV and induction scores that are close to 0, which would prevent meaningful analysis. We have also previously computed the correlation between the two scores, but the correlations are not informative due to the long tail of heads that are neither high on FV or induction.
As for the question about contradiction, we would like to clarify that we first established a lack of significant overlap to motivate our subsequent experiments that study FV and induction as two separate phenomena (if the two heads significantly overlap and correspond to the same set of heads, we no longer have a reason to compare these two heads). However, while the two sets of heads are mostly distinct, there are some heads that have both induction and FV properties which may have confounded earlier studies (e.g. heads in purple on Figure 7). Thank you for raising this confusion, we will clarify this in our revision!
* For our ablations, we have also computed and plotted the average scores and percentils of the heads preserved. We have added these plots to our appendix.
As for your last suggestion on separating the performance, we have plotted the ablations for each task in Figures 15-18 of the Appendix. Following your suggestion, we have added markers for the 8 new tasks to better differentiate them from the other 11 tasks.
Thank you again for your feedback, please let us know if any questions or concerns remain unsolved!
---
Rebuttal Comment 1.1:
Comment: Thanks for your response.
Since this time ICML allows anonymous external links for supplementary tables and figures during the rebuttal, could you share the suggested results you have so far? If they seem reasonable and adequately address my concerns, I will consider adjusting my score.
- Ablation studies with exclusion experiments at 5% and 10%
- Experiments on Llama 3 8B and Qwen 2.5 7B (I personally think this is important)
- Figures including averages and percentiles
- Results that separate 8 new tasks
Regarding contradiction, I still feel a bit confused. If two sets of heads are mostly distinct, that means only a few overlapping heads. Can those few heads (way less than 2%) have such a big impact in the previous studies? I don't think the currently provided evidence support the claim.
---
Reply to Comment 1.1.1:
Comment: Thank you for acknowledging our rebuttal and for your additional comments! We apologize for not sharing the results in our original rebuttal, and for the delay (since we had to wait to get results on Llama 3 8B and Qwen 2.5 7B).
* Please find ablations excluding [5%](https://drive.google.com/file/d/1Vtg18GFo941uTWaXqDCUoz0hqaZSa1JN/view?usp=drive_link) and [10%](https://drive.google.com/file/d/1svDqeYCbhz9xgN2F9gIp6GadQm3-6g9d/view?usp=drive_link) of heads in Pythia 6.9B as hyperlinks on the percentages. As stated earlier, both plots show similar trends to the ablation with exclusion experiments in our paper taking 2% as the threshold.
* We provide plots for the ablation experiments with exclusion for Llama 3 8B [here](https://drive.google.com/file/d/1_m78eZ8XQlmv5hQ3QRMWjS7oVmXUXE0P/view?usp=drive_link) and Qwen 2.5 7B [here](https://drive.google.com/file/d/1wo6e1c1wSsTQSk2n1dbNtZ0UfsbTgFNT/view?usp=drive_link). Both plots show similar trends as the other models we have studied of similar parameter size, and strengthens our claim that FV heads contribute more to few-shot ICL accuracy than induction heads.
* We plotted the average induction scores and percentiles for the remaining heads after ablating induction and FV heads [here](https://drive.google.com/file/d/1RFtSfCnZieSe_lYG54HeqAwGuVrsFjEv/view?usp=sharing). The plots show how when we ablate induction heads, the induction scores of the remaining heads steadily decreases as we increase the ablation percentage, but when we ablate FV heads, the induction scores of the remaining heads does not change much with ablation percentage.
Similarly, we plotted the average FV scores and percentiles for the remaining heads after ablating induction and FV heads [here](https://drive.google.com/file/d/1Y-v8MMiFKUPNpj1mgOoOlq540gk_Zf2Z/view?usp=sharing).
* To separate the 8 new tasks, we re-generated the plots in Figures 15-18 while bolding the name of the tasks that are new. [Here](https://drive.google.com/file/d/1Ixexj5ZwRDYjp1lIiBr3SeEbIfcHx-5g/view?usp=drive_link) is an example of the new plot for Pythia 6.9B.
Regarding the contradiction: the impact of the overlapping heads is best shown in the difference between our ablations without and with exclusion (Figure 4 row 1 and 2). Most importantly, the induction plot (in blue) is behaviorally different in large models between ablations with and without exclusion. The only difference is that in the ablation with exclusion, we preserved FV heads that overlap with the top n% induction heads (n being the ablation percentage), so this set of heads do meaningfully impact conclusions of ablation studies.
In addition, although the two sets of heads are distinct (which motivates our comparative study), the top 2% of FV heads have induction scores at around the 90-95th percentile, and vice versa. This means that once we start ablating more than 5% of heads, there would be a higher overlap between the two sets of heads. This is more clearly shown in Figure 21 of our Appendix, where we plot the overlap percentage of the two types of heads according to the ablation percentage chosen.
We hope that the supplementary results and the additional clarification on the overlapping heads alleviate your concerns! | Summary: This paper investigates the mechanisms behind ICL. Specifically, it focuses on analyzing two popular explanations for the mechanism of ICL: induction heads (token-copying) and function vector (FV) heads (task-encoding). Through extensive experiments on 12 LLMs and 45 tasks, multiple interesting findings are presented:
- Induction heads and function vector heads are distinct but correlated.
- FV heads matter the most for few-shot ICL, especially in larger LLMs. Particularly, the effect of ablating induction heads while preserving top FV heads is close to random, which offers new insight into the effect of induction heads.
- Many induction heads transition to FV heads during training. Induction heads are simpler and learned earlier, whereas FV heads are more complex.
Claims And Evidence: The claims are well supported by the extensive experiments presented in this work.
Methods And Evaluation Criteria: The evaluation criteria for the three major findings are valid. The measures of token-loss difference and ICL accuracy effectively support the discrepancy between the findings of this work and those of Olsson et al. (2022), demonstrating that FV heads significantly contribute to ICL accuracy but not to token-loss difference.
Theoretical Claims: There is no proofs involved in this paper.
Experimental Designs Or Analyses: In general, the experiments are rigorous and well-designed, thoroughly supporting the findings of this paper.
Some minor issues include:
- ICL is usually conducted on larger LLMs, ranging from a few billion to hundreds of billions of parameters. The largest LLM evaluated in this paper is 7B. Scaling evaluations to include larger LLMs in the range of 13B to 70B could enhance the robustness of the findings.
- The tasks evaluated focus on simple classification tasks. Extending this to more challenging tasks (e.g., math reasoning, graduate-level MCQs) and diverse prompt formats (ICL with intermediate CoT explanations) would enhance the generalizability of the findings.
- Most experiments are based on the top 2% of attention heads. Exploring different ratios can reveal whether the observed trends are robust. A sensitivity analysis would strengthen the findings.
Supplementary Material: There is no supplementary material provided.
Relation To Broader Scientific Literature: This work builds on two previous explanations of the ICL mechanism: induction heads and FV heads. It presents multiple novel and interesting insights into these two distinct explanations.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: **Additional Strengths**:
- This work provides valuable insights into the mechanism of ICL, and can potentially inspire future works on ICL mechanism.
- The paper resolves contradictions of findings in this work and prior studies on induction heads by highlighting the metric difference. It convincingly demonstrates that ICL is largely driven by FV heads rather than induction heads.
**Additional Weaknesses**:
- It would be better to discuss potential practical applications of the findings, such as how these insights could be leveraged for model training and ICL optimization.
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for the positive assessment! We also address minor concerns:
* Larger models: we will consider experiments on models in the 13B range for our camera-ready. However, computational constraints on our end prevent testing on larger models due to memory limits, since the FV score computation and ablation experiments require several caching of model activations. We would love to see future work extend our studies to bigger models! The models we study are also much bigger than earlier mechanistic analyses of ICL (e.g. Olsson et al. 2022 on toy models with 1-6 attention-only layers or MLPs) which shows promise of scaling our methods to bigger models given enough compute.
* Task difficulty: while small models limit complex reasoning tasks, our findings hold on tasks such as translation, binding, named entity recognition, commonsense QA... It would be a great idea to also include the more challenging tasks you suggest for future work studying bigger models that have a reasonable base accuracy for those tasks!
* Thank you for the suggestion! We have repeated the ablation experiments and training dynamics analysis using 5% as the threshold, and results confirm the robustness of the observed trends.
* Practical applications of findings: we will include a discussion on how evidence of the higher influence of FV heads may suggest that this is a more effective mechanism than induction heads. This suggests that ICL can be optimized in small models by using training methods that promote the formation of FV heads. | Summary: The authors study Induction Heads and Function Vector Heads in relation to In-Context Learning in a variety of small and medium-sized models.
Their experiments suggest that induction heads and FV heads are distinct, though there is some overlap in their behaviours.
They also show that removing FV heads greatly impacts accuracy, while it is not the case for induction heads, while removing induction heads has a greater impact on token-loss difference (the difference in loss between the 50th and the 500th token of the prompt) than removing FV heads.
Finally, they show that FV heads tend to be found slightly deeper within the network, and that during training some heads turn into induction heads before later involving into FV heads.
The authors attribute some differences between their findings and earlier literature to their carefully distinguishing between the metrics of interest (token-loss difference and ICL accuracy), as well as to their controlling for the behavioural overlap between FV heads and induction heads.
## update after rebuttal
The authors have answered many of my concerns, and have even brought forth new interesting experimental results. Consequently, I have raised my recommendation to a 4.
Claims And Evidence: I am currently not convinced of the soundness of the following claims:
1) One of the paper’s main claims is that induction heads contribute less to ICL accuracy than FV heads. I suspect that this is very task dependent: e.g., if the context describes a dictionary of tokens of sorts, i.e. is composed of pairs “$t_{1,1} t_{1,2}, t_{2,1} t_{2,2}, \ldots$” where $t_{i,2}$ is the output token associated with the input token $t_{i,1}$, then I would expect induction heads to contribute greatly to the accuracy. I did not read in detail the references where the various tasks studied are described to see whether such tasks are in fact included.
2) The authors claim that there is little overlap between FV heads and induction heads. However, they also state that “In most models, FV heads are at around the 90-95th percentile of induction scores, and vice versa.” Unless I misunderstood that sentence, does it not mean that there is little overlap mainly because the authors arbitrarily defined FV heads as the heads with the top 2% of FV scores (and same for induction heads), but that there would be a lot of overlap if FV heads were defined as the top 5% of heads for FV scores (and same for induction heads)?
3) In relation to the same claim: FV heads are by definition heads such that their activation can have a large impact on ICL accuracy for the tasks considered (since the score measures the difference in accuracy for two distinct activations). If the tasks are such that induction heads are not needed to solve them (which the authors demonstrate themselves), then won’t that choice of tasks necessarily reduce the overlap between induction heads and FV heads? In other words, isn’t this potential lack of overlap an almost direct consequence of the choice of tasks, rather than a general phenomenon?
4) The authors claim that “Here, we find that in smaller models (less than 160M parameters), ablating induction or FV heads does not influence token-loss difference more than random. In models with over 345M parameters, ablating induction heads has a larger effect token-loss difference than ablating FV heads. The gap between the effect of ablating induction and FV heads decreases with model scale.” However, the behaviours of the “FV heads ablated”, “Induction heads ablated” and “random heads ablated” curves with respect to each other seem to vary so much between models (see Figure 4 and Figure 9 - note in particular that removing induction heads is sometimes less impactful than removing random heads) that I find it difficult to conclude with so few data points that there really is a trend with respect to model size, rather than simply a lot of model-dependent variance. In particular, the results for the largest model in Figure 9 in the Appendix (Llama 2 7B) go strongly against the claim.
5) The authors claim that “In general, induction heads appear in early-middle layers and FV heads appear in slightly deeper layers than induction heads.” However, looking at the results in Figure 19 in the Appendix, we find that the p-values are often quite large and that the phenomenon, if it exists, might again be model-dependent.
Note: I recognise that it is very hard to draw rigorous conclusions regarding such empirical matters, where confounding factors abound. Though I do not fully agree with many of the authors’ claims, I do not question the seriousness of their work or their scientific rigor.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: No theoretical claims.
Experimental Designs Or Analyses: Experiments are sound overall.
Supplementary Material: I have skipped through it (see remarks above).
Relation To Broader Scientific Literature: I feel like the findings fit well within a series of articles investigating similar topics and phenomena.
One of the findings (induction heads are not that important for ICL accuracy) contradicts conventional wisdom, and the authors offer an explanation for it (which does not entirely convince me, see above).
Essential References Not Discussed: As far as I can tell, the relevant literature is adequately cited by the authors. A few additional references on either ICL for transformers or the mechanistic study of subcircuits that might further enrich the Related Works section:
The evolution of statistical induction heads: In-context learning Markov chains (Edelman et al.)
Iteration Head: A Mechanistic Study of Chain-of-Thought (Cabannes et al.)
An explanation of in-context learning as implicit bayesian inference (Xie et al.)
What can transformers learn in-context? A case study of simple function classes (Garg et al.)
How do transformers learn in-context beyond simple functions? A case study on learning with representations (Guo et al.)
Dissecting recall of factual associations in auto-regressive language models (Geva et al.)
Other Strengths And Weaknesses: The paper is well-written and clear. The authors ask relevant questions and investigate them thoroughly (though I am not entirely convinced by some of their conclusions). They clearly distinguish between what their experiments prove (insofar as anything can be proven regarding such empirical phenomena) and what they merely suggest.
Other Comments Or Suggestions: The following sentence is unclear to me, and additional explanations might be needed:
“However, C2 would predict that ablating monosemantic FV heads would not hurt ICL performance, […]” Why is that ?
Subsection 2.2 is perhaps too concise; a slightly more detailed exposition (as in Todd et al.) would increase clarity.
A few typos:
"two distinct sets of heads To do so, we" (missing ".")
"task $t\in T$ defined by a dataset $P_t$" (missing "is")
Questions For Authors: I would appreciate additional details regarding the claims I have mentioned in “Claims and evidence” (points 1) to 5)). I would be happy to improve my evaluation of the paper if presented with satisfactory answers.
Other questions:
6) By the very definition of the function vector score, heads whose FV score is high are necessarily heads such that their activation can have a large impact on ICL accuracy (since the score measures the difference in accuracy for two distinct activations). As such, is it not somewhat tautological, or at least not very surprising, that ablating FV heads greatly hurts accuracy (one of the authors’ main contributions)?
7) The authors emphasise the importance of distinguishing between few-shot ICL accuracy and token-loss difference, but make no attempt at describing the difference between the two metrics. Consider a prompt composed of input-output pairs $(x_i,y_i)$ where the 50th and 500th tokens are the correct answers $y_l$ and $y_r$ to some inputs $x_l$ and $x_r$; in that case, isn’t token-loss difference essentially a renormalized version of few-shot ICL accuracy (renormalised by the accuracy at $y_l$ given the shorter context of the first 50 tokens) ? In general, if few-shot ICL accuracy is high but token-loss difference is low, doesn’t that mean that there is no improvement in the models predictions as the context increases in length, hence that the model is not actually using the context?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful feedback! We address concerns below:
1. Our task suite does include dictionary-style tasks as you mentioned, and we plotted the task-specific ablations in Figure 18 of the appendix - for the dictionary tasks (e.g. national_parks, park-country, person-occupation, person-instrument, english-german, french-english) we do still consistently find that FV heads contribute more than induction heads significantly.
2. In many cases, we had to decide on a threshold to differentiate meaningful FV/induction heads from the long tail of other heads. The 2% threshold was carefully chosen following previous work by Todd et al. (2024), and after we verified this threshold does seem to meaningfully separate important heads (Figure 7 in appendix shows the 2% threshold does select heads outside the cluster of other low-scoring heads - taking a higher percentage would select heads with FV scores close to 0). If we take the top 5% of heads, there would be between a 20-40% overlap (except for our smallest model with a 67% overlap).
3. While we used a held-out set of tasks to identify FV heads, the FV heads are task-agnostic (as shown in our ablations where they contribute to tasks that were not used to identify them) - so regardless of the choice of the ICL task, with enough tasks the same set of heads should be identified as FV heads. In addition, the tasks we used to compute FV heads are ones that were previously often associated with induction heads, but demonstrated in our paper as being less influenced by induction heads than FV heads. Therefore, the lack of overlap is a general phenomenon.
Also, the main goal of the claim on overlap is to establish that the two sets of heads we're studying are not identical, to motivate subsequent experiments comparing the two sets. Our ablation and training experiments later that shows different trends for FV vs. induction heads further reinforces that the two are distinct types.
4. We agree token-loss difference trends are noisy and we will remove the claim about scale-dependence. The key finding we'd like to emphasize here is that token-loss difference and ICL accuracy measure distinct phenomena.
5. We agree that the observation on layer depths is not statistically backed up - we will clarify that this observation is speculative.
6. Great point! We used a different set of tasks to compute FV scores than the tasks used in our ablation experiments, to specifically ensure that our main claim is not tautological. What's interesting about FV heads is that their contribution to ICL seems to generalize across different ICL tasks.
7. Interesting question! The difference between the two metrics was first described in Olsson et al. (2022), but we will include a discussion in our revision as well. In the prompt you described, it would be a normalized version of few-shot ICL, but in practice, TL difference is not computed using such prompts (in previous work and ours, it is taken over natural sentences). And yes, high ICL accuracy with low TL difference (which is what we observe when we ablate random heads and induction heads for example) means the model preserves its ICL abilities but loses other abilities associated with learning important signals from context. These other signals could capture non-ICL signals such as remembering entities, tracking style from context, etc. (it would be great for future work to analyze what is being lost when we see these drops in TL difference!)
Thank you for providing the additional references, suggesting better clarity in 2.2, and pointing out the typos - we have incorporated these in our revision!
> “However, C2 would predict that ablating monosemantic FV heads would not hurt ICL performance, […]” Why is that ?
C2 posits that FV heads are polysemantic heads that implement both induction and the FV mechanism. Then, if we ablate monosemantic FV heads but preserve the polysemantic FV heads, models should still be able to perform ICL - we will clarify this in our revision!
Thank you again for your feedback, please let us know if any questions or concerns remain unsolved!
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed answer.
Some additional comments:
1) Note that I mentioned a very specific task where $t_{i,1}$ and $t_{i,2}$ are tokens and not words (i.e sequences of tokens); this makes quite a difference, as induction heads are precisely tasked with outputting B after A if AB was present earlier in the text and A, B are tokens (as opposed to collections of sequences of tokens).
2) This makes sense. You could maybe emphasize in the main text (a single sentence should be enough) that the choice of threshold is informed by the shape of the distribution of scores.
3) I am not 100% convinced, but I think that the fact that the set of tasks used to designate FV heads is distinct with the set of tasks used for later experiments is an interesting point that should be slightly more emphasized. Among the reasons why I am still not 100% convinced: the tasks chosen might be quite similar to each other (at least "from a transformer's point of view").
4) Good.
5) Good.
6) Same as for 3).
7) Such a discussion would indeed be welcome. " high ICL accuracy with low TL difference (which is what we observe when we ablate random heads and induction heads for example) means the model preserves its ICL abilities " I don't entirely agree, as this could also be a sign of the model simply being good at 0 shot accuracy (and not really doing any in-context learning).
As some of my concerns have been answered, I have raised my recommendation to a 3.
---
Reply to Comment 1.1.1:
Comment: Thank you for acknowledging our rebuttal and for raising your recommendation, we appreciate it! We will also follow your suggestions on points to emphasize in 2 and 3 for our revision!
Regarding your concerns in 3: we believe that because FV heads are task-general (using different sets of tasks to identify FV heads leads to the same set of heads), then the choice of tasks used to compute FV heads would not influence their overlap with induction heads. Also, Figure 8 shows that in small models, when we don’t perform the exclusion, induction and FV heads seem to contribute comparably to our ICL tasks - and yet there is no overlap between these heads in the small models. For these reasons, we’re not concerned that the lack of overlap is a consequence of the choice of tasks (we find no overlap even induction and FV heads both contribute to the tasks used to compute FV heads). Please let us know if you have remaining questions!
We also ran additional experiments to look into your questions in 1 and 7.
* For 1, we ran experiments with the dictionary task you described: we sampled random pairs of tokens $t_{i,1}, t_{i,2}$ which we provide as demonstration examples for ICL. Then, we randomly choose one input from the context $i^*$ and use $t_{i^*,1}$ as our query to verify whether the model correctly retrieves $t_{i^*,2}$. You can find the ablation results for this particular task without exclusion [here](https://drive.google.com/file/d/1mnIvo989g00HQENYenyIdR1vtR1QW-fg/view?usp=drive_link), and with exclusion [here](https://drive.google.com/file/d/1mHJW4_mSS_ZDDnt_zO5h1mvIXgLSiSUQ/view?usp=sharing). In both settings, and especially with exclusion, FV heads seem to contribute primarily to this task, which aligns with the general claim of our paper that induction heads contribute less to ICL. We do also remark that for this task, induction heads seem to contribute more than the other tasks we evaluated (in the ablation with exclusion for the dictionary task, ablating 20% of induction heads leads to a ~0.2 accuracy decrease, whereas on other tasks (Fig 1), it leads to less than a 0.1 accuracy decrease). We hope this alleviates your concerns in 1.
* For 7, we computed ablations on 0-shot ICL accuracy [here](https://drive.google.com/file/d/1HrflR1Ye9hiv3DJYUnnnE2b2-iAfp3oi/view?usp=drive_link). The model’s clean 0-shot accuracy is around 0.15 (a large drop from its clean 10-shot accuracy at around 0.5, and similar performance to 10-shot accuracy with 20% of FV heads ablated), and the ablations do not decrease 0-shot accuracy much more. This suggests that for the ICL tasks we study, the model is not good at performing them 0-shot and the model must be doing ICL to achieve high ICL accuracy.
We appreciate the reviewer’s additional questions that allowed us to further verify the robustness of our claims! | Summary: This paper explores the mechanisms behind in-context learning (ICL) in large language models (LLMs), specifically examining two types of attention heads: induction heads and function vector (FV) heads. The authors conduct experiments to determine which of these mechanisms is primarily responsible for ICL. Through various ablations, the paper argues that FV heads play a more significant role than induction heads, especially in larger models, while also showing that induction heads evolve into FV heads during training.
Claims And Evidence: 1. Induction and FV heads are distinct but correlated. - Ablations and training dynamics support this claim, showing minimal overlap but some correlation in the functionality of these heads.
2. FV heads are primarily responsible for ICL in LLMs. - Evidence: Ablating FV heads results in a significant drop in ICL performance, whereas ablating induction heads has a smaller impact.
3. Induction heads evolve into FV heads during training. - Evidence: The authors observe that some heads that initially exhibit induction-like behavior transition to performing more abstract FV functions later in training.
Methods And Evaluation Criteria: The study uses ablation experiments to measure the effect of removing induction and FV heads on ICL performance, focusing on few-shot learning tasks. The paper also tracks the evolution of these heads during training to understand their transition from simpler induction-based operations to more complex FV-based mechanisms. Additionally, they explore the overlap between induction and FV heads and provide a detailed breakdown of the results for models ranging in size from 70M to 6.9B parameters.
Theoretical Claims: The paper introduces a key theoretical claim that induction heads may serve as an initial mechanism that helps the model learn the more complex FV heads. This hypothesis is supported by the observation that FV heads often start with high induction scores during training but diverge as training progresses.
Experimental Designs Or Analyses: Ablation experiments were conducted on 12 different language models of varying sizes. Causal mediation analysis was used to assess the contribution of specific heads to ICL performance. Training dynamics analysis helped track the evolution of attention heads during the learning process.
Supplementary Material: The authors provide supplementary material that includes detailed ablation results across multiple ICL tasks, layer-specific analyses, and evolution of head scores over training steps. Additionally, appendices contain data on model architecture, ablation methods, and task descriptions.
Relation To Broader Scientific Literature: The paper contributes to ongoing research in head interpretability and in-context learning. This may be related to https://arxiv.org/abs/2404.15574 and https://www.anthropic.com/research/in-context-learning-and-induction-heads. Especially the second one, I think authors may need to discuss a bit with the second one.
Essential References Not Discussed: While the paper cites foundational works, it could benefit from incorporating studies that investigate the broader implications of these findings on neural architecture optimization and future model scaling.
Other Strengths And Weaknesses: Strengths: Comprehensive ablation studies across models of various sizes. Clear presentation of results and robust statistical analysis. Thought-provoking theoretical contribution regarding the evolution of induction heads.
Weakness: The paper could expand on why the shift from induction to FV heads is necessary for larger models and whether other mechanisms could play a role. The conjectures about the relationship between induction and FV heads could benefit from further validation with real-time model adjustments. The application of FV heads could be further discussed.
Other Comments Or Suggestions: The findings present a compelling case for rethinking the role of induction heads in ICL, but further exploration of the interaction between FV heads and other components of the model might provide a more holistic view of in-context learning dynamics.
Questions For Authors: No
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their feedback!
To address your weaknesses:
* On the shift from induction to FV: we will elaborate on why this shift might be necessary for larger models. We hypothesize that induction is a more lightweight mechanism (based on just attending to the subsequent token of a previous copy of the current token) that offers a crude way for models to retrieve information from context, however this method does not sufficiently solve more complex ICL problems. FV heads, on the other hand, offer a compact representation of the ICL task given the examples - the exact mechanism of FV heads is still unknown, and plausibly more complex, but allows larger models to achieve higher training accuracy. We also discussed the possibility of other mechanisms that may play a role in ICL, we leave this investigation for future work.
* On the conjecture between induction and FV heads relationship: we performed an additional experiment measuring the evolution of ICL accuracy during training. We observe that in all models, few-shot ICL accuracy begins to improve around the same time as when induction heads appear, and continues to gradually increase throughout training until the end. Since ICL accuracy continues to improve even after the formation of induction heads, we speculate that this suggests the sharp emergence of induction heads contributes to an initial rise in ICL performance, but the emergence of the FV mechanisms contributes to further improvements in ICL. This reinforces our conjecture that induction heads may serve as a stepping stone for FV, and we leave further exploration to rigorously verify this conjecture to future work.
* We will also add a further discussion on the application of our findings: our evidence of the higher influence of FV heads suggests that this is a more effective mechanism for ICL than induction heads. This suggests that ICL can be optimized in small models by using training methods that promote the formation of FV heads.
Thank you for suggesting the relevant work by Wu et al.! We also extensively discussed and compared with the Anthropic work suggested (which we cite as Olsson et al. 2022).
Thank you again for your feedback, please let us know if any questions or concerns remain unsolved! | null | null |
Detecting Strategic Deception with Linear Probes | Accept (poster) | Summary: The paper evaluates whether deceptive behaviour in LMs can be detected by linear probes on the activations. The paper includes extensive experiments and ablations with strong positive results.
Claims And Evidence: The paper is clear about its limitations and scope, e.g.,
>Thus, our experiments do not attempt to prove that white-box monitors can achieve this ambitious goal, but only validate and benchmark performance in simpler settings.
(Writing nit: what is "this ambitious goal" here?)
Given the scope of the work, the results are compelling and quite strong. E.g., clear generalisation of probes to more complicated domains.
The paper performs lots of ablations and is refreshingly upfront about its limitations and failure modes (e.g., displaying interesting failures in table 2).
Methods And Evaluation Criteria: The paper uses many different benchmarks, showing strong generalisation. and comparison to control experiments. (And further ablations)
Theoretical Claims: n/a
Experimental Designs Or Analyses: The metrics used to compare different methods are sound and the paper pushes itself to have practical application in real-life by investigating the 1% FPR metric.
Supplementary Material: no
Relation To Broader Scientific Literature: The paper is clearly up-to-date with the relevant work on deception. However, the related work section does not do a good job of placing the paper in the broader field (it acts more like a summary) --- it would be better to edit this section to explain how your work is situated and relevant to the different subareas you discuss. e.g., probes can be used to detect model orgs, but how exactly is your work different? Similar comment on the SAEs paragraph, how does it compare to your work?
Essential References Not Discussed: Deception can also be directly incentivised by training, e.g., to get good feedback from systematically mistaken evaluators, see (and cite)
https://arxiv.org/abs/2312.01350
https://arxiv.org/abs/2411.02306
https://arxiv.org/abs/2402.17747
(I usually think of this as "deceptive reward hacking")
Other Strengths And Weaknesses: It feels like the paper is written for quite a niche audience, maybe that's fine, but I think you could do more to make it more accessible. As an example "One guard against scheming would be to reliably detect strategic deception; this serves as an important component of an AI Control approach" --- some readers may not be familiar with the specific meaning if AI control here, nd it could be explained. Similar comment where you could explain the meaning of "sandbag".
You could explain probes better for non-familiar readers, e.g., at the start of 3.1 say "A probe is..."
please include a short summary of the paper at the end
Other Comments Or Suggestions: > Given this challenge, we focus on “white-box” techniques that analyse the model internals, rather than just examining outputs.
Well, yes, black-box evals are hard, especially when the model is deceptive, but a similar critique can be made against interpretability, whereby, the model obfuscates its internal reasoning. I think you should just note this, presumably it is more difficult (and beyond current models) to hide internal reasoning (unless its optimised against)
**Please write a short section summary/outline at the beginning of each section (e.g., sec 3 and 4).**
Minor note: the terminology "strategic deception" is probably good for you to use, as it makes the goal-directed aspect clear. But I think that, actually, all deception is "strategic", i.e., intentional (cf Ward https://arxiv.org/abs/2312.01350 ).
> the dataset in Azaria & Mitchell (2023)
Explain the data set / facts more
> Notably, unlike our evaluation datasets, these responses are not written by the model and contain no actual deception.
You could expand on these points, e.g., "not written by the model", could be a limitation at finding the probe for the model's internal deceptive "intent"
I wonder if the probes would pick up on a different thing if you tried to find "fine-tuned" intent, rather than "prompted" intent. I.e., if you could train probes in the same way, but contrasting with a bit of fine-tuning (for deception vs honesty) instead of prompting towards those personas.
"We adapt a setting from Benton et al.
(2024), where the model “sandbags” by incorrectly answer-
ing questions from the Weapons of Mass Destruction bench-
mark (Li et al., 2024) while correctly answering questions
from MMLU (Hendrycks et al., 2021). The system prompt
directly instructs the model to sandbag, with a step by step
procedure:"
Isn't that set-up from van der weij?
Questions For Authors: What's going on, in Fig 3, with the honest answers to the insider trading confirmation?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Summary: The paper explores the use of logistic regression probes, trained on the residual stream activations of an LLM, to detect whether the LLM is executing deceptive behavior.
Their results demonstrate that such white-box deception probes can be trained using teacher forced (off-policy) data, and transfer to detecting on-policy deceptive behavior from LLMs.
Finally, they compare against a number of baseline techniques, including black-box chain of thought monitoring. These baselines perform fairly well, but the authors acknowledge this well.
Claims And Evidence: The authors make the following broad claims:
1. White-box deception monitors will be important for detecting LLM deception in the future.
1. This is not a central point of the paper, but rather the motivation that is argued in the introduction. In general I agree with the authors arguments, and find the motivation compelling.
2. White-box monitors can be used to detect deception in LLMs.
1. This is the central claim of the paper that is backed up empirically. The authors train probes on off policy contrastive pairs of deceptive and non-deceptive examples. They then test the probes on out of distribution examples of models executing on-policy deceptive behavior. Figure 4 shows favorable ROC curves on these held out tasks. Figure 3 shows some issues however --- in the low FPR regime, their probes false positive rate appears quite high in some settings.
Methods And Evaluation Criteria: The proposed method, of using logistic regression to probe model activations, is well born out in the related literature as a feasible technique (the novelty in this work comes from applying it to deception).
The evaluations seem reasonable also. They test against a number of held out datasets. Research in LLM deception is not well developed currently, so there are not a huge number of different evals that can be done --- the authors pick 3 (insider trading has 2 variants) which is reasonable.
Theoretical Claims: N/A
Experimental Designs Or Analyses: I split the experiments up by the figures / tables in which the data was presented.
Table 1: Describes how the ground truth data for deception or not is collected for the on-policy evaluations. Insider trading and sandbagging, the collection seems reasonable. For roleplaying, I think the gpt4o judge may introduce a non-trivial amount of label noise.
Figure 3, 4: Main results. The experimental design is reasonable. I am not sure what the dotted line on figure 4 is? Also I think providing the recall at 1%FPR explicitly would be useful.
Figure 6: These are the baseline experiments. More detail could be provided on these experiments (fine to reference the appendix more explicitly).
Supplementary Material: I have skimmed the appendix. It is well organized and contains a lot of extra experimental detail. In general the authors do a good job of referencing relevant appendix sections in the main body.
Relation To Broader Scientific Literature: The works sits at the intersection of two subfield --- AI control (or monitoring) and LLM probing. In general, most AI monitoring papers take a black-box approach ([1], [2]), The novelty in this work comes from demonstrating that white-box approaches are feasible for detecting deception.
The work does not, however, comprehensively show that white-box methods are much stronger than black-box ones.
[1] - Greenblatt, Ryan, et al. "AI control: Improving safety despite intentional subversion." _arXiv preprint arXiv:2312.06942_(2023).
[2] - Baker, Bowen et al. "Detecting misbehavior in frontier reasoning models"
Essential References Not Discussed: The related works appears complete to me.
Other Strengths And Weaknesses: Strengths:
- Originality: The paper is novel in that it is, to my knowledge, the first paper that applies white-box methods to detecting LLM deception.
- Clarity: The paper is very clearly written. Additionally the experimental methodology is clear and reasonable for demonstrating the claims of the paper.
Weaknesses:
- The main weakness of the paper comes in significance. While the paper shows that white box methods can be used to detect deceptive LLMs, they do not convincingly show that these methods are better than conventional black box approaches.
Other Comments Or Suggestions: It would be nice to link to Figure 6 more clearly when discussing baselines.
Also a discussion of the numerical baseline results would be good.
Questions For Authors: Q1) Are there any additional experiments you can run the demonstrate the difference between white and black box methods. For example, testing out more black-box baselines (models, prompts)?
Q2) Can you provide a discussion of the numerical baseline results in Figure 6?
## Update after rebuttal
No rebuttal given by the authors to my specific review. After reading the other reviews and response, I still lean towards accepting.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Summary: The paper studies the problem of strategic deception in AI models by training linear probes on datasets that elicit dishonesty in certain ways and check whether the probes generalize to more realistic examples of deception. The paper provides experimental validation that the probes can distinguish between honest and deceptive responses, but often fire more on honest responses than in the control prompts, leading to an increased false-positive rate. An analysis of what the probe might be learning is presented, and the method is compared favorably to other honesty detection approaches.
Claims And Evidence: The claims are clear and well-supported. There is some speculation in the paper, but this is clear and not framed as a claim.
Methods And Evaluation Criteria: Yeah, they more or less make sense for the method presented, but ultimately feel unsatisfactory and don't provide a particularly strong answer to what the probes are actually doing and why.
Theoretical Claims: No specific theoretical claims are made.
Experimental Designs Or Analyses: Yes, the experimental designs are sound.
Supplementary Material: I looked at some of the examples in the supplemental material, but not in great detail.
Relation To Broader Scientific Literature: The paper's contributions are tightly related to broader research in strategic deception and probing of large language models.
Essential References Not Discussed: References are very well-done; there are no obvious missing references.
Other Strengths And Weaknesses: There have been lots of papers on probes; it would be nice if the paper made a more convincing argument about what exactly its main contribution is to the literature.
Other Comments Or Suggestions: One thing I'm ultimately interested in with white-box probing is ways of studying how something like this might generalize to models that are actually superhuman, and scheming might be harder to detect. Is it possible to make a more explicit safety case here? That is, if I'm given a very strong model that I know nothing about, do the methods presented in this paper give me any sort of useful information about the model that might allow me to update in a certain direction about the model's underlying predisposition to scheming or ability to deceive?
Questions For Authors: 1. Probes are trained off-policy, right? Like on the 3.2.1 dataset? How much better does training the probes on-policy do?
2. What is Figure 3? How do I interpret these graphs?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Summary: The paper evaluates whether linear probes can effectively detect strategic deception in language models by monitoring their activations. The authors train probes on simple datasets (instruction pairs and roleplaying scenarios) and test if they generalize to realistic deceptive behaviors like concealing insider trading and sandbagging on safety evaluations. Their probes achieve high performance (AUROCs 0.96-0.999) and can catch 95-99% of deceptive responses while maintaining a 1% false positive rate on control data. The authors conclude that while promising for monitoring, current performance is insufficient as a robust defense against sophisticated deception.
Claims And Evidence: The claims are mostly well-supported by empirical evidence. The authors test their probes on multiple datasets, conduct ablation studies, and are very transparent about limitations.
The high performance metrics across diverse deception scenarios provide convincing evidence for the effectiveness of their approach.
However, the paper acknowledges that current models exhibit relatively blatant deception, so the results may not generalize to more subtle forms of deceptive behavior or more capable models. These are the settings that seem most important – and as such, I think the biggest weakness of the paper is to shy away from ambiguous settings, rather than trying to grapple with analyzing them despite the challenges.
Methods And Evaluation Criteria: As far as I can tell, the methods and evaluation criteria are appropriate for the problem.
Theoretical Claims: N/A
Experimental Designs Or Analyses: The experimental designs seem generally sound. The authors take care to isolate the effects of different design choices by testing various probe architectures, training methods, and aggregation strategies. The worst-case approach of setting thresholds based on control data (which assumes ~0% of deceptive data in the real-world) seems good methodological practice.
Supplementary Material: I briefly looked at Appendices A.1 and A.2
Relation To Broader Scientific Literature: Maybe i missed this in the paper, but i think making clearer how work differs from prior work on probes is important: did none of the prior works assess generalization of deceptiveness probes to new settings? If that were the case, that would be somewhat surprising, but I think this is important for assessing the novelty of this paper: my impression is that much of the novelty is about assessing the generalization of deceptiveness probes. In that sense, the title may be somewhat misleading, as other works cited have already attempted to detect deception with probes – this work just does so across different settings and with a deeper analysis.
Essential References Not Discussed: I don't know of any essential references that were not discussed in the paper.
Other Strengths And Weaknesses: Nothing not already stated above or below.
Other Comments Or Suggestions: > "To fit our probe we flatten across samples and sequence positions"
I think this could be made a bit clearer. I was initially confused by this description.
Something that I think would be interesting to study is whether the probes generalize to being applied to models even after training that introduces deceptiveness, or whether probing works at all in settings in which models have been trained to be deceptive. My impression is that Wen 2024 does not specify whether their probes were trained using activations pre or post RLHF. Another work that seems particularly relevant is [Williams 2024](https://arxiv.org/abs/2411.02306), in which the authors find that very strong tendencies to perform strategic deception arises from RL training: I'd be curious to know whether probes trained on these models that have received further RL training would work.
It would be interesting to see whoever training on additional data that is about deception but is not deceptive could reduce the spurious correlations that you see and you study more in appendix A.1.
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Summary: The authors address the question of whether linear probes trained on simple deception datasets can generalise to more complex examples of deception. They train linear probes for Llama-3.3-70b-Instruct using a simple dataset of contrastive pairs. They then test generalization to four out-of-distribution deception datasets. They find that probes can achieve >95% test AUROC when classifying honest from deceptive responses. Lastly, the authors analyze failure modes and offer speculation on how future probing methods might improve on these metrics.
## Update after review
The authors addressed my questions and concerns. I will maintain my recommendation of 4.
Claims And Evidence: The empirical finding - that linear probes trained to detect deception in Llama-3.3-70b - is well-supported by evaluations on four different test datasets. They also compare to strong baselines including SAE activations and black-box methods. In the appendix, the authors also provide detailed explanations of their probe-fitting methodology.
As the authors only tested a single model, it is unclear whether this finding generalizes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: N/A
Experimental Designs Or Analyses: Yes.
Supplementary Material: N/A.
Relation To Broader Scientific Literature: Investigating probe generalization is significant for AI safety, as probes trained on existing data will typically need to generalise out of distribution in deployment. The finding here is an important update towards the effectiveness of white-box monitoring.
Essential References Not Discussed: None that I know of
Other Strengths And Weaknesses: As mentioned above it would be useful to have quantitative evaluations of probe generalization in more than a single model.
The authors select the deception threshold using 1% FPR. It is unclear whether the results are robust to a different choice of threshold, or why 1% is the correct threshold.
Other Comments Or Suggestions: The relationship between harmfulness and deceptiveness is not investigated. It would be concerning if the probe catches 'benign' forms of deception but misses 'catastrophic' forms of deception. For example, in the Sandbagging eval which uses the WMDP dataset, each prompt could be evaluated in terms of its potential for harm, and the harmfulness scores could be correlated with whether the probe detects this successfully.
You could also consider doing multi-layer aggregation of activations a la Fort and Balakrishnan (2024). https://arxiv.org/abs/2408.05446
Questions For Authors: Did you try training probes on the concatenated activations of all layers? That might eliminate the need to do a sweep.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your detailed review!
We agree that it would be useful to compare results on other models. During writing we ran experiments on gemma-2-27b and llama-3.1-70b and saw broadly similar results, but these results are not included in the main paper (and were not run with the latest datasets or all ablations).
Similarly, we agree that training probes on activations concatenated across layers would be a useful experiment for future work to investigate. We did some preliminary tests of training probes at several layers and aggregating the results across each layer, but found it didn’t meaningfully improve performance so didn’t include the variation in the final paper.
We agree the relationship between harmful and deceptive outputs is interesting. Harmfulness probes have been shown to work well (at least on the HarmBench dataset, e.g. https://arxiv.org/abs/2412.09565). Since probes are generally cheap to train and evaluate, we expect the best approach for real-world applications is to train a separate probe for harmfulness and deception, rather than hoping that a single probe transfers to both datasets. Another key difference is that LLM providers probably don’t want their model to ever generate harmful responses – while deceptive responses may be acceptable as long as the user is aware (or even desirable in certain contexts such as social deception games).
We agree it could be helpful to sanity check correlation between harmfulness and deception, however. The example you give of only catching benign forms of deception would indeed be concerning (but seems unlikely).
On the choice of 1% FPR, Figures 4 & 10 have ROC curves which show recall rates for different FPR. Overall we think it is necessary for probes to have a low false positive rate on control data, however the exact FPR to fix is certainly flexible. The primary reason for using 1% FPR in our analysis is to allow for easy comparison to previous works which also use this metric (Roger (2023) and Bailey et al. (2024)).
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed comments. I will maintain my score of 4. | null | null | null | null | ||||
Defending LVLMs Against Vision Attacks Through Partial-Perception Supervision | Accept (poster) | Summary: The authors address the vulnerability of LVLMs to adversarially perturbed or maliciously injected input images. Their motivation is to solve the issues of response quality degradion that comes with the existing defense methods which rely on image modifications that could possibly cause semantic distortions. They propose **DPS (Defense through Partial-Perception Supervision)**, a novel black-box and training-free defense mechanism. Instead of directly using responses from partial images for voting, DPS employs these responses to guide the LVLM’s inference on the original image. This allows the model to adjust its response when under attack while preserving accuracy on clean inputs. They also demonstrate the effectiveness of DPS, and claim to reduce the average attack success rate by 76.3% across six datasets and three widely used LVLMs, outperforming baseline defenses. In essence, DPS leverages a “weak-to-strong” learning strategy where partial, and potentially less accurate, observations help correct and stabilize the full-model output in the face of malicious input perturbations.
Claims And Evidence: The experimental results and ablation studies presented in the paper provide strong empirical backing for the claims. The authors demonstrate that DPS can reduce the average attack success rate by approximately 76.3% across multiple datasets and LVLMs, while also maintaining standard performance on clean inputs. Comprehensive comparisons with baseline defenses and detailed evaluations (including different cropping strategies and safety metrics) support the effectiveness of the proposed method. Therefore, the claims made in the submission are indeed supported by clear and convincing empirical evidence.
Methods And Evaluation Criteria: Yes, the proposed method and evaluation criteria are well-suited for the problem. DPS leverages partial observations to guide the full-model responses, addressing both attack mitigation and standard performance. Additionally, the use of multiple benchmark datasets and comparisons with established baseline methods provides a robust and comprehensive evaluation framework, ensuring that the method’s effectiveness is thoroughly validated.
Theoretical Claims: There are no formal proofs or theoretical claims in the paper. The authors primarily support their approach with intuitive reasoning and robust empirical evidence. While the intuition behind DPS and its ability to improve robustness is compelling, a dedicated section (perhaps in the appendix) that delves into theoretical insights or offers a more formal justification of why DPS improves robustness would further strengthen the paper.
Experimental Designs Or Analyses: The experimental design was thoroughly examined, including evaluations on misleading attacks, jailbreak scenarios, and standard performance (using the MM-Vet benchmark). The authors compare DPS against multiple baseline defenses across six datasets and three LVLMs, and they conduct detailed ablation studies (e.g., comparing different cropping strategies) to validate the design. Overall, these analyses appear sound and well-constructed. A couple of points could be addressed to strengthen the work further:
- The computational overhead introduced by multiple cropping strategies and the multi-agent interaction in DPS might affect scalability in practical deployments.
- Including additional details on statistical significance or error analysis (e.g., confidence intervals or p-values) would enhance the clarity and rigor of the empirical results.
Nonetheless, the experimental design and analyses are valid and effectively support the authors’ claims.
Supplementary Material: I conducted a high-level walkthrough of the supplementary material. I reviewed the code implementation provided in the appendix, focusing on the components that implement the DPS framework, the cropping strategies, and the evaluation scripts for the experimental benchmarks. The implementation is consistent with the methodology described in the paper, and I did not observe any issues.
Relation To Broader Scientific Literature: The key contributions are well stated, and the methodology is necessary to tackle both alignment challenges and quality issues found in existing solutions. Prior defenses rely on majority voting across modified images, which degrades response quality, but this paper builds on this by using the modified images to supervise the model’s inference, maintaining accuracy on clean inputs while reducing attack success rates. This builds on existing research in robust AI alignment by showing that adversarial robustness can be improved without retraining. The results make a strong case for DPS as a practical and effective defense, adding to ongoing discussions on safe and trustworthy AI systems.
Essential References Not Discussed: While the authors aim to improve response quality while reducing attack success rates, the paper overlooks recent literature that tackles this problem through detection-based approaches. The following two papers are particularly relevant and should be considered:
1. **VLMGuard: Defending VLMs against Malicious Prompts via Unlabeled Data** (Du et al., 2024) – This work introduces a defense mechanism that leverages unlabeled data to detect and mitigate adversarial prompts targeting VLMs. Unlike DPS, which focuses on partial perception, VLMGuard explores input-space robustness via prompt-level interventions.
2. **MirrorCheck: Efficient adversarial defense for vision-language models** (Fares et al., 2024) – This approach focuses on efficient adversarial defenses through self-consistency checks within multimodal inputs. MirrorCheck employs consistency-based detection to identify manipulated inputs, providing an alternative or complementary strategy to DPS.
These works introduce alternative ways to achieve robustness in LVLMs that could complement or contrast with DPS. The authors should explore these and other recent efforts in adversarial defenses for a more comprehensive discussion.
Other Strengths And Weaknesses: - The presentation and clarity of the approach are commendable. The method is clearly described and the experimental results demonstrate a significant empirical improvement over prior works.
- While the idea is novel in its application of partial perception supervision to mitigate adversarial attacks, its conceptual foundation is similar to existing literature, positioning it as an incremental improvement.
- The paper would benefit from an exploration of failure cases. Specifically, it is unclear how DPS handles scenarios where the supervision mechanism is bypassed or when a jailbreak input goes undetected, areas where detection-based methods might offer more reliable performance.
- A discussion of edge cases, potential weaknesses, and failure modes would enhance the robustness and clarity of the paper.
- Finally, providing a theoretical analysis or high-level intuition for why DPS improves robustness over prior methods would further strengthen the contribution and support its practical applicability.
Overall, great work by the authors, with room for improvement in addressing limitations and offering deeper theoretical insights.
Other Comments Or Suggestions: Please refer to the strengths and weaknesses listed above for a comprehensive assessment. Furthermore, it would be beneficial for the authors to include a discussion comparing DPS with VLMGuard and MirrorCheck. For instance, while DPS leverages partial perception supervision to guide the model's full-image response under attack, VLMGuard utilizes unlabeled data to detect malicious prompts, and MirrorCheck employs self-consistency checks between modalities. A direct empirical or high-level comparison could further clarify the relative strengths and limitations of these approaches.
Questions For Authors: Please check strengths and weaknesses above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We greatly appreciate your time and effort in reviewing our work and would like to address each of your concerns.
### 1. The incremental improvement
We appreciate the reviewer’s recognition of our idea novelty. In addition to the novel idea, we respectfully argue that the reviewer should reconsider the multifaceted and fundamental contributions in this work:
- We provide significant insights into LVLM defense. We identify a key vulnerability pattern in LVLMs: reduced confidence with attacked images versus high confidence with clean inputs (Sections 3.2-3.3).
- The training-free deployment and superior defense performance demonstrate DPS's practical distinctiveness.
- Our systematic bridging of "weak-to-strong" principles with LVLM defense mechanisms creates a new research nexus.
More contribution details are in *Reviewer 5ot1 [2. Suitable for the ICML community]*. These interlocking advances collectively represent a non-incremental leap in defense, which we will further clarify.
### 2. Theoretical Justification of DPS
We provide a theoretical formulation to explain why our work could achieve defense.
1) The Target of Attack
Let $x$ denote the original example, $x' = \text{adv}(x, \delta)$ denote the adversarial example, and $F_v(x) \in \mathbb{R}^d$ be its visual features extracted by the vision encoder. $T_\text{target}$ is the targeted output, $T_\text{clean}$ is the originial output of clean input, and $\mathcal{T}$ represents the text space. We can represent the probability of $T_{\text{clean}}$ and $T_{\text{target}}$ condition on the $x$ or $x'$:
$$
P(T_{\text{clean}}|x') = \frac{P(T_{\text{clean}}|F_v(x'))}{\sum_{T\in\mathcal{T}} P(T|F_v(x'))}.
$$
After attack, $P(T_{\text{target}} | x')$ becomes high and the $P(T_{\text{clean}} | x')$ is largely reduced.
2) Prior Injection via Partial-Perception Supervision
Our method DPS crops the original image and generates text $T_c$ when we feed the partial images into LVLM. We have $\phi(T, T_c) \in [0, 1]$ representing the semantic consistency between the candidate text $T\in \mathcal{T}$ and $T_c$.
Compared with the attack formulation, our defense mechanism actually introduces crop-induced text $T_c$ as a **prior** to constrain the output distribution. The revised conditional probability of the defense can be approximated as:
$$
P_{\text{defense}}(T_\text{clean}|x',T_c) = \frac{P(T_\text{clean}|F_v(x')) \cdot \phi(T_\text{clean}, T_c)}{\sum_{T\in\mathcal{T}} P(T|F_v(x') \cdot \phi(T, T_c)}.
$$
As the $ \phi(T_\text{clean}, T_c)$ is relatively high compared with $ \phi(T_\text{target}, T_c)$, we can have $P_{\text{defense}}(T_\text{clean}|x',T_c)>P_{\text{defense}}(T_\text{target}|x',T_c)$, achieving defense.
### 3. Computational overhead
The computational overhead of DPS yields proportionally higher defense effectiveness, while existing methods failed to improve defensive capability even with the increased computation budgets. Please refer to our response to *Reviewer popy regarding [1. Computational Cost]*.
### 4. Statistical significance
Thanks for your suggestion! Please refer to our response to *Reviewer 2wMM regarding [1. Error Bar]*.
### 5. Comparison with more recent work
We sincerely appreciate the relevant literature you shared and will incorporate these papers into the revised manuscript.
DPS fundamentally differs from the two detection methods as follows:
- DPS is a **training-free** defense that offers significantly lower deployment costs in wild.
- The scopes of attacks are different. These two methods are limited to specific scenarios (i.e., jailbreak detection) and **cannot be directly applied to the defense against other attacks like misleading attacks**.
- The underlying mechanisms are fundamentally distinct. DPS leverages the sensitivity of diverse attacks to cropping operations, achieving defense.
Further, we conduct an empirical study to compare the performance between DPS and MirrorCheck on MM-Safety and VisualAttack datasets.
From Table 1, we observe that while MirrorCheck's defense performance is only slightly worse than our method. Its standard performance is significantly lower than that of LS-DPS due to the high false positive rate.
Since VLMGuard's implementation is not publicly available, we will revisit this comparison with more time.
Table 1. Defense Performance Compared with MirrorCheck
| | ASR (MM-Safety) | ASR (VisualAttack) | Avg. Standard Performance (MM-Vet) $\uparrow$|
| -------- | -------- | -------- | -------- |
| MirrorCheck | 0.20 | 0.17 | 0.26
| LS-DPS | 0.10 | 0.17 | 0.68
### 6. An exploration of failure cases.
The most common failure case occurs when the cropping fails to eliminate attacks, rendering partial perception supervision ineffective for defense. In such cases, combining detection-based defense methods may enhance defense effectiveness, an area for future exploration. | Summary: This paper proposes a black-box, training-free defense method called DPS (Defense via Partial Perception Supervision) to protect large vision-language models (LVLMs) from visual adversarial attacks. Unlike existing defenses (e.g., majority voting), which suffer from semantic distortion due to image cropping and thus degrade performance on clean inputs, DPS leverages “weak supervision” responses generated from cropped image regions to dynamically guide the model’s predictions during inference. When an adversarial input is detected, partial perception information encourages the model to lower confidence and correct erroneous responses, while for clean inputs, the model maintains its original high-confidence predictions. Inspired by the “weak-to-strong learning” paradigm, DPS treats partial perception as weak supervision guiding a stronger model. It employs a multi-cropping strategy to generate supervisory signals and incorporates safety prompts to enhance robustness. Experiments demonstrate that DPS reduces the attack success rate by an average of 76.3% across six datasets, outperforming baseline methods in defending against misleading and jailbreak attacks while preserving performance on standard vision-language tasks (MM-Vet benchmark), showcasing its effectiveness and practicality against adversarial visual attacks.
Claims And Evidence: The paper claims that LVLMs may exhibit a decrease in confidence when processing perturbed adversarial images. However, I am skeptical about this confidence variation because the definition of confidence in this context is unclear. Specifically, I question whether this confidence is inferred based on the textual responses of LVLMs or directly derived from the model’s output logits. A more detailed explanation is needed to clarify this aspect.
Methods And Evaluation Criteria: Overall, this method appears to be simple and reasonable. However, such a straightforward approach may offer limited insights to the field.
Therefore, I have some doubts about whether this work is suitable for ICML. I’m not entirely certain, but I feel that this paper might be a better fit for ACL.
Theoretical Claims: The submission does not present formal theoretical claims or proofs, as it is primarily an empirical study focused on methodology and experimental validation.
Experimental Designs Or Analyses: This paper conducts experiments on multiple datasets and tasks, making the evaluation relatively comprehensive. However, I still have some questions:
1. How is the adversarial noise for images generated? It is well known that performing targeted white-box attacks on LVLMs is extremely challenging and typically results in very low attack success rates. If the adversarial examples are optimized using an open-source LVLM model, it is unlikely that they will effectively transfer to closed-source models.
2. Although evaluating some closed-source models enhances the generalizability of the proposed method, for the sake of reproducibility, it is necessary to assess its effectiveness on an open-source model, such as LLaVA or Qwen2-VL.
Supplementary Material: I have read the appendix, including the experimental setup and some case studies.
Relation To Broader Scientific Literature: This issue is meaningful for the security of multimodal large models.
Essential References Not Discussed: Related work has been discussed.
Other Strengths And Weaknesses: See above
Other Comments Or Suggestions: See above
Questions For Authors: See above
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We greatly appreciate your time and effort in reviewing our work, and would like to address each of your concerns.
### 1. The definition of confidence
> However, I am skeptical about this confidence variation because the definition of confidence in this context is unclear. Specifically, I question whether this confidence is inferred based on the textual responses of LVLMs or directly derived from the model’s output logits. A more detailed explanation is needed to clarify this aspect.
Thank you for the comments. In our work, we define LVLM models' confidence as the inverse of the standard deviation of responses to variations of the same question, crafted by adding different prefixes to the original question. This confidence metric represents certainty in model outputs without requiring access to internal model parameters. A high confidence value indicates a low standard deviation and, thus, low uncertainty in the LVLM's responses. To avoid potential ambiguity in the term 'confidence,' we will add clear explanations of the term in the revised manuscript.
### 2. Suitable for the ICML community
> However, such a straightforward approach may offer limited insights into the field. Therefore, I have some doubts about whether this work is suitable for ICML. I’m not entirely certain, but I feel that this paper might be a better fit for ACL.
We respectfully disagree with the assessment that our paper might be more suitable for ACL due to its "straightforward approach."
- First, **Novel insights for LVLM Defense**. Our paper provides significant insights into LVLM defense: (1). We identify a key vulnerability pattern in LVLMs: reduced confidence with attacked images versus high confidence with clean inputs (Sections 3.2-3.3). (2). We innovatively apply the "weak-to-strong" learning phenomenon to visual defense by using partial image perception to supervise full-image understanding. (3) Our approach fundamentally differs from existing methods that rely on direct majority voting or filtering.
- Second, **Demonstrated Effectiveness**. (1). Our method reduced attack success rates by 76.3% across six datasets on three models (Section 5.2). (2). We show effectiveness against both misleading and jailbreak attacks (Tables 1-2). (3) Unlike other methods (e.g., SmoothVLM), we maintained standard performance while improving defense (Section 5.3).
- Third, **Simplicity as Strength**. ICML values impact and effectiveness, not complexity for its own sake. Our training-free method requires no model modification, making it immediately applicable to any LVLM. The DPS framework shows strong compatibility with other defense strategies (Section 4.4).
Our paper presents novel insights, effective solutions, and provides new solutions in defending LVLMs, aligning well with ICML's emphasis on machine learning advances.
### 3. How is the adversarial noise for images generated?
> It is well known that performing targeted white-box attacks on LVLMs is extremely challenging and typically results in very low attack success rates. If the adversarial examples are optimized using an open-source LVLM model, it is unlikely that they will effectively transfer to closed-source models.
In our experiments, in addition to the adversarial samples from existing datasets (RTA 100, MultiTrust, SafetyBench, and HADES), we generate adversarial samples using the VisualAttack method (lines 250-266). These samples achieve a transfer success rate of approximately 10% to 20%. The samples selected for defense evaluation are those that demonstrate successful transferability.
### 4. Experiments on open-source models
> Although evaluating some closed-source models enhances the generalizability of the proposed method, for the sake of reproducibility, it is necessary to assess its effectiveness on an open-source model, such as LLaVA or Qwen2-VL.
Thanks for your suggestion! We conduct extended experiments using the open-source model Qwen2.5-VL-32B on misleading attack datasets—RTA 100, Self-Gen, and MultiTrust—as well as jailbreak datasets MM-Safety and VisionAttack. The results (ASR) of misleading defense and jailbreak defense are shown in Table 1 and Table 2, while the standard performance is shown in the Figure (https://anonymous.4open.science/r/ICML2025-231D/README.md). The proposed methods, DPS and LS-DPS, consistently achieve the best performance against misleading and jailbreak attacks. | Summary: The paper proposes a improved ensemble based defense of VLMs. The idea improves upon SmoothVLM which leveraged partial crops or random perturbations of the input image and ensembled the responses, by adding a secondary step of using the VLM responses on the partial crops as supervisory inputs to the VLM. The approach also provides an optional language post-processing step that converts malicious responses to benign. The authors evaluate this defense on several jail breaking and adversarial datasets, and show that the approach is succesful at defending against jail-breaking attacks.
## Update after rebuttal
The rebuttal has mostly answered my queries. I am keeping my score, with the suggestion that the authors add error bars to the reported metrics.
Claims And Evidence: The claims in the paper are well-supported by the analysis.
Methods And Evaluation Criteria: I find the method to be very intuitive, and appreciate the fact that they are able to achieve training free robustness. The experiments are well-designed and the evaluations are thorough. The authors provide a comprehensive evaluation over two adversarial and one benign tasks, as well as analyse the effectiveness the cropping strategy. However, I would suggest adding error bars to the results where ever possible to account for the random nature of the evaluations.
Theoretical Claims: Not applicable.
Experimental Designs Or Analyses: I have checked all the experimental designs and analyses and believe them to be generally sound. It would be additionally useful to analyse the effect of the number of partial crops used in the defense.
Supplementary Material: Yes. I have reviewed all the sections of the supplementary material.
Relation To Broader Scientific Literature: The paper is related to the broader scientific literature on defending against adversarial attacks on VLMs. The closest prior work is SmoothVLM which uses a similar ensemble based defense. The proposed approach improves upon SmoothVLM using the partial input responses as supervisory signals to the VLM insted of as inputs to an ensemble.
Essential References Not Discussed: The paper covers necessary related work.
Other Strengths And Weaknesses: The approach is quite well motivated and original. It provides a training-free approach towards improved robustness. I appreciate the simplicity of the approach itself.
Other Comments Or Suggestions: None
Questions For Authors: I do not have any specific questions However, as suggested above, I would like to see the effect of number of evaluations on adversarial robustness.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We greatly appreciate your time and effort in reviewing our work, and would like to address each of your concerns.
### 1. Error Bar
> I would suggest adding error bars to the results where ever possible to account for the random nature of the evaluations.
Thanks for your suggestion! We conducted all our experiments three times to calculate the error bars and incorporated them into the results. Moreover, we extend our experiments to the open-source model Qwen2.5-VL-32B. Here we only show the error bars for Qwen2.5-VL-32B. (https://anonymous.4open.science/r/ICML2025-231D/README.md)
In Table 1, we demonstrate the statistical significance using a t-test, with p-values consistently less than 0.05, confirming their significance. Here, the t-test is performed between DPS and the baseline with the best performance.
Table 1. P-values from the t-test
| Dataset | RTA 100 | Self-Gen | MultiTrust | MM-safety |VisAttack |
| -------- | -------- | -------- |-------- |-------- |-------- |
| p-values | p=0.03 | p=0.02 | p=0.03 | p=0.04 | p=0.03 |
### 2. Analysis of the number of partial crops
> It would be additionally useful to analyse the effect of the number of partial crops used in the defense.
Thanks for your suggestion! Due to time limitations, we restrict our analysis to random crop-based partial copies (varying from 1 to 5) to evaluate their impact on defense performance, using the Self-Gen dataset. The experimental results in Table 2 demonstrate a positive correlation between the number of partial crops and defense performance.
Table 2. Attack Success Rate (ASR) Comparison Under Different Partial Crop Numbers
| Crop Numbers | 1 | 2 | 3 | 4 | 5|
| -------- | -------- | -------- |-------- |-------- |-------- |
| Self-Gen | 0.55 | 0.30 | 0.30 | 0.30 | 0.25 | Summary: This paper proposes a novel method, named Defense through Partial Perception Supervision (DPS), which focuses on evaluating and improving the robustness of Large Vision-Language Models (LVLMs) against vision attacks. Specifically, DPS leverage the outputs from cropped image processing to supervise the outputs from full image processing, to provide a correct understanding of images under vision attacks. Empirical experiments show DPS outperforms the baseline methods across six datasets and three LVLMs.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: This paper draws an analogy, treating responses to partial images as those from weak models and responses to full images as those from strong models. Is this hypothesis, or analogy, reasonable? According to the methodology, Part-Perc models and full models are essentially the same model, except that they process partial images and full images, respectively. Therefore, I am more inclined to view this as information fusion across different levels rather than a form of weak-to-strong learning.
Experimental Designs Or Analyses: Please refer to ‘Questions For Authors’.
Supplementary Material: Yes, I have reviewed most of the content in the supplementary material, including Sections A, B, C.1, C.2, C.3, C.4.1, and C.4.2.
Relation To Broader Scientific Literature: On one hand, this paper utilizes cropping operations to reduce the interference caused by vision attacks on images, which aligns with the idea of adversarial purification [1,2,3,4,5,6] in conventional adversarial defense. Both methodologies capitalize on the inherent sensitivity of adversarial perturbations to modifications like smoothing, compression, and denoising. On the other hand, the dual-stage framework enhances the robustness of LVLMs by transitioning from partial to full perception, which shares certain parallels with the concept of weak-to-strong learning [7,8], although it is not entirely analogous.
[1] Guo C, Rana M, Cisse M, et al. Countering adversarial images using input transformations. ICLR. 2018.
[2] Xie C, Wang J, Zhang Z, et al. Mitigating adversarial effects through randomization. ICLR. 2018.
[3] Xu W, Evans D, Qi Y. Feature squeezing: Detecting adversarial examples in deep neural networks. NDSS. 2018.
[4] Jin G, Shen S, Zhang D, et al. Ape-gan: Adversarial perturbation elimination with gan. ICASSP. 2019:
[5] Nie, W, Guo, Brandon, Huang, Y, et al. Diffusion Models for Adversarial Purification. ICML. 2022.
[6] Lee M, Kim D. Robust evaluation of diffusion-based adversarial purification. ICCV. 2023.
[7] Khan, A., Hughes, J., Valentine, D, et al. Debating with more persuasive llms leads to more truthful answers, 2024. URL https://arxiv.org/abs/2402.06782.
[8] Yang, Y., Ma, Y., and Liu, P. Weak-to-strong reasoning, 2024. URL https://arxiv.org/abs/2407.13647.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths.
1. The paper is well-organized and easy to follow.
2. The paper conducts extensive experiments, comparing with a wide range of defense methods on various datasets, thoroughly validating the effectiveness of the proposed approach.
Weaknesses.
1. Complexity and computational cost. For DPS, it provides supervision for the full model by constructing multiple part-perception supervisions composed of various cropping methods, such as center cropping, random cropping, and adaptive cropping. Compared to other SOTA methods, the multiple query mechanism inevitably increases the time overhead during the inference phase. According to the results in Section C.3, the efficiency of LS-DPS is only superior to Smoothvlm, and the efficiency results for IVA and Warning are missing in Section C.3.
2. Clarity. This paper draws an analogy, treating responses to partial images as those from weak models and responses to full images as those from strong models. Is this hypothesis, or analogy, reasonable? According to the methodology, Part-Perc models and full models are essentially the same model, except that they process partial images and full images, respectively. Therefore, I am more inclined to view this as information fusion across different levels rather than a form of weak-to-strong learning.
3. Soundness. I suppose that DPS is not entirely black-box. As shown in Figure 2, the original question specifically emphasizes whether there are animals in the image, which leads to information leakage. In other words, the mention of 'animal' in the original question aligns with the attack target of the adversarial attack, which significantly helps LVLMs focus on the animals in the image. If the original question only asks what is contained in the image without emphasizing animals, I believe the ASR would increase substantially. In other words, under supervision without specific content constraints, the second answer provided by the full model should randomly deviate from the original answer.
Other Comments Or Suggestions: 1. To enhance the clarity and structure of the manuscript, it is suggested to outline the specific contributions of this work in a bullet-point format in the Introduction section
2. A list of typos. Line 186, it should be 'There are no animals.' Line 303, 'employe' should be corrected to 'employ.' Line 354, In 'Specifically,' the following 'the' should be in lowercase."
Questions For Authors: 1. I am curious about the effectiveness of traditional adversarial purification techniques from deep learning models in defending LVLMs against vision-based attacks. To my understanding, adversarial purification can mitigate adversarial perturbations through two main approaches. The first approaches [1,2,3] involve post-processing methods, such as denoising, compression, total variation minimization, and image quilting, aiming to eliminate adversarial noise. The second approaches [4,5,6] leverage generative models like GANs or diffusion models to reconstruct a clean sample from the adversarial input. Adversarial purification method can achieve similar performance to DPS but with significantly higher efficiency. Unlike DPS, which requires multiple interactions with the large model to refine responses, adversarial purification only needs to process the adversarial sample once before passing it to the model for evaluation. This streamlined process makes it a more practical and efficient defense mechanism.
[1] Guo C, Rana M, Cisse M, et al. Countering adversarial images using input transformations. ICLR. 2018.
[2] Xie C, Wang J, Zhang Z, et al. Mitigating adversarial effects through randomization. ICLR. 2018.
[3] Xu W, Evans D, Qi Y. Feature squeezing: Detecting adversarial examples in deep neural networks. NDSS. 2018.
[4] Jin G, Shen S, Zhang D, et al. Ape-gan: Adversarial perturbation elimination with gan. ICASSP. 2019:
[5] Nie, W, Guo, Brandon, Huang, Y, et al. Diffusion Models for Adversarial Purification. ICML. 2022.
[6] Lee M, Kim D. Robust evaluation of diffusion-based adversarial purification. ICCV. 2023.
2. In Section 4, is the cropping methods, such as center cropping, random cropping, and adaptive cropping, the only way to obtain partial images? If so, please explain the underlying reasons. If not, please describe other feasible operations.
3. For the caption of Table 1, I find the use of the term ‘adversarial samples’ unclear. To my understanding, ‘adversarial samples’ specifically denote samples which are intentionally modified with imperceptible perturbations to deceive Deep Neural Networks. However, since the datasets referenced in Table 1 do not include VisualAttack or any adversarial manipulation, the presence of ‘adversarial samples’ in this context seems inconsistent and potentially misleading
4. In Table 2 (line 337), the results for Qwen-VL-Plus's adversarial defense against VisualAtt show a notable discrepancy between DPS and LS-DPS. Considering the discussion in Section 4.4, where only an additional prompt was introduced, the significant improvement in adversarial defense performance appears unexpected. Even if the model classifies the adversarial attack target as 'harmful,' such a substantial enhancement in defense effectiveness seems unlikely and warrants further explanation.
5. In the Implementation Details, please elaborate on the specific methodology behind adaptive cropping (AC). As evidenced in Table 3, AC consistently outperforms other cropping techniques across most datasets in terms of ASR. For datasets such as Self-Gen, MultiTrust, and MM-Safety, is the observed improvement in defense performance attributed to AC’s ability to directly eliminate textual contents through cropping?
6. In Section 5.3, as illustrated in Figure 4 and Table 4, why does the Warning Prompt significantly enhance the standard performance of Qwen-VL-Plus? Please provide a concrete example to explain the underlying reason for this improvement
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We greatly appreciate your effort and address each of your concerns.
### 1. Computational cost
We report the computational costs of baselines and our method, and would like to highlight that the computational overhead of our DPS delivers proportionally higher defense effectiveness. Specifically, for a fair comparison, we enhanced the baselines by running each model 6 times and implementing majority voting to achieve ensemble effects comparable to our method's six queries. As shown in Table 1 (https://anonymous.4open.science/r/ICML2025-1-3F36/README.md), even with additional inference time, the enhanced baselines cannot achieve proportional improvements in defensive effectiveness comparable to DPS.
### 2. Why use the "weak-to-strong" analogy
In the ICML 2024 best paper [Khan; ICML 2024], they establish a framework where an LLM without certain necessary information serves as the 'weak model', while the same LLM with full information access functions as the 'strong model.' Inspired by this, we apply a similar analogy—treating models with partial-image access as 'weak' variants to enhance 'strong' full-image models. Our black-box, training-free method leverages this approach across six attack datasets (lines 43-53).
While this could be viewed as information fusion, we believe the 'weak-to-strong' analogy particularly suits our method and will clarify this in our revision.
### 3. Black-box nature of our method
Our method operates in a fully black-box manner—the defense process works independently of questions. In Figure 2, questions aren't involved in our defense method.
Regarding concerns about "animals" keywords causing information leakage:
Jailbreak benchmark prompts contain no such keywords, yet our method performs well.
Replacing "animals" with "things" still triggers attack-related content, and removing animal-related statements entirely doesn't affect output accuracy.
We'll revise Figure 2 for clarity.
### 4. Comparing with adversarial purification.
Although DPS can also be viewed as a pre-processing approach, it fundamentally differs from adversarial purification:
- First, the scopes of attacks are different. Adversarial purification mainly focuses on removing pixel-wise adversarial noise (e.g., DiffPure [Nie; ICML 2022]). However, non-noise attacks like typographic attacks [Cheng; ECCV 2024] and jailbreak attacks [Shayegani; ICLR 2024] cannot be handled effectively by adversarial purification. In contrast, our DPS counteracts both adversarial noise and typographic attacks.
- Second, the underlying mechanisms are fundamentally distinct. Adversarial purification removes or breaks the effectiveness of adversarial noise. In contrast, our approach leverages the sensitivity of diverse attacks to cropping operations, naturally defending against non-noise attacks like the misleading attacks in our submission. As shown in Tables 2-3 (https://anonymous.4open.science/r/ICML2025-1-3F36/README.md), our DPS outperforms all purification baselines under both misleading and jailbreak attacks.
### 5. Why do we use cropping?
We use center cropping, random cropping, and adaptive cropping methods for these reasons:
- Efficiency: Cropping is highly concise with nearly no deployment cost.
- Effectiveness: Cropping effectively eliminates various attacks, including adversarial noises and typographic attacks.
While advanced models like SAM [Kirillov; ICCV 2023] could be used, our tests showed higher costs without significant improvements. We'll explore more effective cropping operations in future work.
### 6. Defense enhancement on Qwen-VL-Plus
The primary reason for the significant enhancement is that Qwen-VL-Plus's inherent safety mechanisms (which DPS relies solely on) are substantially weaker than those of the LLM-based safety checker (i.e., GPT-4o-Mini in our paper), which complements DPS in LS-DPS. In contrast, other LVLMs, whose safety mechanisms are closer to those of LLMs, exhibit less pronounced improvements. This aligns with previous findings that Qwen-VL-Plus's safety mechanisms are relatively weaker [Ying; arXiv 2024].
### 7. Details of adaptive cropping (AC).
**We have introduced adaptive cropping is in Appendix B.3 of the manuscript.** we use GPT-4o-Mini to locate text boxes and crop the remaining parts. It outperforms center/random cropping by more effectively eliminating textual content. We'll add this explanation to the Implementation Details and the Ablation Study section.
### 8. Why Warning Prompt work
We hypothesize that the warning prompt enhances safety awareness, making the model more cautious with deceptive queries common in the dataset, improving overall performance. See our case study in Figure 2 (https://anonymous.4open.science/r/ICML2025-1-3F36/README.md).
This observation aligns with prior defense works like [Zheng; arXiv 2025], which reports that explicit safety prompts can enhance baseline model accuracy. | null | null | null | null | null | null |
Arrow: Accelerator for Time Series Causal Discovery with Time Weaving | Accept (poster) | Summary: The authors proposed an accelerator framework named ARROW to address the efficiency bottleneck in multivariate time series causal discovery. By introducing time weaving encoding (capturing contextual trends between time points), an optimal time lag determination theorem based on XOR operations, and an intelligent pruning strategy, ARROW significantly improves causal discovery efficiency without compromising the performance of the original algorithms. Experiments demonstrate that the method is adaptable to various causal discovery algorithms and shows significant improvements in efficiency.
Claims And Evidence: Yes. The claims made in the submission are supported by clear and convincing evidence, including rigorous experimental results, comparative analyses with four different types of methods, and theoretical justifications, all of which collectively validate the effectiveness and efficiency of the proposed ARROW framework.
Methods And Evaluation Criteria: Yes. The proposed methods and evaluation criteria, including synthetic datasets and metrics, are well-suited for the problem of causal discovery in multivariate time series. They also demonstrate that ARROW can effectively address computational efficiency and accuracy issues in real-world applications.
Theoretical Claims: Yes. The correctness of the theoretical claims, including the proofs for the XOR-based time lag determination theorem (Appendix A), appears to be well-supported by logical reasoning and empirical validation, ensuring their validity within the proposed framework.
Experimental Designs Or Analyses: Yes. The experimental designs and analyses in the paper are well-founded and reliable, supported by thorough comparisons with baseline methods in different cases, and consistent results across various datasets, showcasing the robustness and reliability of the proposed approach.
Supplementary Material: Yes. I have reviewed appendix carefully.
Relation To Broader Scientific Literature: The paper's key contributions are deeply connected to the broader scientific literature by addressing efficiency bottlenecks in causal discovery, introducing innovative encoding and optimization techniques, and demonstrating practical value through empirical validation, thereby advancing the field and its real-world applications.
Essential References Not Discussed: No. The related work section in the paper is comprehensive and thorough.
Other Strengths And Weaknesses: **Strengths**
S1. The proposed accelerator, ARROW, aims to achieve high efficiency for various existing causal discovery methods.
S2. Experimental results demonstrate that the proposed accelerator, ARROW, successfully addresses the efficiency bottlenecks of existing causal discovery methods, achieving a maximum speedup of 153 times. ARROW is a research with significant practical implications.
S3. The paper is well written.
**Weaknesses**
W1. The paper mentions using XOR operations to determine the optimal time lag, but is the XOR operation applicable to all types of time series data? Are there certain data distributions or causal relationship patterns that could cause the XOR operation to fail?
W2. Compared to existing high-performance causal discovery methods (such as those based on GPU acceleration), what are the advantages of ARROW?
W3. From the experimental results, it appears that ARROW shows more significant acceleration and performance improvement for nonlinear causal relationships. Please provide a detailed explanation.
Other Comments Or Suggestions: None
Questions For Authors: Answer W1, W2, W3.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We appreciate the insightful comments and our responses are detailed below.
**Response to W1**: Our method is better suited for monotonic causal relationships and less applicable to purely nonlinear ones. Future work should explore trend patterns in nonlinear relationships, such as periodicity. In complex scenarios, some nonlinear relationships don't affect overall monotonicity, and thus, our method remains effective. This is validated by our experiments on synthetic datasets with nonlinear causal relationships, as shown in Table 2 of the paper.
**Response to W2**: GPU-based acceleration provides hardware-level acceleration. In contrast, ARROW is the first data-level acceleration solution, particularly suited for high-dimensional time series data, and can be used alongside GPU acceleration.
**Response to W3**: Nonlinear datasets pose challenges for the downstream causal discovery algorithms. However, by identifying time lags and pruning variables in advance, our approach enhances time lag discovery, causal graph accuracy, and causal mining efficiency.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their detailed rebuttal and confirm to vote for acceptance. Thanks. | Summary: This paper presents ARROW, an accelerator for time series causal discovery that overcomes the efficiency bottleneck of existing causal discovery methods. The concept of time weaving is introduced, along with an XOR-based time lag discovery strategy, which leverages theoretical derivations to rapidly determine the optimal time lag, significantly improving computational efficiency. Additionally, a pruning strategy is designed to optimize the search space. The experimental results demonstrate that ARROW significantly accelerates the causal discovery process.
Claims And Evidence: Yes. This paper proposes a general accelerator for time-series causal discovery algorithms. Its effectiveness is validated through theoretical analysis and empirical evaluation across four different types of causal discovery methods.
Methods And Evaluation Criteria: Yes. To improve the efficiency of temporal causal discovery, the paper introduces the concept of time weaving and an XOR-based sequence analysis method, leveraging bit-level operations to accelerate the process. Additionally, the pruning strategy ensures the accuracy of time lag identification. The paper evaluates the performance of both time lag discovery strategy and causal graph construction, demonstrating the method's reliability and effectiveness in causal discovery tasks.
Theoretical Claims: Yes. I carefully reviewed the proof section in the appendix, which thoroughly demonstrates Theorem 4.2, and provides an explanation for why the threshold value is set to 0.33 in the pseudocode.
Experimental Designs Or Analyses: Yes. The experimental design and analysis are highly reasonable. To validate the effectiveness of the proposed method, the authors conducted tests on both synthetic and real-world datasets using four different causal discovery algorithms. Additionally, they set up both constant time lags and multiple time lags for the varying relationships between variables in multivariate scenarios. Extensive evaluations demonstrate that ARROW can achieve stable and efficient causal discovery across most causal discovery algorithms.
Supplementary Material: Yes. I reviewed the appendix, including the proof, discussion, experimental setting details, and additional analysis of the experimental results. The appendix provides a valuable supplement to the main text, enriching and enhancing the overall content of the paper. Moreover, the inclusion of the source code ensures the reproducibility of the results.
Relation To Broader Scientific Literature: Currently, most causal discovery research focuses on causal discovery strategies in different scenarios, lacking a general accelerator framework to improve the efficiency of causal discovery. This paper primarily addresses the efficiency problem in time series causal discovery and can be applied to most existing causal discovery methods.
Essential References Not Discussed: I believe that the paper has sufficiently discussed the works related to its research.
Other Strengths And Weaknesses: S1. The proposed accelerator, ARROW, effectively addresses the efficiency bottleneck of existing causal discovery methods. ARROW is with broad applicability and practical value.
S2. The paper introduces a novel concept of "time weaving" and leverages XOR technology for sequence analysis, offering an elegant approach that greatly enhances computational efficiency.
S3. The paper designs an efficient and effective pruning strategy that accurately identifies the most relevant candidate variables, reduces the search space, and quickly determines the optimal time lag. Additionally, the paper provides strong theoretical support.
S4. Experimental results fully validate the effectiveness of the accelerator, demonstrating it superior performance in computational efficiency.
W1. In cases when different time lags exist between multivariate variables, calculating the time lag pairwise may increase the complexity. Would this affect the overall efficiency?
W2. The paper introduces the time weaving concept with a hyperparameter w. It would be better to experimentally verify the impact of this parameter on both efficiency and effectiveness.
W3. The description of handling irregular data is not very clear. If there is a lot of missing data, should the window size be fixed or variable?
Other Comments Or Suggestions: 1. In the sentence "their time lag discovery and causal graph generation performance is assessed using three metrics," "is" should be "are."
Questions For Authors: {W1, W3}
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: We appreciate the positive comments and our responses are detailed below.
**Response to W1**: Calculating time lag is the same for both constant and multiple lags, with no additional complexity for multiple lags. Our pruning strategy reduces the time dimension and variable count, while binary computation improves the efficiency.
**Response to W2**: In the rebuttal, we add a comparison experiment on the hyperparameter w, evaluating the SURD algorithm on a nonlinear dataset with w set to {3, 9, 15, 21}. The results below indicate that larger w values slightly accelerate ARROW without significantly affecting time lag discovery or causal graph accuracy, further highlighting its effectiveness in handling irregular time series data.
| w | Graph AUC | Lag AUC | Time(s) |
|-----------|-----------|---------|---------|
| 3 | 0.745 | 0.923 | 6.273 |
| 9 | 0.686 | 0.886 | 6.253 |
| 15 | 0.746 | 0.941 | 6.173 |
| 21 | 0.716 | 0.912 | 6.030 |
**Response to W3**: The window size for handling irregular data is dynamically adjustable, based on the sparsity of the data. | Summary: This paper presents ARROW, an acceleration framework for causal discovery in time series data. ARROW aims to improve the efficiency of causal discovery algorithms by reducing computational complexity through three sequential steps: Time Encoding with Time Weaving transforms time series into binary tuple representations to capture local trend dynamics. Time Lag Discovery via XOR Analysis identifies optimal time lags by analyzing trend patterns using XOR operations, ensuring efficient and accurate selection. Candidate Pruning Strategy reduces the search space by filtering out irrelevant variable pairs, improving efficiency without compromising accuracy. ARROW successfully accelerates four time series causal discovery algorithms by up to 153x on 25 synthetic and real-world datasets while improving accuracy in most cases.
################################
Added after rebuttal period: I checked the authors' response, but I still believe that the assumption in Theorem 4.2 of the paper is too strong. As the authors mention, the method is effective in scenarios with monotonic causal relationships, but in more complex systems, such as financial markets, climate, where there are multi-variable interactions, the relationship between two variables is likely to be influenced by other variables, non-stationarity . This could cause the numerical trend consistency to disappear, rendering the ARROW method ineffective. The authors claim that there are no specific requirements for data construction, but if two variables, A and B, only have a dependency in the first 1/5 of the time and the dependency disappears afterward, I believe it would be difficult for ARROW to detect a significant cooperative positive/negative trend that exceeds the set threshold in this case. In summary, I think the core assumption in Theorem 4.2 has certain limitations, so I keep my original score.
Claims And Evidence: The authors' claims in this paper are theoretically supported, specifically by Theorem 4.2. However, this theorem assumes that if a variable v has a causal effect on v′, then their trends are more likely to be either positively or negatively correlated. I believe this assumption is overly idealized, as real-world systems often involve multiple interacting variables, where the effect of one variable may be offset or influenced by others, making the overall trend less predictable.Under the authors' assumptions, the proposed methods and evaluation criteria are reasonable. However, their applicability to real-world scenarios remains uncertain.
Methods And Evaluation Criteria: Under the authors' assumptions, the proposed methods and evaluation criteria are reasonable. However, their applicability to real-world scenarios remains uncertain.
Theoretical Claims: I checked the proof of Theorem 4.2, and under the assumptions proposed by the authors, the proof seems to be correct. However, I believe that the assumptions may not hold in real-world scenarios. The theorem assumes that if a variable v has a causal effect on v′, their trends should be either positively or negatively correlated. While this might be reasonable in a simplified setting with a single influencing variable, real-world systems are often influenced by multiple interacting factors, making the actual trend relationships more complex.
Experimental Designs Or Analyses: The experiments include both synthetic and real-world experiments. The synthetic experiments consider multiple scenarios and appear to be relatively comprehensive. However, I do not get the information of the number of variables and the edge density in each synthetic dataset from the paper. Additionally, the real-world dataset is not specifically described in detail in the paper. It is also unclear whether the synthetic experiments effectively validate the hypotheses proposed in the paper.
Supplementary Material: I reviewed the appendix, specifically Section A (Proofs) and Section B (Discussions). In Section A, I examined the proof of Theorem 4.2, which seems correct under the authors' assumptions. However, I believe the assumptions may not fully align with real-world causal dynamics.
Relation To Broader Scientific Literature: This proposed method is based on existing time series causal discovery methods by introducing a candidate variable selection step and a time lag determination strategy. These steps reduce the number of variable pairs that need to be tested, effectively accelerating the causal discovery process.
Essential References Not Discussed: I do not see any major issues with the related work. The paper cites relevant prior research in time series causal discovery and acceleration techniques, providing sufficient context for its key contributions.
Other Strengths And Weaknesses: Strength:
1. The paper is clearly written and presents its methodology in an understandable way.
Weaknesses:
1. I believe the proposed method may not be well-suited for real-world scenarios, as a variable is often influenced by multiple other variables in general. The impact of one variable may be offset by the effects of others, meaning that trend changes do not necessarily exhibit a strictly positive or negative correlation, which contradicts the assumption of Theorem 4.2.
2. The authors do not specify the specific conditions under which their method is applicable, such as whether it assumes stationarity or causal sufficiency. Clarifying these assumptions would help better understand the method’s limitations and applicability.
3. Although the authors claim that their method is not a causal discovery approach but rather a data-level acceleration framework applicable to most causal discovery methods, I believe it fundamentally serves as a causal discovery pre-filtering step. It leverages trend-based analysis to determine whether a time series is a potential causal candidate for another variable. However, I believe this trend-based approach is prone to misclassification. While it may be beneficial in sparse causal graphs, its effectiveness could diminish as the relationships between variables become more complex.
Other Comments Or Suggestions: 1. Provide more details on the synthetic datasets, including the number of variables and the density of causal graphs, to enhance the transparency of the evaluation.
2. Clarify the description of the real-world dataset, particularly the specifics of the DREAM3 dataset, to better assess the method’s applicability.
3. Further discuss the limitations of the approach, especially its potential challenges in unobserved confounders.
Questions For Authors: 1. Theorem 4.2 suggests that if variable v has a causal effect on v′, the probability of the XOR operation results being (0,0) or (1,1) is higher than (0,1) or (1,0), implying that the trends tend to be either consistently similar or consistently opposite. Is there any evidence to support this claim? How do you account for the influence of other variables and noise in this process?
2. Does this method apply to scenarios with hidden variables? Since hidden variables are common in real-world data, I believe they may affect the ARROW process, potentially excluding the correct variables and reducing the effectiveness of the subsequent causal discovery methods.
3. Is there a stationarity assumption in your approach? I believe non-stationary time series could disrupt the ARROW process and affect its performance.
4. How is the pruning threshold in the final step determined? Is there a theoretical basis for it, or is it purely empirical?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thanks for your valuable suggestions.
**Response to Q1 and W1**:
1. **Response to the assumption of Theorem 4.2:** Our method is well-suited for scenarios with monotonic causal relationships, as assumed in Theorem 4.2, where trend changes show consistent positive or negative correlations in most consecutive periods. Many real-world datasets (e.g., NetSim[ICLR23], DREAM3, NetSIM[ICLR20], Turbulence[Nature24]) exhibit monotonic causal relationships, making our method applicable to practical scenarios. Additionally, we incorporate nonlinear relationships in our experiments (Table 2), with results further validating the robustness of our approach.
3. **Response to a variable is often influenced by multiple other variables.** Our method filters variable pairs without strong correlations to accelerate downstream causal discovery. The downstream casual discovery method can support either the case when a variable is influenced by multiple other variables or the case when a variable is influence by one variable, which is not our focus. In situations where the impact of one variable is offset by others, ARROW will not incorrectly prune the edges in most cases. For example, in the case of 2A(t-1) = C(t) and -2B(t-1) = C(t), with A and B being functions of y=x, the effects of A and B cancel each other out, making C always 0. Here, A’s time weaving encoding is {1, 1, 1}, B’s is {0, 0, 0}, and C’s is {0, 0, 0}, so A XOR C always gives {0, 0, 0, 0}, and A XOR B gives {1, 1, 1, 1}. According to Theorem 4.2, both edges will be retained, with causal relationships determined by downstream algorithms. However, in complex scenarios challenging for downstream algorithms, ARROW may misprune, reflecting a trade-off between accuracy and efficiency. Our method performs well on both synthetic datasets and real-world datasets, which involve multiple variable influences, as shown in Tables 1 and 2 in the paper.
4. **Response to noise.** Our datasets include both synthetic datasets and real-life datasets used in experiments incorporate the noise. In the synthetic datasets, we set the noise standard deviation to 0.1. In this rebuttal, we further validate the impact of different noise levels on ARROW by setting the noise standard deviations in the nonlinear causal relationship dataset to 0.1, 0.5, and 1. The results below demonstrate that noise has a minimal effect on performance. Additionally, the acceleration efficiency remains stable across these noise levels.
| Parameter | Graph AUC | Lag AUC | Time(s) |
|-----------|-----------|---------|---------|
| 1 | 0.842 | 0.895 | 7.166 |
| 0.5 | 0.870 | 0.911 | 7.047 |
| 0.1 | 0.846 | 0.894 | 7.055 |
**Response to Q2**: The hidden variable problem remains a significant challenge in causal discovery[CSUR23]. Our method focuses on accelerating causal mining algorithms, while handling hidden variables is determined by downstream algorithms (e.g., SURD supports hidden variables, but PCMCI does not). When hidden variables exist, we accelerate the discovery only based on observed variables, thus the acceleration process for observed variables remains unchanged regardless of the number of latent variables. This rebuttal further validates ARROW on a synthetic dataset with 10 observed and 4 hidden variables. The results below confirm its improvements in time lag discovery, causal graph accuracy, and up to 70x speedup.
| Metric | SURD+ARROW | SURD |
|-------------|---------------|---------------|
| Graph AUC | 0.645 | 0.484 |
| Lag AUC | 0.869 | 0.502 |
| Time(s) | 6.219 | 426.206 |
**Response to Q3 and W2:**
As an accelerator for causal mining, ARROW makes no specific assumptions about the dataset; these are determined by downstream methods. For instance, LiNGAM assumes linear data generation, non-Gaussian disturbances, and no unobserved confounders, while PCMCI assumes causal sufficiency, faithfulness, and the Markov condition. For datasets with insufficient causality, ARROW does not prune indirect variables that still show lagged correlations with other observed variables, allowing downstream algorithms to make the final decision.
**Response to Q4:**
The pruning threshold is set based on the sparsity levels in real-world scenarios. Currently, the threshold is set to 0.25, which is set based on experience. A larger threshold retains more edges, reducing efficiency, while a smaller threshold may prune important correlations, impacting accuracy.
**Response to W3:**
The datasets we generated in our experiments have sparsity levels of 0.2 and 0.4, while the real-world datasets, including the Dream3 dataset, have sparsity levels ranging from 0.1 to 0.25, and the NetSim dataset has a sparsity level of 0.14 (see the table below).
| Dataset | Sparsity |
|--------------------|----------|
| Dream3-Ecoli1 | 0.11 |
| Dream3-Yeast2 | 0.25 |
| Netsim | 0.14 | | Summary: The paper investigates the computational efficiency of causal discovery in multivariate time series. Existing methods face high computational costs when applied to large-scale data, primarily due to issues such as data binning, time lag selection, and candidate set explosion. To address these challenges, the authors propose ARROW, a method designed to accelerate time series causal discovery. ARROW optimizes time lag selection using time weaving encoding and XOR operations while reducing computational overhead through a pruning strategy. Experimental results demonstrate that ARROW significantly improves efficiency while maintaining the accuracy of causal discovery.
## Update after rebuttal: I have read the rebuttal and I think my concerns are well addressed, I will keep my score this time (for acceptance).
Claims And Evidence: Yes. The claims made in the submission are supported by clear and convincing evidence, particularly through experiments on synthetic and real-world datasets across various time lag scenarios.
Methods And Evaluation Criteria: Yes. The proposed methods and evaluation criteria are suitable for addressing the problem and validating the approach.
Theoretical Claims: Yes. The theoretical proof regarding the time lag selection strategy has been checked, with a rigorous logic that effectively validates Theorem 4.2.
Experimental Designs Or Analyses: Yes. The experiment validates ARROW using synthetic and real-world datasets across different time lag scenarios, which is quite convincing.
Supplementary Material: Yes, including appendix and codes.
Relation To Broader Scientific Literature: Building on prior constraint-based, score-based, granger-based, and information-theoretic methods, ARROW enhances time series causal discovery by introducing time weaving encoding and XOR-based analysis to optimize time lag selection and reduce candidate set complexity, significantly improving efficiency without compromising accuracy.
Essential References Not Discussed: The author provides a comprehensive and thorough review of related work.
Other Strengths And Weaknesses: S1. The proposed acceleration framework ARROW has significant value and partially addresses the efficiency bottleneck in causal discovery.
S2. The paper introduces time weaving representation, which represents trends between three consecutive points using a compact binary format, reducing discretization costs and capturing dynamic time series features.
S3. To optimize time lag selection, it leverages XOR operations to analyze trend patterns, eliminating brute-force search and improving efficiency and reliability.
S4. A pruning strategy for candidate sets is proposed to select only the most causally relevant variables, mitigating candidate set explosion and enhancing scalability.
S5. The experiment is robust, thoroughly examining various time lag scenarios and validating the approach on both synthetic and real-world datasets.
W1. Since time weaving encoding retains only trend information while discarding exact numerical changes, it requires further explanation for the observation that our causal discovery performance in the experiment surpasses the original algorithm.
W2. The authors need further clarification whether ARROW's time weaving representation might perform poorly on stationary sequences or high-noise data.
W3. The experimental setting does not explicitly mention the data scale. It would be better to see the impact of different dataset sizes on the performance of the ARROW method.
Other Comments Or Suggestions: NA
Questions For Authors: Please See W1, W2, W3.
Ethics Expertise Needed: ['Other expertise']
Ethical Review Concerns: NA.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We appreciate the positive comments and our responses are detailed below.
**Response to W1**: Causal relationships rely more on trend synchronization than numerical transmission. Performance improvements come from three aspects:
* Noise Robustness – Binary encoding filters noise and highlights structural causal patterns.
* Lag Discovery – XOR operations accurately identify lagged causal relationships and optimize time lag selection.
* Pruning Strategy – Pruning reduces the search space.
**Response to W2**: ARROW performs well in stationary sequences and noise scenarios.
* In stationary sequences, binary encoding of temporal weaving stabilizes, and XOR results remain similar, preventing the pruning. Note that, causal discovery relies on the downstream inference, and ARROW does not impact the accuracy of downstream algorithm.
* Temporal weaving filters high-frequency noise, but in high-noise environments (e.g., outliers), pre-processing is needed to reduce interference and improve causal identification accuracy.
**Response to W3**: We have included different data sizes, including synthesized datasets with 10 variables and 1000 time points, Dream3 dataset with 10 genes and 84 time points.
---
Rebuttal Comment 1.1:
Comment: The reviewer thanks authors for their rebuttal and confirms the vote for acceptance. Thanks. | null | null | null | null | null | null |
Revisiting Noise Resilience Strategies in Gesture Recognition: Short-Term Enhancement in sEMG Analysis | Accept (poster) | Summary: This paper proposes a noise-robust method for surface electromyography (sEMG)-based gesture recognition. The authors emphasize the importance of short-term signal learning in mitigating the interference of local noise, which could otherwise degrade the modeling of long-term signals. Specifically, the paper introduces an sEMG module that separately models long-term and short-term signal features. Within this module, the authors propose a advanced masking strategy to prevent excessive signal isolation, a common issue when applying standard masking techniques in masked autoencoders. Additionally, they propose an Asymmetric Optimization method that prioritizes difficult cases during training, enhancing model robustness.The effectiveness of these designs is validated through extensive experiments, demonstrating significant improvements in noise resistance and overall gesture recognition performance.
Claims And Evidence: The validation experiments in Fig. 3 demonstrate the effectiveness of the proposed methods. However, I still have some doubts about whether the reduction of noise interference is directly attributed to the enhancement of short-term feature modeling. While it is a common sense that combining short- and long-term feature modeling generally improves performance, the specific contribution of short-term modeling in mitigating noise interference remains unclear. o strengthen this claim, additional qualitative analysis is needed. For instance, providing insights into the proportion of data affected by local noise and the duration distribution of such noise would help clarify the impact of short-term modeling on noise robustness.
Methods And Evaluation Criteria: The dataset and evaluation metrics are reasonable in this paper.
Theoretical Claims: I have reviewed the algorithm for sEMG signal masking and the arguments regarding the advantages of short- and long-term signal modeling. However, the theoretical analysis supporting how short-term modeling improves the model's robustness to noise is currently lacking.
Experimental Designs Or Analyses: I have reviewed the experimental design, particularly the noise robustness evaluation in Fig. 3. While the results suggest improved performance under different noise conditions, the paper does not provide details on how the noise levels were set or whether they align with real-world conditions. A more rigorous analysis, such as comparing against real sEMG noise distributions, would strengthen the validity of the conclusions.
Supplementary Material: I read all the content in the supplementary material, including how to add noise to a signal, the details of sEMG signal masking, and the real-world deployment of this method.
Relation To Broader Scientific Literature: [1] A novel event-driven spiking convolutional neural network for electromyography pattern recognition. [2] SpGesture: Source-Free Domain-adaptive sEMG-based Gesture Recognition with Jaccard Attentive Spiking Neural Network haven't been discussed and compared in this paper.
Essential References Not Discussed: I haven't found related papers that were not discussed in this paper.
Other Strengths And Weaknesses: 1. The paper lacks a direct comparison between employing Focal Loss and Asymmetric Optimization. Evaluating their respective impacts on model performance would provide a clearer justification for the proposed optimization strategy.
2. There is no comparison between conducting long-term and short-term modeling in a non-parallel mode. Additionally, the computational cost differences between the parallel and non-parallel approaches should be supported by experimental results, not just a claim.
3. What types of examples are identified as difficult, and why? Furthermore, what is their proportion in the dataset? Providing more details on this aspect would help clarify the role of Asymmetric Optimization in handling hard cases.
4. Have you considered directly applying the typical masked strategy used in standard reconstruction methods? It would be helpful to provide experimental results comparing this approach with your proposed strategy.
5. What is the proportion of difficult cases in the dataset? Additionally, how does model performance vary across different difficulty levels?
Other Comments Or Suggestions: No.
Questions For Authors: No.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: # Response to Reviewer QgDP
Thank you for your professional review and valuable time. Your positive assessment is incredibly encouraging to our research team. We sincerely appreciate your thoughtful comments and would like to address each of your concerns as follows:
## Broader Scientific Literature
Thank you for bringing this to our attention. We will discuss these two critical works (SCN, SpGesture) in the camera-ready version from the perspective of SNN development in sEMG gesture recognition.
## Focal Loss vs Asymmetric Optimization
Thank you very much for bringing this to our attention. Although this was not our core contribution, we appreciate that providing these results would offer readers more valuable insights.
We have conducted additional tests on the Grabmyo:
- STET + Focal Loss: 89.42%
- STET + Asymmetric Optimization: 90.54%
## Modeling in Non-Parallel vs Parallel
Thank you for your suggestion. We compared the Focal Transformer (non-parallel) with our method.
| Backbone | Grabmyo (ACC) | GPU-A6000-Latency (ms) |
|----------|---------------|------------------------|
| Focal Transformer | 84.56 | 4.7 |
| STET | 90.72 | 3.9 |
In our inference speed experiments, we used a 4-stage Focal Transformer, while STET used two long-term layers plus two short-term layers.
## Concerns about Hard Samples
We appreciate your insightful question. Below is our detailed response:
### Definition of Hard Samples
**Hard Positives**:
- **Definition**: Positive samples (ground-truth $y_{i,j} = 1$) with low predicted probabilities $\widehat{y}_{i,j}$
- **Weighting Mechanism**: The term $(1 - \widehat{y}_{i,j} )^{\gamma^+}$ assigns higher weights to samples where the model lacks confidence.
### Proportions and Dynamics
| Sample Type | Estimated Hard Proportion | Remarks |
|-------------|---------------------------|---------|
| **Hard Positives** | 40%-60% (early training) | Decreases to ~20% as the model converges due to asymmetric focusing. |
| **Hard Negatives** | 5%-10% (persistent) | Suppressed via $\gamma^- < \gamma^+$ and $m=0.2$; absolute count remains high but influence is reduced. |
## Classic Mask Method vs Our Method
Thank you for your suggestion! We actually included related experiments in Appendix A.3. Classical mask uses a standard pretraining with a general mask, employing a standard masking approach.
| Backbone | Masking method | AG Noise↓ (%) | MG Noise↓ (%) |
|----------|----------------|--------------|--------------|
| Transformer | classical | 22.00 | 15.00 |
| Transformer | ours | 15.00 | 13.00 |
We understand your concern and will highlight this experiment more prominently in the revised paper.
## Concerns about Noise Reduction
We appreciate your thoughtful comment. We would like to clarify this relationship with additional insights and reasoning.
The connection between short-term feature modeling and noise resilience is rooted in the fundamental characteristics of sEMG signals and common noise patterns.
1. **Electrode-skin interface noise**: Movement artifacts and momentary changes in skin-electrode contact typically manifest as brief bursts of interference rather than consistent long-term corruption [1, 2].
2. **Environmental electromagnetic interference**: Such interference usually occurs in short, sporadic patterns rather than continuously affecting the entire signal [3].
It's worth noting that we're **not focusing on constant background noise** (such as power line interference at 50/60Hz), as these can be effectively removed through conventional filtering techniques. Our primary concern is with **transient, unpredictable noise patterns** that are more challenging to address through traditional methods and represent common challenges in real-world applications.
By adopting a sliding window attention mechanism, our STEM concentrates on local temporal windows to isolate noise within affected segments, preventing it from spreading across the entire sequence. This method outperforms the long-term decoder under various noise types (as shown in Figure 3) and offers multiple sampling perspectives: overlapping windows that partially include noisy regions can still capture valuable information, thus reinforcing resilience.
**We believe visualizing the noise distribution is a worthwhile idea**; however, precisely pinpointing the noise remains challenging, so we cannot present such a visualization yet, **but we will continue to work toward it**.
### References
[1] Surface electromyography: physiology, engineering, and applications. IEEE Press.
[2] Filtering the surface EMG signal: Movement artifact and baseline noise contamination. Journal of Biomechanics
[3] Sampling, noise-reduction and amplitude estimation issues in surface electromyography. Journal of Electromyography and Kinesiology
We sincerely appreciate all your valuable suggestions and will incorporate them into our revised manuscript. Your positive assessment is greatly encouraging to our research team.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors’ answers to address my concerns, but I still believe that a quantitative analysis of the noise data is essential. Specifically, how many instances of electrode-skin interface noise and environmental electromagnetic interference are present in the dataset? The authors mention the sporadic nature of these artifacts, but this does not replace the need for statistical evidence.
If the dataset is large, a reasonable approach would be to use random sampling combined with multi-person manual verification to estimate the prevalence and impact of these noise types. The proportion of noise-contaminated samples plays a significant role in determining this study's overall validity and impact. Without such information, assessing whether the proposed method addresses a widespread problem or only rare edge cases is difficult.
Therefore, I will maintain my score until the authors provide a more rigorous quantitative analysis of noise occurrence within the dataset.
---
Reply to Comment 1.1.1:
Comment: Thank you for your constructive feedback. After detailed research and analysis, our research team has developed an approach to quantitatively analyze the noise distribution in our dataset, as you suggested.
We have employed two key metrics to analyze the noise distribution quantitatively:
1. **Signal-to-Noise Ratio (SNR)**: We analyzed the spectral properties of each sEMG signal recording and measured the SNR (in dB) as the ratio between the power of the signal to the power of the noise. [1] **The power of the noise was estimated as the power of sEMG recordings during the rest trial** (when sEMG acquisition is least susceptible to interference).[1] The average SNR across all signals in our datasets was 14.565 ±6.385 dB.
2. **Correlation Coefficient of Normality (CCN)**: This metric was used to analyze amplitude distribution. For a static contraction with moderate force, sEMG can be modeled as a filtered, random, white Gaussian noise process.[2] It has been suggested that a test of normality can provide a measure of biosignal quality, where a signal amplitude with a non-Gaussian distribution would be considered contaminated. **We generated a Gaussian distribution with equal mean and variance to that of the recording.** [3] The CCN is defined as the Pearson correlation coefficient between the histogram bin values of the sEMG recording and the normal density function value for the corresponding bins.[4] A value close to 1 indicates a normal distribution. The CCN of all signals in our dataset was 0.975±0.041.
In the real-world dataset, noises are complexly and randomly intermingled, making precise quantitative isolation impossible. Therefore, we used SNR and CCN calculations to analyze the overall noise distribution in spectral and amplitude properties. Our measurements of the GrabMyo dataset revealed the following distribution:
| Metric | **High noise (SNR < 10 dB)** | **Moderate noise (10 dB ≤ SNR ≤ 20 dB)** | **Low noise (SNR > 22 dB)** |
| --- | ----------------------------- | ----------------------------------------- | --------------------------- |
| SNR | 28.2% | 56.3% | 15.5% |
Through visual inspection and based on our experience, we identified samples with **CCN < 0.93** (11.6% of the total samples) as having severe non-Gaussian signals, which may be affected by transient noise such as motion artifacts or electromagnetic interference.
To demonstrate the effectiveness of our approach across different noise levels [1], we provide comparative model accuracy:
| Model | **High noise (SNR < 10 dB)** | **Moderate noise (10 dB ≤ SNR ≤ 20 dB)** | **Low noise (SNR > 22 dB)** | **Motion artifacts or EMI (CCN < 0.93)** |
| ------------------- | ----------------------------- | ----------------------------------------- | --------------------------- | ----------------------- |
| Informer | 78.16 | 86.44 | 92.74 | 72.17 |
| Informer+STEM (ours) | 83.32 | 87.28 | 93.13 | 78.63 |
| STET (ours) | 85.93 | 89.62 | 92.89 | 80.22 |
As the results demonstrate, our method shows significant performance improvements on samples with high and moderate noise levels in real-world datasets.
We hope this quantitative analysis addresses your concerns regarding the prevalence and impact of noise in real word dataset and demonstrates the practical value of our proposed method. We sincerely thank you for your professional feedback. Your positive assessment is greatly encouraging to our research team.
[1] Automatic assessment of electromyogram quality. J. Appl. Physiol
[2] A nonstationary model for the electromyogram. IEEE Trans. Biomed. Eng
[3] A Review of Techniques for Surface Electromyography Signal Quality Analysis. IEEE Reviews in Biomed. Eng
[4] Multi-day dataset of forearm and wrist electromyogram for hand gesture recognition and biometrics. Scientific Data | Summary: This paper specially captures the short-term temporal dependencies in sEMG-based gesture recognition. By designed a self-supervised pretrained method and two short/long-term heads, the proposed method achieve SOTA performance.
## update after rebuttal
The authors have resolved most of my concerns.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: Yes
Experimental Designs Or Analyses: Yes
Supplementary Material: Yes
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Pros:
1. This papre is well written and easy to understand with clear visualizations.
2. The proposed methods achieves strong performance.
3. The authors give plentiful ablations and visualizations to support the method.
4. The proposed method seems novel with a new MAE-based self-supervised pre-training method and two heads to capture short/long-term temporal dependencies.
Cons:
1. The methods included for comparison are old. Most are published before 2021 and only one of them is published in 2023. It seems that this area has drawn little attention in recent years, or authors have neglected several recent works.
2. The authors claim that they use two datasets for evaluation (GRABMyo and the Ninapro DB2, line 113), but i only observe results on one dataset (tab.1). It seems that the results are insufficient.
3. While the proposed method seems novel, some of the components have been used in previous methods. For example, MAE-based self-supervised pre-training is widely used in previous self-supervised methods. The long-term modeling method is a simple attention implementations, while the short-term modeling method is implemented by windowed attention.
Other Comments Or Suggestions: N/A
Questions For Authors: My overall concerns focus on the old methods included for comparison, and whether the authors use adequate datasets for evaluation.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: # Response to Reviewer ybGK
We sincerely appreciate your thorough review and valuable feedback on our manuscript. We are grateful for your recognition of our paper's strengths, including the **clear writing style**, **effective visualizations**, **strong performance** , **comprehensive ablation studies**, and the **novelty of our approach**. Your positive assessment is greatly encouraging to our research team.
We have carefully considered your concerns and would like to address them as follows:
## Regarding Dataset Evaluation
We apologize for causing this misunderstanding. While we did indeed evaluate our method on **both GRABMyo (results presented in Table 1) and Ninapro DB2 (results presented in Table 4)**, we acknowledge that this was not sufficiently clear in our presentation. We will revise the manuscript to clearly indicate the comprehensive nature of our evaluation.
Furthermore, to provide even more robust evaluation, we have **added results on an additional dataset, Ninapro DB5**, as shown in the table below:
| Model | Accuracy (%) |
|-------------|--------------|
| LST-EMG-Net | 82.23 |
| Informer | 85.22 |
| TEMGNET | 80.74 |
| STET (ours) | **87.61** |
This enhancement brings our total to **three distinct datasets**, offering a more comprehensive assessment of our method's performance across varied conditions.
## Regarding Comparison Methods
We thank you for highlighting the need for more recent comparison methods. Following your suggestion, we have **expanded our comparisons to include two state-of-the-art methods published in late 2024**:
| Model | GRABMyo (ACC) | DB2 (PCC) | DB5 (ACC) |
|-------------------------|---------------|-----------|-----------|
| Spgesture (NeurIPS 2024)| 88.06% | 0.84 | 86.32 |
| LRNN (TIST 2024) | 86.75% | 0.82 | 86.20 |
| STET (ours) | **90.76%** | **0.88** | **87.61** |
These additional comparisons demonstrate that **our method maintains its performance advantage even against the most recent approaches** in the field.
## Our Main Contributions
To summarize, we would like to reiterate the key contributions of our work:
1. **Introduction of sEMG Signal Masking**: We propose a novel self-supervised pretraining technique using sEMG Signal Masking (Sensor-wise and Contiguous Masked Segments following a Geometric Distribution) to leverage the inherent variability in sEMG data.
2. **STEM Module for Enhanced Noise Resilience**: From the perspective of improving short-term feature representation, we propose STEM, an adaptive and noise-resistant module. Integrating STEM into various neural networks has demonstrated significant performance gains.
3. **Improved Noise Resilience**: Extensive experiments confirm that our overall design substantially enhances the noise resilience of models for sEMG data.
We sincerely appreciate your insightful comments, which have helped us improve the quality and comprehensiveness of our manuscript. | Summary: The paper addresses the problem of noise resilience in surface electromyography (sEMG)-based gesture recognition. The authors propose a novel Short-Term Enhancement Module (STEM), which focuses on capturing short-term dependencies in sEMG signals to enhance noise resistance. Further, results on GRABMyo and Ninapro DB2 datasets show >20% improvement in noise resilience compared to existing models.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes.
Theoretical Claims: It is an empirical paper.
Experimental Designs Or Analyses: The experiments are missing a robust evaluation based on either Leave one user out or n-fold cross validation to show there is no overfitting in the results.
Supplementary Material: No
Relation To Broader Scientific Literature: The paper builds on previous works and advances sEMG-based gesture recognition by focusing on short-term dependencies.
Essential References Not Discussed: Not to my knowledge.
Other Strengths And Weaknesses: Strengths:
- The paper is clean and organized.
- Well motivated problem and novel idea of using short-term feature extraction.
Weaknesses:
- Missing robust evaluation of the existing dataset using Leave-one-subject-out (LOSO) evaluation or n-fold cross-validation.
- Since, the gains are small compared to the baselines. It would be good to know the error bar (mean, stddev) of the results and baselines.
- Any evaluation of latency of the system is missing compared to previous works.
Other Comments Or Suggestions: No
Questions For Authors: Please refer to strengths and weaknesses section for the rebuttal.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: # Response to Reviewer AUAg
We sincerely thank you for your thorough review and **insightful feedback** on our paper. We particularly appreciate your recognition of our **well-motivated problem statement** and the **novel approach of using short-term feature extraction**. Your professional suggestions have been invaluable in improving our work.
## Regarding N-fold Cross-validation
Thank you for this excellent suggestion. While we initially followed standard evaluation methods in the field of sEMG-based gesture recognition (as seen in [1], [2], [3]) and used validation sets to prevent overfitting (detailed in the appendix's dataset section), we also believe that your suggestion is extremely valuable.
As per your recommendation, we have conducted a **6-fold cross-validation** comparing our method with the second-best performer (Informer):
| Method | Average Accuracy | Standard Deviation |
|--------|------------------|-------------------|
| **STET (Ours)** | **92.15%** | **1.12** |
| Informer | 88.06% | 1.56 |
Statistical significance was confirmed with a t-test (p < 0.001), further supporting the robustness of our approach.
## Concern about error bars
We apologize for any confusion. We would like to clarify that we have indeed included the mean and standard deviation in Table 1 of our original manuscript, as well as in our newly added n-fold experiments.
We appreciate your suggestion and recognize the importance of clearly presenting statistical significance when comparing our approach with the baselines. Based on your feedback, we will revise the manuscript to make these error metrics more prominent and ensure they are easily noticeable throughout our experimental results section.
## Regarding Inference Latency
We completely agree with your insightful point about the importance of latency evaluation for downstream applications. While this information was included in our appendix under "Inference Performance and Parameter Comparison of Models," we have expanded this section with additional comparative analysis:
| Model | Inference Time by GPU (A6000) | Inference Time by CPU (AMD EPYC 7543) | Average Accuracy |
|-------|------------------------------|--------------------------------------|-----------------|
| Transformer (same layer without weight sharing) | 4.8 ms | 27.5 ms | 85.26% |
| TCN | 2.1 ms | 10.6 ms | 81.50% |
| GRU | 6.0 ms | 36.0 ms | 86.30% |
| LST-EMG-NET | 5.2 ms | 31.0 ms | 85.31% |
| **STEM (Ours)** | **3.9 ms** | **17.6 ms** | **90.76%** |
Our STEM module adds minimal computational overhead (only 0.1ms on GPU and 2ms on CPU) while delivering **superior accuracy with competitive inference speed**.
## Conclusion
Once again, we are grateful for your expert review and constructive feedback. We hope the additional evaluations address the concerns you raised while further validating the effectiveness of our approach. Your positive assessment is encouraging to our research team.
[1] Multi-attention feature fusion network for accurate estimation of finger kinematics from surface electromyographic signals IEEE Transactions on Human-Machine Systems
[2] Cross-Subject Lifelong Learning for Continuous Estimation from Surface Electromyographic Signal IEEE Transactions on Neural Systems and Rehabilitation Engineering
[3] A CNN-attention network for continuous estimation of finger kinematics from surface electromyography IEEE Robotics and Automation Letters | null | null | null | null | null | null | null | null |
On the Robustness of Transformers against Context Hijacking for Linear Classification | Reject | Summary: This paper studies the robustness of Transformers against context hijacking in a linear classification setting.
Empirically, the paper observes deeper transformer can achieve higher robustness. Theoretically, the paper explains this phenomenon as deeper model corresponding to more fine-grained optimization steps, which improves context hijacking robustness.
## update after rebuttal
I would like to keep my original score. On the positive side, I appreciate the paper’s solid analysis and clear exposition. However, the analysis is restricted to the representation level, which is relatively well-established in the literature on ICL. Given that there are already works studying the learnability of ICL, the lack of results on this aspect limits the contribution in my view and is a key reason why I do not increase my score.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Yes.
Experimental Designs Or Analyses: Yes.
Supplementary Material: Yes. I read the proof sketch and experimental setup in the appendix but I did not check every technical details of the proof.
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths:
The paper studies the robustness of context hijacking under a well-formulated ICL linear classification setting. Under the setting, the paper provides good theoretical analysis, which is consistent with empirical observations about the relations between the robustness, the training context length, the number of hijacked context examples, and the depth of the transformer model.
Weaknesses:
1. It is unclear how the design of hijacked context data in the linear classification setting is connected to the context hijacking in practice. While the authors claim this "follows the general design of many previous theoretical works", a brief illustration might be essential.
2. The theoretical analysis is restricted to the representation level. It only shows that there exist Transformers that satisfy the desired properties (i.e., equivalence between $L$-layer transformers and $L$ steps gradient descent), based on which the results about the context hijacking robustness can be derived. There lacks analysis on the learnability that the model will necessarily learn the desired paramters from the training data.
Other Comments Or Suggestions: N/A
Questions For Authors: 1. Can you explain how the design of hijacked context data in the linear classification setting is connected to the context hijacking in practice?
2. Though strict theoretical analysis might be beyond the scope of this work, can you provide any explanation or insight about the learnability issue?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We appreciate your constructive questions and suggestions! We address them as follows:
>**Q1:** More clarity on the design of hijacked context data in the linear classification setting, and its connection to context hijacking in practice.
**A1:**:
We carefully design the context hijacking sample $\mathbf{Z}\_{\mathrm{hc}}$ to simulate the context hijacking phenomenon. We will explain it and its connection with context hijacking in practice with the example shown in Figure 1. Our data consists of $\mathbf{x}\_{\mathrm{hc}}, y\_{\mathrm{hc}}, \mathbf{x}\_{\mathrm{query}}$ and $y\_{\mathrm{query}}$. We can assume that $\mathbf{x}\_{\mathrm{hc}}$ = “Rafael Nadal is not good at playing”, $y\_{\mathrm{hc}}$ = “basketball”, $ \mathbf{x}\_{\mathrm{query}}$ = "Rafael Nadal’s best sport is", and $y\_{\mathrm{query}}$ = "tennis".
So our data structure is defined as follows. It consists of two parts: context and query. For each sample $\mathbf{Z}\_{\mathrm{hc}}$, each of its first $n$ columns of context is exactly the same, that is, it is composed of $n$ repeated $(\mathbf{x}\_{\mathrm{hc}},y\_{\mathrm{hc}})$. However, the last query is $(\mathbf{x}\_{\mathrm{query}},y\_{\mathrm{query}})$, which is not equal to $(\mathbf{x}\_{\mathrm{hc}},y\_{\mathrm{hc}})$. This corresponds to the repeated context sentence "Rafael Nadal is not good at playing basketball" and the final query “Rafael Nadal’s best sport is” in practical case.
We design $\mathbf{x}\_{\mathrm{hc}}$ and $\mathbf{x}\_{\mathrm{query}}$ to be different, which satisfies that corresponding $y\_{\mathrm{hc}}$ and $y\_{\mathrm{query}}$ are opposite. We can see that $\mathbf{Z}\_{\mathrm{hc}}$ is designed to perfectly fit the context hijacking phenomenon, consisting of two parts.
* $N$ repeated context samples correspond to repeated interference samples in the context hijacking phenomenon.
* The values of $\mathbf{x}\_{\mathrm{query}}$ and $\mathbf{x}\_{\mathrm{hc}}$ could be close (controlled by $\sigma$), which is consistent with the close semantics of context in the real context hijacking phenomenon. And the labels of the context samples are opposite to the final predicted label, aligning with the practical observation that context hijacking causes the prediction to flip to the token in the context.
>**Q2:** Theoretical analysis is restricted to the representation level. Any insights about the learnability issue?
**A2**:
We recognize that the optimization analysis and learnability of multi-layer transformers are of great interest and importance. However, to our knowledge, current research primarily focuses on shallow architectures (one or two layers) [1, 2, 3, 4]. At this stage, it appears to be exceedingly challenging and nearly impossible to offer a rigorous theoretical analysis of the optimization processes for multi-layer transformers.
While lacking rigorous derivation, we have some plausible hypotheses regarding the training dynamics for multi-layer. Proposition 4.2 suggests that the choice of learning rates is symmetric across all steps. We conjecture that this conclusion extends to the context of global optimization, implying that meaning our Proposition 4.2 provide insightful implications from the perspective of training. This is supported by Theorem 2.1 in [5], which shows that the difference in the $\ell$-2 norm across layers in a deep homogeneous neural network remains constant during training. Such a conclusion is applicable to linear transformers, as they are always homogeneous. Therefore, if all layers of linear transformers are initialized from the same point, they will behave similarly throughout the training process. Based on this conjecture, the matrix factorization technique proposed in [2] for one-layer linear transformers might also be applicable to general multi-layer transformers. We believe this is an interesting and promising future work direction.
[1] Zhang, et al. "In-context learning of a linear transformer block: benefits of the mlp component and one-step gd initialization." NeurIPS.
[2] Zhang, et al. "Trained transformers learn linear model in-context." JMLR.
[3] Zhang, et al. "Transformer learns optimal variable selection in group-sparse classification." ICLR.
[4] Frei and Gal. "Trained transformer classifiers generalize and exhibit benign overfitting in-context." ICLR.
[5] Du, et al. "Algorithmic regularization in learning deep homogeneous models: layers are automatically balanced." NeurIPS.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response, which addresses my main concerns. I will maintain my original score. | Summary: This paper investigates the context hijacking phenomenon of transformer models, where incorporating multiple hijacking context samples can successfully flip the original model prediction. The paper conducts theoretical analysis on the linear transformer case for in-context learning, and verifies it on linear transformers/GPT-style transformers using a synthetic linear classification task. The experiments confirmed the theoratical analysis.
Claims And Evidence: **Claim 1:**
Less hijacking in-context example and more transformer layers improve the transformer robustness against hijacking attack.
*Evidence:* The experiments in Section 5 aim to support this claim.
*Comment*: The experiment can confirm the theoretical analysis, but may be a bit limited since it mainly focuses on linear classification.
Methods And Evaluation Criteria: This work follows other work in theoretical analysis of transformer in-context learning. I think the evaluation is reliable.
Theoretical Claims: **Claim 1:**
Context hi-jacking phenomenon can be formulated following previous modeling on transformer in-context learning.
*Evidence:* Section 3 formulates the problem.
*Comment:* The formulation mainly follows previous works on in-context learning analysis, and this paper extends the prior of $w^*$ to have non-zero mean, which better models testing a pre-trained model. Overall, it is reasonable to me.
**Claim 2:**
Testing error (context hijacking) can be formulated as a function of context length and number of layers in the transformer.
*Evidence:* Section 4.3 provides the proof.
*Comment:* I am not able to fully follow the analysis, but the proof structure is clean to me.
Experimental Designs Or Analyses: Please see claims/evidence section.
Supplementary Material: I checked section B, C, trying to consume the theoretical analysis, Section G for additional context hijacking experiment results, and Setion H for additinal experiments on GPT-style transformer.
Relation To Broader Scientific Literature: This work is related to the theoretical analysis of transformer in-context learning works.
Essential References Not Discussed: -
Other Strengths And Weaknesses: -
Other Comments Or Suggestions: I'm not very familiar with this domain and thus I'm not able to comment on the correctness of the proof. But the overall proof structure is clear and makes sense to me.
Questions For Authors: 1. The paper is mainly motivated by the context hijacking, where the context itself is actually closely related to testing question/answer but not really the same. Are the distributions $D{te}$ and $D{tr}$ the same? If not, how does the difference between the two distributions affect the theoretical analysis?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your recognition of our work and your constructive questions!
>**Q1:**
Are the distributions $\mathcal{D}\_{\mathrm{te}}$ and $\mathcal{D}\_{\mathrm{tr}}$ the same? How does the difference between the two distributions affect the theoretical analysis?
**A1:**
The distributions $\mathcal{D}\_{\mathrm{te}}$ and $\mathcal{D}\_{\mathrm{tr}}$ are not the same. $\mathcal{D}\_{\mathrm{tr}}$ is the distribution of sample $\mathbf{Z}$ during the training phase, and $\mathcal{D}\_{\mathrm{te}}$ is the distribution of sample $\mathbf{Z}\_{\mathrm{hc}}$ during the test phase (Section 3.1 and 3.3).
Specifically, $\mathcal{D}\_{\mathrm{tr}}$ is a general in-context learning (ICL) data distribution modeling the pre-training distribution of a large language model. Consistent with common practices in theoretical studies on ICL [1-4], we consider classification pairs with Gaussian features.
In contrast, $\mathcal{D}\_{\mathrm{te}}$ is a carefully designed data distribution that simulates the context hijacking phenomenon. We provide more intuitive explanations of its design in the following.
For each sample $\mathbf{Z}\_{\mathrm{hc}}\sim \mathcal{D}\_{\mathrm{te}}$, its first $N$ columns are identical, consisting of $N$ repetitions of $(\mathbf{x}\_{\mathrm{hc}},y\_{\mathrm{hc}})$. Here, the $N$ repetitions of $(\mathbf{x}\_{\mathrm{hc}},y\_{\mathrm{hc}})$ represent the multiple repetitions of "Rafael Nadal is not good at playing" and "basketball" respectively in context hijacking example in Figure 1. The last pair $(\mathbf{x}\_{\mathrm{query}},y\_{\mathrm{query}})$ differs from $(\mathbf{x}\_{\mathrm{hc}},y\_{\mathrm{hc}})$, representing "Rafael Nadal’s best sport is" and "tennis". Additionally, we let the values of $\mathbf{x}\_{\mathrm{query}}$ and $\mathbf{x}\_{\mathrm{hc}}$ be closed, simulating the similarity between "Rafael Nadal is not good at playing" and "Rafael Nadal’s best sport is", while the corresponding $y\_{\mathrm{hc}}$ and $y\_{\mathrm{query}}$ have opposite signs, indicating the different answer "tennis" and "basketball".
In summary, the context and query in $\mathcal{D}\_{\mathrm{tr}}$ are i.i.d., but the context in $\mathcal{D}\_{\mathrm{te}}$ are correlated to the query. As our key theory shows, multi-layer transformers perform multiple steps of gradient descent on the context samples. Therefore, when the distribution of the context is different, the gradient steps performed on the context will be significantly different.
>**Q2:**
The experiment may be a bit limited since it mainly focuses on linear classification.
**A2:**
We would like to clarify the motivation and organization of our paper. First we perform experiments on GPT2 using natural language data to identify the patterns of robustness of LLMs for context hijacking. Then we perform theory on linear classification, because linear problems have sufficient representation power, supported by many previous works [1-4]. Linear classification is a basic modeling of the problem. If we cannot effectively analyze linear problems, it is difficult for us to fully understand other problems. Based on linear problems, we build the first theoretical understanding of context hijacking and propose a comprehensive theoretical framework. We believe that based on our theoretical framework, people can expand to more complex classification problems (such as non-linear problems).
We further verify our theoretical results, conducting experiments to validate our theory. Our experiments consider the optimal learning rate for gradient descent with different number of iterations, and the robustness for the linear transformer with different depths and the length of training context. The results are consistent with our theory. So our current experiments are developed to verify our theoretical analysis. And then we can bridge the gap between the theoretical results and the empirical findings on GPT2.
Then, based on your suggestion, we further conduct some preliminary experiments on nonlinear classification (https://github.com/sfghtkgfv/dgnhjkgiqeb), changing $\langle \mathbf{w}, \mathbf{x}\rangle$ to $\langle \mathbf{w}, \mathbf{x}\rangle^2-C$, where $C$ is a constant. We conduct the experiment on multi-layer ReLU attention transformers. The results show that even in the nonlinear case, the model still tends to be more robust as it gets deeper, which is consistent with our theory.
[1] Von Oswald, et al. Transformers learn in-context by gradient descent. ICML.
[2] Ahn, et al. Transformers learn to implement preconditioned gradient descent for in-context learning. NeurIPS.
[3] Zhang, et al. In-context learning of a linear transformer block: benefits of the mlp component and one-step gd initialization. NeurIPS.
[4] Zhang, et al. Trained transformers learn linear models in-context. JMLR. | Summary: The authors here have studied how the concept of context hijacking affects the transformer models. The context hijacking problem deals with the problem where giving some other informations to the model might affect it's output even if the informations are factually correct. The authors here tried to study this problem from both theoretical and practical aspects.
Claims And Evidence: They claimed to have proved that deeper models help alleviate the context hijacking problem. I think the experimental evidence provided by them is convincing enough.
Methods And Evaluation Criteria: I think the evaluation method is fine. Though more clarity on the type of data they used and why would have been better.
Theoretical Claims: I think more clarity on how they linked the problem with multi step optimization is needed.
Experimental Designs Or Analyses: They have used only linear transformers for testing their theory. I feel they could've given more clarity on it and why they chose the linear model.
Supplementary Material: No
Relation To Broader Scientific Literature: I think the problem and their finding that increasing layers will help alleviate the context hijacking problem is quite general. Context hijacking seems like a general underfitting problem and increase in model complexity will help, this is a quite general result in my opinion.
Essential References Not Discussed: I don't think so
Other Strengths And Weaknesses: The authors did very well in describing what context hijacking is. But they mentioned some terminologies like L-step gradient descent and L-transformers in the introduction which seemed unnecessary and confusing.
Other Comments Or Suggestions: I think there is lack of clarity or grammatical error in line 384, 'optimal gradient descent with more L steps'.
Questions For Authors: No questions
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks for your informative feedback! We address your comments as follows:
>**Q1:** More clarity on the data model.
**A1:**
We model the context hijacking problem as a binary linear classification task, following general theoretical studies on transformer in-context learning [1, 2, 4, 5, 7]. Our context hijacking data consists of $\mathbf{x}\_{\mathrm{hc}}, y\_{\mathrm{hc}}, \mathbf{x}\_{\mathrm{query}}$ and $y\_{\mathrm{query}}$. Taking the left picture in Figure 1 as an example, we can assume that $\mathbf{x}\_{\mathrm{hc}}$ = “Rafael Nadal is not good at playing”, $y\_{\mathrm{hc}}$ = “basketball”, $ \mathbf{x}\_{\mathrm{query}}$ = "Rafael Nadal’s best sport is", and $y\_{\mathrm{query}}$ = "tennis".
So our data structure is defined as follows. It consists of two parts: context and query. The first $n$ columns of our data represent context samples, where each column is a query-answer pair $(\mathbf{x}\_{\mathrm{hc}}, y\_{\mathrm{hc}})$. The last column of the sample contains a query $ \mathbf{x}\_{\mathrm{query}}$. This corresponds to the repeated context sentence "Rafael Nadal is not good at playing basketball" and the final query “Rafael Nadal’s best sport is” in practical case.
>**Q2:** How to link the problem with multi-step optimization and why you chose the linear transformers? Additionally, unclear terms like $L$-step gradient descent and $L$-transformers, and a grammatical error.
**A2:**
Based on the classification task above, this paper aims to establish a rigorous theoretical analysis of the transformers' robustness against context hijacking, focusing on parameters such as depth $L$, context length $n$ an $N$, and embedding dimension $d$.
However, existing transformer optimization analyses primarily focus on single-layer models [4, 5, 6, 7]. Analyzing the training processes of multi-layer transformers appears nearly impossible. Fortunately, recent works [1, 2, 3] provide a solid analytical framework for in-context learning of multi-layer linear transformers. Specifically, they demonstrated that **when given the in-context input matrix $\mathbf{Z}$ (eq. (3.1)), the corresponding output of $L$-layer linear transformers $\hat{y}\_{\mathrm{query}}$ (eq. (3.4)), is equivalent to that of a linear model trained via $L$-step gradient descent on all in-context pairs.** We adopt this framework and extend it in Lemma 4.1 to allow gradient descent to be initialized arbitrarily, which, as reviewer Rqgr noted, is a more reasonable conclusion.
In summary, we consider linear transformers due to their proven in-context learnability, enabling us to derive derive clear insights into the robustness of multi-layer transformers (Theorem 4.5). Notably, a recent work [8] also employs linear transformers to study robustness against adversarial context in theory, but only considers one-layer models. Additionally, we conduct experiments on softmax attention transformers with GPT-2 style architectures, and the results (Appendix H.1) support our conclusions.
Furthermore, we appreciate your feedback on unclear terms and grammatical errors; we will address them in revision.
>**Q3:**
Increasing the model complexity will mitigate context hijacking is a general result, as it appears to be an underfitting problem.
**A3:**
First, we would like to clarify that the focus of this paper, robustness against hijacking, differs from a fitting problem associated with the training data. This distinction arises as our test data is specifically designed to simulate the phenomenon of context hijacking, following a distribution that differs from that of the training data. Consequently, the overfitting or underfitting of the training data may not be directly connected to robustness.
Figure 4 (Section 5.2) shows that shallow models experience underfitting issues. However, once the model depth exceeds 3, it can achieve an accuracy $\ge 99\\%$, indicating that transformers with 4 or more layers do not suffer from underfitting. In contrast, Figure 3 (Section 5.2) demonstrates that even when the model depth exceeds 3, the robustness of the model continues to improve as the depth increases.
[1] Von Oswald, et al. Transformers learn in-context by gradient descent. ICML.
[2] Ahn, et al. Transformers learn to implement preconditioned gradient descent for in-context learning. NeurIPS.
[3] Bai, et al. Transformers as statisticians: Provable in-context learning with in-context algorithm selection. NeurIPS.
[4] Zhang, et al. In-context learning of a linear transformer block: benefits of the mlp component and one-step gd initialization. NeurIPS.
[5] Zhang, et al. Trained transformers learn linear model in-context. JMLR.
[6] Zhang, et al. Transformer learns optimal variable selection in group-sparse classification. ICLR.
[7] Frei and Gal. Trained transformer classifiers generalize and exhibit benign overfitting in-context. ICLR.
[8] Anwar, et al. Adversarial robustness of in-context learning in transformers for linear regression. arXiv. | null | null | null | null | null | null | null | null |
Strategic A/B testing via Maximum Probability-driven Two-armed Bandit | Accept (poster) | Summary: This paper builds on Strategic Two-Sample Test via the Two-Armed Bandit Process to enhance the detection of small average treatment effects. It proposes a more powerful one-sided two-sample test by adjusting the balance between the mean and volatility terms, yielding a statistic that is more concentrated under the null and less so under the alternative. The framework is adapted to the Rubin Causal Model (RCM), where only one potential outcome per subject is observed, with a doubly robust estimator used for causal effect imputation. To address sensitivity to sample ordering, the authors incorporate meta-analysis by repeatedly reordering samples and recalculating p-values. Theoretically, they show that as $n$ approaches infinity, the asymptotic distribution converges to a spike, ensuring valid inference.
Claims And Evidence: The validity of the proposed statistic is questionable. For instance, in Equation (2), classical statistical inference typically expresses the mean term as the sample average of $R_i^{(\vartheta_i)}$ for $i = 1,2,\dots,n$. However, the authors instead define it as $\bar{R}n^{(\vartheta_i)} = \sum{j=1}^n R_j^{(\vartheta_i)} / n$, raising a fundamental issue: what is the meaning of $R_j^{(\vartheta_i)}$ when $i \neq j$? Since this formulation underpins the entire paper, its incorrect mathematical structure casts doubt on the validity of the conclusions, theory, and experiments. The authors should carefully reexamine their theoretical derivations, algorithmic details, and experimental code.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Yes. The proofs for the theorems on the asymptotic properties of PWTAB are standard and rely on the strategic CLT. However, as noted in the Claims and Evidence section, the mathematical formulation of the proposed statistics seems incorrect, casting doubt on the validity of the theoretical results.
Experimental Designs Or Analyses: Yes.
Supplementary Material: Yes. I mainly review the proof of theorems and simulation details.
Relation To Broader Scientific Literature: This paper primarily addresses the problem of one-sided two-sample testing, with a particular focus on paired two-sample testing. Compared to previous works, such as Strategic Two-Sample Test via the Two-Armed Bandit Process, which establish a more comprehensive theoretical framework—including proofs of the strategic central limit theorem (CLT) and other asymptotic properties—this paper places greater emphasis on practical implementation and the technical challenges that arise in finite samples.
Despite repeatedly highlighting its connection to the two-armed bandit, the constructed bandit framework assigns rewards to the two arms as exact opposites, effectively reducing it to a one-armed bandit. Moreover, in traditional two-sample testing, data from both populations are fully observed, eliminating the partial observation challenge that is central to bandit problems. As a result, this paper primarily addresses two-sample testing and has only a limited connection to the bandit literature.
Essential References Not Discussed: I mention only one important paper, Strategic Two-Sample Test via the Two-Armed Bandit Process. While this paper does not cite the foundational work, much of its content serves as a technical extension of it.
Other Strengths And Weaknesses: Strengths: The weighting and permutation tricks is beneficial for finite-sample performance in statistical inference.
Other Comments Or Suggestions: No.
Questions For Authors: No.
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer’s thoughtful feedback and valuable suggestions. We also sincerely apologize for any difficulties or confusion arising from insufficient clarity in the presentation of some fundamental equations and references in the paper. Below, we will discuss the issues you have raised:
**1. Effectiveness of the Proposed Statistic**
- **Notation Clarification:**
We define $\bar{R}_n^{(\vartheta_i)}=\bar{R}_n^{(1)}\mathbb{I}(\vartheta_i=1)+\bar{R}_n^{(0)}\mathbb{I}(\vartheta_i=0)$, where $\bar{R}_n^{(1)}=\sum\_{i=1}^nR_i^{(1)}/n=-\bar{R}^{(0)}_n$ and $\mathbb{I}(\cdot)$ denotes the indicator function. Since our paper does not introduce or define the notation $R_j^{(\vartheta_i)}$ for $i \neq j$, we will explicitly define $\bar{R}_n^{(\vartheta_i)}$ as stated above, immediately following Equation (2).
- **Intuitive Explanation and Theoretical Perspective:**
Using $\sum\_{i=1}^n\bar{R}\_n^{(\vartheta_i)}/n$ instead of $\sum\_{i=1}^n R_i^{(\vartheta_i)}/n$ reduces the variance of the mean term while preserving the asymptotic properties of the statistic. This leads to a faster and more stable convergence, as also verified experimentally. Replacing $R_i^{(\vartheta_i)}$ with $\bar{R}_n^{(1)}\mathbb{I}(\vartheta_i=1)+\bar{R}_n^{(0)}\mathbb{I}(\vartheta_i=0)$ represents a key methodological improvement.
**2. Motivation for Linking to the Two-Armed Bandit Framework**
- **Differences in Research Objectives:**
The classical two-armed bandit model focuses on maximizing the average reward by balancing exploration and exploitation. In contrast, our model does not seek to identify the arm with the highest return. Instead, it aims to maximize the target probability through collaborative arm selection. Within the hypothesis testing framework, this maximized target probability corresponds to the tail probability $\mathbb{P}(|T\_{n, \lambda}(\theta_n)|>z_{1-\alpha/2}|\mathcal H_1)$.
- **Clarification in Writing:**
Given the distinct objectives of our proposed model compared to the classical framework, we appreciate the opportunity to improve clarity in our manuscript. To this end, we will explicitly define our proposed model following the second paragraph of Section 2.2, thereby clearly distinguishing it from the classical model.
**3. Missing References**
We sincerely apologize for the unintentional omission of the reference to “Z. Chen et al., Strategic Two-Sample Test via the Two-Armed Bandit Process” during the drafting process. This reference has been included in the revised manuscript along with a detailed comparison:
- **Research Background:**
Chen et al. addressed the one-sided two-sample testing problem under independent, batch-wise paired observations. While valuable, their approach is not designed to address hypothesis testing in more complex frameworks, such as causal inference scenarios involving missing data or confounding variables. In contrast, our proposed method incorporates advanced techniques, enabling its application to hypothesis testing within causal inference frameworks. This adaptability enhances its relevance to real-world research contexts.
- **Theoretical Contributions:**
Chen et al.’s study primarily focuses on the asymptotic properties of the test statistic, without investigating its behavior in finite samples. This limitation can result in inflated Type I errors in small-sample settings. We effectively control the Type I error under finite samples by modifying the mean term as the mean of $\bar{R}\_n^{(\vartheta_i)}$ and introducing a weighting factor $\lambda$. Their result represents a special case of our framework when $\lambda=0.5$.
- **Algorithm Robustness:**
Chen et al. compute the $p$-value from a single ordered sample sequence, resulting in unstable statistical power in both simulations and real-world experiments. To enhance robustness, we introduce a permutation-based meta-analysis, recalculating the $p$-value across multiple sample reorderings. This improvement significantly strengthens the algorithm’s reliability and practical utility.
In summary, while our work draws inspiration from “Strategic Two-Sample Test via the Two-Armed Bandit Process” in its use of bandit strategies for hypothesis testing, our study offers more generalized insights across research frameworks, theoretical advancements, and technical implementations. We hope this clarification underscores the distinct contributions of our work while acknowledging the foundational influence of Chen et al.’s research.
We sincerely appreciate the reviewer’s keen observation, which has enabled us to refine the manuscript and better contextualize our contributions within the existing literature. | Summary: **Problem:**
This work aims to address the limitations of traditional A/B testing in detecting minor treatment effects.
The key challenges are: (i) data distributions between the treatment and control groups may differ due to confounding effects, (ii) even when distributions are balanced, measured outcomes can still exhibit high variance, and (iii) test statistics rely on the normality assumption of the central limit theorem, which may not always hold.
**Methods Used:**
To this end, this work proposes a novel statistical testing framework that:
(i) relaxes the normality assumption by leveraging bandit-inspired distributions, which is drawn from the the prior results of strategy-central limit theorem (Chen et al., 2022) [1],
(ii) introduces a weighted test statistic to control Type I error.
(iii) employs a doubly robust method to obtain unbiased, low-variance causal estimates, and
(iv) utilizes a permutation test to enhance statistical power.
**Results:**
On empirical evaluations, the authors compare their methods, i.e., Permuted WTAB and WTAB, with existing methods (i.e., z-DML, CUPED, and DIM) on both synthetic data and real-world ridge sharing data.
On synthetic data, the authors show that their Permuted WTAB achieves the highest statistical powers compared to other methods while maintaining a similar Type I error rate.
Similarly, on real-world ridge sharing A/B testing datasets, they shows their Permuted WTAB consistently achieves lower p-values compared to CUPED.
**Overall — Main contributions, novelty and impact:**
1. This work **introduces the first hypothesis testing method for estimating causal effects, which breaks the assumption that random variables are normally distributed but instead Bandit distributed**, significantly improving the statistical power. **Its efficacy is further improved by weighted version and doubly-robust estimation.**
2. The proposed method is **novel** and **theoretically grounded**.
3. The proposed method has **major business implications**, particularly in optimizing A/B tests for small treatment effects, which could drive more profitable marketing strategies.
**Area Of Improvement:**
However, given the current empirical results, **additional evaluations are needed** to clarify whether this approach is indeed better than other existing. Moreover, their **current writing and presentation needs improvement**, which I will describe in later sections.
**Ref:**
[1] Chen, Zengjing, Shui Feng, and Guodong Zhang. "Strategy-driven limit theorems associated bandit problems." arXiv preprint arXiv:2204.04442 (2022). https://arxiv.org/pdf/2204.04442
=================================================================================================================================================
**Update After Rebuttal**
1. The authors have provided a clearer statement on the contribution of their work, particularly in how they leverage the strategic central limit theorem framework to relax the exchangeability assumption, thereby improving Type I error control while enhancing statistical power.
2. The authors have addressed the typo and the missing legend in Figure 2. They are also aware of areas where the presentation could be improved and have made an effort to address them.
3. They have added a discussion on how to select the regularization parameter λ in a data-driven manner and explained why the stacking approach did not perform as expected, along with suggestions for how to address this issue.
4. The authors have stated their intention to include the proof of Lemma 2.1.
An additional experiment on a real-world dataset has been included. The authors also explained the rationale for generating synthetic data from real-world sources, which is reasonable.
Given the above, I believe the authors have made a substantial effort to address the key concerns raised.
While we are unable to see the full revision due to the constraints of the rebuttal phase, I believe that the proposed revisions and clarifications significantly strengthen the work.
-- **I think the merits of the paper outweigh the remaining minor concerns (such as presentation)**, and **I therefore maintain my original recommendation of accept.**
-- **However**, if the committee feels it is preferable for the authors to take more time and submit a fully revised version in a future round, I would also support that direction.
=================================================================================================================================================
Claims And Evidence: Their weighted version of the two-arm bandit with a permutation test effectively demonstrates control over Type I error while achieving higher statistical power than both state-of-the-art methods and the standard mean estimator on synthetic datasets.
However, in real-world datasets, they only present variance reduction and p-value results without providing analyses on Type I error control or statistical power.
Methods And Evaluation Criteria: 1. The authors appropriately utilize both synthetic and real-world datasets to evaluate their statistical testing methods, ensuring a comprehensive assessment of their approach across controlled and practical settings.
2. The authors selected both the baseline model, i.e., the mean average estimator, and state-of-the-art methods, such as DML and CUPED, to ensure a fair comparison.
3. As mentioned earlier, they also need evaluations on type 1 error and statistical power analysis for real-world dataset.
Theoretical Claims: I have checked their proofs for their theoretical claims, it is almost the same derivation steps from the strategic central limit theorem [1] with modification of including the weighted version.
**Ref:**
[1] Chen, Zengjing, Shui Feng, and Guodong Zhang. "Strategy-driven limit theorems associated bandit problems." arXiv preprint arXiv:2204.04442 (2022). https://arxiv.org/pdf/2204.04442
Experimental Designs Or Analyses: Their experimental designs and analyses are valid; however, additional evaluations on Type I error control and statistical power analysis for real-world datasets are needed to ensure comprehensive assessment.
Supplementary Material: I have reviewed the entire supplementary material, including the proofs of Theorem 4.1 and Theorem 4.2. and their additional experimental results for synthetic and real-world datasets.
Relation To Broader Scientific Literature: Unlike traditional normality-based hypothesis tests, this work introduces a Bandit-distributed framework, providing an alternative to standard A/B testing. The incorporation of weighted test statistics, doubly robust estimation, and permutation testing further strengthens treatment effect estimation.
Essential References Not Discussed: The key paper to derive their results are provided, which is [1]
Ref:
[1] Chen, Zengjing, Shui Feng, and Guodong Zhang. "Strategy-driven limit theorems associated bandit problems." arXiv preprint arXiv:2204.04442 (2022). https://arxiv.org/pdf/2204.04442
Other Strengths And Weaknesses: This work cleverly integrates strategy-based statistical testing, which challenges the normality assumption, with existing approaches such as the permutation test and the doubly-robust estimator. While the proof is largely adapted from prior work with slight modifications, its implications are significant in accurately detecting minor treatment effects.
Other Comments Or Suggestions: 1. The authors make frequent use of the term exchangeability, but its precise meaning remains ambiguous. For instance, exchangeability might refer to data-level exchangeability, meaning whether the data are generated i.i.d. Alternatively, it could pertain to the exchangeability of treatment assignment given the observed data. In the context of the strategy central limit theorem, I interpret exchangeability as referring to the rewards derived from the sequence of arm choices. Clarifying this distinction would enhance the paper’s rigor and readability.
2. On line 73, it would be better to first introduce the full term Two-Arm Bandit before using its acronym.
3. Line 193: mean --> main
4. In Figure 2(b), there is no legend to describe what each curve means. Please include them.
5. In section 3.1, it would be better to firstly introduce what Theorem 4.1 is and then use it.
6. In your lemma 2.1, please explicitly define $H_1$ as the alternative hypothesis before using it.
7. In line 410, the paragraph titled "Another simulations" should be more clearly described. Please rephrase it to indicate that it presents results on an ML-based method or another relevant categorization for better clarity.
8. In general, the authors assume that readers are already familiar with the intent of each section and proceed without sufficient introduction. It would be beneficial to include brief overviews or contextual transitions at the beginning of each section to improve clarity and guide the reader through the flow of the paper.
Questions For Authors: 1. I may be wrong if I am not very scrutinizing. However, I am unsure if you have the proof for Lemma 2.1?
2. In line 258, you mention that the ensemble method, specifically stacking, improves efficacy. However, in your experiments on synthetic datasets (Figure 6), stacking is not consistently the best method and, in some cases (e.g., the bottom-left panel for function III), performs worse than the compared methods. Could you provide an intuition or explanation for why stacking underperforms in certain scenarios?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the comprehensive and insightful feedback. Below, we provide a point-by-point response to the issues raised:
**1. The Use of “Exchangeability”**
We appreciate the reviewer’s observation regarding the term “exchangeability” and fully agree that a clear definition is essential. In the revised manuscript, we have explicitly defined exchangeability here as referring to the rewards derived from the sequence of arm choices.
**2. Presentation Adjustments**
- **Two-Arm Bandit Introduction:**
We acknowledge the suggestion regarding the introduction of the full term “Two-Armed Bandit” before using its acronym. In the revised version, we ensure clarity by introducing the full term at first mention (e.g., “Two-Armed Bandit (TAB)”).
- **Typo Corrections:**
Line 193: We have corrected the typo from “mean” to “main”.
Line 410: We have changed the paragraph title from “Another simulations” to “More ML-based simulation studies” for clearer expression.
- **Figure Improvements:**
In Figure 2(b), we will add a detailed legend to describe each curve, ensuring that the visual representation is self-explanatory. The pink dashed, cyan solid, and orange dotted lines represent $\sigma=0.5, \sigma=0.6$, and $\sigma=1.0$, respectively. Additionally, we will update the caption of Figure 2(b) to “The empirical type I error rate across different $\lambda$ and $\sigma$, fixed $n=20000$”.
- **Section Transitions and Introductions:**
We agree that additional contextual transitions at the beginning of sections would improve readability. We will include brief overviews in Section 3.1 and elsewhere to better guide the reader through our arguments and experimental results.
- **Lemma 2.1 Clarification:**
We have revised Lemma 2.1 to explicitly define the alternative hypothesis $\mathcal{H}_1$ before applying it, thereby eliminating potential ambiguity.
**3. Theoretical and Experimental Clarifications**
Regarding the reviewer’s question about Lemma 2.1, its proof is provided in Chen et al.'s article (Z. Chen et al., “Strategy-driven limit theorems associated bandit problems,” Theorem 3.3). To improve the readability of our manuscript, we will include the full proof in the appendix of the revised version.
**4. Ensemble Method (Stacking) Performance**
We thank the reviewer for highlighting the performance discrepancies of the stacking method in the synthetic experiments. We identify two key factors contributing to this discrepancy:
- First, the current implementation uses a limited selection of primary learners. We are actively investigating the incorporation of additional machine learning models as primary learners to enhance the performance of the stacking method.
- Second, the choice of primary learners and their respective weights in the ensemble may not be optimal under all configurations, leading to suboptimal aggregation of predictions. We are exploring the use of more advanced meta-learners (e.g., random forests) instead of simple linear regression to better assign weights to different primary learners and further improve the stacking method’s performance.
**5. Additional Evaluations on Real-World Data**
We appreciate the reviewer’s suggestion regarding a more comprehensive evaluation on real-world data. To address this, we have conducted additional experiments using synthetic data based on real-world data. The results of these additional experiments are summarized in Table 1.
**Table 1: Type I error rates and statistical power based on synthetic data derived from real-world dataset.**
| Method | Metric | PWTAB | WTAB | $z$-DML | CUPED | DIM |
|----------|--------------|-------|------|---------|-------|------|
| LightGBM | Type I Error | 0.052 | 0.052| 0.044 | 0.050 | 0.048|
| | Power | 0.758 | 0.738| 0.744 | 0.740 | 0.498|
| XGBoost | Type I Error | 0.052 | 0.034| 0.046 | 0.050 | 0.048|
| | Power | 0.758 | 0.738| 0.746 | 0.740 | 0.498|
| Stacking | Type I Error | 0.052 | 0.052| 0.046 | 0.050 | 0.048|
| | Power | 0.764 | 0.732| 0.746 | 0.740 | 0.498|
These results provide compelling evidence of the effectiveness of our proposed PWTAB method in real-world scenarios. When the null hypothesis holds, all methods maintain Type I error rates close to 0.05, preserving the reliability of statistical inference in practical settings. Under the alternative hypothesis, the proposed method consistently outperforms competing methods in terms of statistical power. PWTAB achieves the highest statistical power when used with LightGBM or XGBoost, and its performance is further enhanced when combined with the ensemble learning algorithm Stacking.
We sincerely appreciate the reviewer’s constructive comments, which have been invaluable in improving the clarity, rigor, and overall impact of our work.
---
Rebuttal Comment 1.1:
Comment: I thank the authors throughout the response.
1. **On the use of exchangeability**:
Having a clearer explanation of exchangeability can significantly enhance both the readability and where the impact of the current work lies in. In your revision, please clearly highlight the advantages of your proposed approach compared to traditional A/B testing. Specifically, with the use of the strategy-driven limit theorem framework, the underlying test statistics no longer require the assumption of normality—an assumption typically made in traditional A/B testing. Therefore, your approach offers superior control over Type I errors and increased statistical power.
2. **On better presentation of your work**:
As per Reviewer KBY3 mentioned, the authors should be mindful of the presentation. That is, the authors should either be less reliant on mathematical equations or clearly articulate the intuition behind each mathematical expression. Even in sections that do not need mathematical expression, the authors should still need to ensure that your work clearly convey intuition, provide smooth transitions between ideas. For example, line 241 to 257 could have been having better intuition and transition. That is, in line 243 to 249, the authors could instead say:
" **Traditional methods such as CUPAC**, which rely solely on linear regression, might fail to capture these intricate patterns. **To overcome this limitation**, advanced machine learning methods are introduced. Specifically, LightGBM (Ke et al., 2017)—a state-of-the-art gradient boosting algorithm—is employed within the double machine learning (DML) framework (Chernozhukov et al., 2018).
**Intuitively, the DML approach mitigates overfitting and reduces regularization biases by partitioning the dataset into multiple subsets. Each subset is used iteratively to estimate conditional relationships, ensuring robustness and improved predictive performance.** "
There are additional sections where the presentation could be improved; however, I leave it to the authors to identify and enhance these sections on their own.
3. **On stacking methods**:
Thank you so much for your clarification. It would be great to include them into your discussion section.
4. **On data-driven lambda**:
I agree with Reviewer KBY3. It would be great to include some discussion on how to choose lambda in a data-driven approach.
5. **On the proof of Lemma 2.1**:
Your theoretical results are primarily based on the work of Z. Chen et al. ("Strategy-driven limit theorems associated with bandit problems"). To ensure your manuscript is self-contained, please also include the detailed derivation of Lemma 2.1 in your appendix.
6. **On your additional Evaluations on Real-World Data**:
I request that the authors clearly explain the rationale behind generating synthetic data from real-world data. Couldn't the experiments not be conducted directly using real-world data?
I thank the authors once again. I believe your manuscript is becoming clearer and, thus, better impact.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank you for your continued positive feedback and insightful suggestions on our novel strategic A/B testing method. Through the revisions outlined below, we aim to further strengthen our manuscript and earn your full support.
**On the use of exchangeability**
We fully agree with your observation. Our proposed test statistic exhibits a concentrated, spike-like distribution around zero under the null hypothesis and a bimodal distribution away from zero under the alternative. By not relying on the normality assumption, our approach achieves superior control over Type I errors while enhancing statistical power. To highlight these advantages, we have revised the manuscript to explicitly compare our method with traditional A/B testing, emphasizing the flexibility afforded by the strategy-driven limit theorem framework.
**On better presentation of our work**
We value your guidance on improving readability. In addition to addressing the issue you pointed out regarding lines 241 to 257, we have minimized the use of extraneous mathematical formulas in the revised manuscript, retaining only those essential to our research. For these, we have added clear, intuitive explanations. Additionally, we have enhanced the logical flow and coherence throughout the text.
For example, the section from line 271 (left) to line 220 (right) has been revised as follows:
“To address this issue, we perform multiple samples reorderings, repeatedly calculate the $p$-value of $T_{n,\lambda}(\theta_n^*)$, and aggregate these via meta-analysis to enhance the robustness of statistical inference.”
The section from line 232 (right) to line 237 (right) has been revised to:
“However, varying sample orderings can yield inconsistent $p_{\lambda}^{(b)}$ values, and the conclusions drawn from individual $p$-values may be unclear. To resolve this, we apply meta-analysis to synthesize an overall $p$-value, improving the reliability of the results derived from individual $p_{\lambda}^{(b)}$ values (Walker et al., 2008; Lee, 2019).”
**On data-driven $\lambda$**
We fully agree with you and Reviewer KBY3 on the importance of a data-driven approach to selecting $\lambda$. As detailed in our rebuttal to Reviewer KBY3, we have proposed a data-driven approach for selecting $\lambda$, which we have now incorporated into the revised manuscript for clarity and completeness.
**On the proof of Lemma 2.1**
We fully agree with you. To ensure the coherence of the paper, we have independently included the detailed derivation of Lemma 2.1 in the appendix of the latest revised version.
**On the additional Evaluations on Real-World Data**
We appreciate the opportunity to clarify the rationale behind this approach. Our decision to generate synthetic data stems from two key practical constraints associated with real-world A/B testing datasets:
- Limited Availability of Real-World Data: Real-world A/B testing datasets are often constrained in size and scope, which can limit their suitability for comprehensive statistical evaluations. Synthetic data allows us to scale experiments and explore a wider range of scenarios while preserving the distributional characteristics of real-world data.
- Absence of Ground Truth for Strategy Improvements: The average treatment effect in real-world datasets is typically unknown, making it difficult to accurately estimate critical metrics such as empirical Type I error rates and statistical power—both essential for validating our method’s performance. By generating synthetic data based on real-world data, we can control the average treatment effect while preserving the original data distribution, thereby enabling precise and reliable estimation of these metrics.
We hope these clarifications and revisions fully address your concerns. Thank you again for your valuable input, which has greatly improved our manuscript. | Summary: This paper introduces a novel approach to A/B testing focused on detecting minor average treatment effects (ATEs) in large-scale applications. The authors propose a maximum probability-driven two-armed bandit process with a weighted mean volatility statistic and incorporation of permutation methods. The key theoretical contribution is the strategic central limit theorem (SCLT), which yields more concentrated distributions under the null hypothesis and less concentrated distributions under alternatives, thereby enhancing statistical power.
The proposed permuted weighted two-armed bandit (PWTAB) method incorporates doubly robust estimation for counterfactual outcomes. Experiments on both synthetic and real-world ride-sharing company data demonstrate PWTAB consistently outperforms standard methods like DIM, CUPED, and z-DML while maintaining proper Type I error control.
Claims And Evidence: The claims are generally well-supported by evidence:
- The central claim that WTAB improves statistical power is backed by both theoretical analysis (SCLT) and empirical results showing superior performance in different simulation settings.
- Type I error control is verified through comprehensive simulation studies in Table 2, with empirical rates remaining close to the nominal α=0.05 level across varied configurations.
- Empirical evidence in Figure 4 demonstrates PWTAB consistently outperforms comparison methods, particularly for nonlinear functions.
Methods And Evaluation Criteria: The methodological approach effectively addresses the problem of detecting minor treatment effects:
- The weighted mean-volatility statistic (Eq. 5) provides a flexible framework balancing detection power with Type I error, with weight parameter λ carefully chosen to maximize statistical power.
- The permutation-based approach (Algorithm 1) using Cauchy combination addresses the "p-value lottery" problem, with B=25 permutations determined sufficient through empirical testing.
The evaluation criteria include both Type I error control and statistical power across varied conditions (linear/nonlinear functions, heterogeneous effects, different noise levels σε ∈ {0.5, 0.6}).
Theoretical Claims: The theoretical proofs seems sound and rigorous, building on:
- Theorem 4.1 establishes that the asymptotic distribution follows a spike distribution.
- Theorem 4.2 demonstrates Type I error control and consistency against fixed alternatives, under $H_0$.
- The weighted statistic maintains the same optimal policy structure (Eq. 10), with $λ$ constrained by threshold $f(\lambda)≤ 0.03$ to ensure proper convergence.
Experimental Designs Or Analyses: The experiments are thorough and well-designed:
- Synthetic data tests span 32 configurations combining four different functions F(X1,X2), four G(X1,X2) (including two null hypotheses GI, GII), and two noise levels (σε=0.5, 0.6).
- Sample size n=20,000 realistically represents large-scale A/B testing scenarios.
- Real-world validation uses three datasets (A, B, C) from a ride-sharing company, with results in Figure 5 showing PWTAB achieves smaller p-values regardless of the machine learning algorithm used.
The authors rigorously compared their approach against DIM, CUPED, and z-DML baselines, showing consistent improvements particularly for nonlinear function settings.
Supplementary Material: I went over the appendix proofs briefly, nothing seemed out of place.
Relation To Broader Scientific Literature: The paper effectively connects to relevant literature across:
- A/B testing
- Causal inference
- Multi-armed bandits
- Permutation tests
Essential References Not Discussed: Nothing completely relevant seems to be omitted from the manuscript
Other Strengths And Weaknesses: **Strengths:**
- Addresses an economically significant problem with a theoretically grounded solution.
- The integration of bandit algorithms with traditional A/B testing creates an innovative hybrid methodology.
- Demonstrates superior performance for nonlinear relationships where CUPED falters.
- The doubly robust estimation approach provides protection against model misspecification.
**Weaknesses:**
- The mathematical density may limit adoption by practitioners without strong statistical backgrounds.
- Limited guidance on practical λ selection beyond the 0.03 threshold.
- The paper could better explain the intuition behind why breaking exchangeability improves performance.
Other Comments Or Suggestions: Nothing to add here.
Questions For Authors: 1. Beyond the threshold approach, are there data-driven methods to select optimal $\lambda$ values?
2. How well does the method generalize to domains beyond ride-sharing (e.g., e-commerce) where metrics and effect sizes differ?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer’s careful evaluation of our work and the constructive feedback provided. Below, we address the key weaknesses and questions raised:
**1. Adding a More Intuitive Explanation of Mathematical Densities**
We thank the reviewer for this valuable suggestion. We acknowledge that the extensive mathematical derivations may pose a barrier to practitioners with limited statistical backgrounds. In the revised version, we have incorporated an intuitive explanation section to explain the form of different probability densities under different hypotheses. We believe this additional exposition will facilitate a broader understanding and practical application of our method.
To illustrate, consider the case when the null hypothesis holds with $\lambda$ fixed. Given that the optimal policy parameter $\vartheta_1^*$ has an equal probability of being 0 or 1, assume that $R_1^{(1)}$ is observed and that $T\_{1, \lambda}(\theta_1^*)\ge 0$. Consequently, according to the optimal policy, $\vartheta_2^*=1$, implying that $R_2^{(1)}$ will be observed. This process continues with $\vartheta_i^*=1$ until there exists some index $m$ such that $T\_{m, \lambda}(\theta_m^*)<0$. Under the assumption that the null hypothesis holds, it is likely that $T\_{2, \lambda}(\theta_2^*)<0$, resulting in $\vartheta_3^*=0$, which leads to the observation of $R_3^{(0)}$, a reward that is more likely to exceed 0. This brief discussion shows that the optimal policy $\theta_n^*$ will control the value of $T\_{n, \lambda}(\theta_n^*)$ to fluctuate around 0 under the null hypothesis, thereby concentrating its distribution around 0. A similar rationale applies when the alternative hypothesis holds.
**2. Guidance on $\lambda$ Selection**
We agree that the guidance on selecting $\lambda$ is crucial. The threshold value of 0.03 was derived empirically from our synthetic experiments. However, we are actively exploring more data-driven methods for selecting $\lambda$. We propose a data-driven approach for selecting $\lambda$ by first discretizing its range and then employing bootstrapping techniques to generate multiple datasets. For each candidate $\lambda$, we compute the type I error rate across these datasets. The optimal $\lambda$ is chosen as the one that maximizes statistical power while controlling the type I error.
**3. More Real-world Applications**
We appreciate the reviewer’s insightful query on the generalizability of our method to domains such as e-commerce. Although our current real-world validation is based on ride-sharing data, our preliminary experiments in other domains indicate that the method demonstrates strong potential. We are confident that our approach can be generalized to most companies conducting A/B testing. We intend to extend our experimental evaluation to include additional domains, such as a food delivery company and an internet technology company, thereby providing a more robust demonstration of the method’s versatility and robustness.
**4. Breaking Exchangeability**
We appreciate the reviewer’s interest in this aspect. We will clarify the advantages of breaking exchangeability in the revised manuscript by explicitly detailing how it contributes to enhanced performance. Traditional hypothesis testing methods based on the Central Limit Theorem (CLT) are inherently data-driven; once i.i.d. samples are observed, the construction of the test statistic is independent of the sample order, implying that the data are exchangeable. In contrast, our proposed testing framework is goal-driven—it seeks to progressively construct the test statistic from the available data to maximize statistical power. In our proposed two-armed bandit framework, earlier data actively influences the construction of the current test statistic, making the data non-exchangeable. This shift toward a maximum-probability objective enables the optimal construction of the test statistic, thereby enhancing testing performance.
Once again, we are grateful for the reviewer’s positive comments and valuable suggestions. We are committed to incorporating these improvements to enhance the clarity, interpretability, and impact of our work. | null | null | null | null | null | null | null | null |
Multi-Stage Manipulation with Demonstration-Augmented Reward, Policy, and World Model Learning | Accept (poster) | Summary: This paper introduces DEMO3 (Demonstration-Augmented Reward, Policy, and World Model Learning), a novel framework for solving long-horizon, multi-stage manipulation tasks with sparse rewards. The authors address the challenge of designing dense reward functions and effectively exploring large state-action spaces by leveraging a small number of demonstrations for three key purposes: learning a policy, a world model, and a dense reward function. The approach incorporates multi-stage dense reward learning, a bi-phasic training scheme, and world model learning into a demonstration-augmented reinforcement learning framework. The method is evaluated across 16 sparse-reward tasks spanning four domains, including challenging humanoid visual control tasks, demonstrating improved data-efficiency by an average of 40% and by 70% on particularly difficult tasks compared to state-of-the-art approaches.
Claims And Evidence: The claims made in the submission are well-supported by evidence. The authors claim that their method improves data-efficiency compared to state-of-the-art approaches, which is substantiated by comprehensive experimental results across 16 tasks in four domains. The learning curves in Figure 5 clearly demonstrate that DEMO3 consistently outperforms baseline methods (TD-MPC2, MoDem, and LaNE) in terms of success rate as a function of interaction steps. The claim that the method is particularly effective on difficult tasks is supported by the 70% improvement in performance on complex tasks. The authors also claim that their approach requires only a small number of demonstrations (as few as five), which is validated by their experimental setup described in Table 1.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are appropriate for the problem at hand. The authors use success rate as the primary evaluation metric, which is a clear and relevant measure for manipulation tasks. The experimental setup includes a diverse set of 16 tasks across four domains (ManiSkill Manipulation, Meta-World, Robosuite, and ManiSkill Humanoids), providing a comprehensive evaluation of the method's generalizability. The comparison against strong baselines (TD-MPC2, MoDem, and LaNE) ensures a fair assessment of the method's performance. The authors also conduct ablation studies to analyze the relative importance of each component of their framework, which helps to validate the design choices.
Theoretical Claims: The paper makes several theoretical claims about the benefits of their approach, particularly regarding the use of stage-specific discriminators for dense reward learning and the bi-phasic training scheme. These claims are well-founded in reinforcement learning theory, particularly in the context of model-based RL and learning from demonstrations. The authors provide a clear mathematical formulation of their approach, including the loss functions for the discriminators (Equation 2) and the dense reward formulation (Equation 4). The theoretical justification for the bi-phasic training scheme is also well-articulated, explaining how it helps to overcome the exploration challenges in sparse reward settings.
Experimental Designs Or Analyses: The experimental design is robust and comprehensive. The authors evaluate their method on 16 tasks across four domains, with varying levels of complexity and different numbers of stages. The experiments use 5 random seeds to ensure statistical significance, and the results are presented with 95% confidence intervals. The learning curves in Figure 5 provide a clear visualization of the method's performance over time, and the summary of results in Figure 1 offers a concise comparison with baselines. The ablation studies help to isolate the contributions of different components of the framework. The authors also provide details on the number of demonstrations used for each domain and the interaction budget, which helps to contextualize the results.
Supplementary Material: N/A
Relation To Broader Scientific Literature: The authors acknowledge prior work on model-based RL (Ha & Schmidhuber, 2018; Zhang et al., 2018; Kidambi et al., 2020; Hafner et al., 2020; Yu et al., 2020; Hansen et al., 2022; 2024; Sferrazza et al., 2024) and learning from demonstrations (Zhan et al., 2022; Hansen et al., 2023; Lancaster et al., 2024). They also discuss the limitations of existing approaches, such as the challenges of designing dense reward functions and the exploration problems in sparse reward settings. The paper builds upon TD-MPC2 (Hansen et al., 2022; 2024) as its backbone for model-based RL, and draws inspiration from MoDem (Hansen et al., 2023) for its bi-phase training scheme, clearly acknowledging these influences.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths:
- The paper addresses a significant challenge in reinforcement learning: solving long-horizon, multi-stage manipulation tasks with sparse 2. rewards.
- The proposed method is data-efficient, requiring only a small number of demonstrations (as few as five).
- The approach is evaluated on a diverse set of tasks, demonstrating its generalizability.
- The bi-phasic training scheme is a clever way to leverage demonstrations for both initialization and online learning.
Weaknesses:
- The paper does not extensively discuss the limitations of the approach or potential failure cases.
- While the method is shown to work with as few as five demonstrations, it's not clear how the performance scales with even fewer demonstrations or how it compares to methods that don't use demonstrations at all.
- The computational complexity of the approach is not thoroughly discussed, which is important for practical applications.
- The paper focuses primarily on simulation environments, and it's not clear how well the approach would transfer to real-world robotic systems.
Other Comments Or Suggestions: - The paper would benefit from a more detailed discussion of the limitations of the approach and potential directions for future work.
- A more thorough analysis of the computational requirements of the method would be valuable, particularly for real-time applications.
- It would be interesting to see how the method performs with varying numbers of demonstrations, to better understand the trade-off between demonstration quantity and performance.
- The paper could discuss more explicitly how the approach might be adapted for real-world robotic systems, addressing challenges such as sensor noise and actuation delays.
Questions For Authors: - How does the performance of DEMO3 scale with the number of demonstrations? Is there a minimum number of demonstrations required for the method to work effectively, and how does the performance improve with additional demonstrations?
- The paper focuses on simulation environments. Have you explored how the approach might transfer to real-world robotic systems, and what additional challenges might arise in that context?
- The method relies on stage indicators for multi-stage tasks. How sensitive is the approach to the definition of these stages, and how might it be extended to tasks where the stage structure is less clear or where the number of stages might vary between demonstrations?
Ethical Review Concerns: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable feedback. We address your comments in the following.
---
**Q: Discussion on limitations and future work**
**A:** Thank you for pointing this out. In response, we have prepared an expanded discussion on limitations and future work. We will incorporate this expanded discussion into the final manuscript upon acceptance.
*We acknowledge that our current implementation relies on demonstrations collected via RL-trained agents, which are typically well-aligned with task objectives but often unimodal and less representative of human-like variability. A key future direction is to extend DEMO3 to handle more **diverse and multimodal demonstrations**, such as those arising from motion planning, human teleoperation, or video sources. Robustness to such sources would broaden the applicability of our method to real-world data collection settings.*
*Another limitation is that our current experiments are confined to **simulation environments**. We aim to deploy DEMO3 on **real robot hardware** to evaluate how well our dense reward learning and world model generalize under domain shift, sensor noise, and actuation delays, and whether the same constrained regime of demonstrations and interaction samples can yield reliable real-world policies. Notably, related work such as **MoDem** has already demonstrated that a similar pipeline can transfer successfully to real robots. Given that DEMO3 builds on and improves these components in simulation, we are optimistic that its benefits will carry over to physical systems.*
*Finally, while our current **50% demonstration sampling ratio** already yields strong results, we believe that more **sophisticated sampling strategies** could further enhance learning efficiency. Inspired by prioritized replay buffers, we plan to explore **priority-based demonstration sampling** that adaptively focuses on more informative or rare transitions during training.*
---
**Q: Thorough analysis of computational complexity**
**A:** We appreciate the reviewer’s request for a clearer discussion of computational complexity. As shown in **Table 2** of the main paper, DEMO3 introduces **minimal computational overhead** compared to TD-MPC2. Specifically, our method adds only a lightweight MLP per stage (i.e., the **stage discriminators**), which are trained jointly with the world model. This design keeps the additional parameter count and forward/backward pass time inconsequential relative to the backbone model.
Empirically, DEMO3 increases per-100k training time by only **6.7%** compared to TD-MPC2 (5.19h vs. 4.84h), and remains significantly faster than other demonstration-augmented baselines like **MoDem (8.37h)** and **LaNE (20.40h)**. These results suggest that DEMO3 offers strong performance gains with **minimal added training cost**, making it practical for real-world deployment scenarios where wall time is a limiting factor.
We will make this comparison more explicit in the final version of the paper.
---
**Q: Applying the method to a varying number of demonstrations**
**A:** We thank the reviewer for raising this important point. We address this question in our **demonstration efficiency analysis** (Figure 8), where we evaluate performance as a function of demonstration count (5, 10, 25, 50, 100, 200) on two of the most challenging tasks: **StackCube** and **PegInsertion**. DEMO3 is the only method that consistently reaches the target success threshold with as few as **5 demonstrations**, and it continues to improve steadily with more data, **outperforming all baselines across the range**.
For a more detailed view, we refer the reviewer to **Appendix A.2**, which includes **full learning curves under different demonstration regimes**. These results show that DEMO3 gracefully scales with more demonstrations while also being **highly effective in the low-data regime**.
---
**Q: How sensitive is the approach to stage definition? How does it transfer to more “unstructured” tasks?**
**A:** This is a great question. While DEMO3 is evaluated on tasks with explicit stage-wise structure, we believe that many real-world tasks — even those considered “unstructured” — can often be approximated with a **coarse stage decomposition**. For example, a task like human locomotion might involve an **initial acceleration phase** followed by a **steady gait phase**, which can naturally map to 2 distinct stages for learning purposes.
To better understand the effect of stage granularity, we include a **reward granularity ablation (Figure 9)**. This experiment compares performance across **1, 2, 3, and fully dense stage definitions**. While performance improves with finer-grained supervision, **DEMO3 remains surprisingly robust even under coarse or minimal stage labeling**, performing competitively with human-engineered dense rewards using only 1-stage definitions.
---
Please do not hesitate to let us know if you have any additional comments. | Summary: This paper introduces a demonstration-augmented reinforcement learning method to solve data-efficient manipulation tasks with sparse rewards. By utilizing limited demonstrations, the policy, world model, and dense reward are effectively modeled, thus the long-horizon tasks can be solved in a multi-stage manner. Experiments on four benchmarks demonstrate the effectiveness of the proposed method.
## update after rebuttal
Thanks to the authors for your efforts during rebuttal. Most of my concerns are resolved. I have raised my rating.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Not sufficiently. One main goal of this paper is to solve the sparse reward problem in long-horizon tasks. However, the reviewer finds the used benchmarks/datasets have a limited number of stages (i.e., <3).
Theoretical Claims: No theoretical claims are made.
Experimental Designs Or Analyses: My major concerns lie in the experimental parts,
i) Only three existing works (i.e., TD-MPC2, Modem, LaNE) are used for comparisons,
- the proposed method leverages the strength of Modem and TD-MPC2, thus the improvements over these two methods are a little bit trivial;
- Why the previous SoTA (ie, LANE) performs well on Robosuite while performing extremely badly on the other three benchmarks (Fig. 1)?
- since the method uses demonstrations for augmentation, existing imitation learning based methods should be compared;
ii) The tasks used for ablation are mainly chosen from ManiSkill, the environment where the method achieves the largest improvement over other methods. It is not convincing enough to evaluate the effectiveness of the proposed method;
Supplementary Material: Yes. The entire content of the appendix is reviewed.
Relation To Broader Scientific Literature: Related.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: No.
Other Comments Or Suggestions: No.
Questions For Authors: Please see the experiment section.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable feedback. We address your comments below.
---
**Q: Benchmarks/tasks have a limited number of stages**
**A:** We appreciate the reviewer’s concern. While many tasks in our suite contain 2–3 defined stages, we argue that this already reflects significant temporal and compositional complexity, particularly in sparse reward visual manipulation. Prior works such as DrS, MoDem, and LaNE often evaluate on shorter or single-stage tasks. In contrast, DEMO3 handles longer-horizon behaviors with similarly sparse supervision, making our setup at least as challenging—if not more—than existing literature.
That said, we agree that scaling DEMO3 to more complex, multi-stage tasks is a valuable next step. We are actively working on tasks like *Stack-N-Cubes*, requiring sequential completion of N > 3 subgoals. While we couldn’t finalize these in time for this submission, we plan to include preliminary results in the camera-ready version.
---
**Q: Improvements over MoDem and TDMPC2 seem trivial since DEMO3 builds upon them**
**A:** We thank the reviewer for this comment and appreciate the opportunity to clarify our benchmarking choices.. While DEMO3 does incorporate key ideas from both MoDem and TD-MPC2, we believe it is important—and indeed necessary—to benchmark directly against these methods to quantify the contribution of our core innovation: **dense, multi-stage reward learning**.
TD-MPC2 and MoDem are **strong, state-of-the-art baselines** in visual model-based RL and demonstration-augmented RL, respectively. DEMO3 builds on their foundations but introduces a new training paradigm in which **reward, policy, and world model are trained jointly via online, stage-wise discriminator signals**, rather than using precomputed reward functions or task-specific shaping. This design results in a reward signal that evolves alongside the agent’s experience, improving both learning stability and sample efficiency.
Rather than relying on a single benchmark, our results show that introducing learned reward components consistently improves performance across tasks and domains, as illustrated in Figures 5 and 7. Furthermore, our ablation studies demonstrate that the learned reward is particularly important in high-variance, long-horizon tasks where sparse rewards are insufficient for policy discovery.
---
**Q: Comparing with Imitation Learning**
**A:** Thank you for the suggestion. To complement comparisons with demonstration-augmented RL methods, we now include *Behavioral Cloning (BC)* results across all tasks. These are visible on the [project website](https://sites.google.com/view/icml2025demo3) and will be added to the final manuscript.
As expected, BC performs better in simpler settings like Robosuite, but fails to succeed in harder domains like ManiSkill Manipulation, which involve longer horizons, multi-stage coordination, and high variability (see Appendix D.3). This supports the view that stronger supervision—via dense rewards or interaction—is needed in these benchmarks.
---
**Q: Why does LaNE perform badly in ManiSkill and Meta-World?**
**A:** LaNE was originally evaluated on Robosuite and may be tuned to that benchmark. We attempted to adapt its hyperparameters to all environments, but performance still varied widely. DEMO3, by contrast, uses the same hyperparameters across all benchmarks, demonstrating stronger out-of-the-box generalization.
As shown in Appendix D.3, Robosuite is comparatively easier, while ManiSkill and Meta-World involve longer horizons, greater variability, and multimodal interactions—conditions under which LaNE’s nearest-neighbor mechanism struggles to scale.
We also observed that LaNE performs well in early stages but often fails to progress further, aligning with its reliance on latent-space similarity. Reward plots (see LaNE Task Progress in [Additional Ablations](https://sites.google.com/view/icml2025demo3/additonal-ablations)) illustrate this behavior. We will clarify these findings in the final manuscript.
---
**Q: ManiSkill ablations are not convincing enough**
**A:** While main ablations focus on ManiSkill, *per-task results across all domains* are provided in Appendix A.1. We chose ManiSkill because it is the most challenging domain in our suite, requiring precise control, long-horizon planning, and generalization under randomization.
BC performance also reflects this difficulty, performing modestly in Robosuite but struggling in ManiSkill. These characteristics make ManiSkill a valuable testbed for evaluating the impact of each component in DEMO3.
That said, we agree that broader ablation coverage is useful. Additional ablations on Meta-World are available here: [Meta-World Ablations](https://sites.google.com/view/icml2025demo3/additonal-ablations). These show trends consistent with those in ManiSkill and support the generality of our framework.
---
Please don’t hesitate to reach out with further comments. | Summary: This paper proposes DEMO, a framework that learns dense rewards from demonstrations and interactions with the environment to aid model-based RL learning. DEMO uses a multi-stage paradigm where it learns dense rewards from sparse reward signals (from stage indicators) to indicate "progress" and uses the learned dense reward to better learn policies. DEMO learns the reward model from visual signals and shows that incorporating the DEMO reward can help policies learn faster and converge to better performances. Experiments on a number of popular benchmarks (ManiSkill, Meta-World, Robotuite) show that the proposed method achieves SOTA performance using fewer samples.
Claims And Evidence: This work claims that using a small number of samples and learned dense rewards can speed up and help learning long-horizon tasks using MBRL. The results on the benchmarks and ablations verify these claims. Especially on the harder tasks (in ablation A.2), DEMO achieves the best results with a small number of demonstrations.
Methods And Evaluation Criteria: Yes, the proposed benchmarks and evaluation criteria are suitable for the application and problem setup. ManiSkill and MetaWorlds are challenging and popular benchmarks in this space, and the proposed method achieves better performance than baselines.
No real-robot experiments are conducted, though, which would help bolster the claims of the method.
Theoretical Claims: No theoretical claims are proposed in this work.
Experimental Designs Or Analyses: This work showcases results on long-term tasks such as "stack cube" and "stick pull" that best demonstrate the benefit of having multi-stage rewards and learned dense rewards. DEMO achieves the best results. These tasks can best demonstrate the benefit of using learned dense rewards as they are challenging and each step leads to another.
In terms of analysis, I feel like adequate analysis is provided in terms of performance of the overall results. Ablation also shows that each component is important for achieving the best result. Some missed opportunities in verifying that the learned dense reward actually corresponds to "task progress" as claimed in the paper.
Supplementary Material: Supplementary materials provide additional results compared to baselines and additional ablations on harder tasks. They also provide more details on the demonstrations used for each task.
Relation To Broader Scientific Literature: I think this work fits nicely in the model-based RL framework and the behavior cloning literature where the strengths of both fields are combined. If a few demonstrations can be effectively used to learn dense rewards for multi-stage and long-horizon tasks, then the benefit of using RL and interactive labels can be maximally leveraged for future results.
Essential References Not Discussed: References are adequate.
Other Strengths And Weaknesses: ## Additional Strength
- Overall, I think this paper provides an interesting and novel idea in dense reward learning from demonstrations and interactions with the environment.
- Analysis of the reward granularity shows that the learned reward signals are near optimal compared to dense rewards engineered by humans.
Other Comments Or Suggestions: I would suggest including live plots of the learned reward function and videos of the policy rollout to better showcase the capabilities of the learned reward function and policy.
Questions For Authors: None
Ethical Review Concerns: None
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable feedback. We address your comments in the following.
---
**Q: Suggestion about live plots with policy rollout (1 to 1 video with reward evolution and policy rollout)**
**A:** We fully agree with the reviewer that it is important to verify whether the learned dense reward accurately reflects task progress, especially since this is central to our method. Following this suggestion, we have added live plots of the learned reward function aligned with video rollouts, now available in the **“Learned Dense Reward”** section of the project website: [https://sites.google.com/view/icml2025demo3](https://sites.google.com/view/icml2025demo3).
To provide a comprehensive overview, we include three representative cases:
- a **successful rollout**,
- a **failed rollout**, and
- a **semi-successful rollout** that illustrates fluctuating behavior, alternating between progress and regression in the task.
These visualizations clearly demonstrate that the learned reward function captures nuanced task progression, aligning with intuitive notions of success, failure, and partial recovery. We believe this addition strongly supports the claim that our reward model tracks progress in long-horizon, multi-stage tasks.
---
**Q: Real robot experiments**
**A:** We acknowledge that validating DEMO3 in real hardware would further strengthen our contributions, and we plan to pursue this in future work. We are particularly interested in testing whether DEMO3 retains its strong sample efficiency and robustness under real-world conditions. Notably, prior work such as **MoDem-v2** has already demonstrated that a similar demonstration-augmented model-based pipeline can be successfully transferred to real robots. Given that DEMO3 builds on and improves these components, we are optimistic that its benefits will carry over to real-world deployment as well.
---
Please do not hesitate to let us know if you have any additional comments.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. I appreciate and enjoy the live plots. They demonstrate the learned reward functions well. I will maintain my acceptance rating. | null | null | null | null | null | null | null | null |
MetricEmbedding: Accelerate Metric Nearness by Tropical Inner Product | Accept (poster) | Summary: This paper considers the problem of metric nearness and proposes a new method for solving the problem. In particular, they define a tropical neural network that uses tropical operations instead of regular operations, a new loss function to be used to train the model, and then experimental results validating their model.
Claims And Evidence: The main contributions of the paper are
1. Theorem 8.
I think this claim has sufficient evidence. The proof from skimming seems correct to me.
2. Empirically, testing their new method.
There is quite a bit of evidence for this claim; however, I think there could be more evidence, particularly explicit comparisons against Project and Forget.
Methods And Evaluation Criteria: The paper should benchmark against Project and Forget.
One particular dataset that could be added is the case are metrics where bullet point 3 (Page 3 Line 156 right col) is violated.
Theoretical Claims: I skimmed the proofs. They look okay.
However, I do not think Theorems 1,2,3,4 need to be presented. These are quite standard.
Experimental Designs Or Analyses: The experimental design is valid for section 4.2. Except there is a case where the paper does not test that should be tested. Specifically, metrics where bullet point 3 (Page 3 Line 156 right col) is violated.
For the experiment in section 4.3, error rate is not defined. Hence makes the experiement difficult to understand.
The experiment in section 4.4 does not make sense. How is the new data fed in? Is the original matrix masked and the mask is then slowly changed? Also if $d(a,b)$ is masked. Can $d(a,c)$ be visible to the method earlier. When testing, why should the trained network have any information about the new data?
Section 4.5 is okay
Section 4.6 error rate is again not defined. Additionally, metrix nearness finds the closest metrix. There is no reason to assume that the closest metric is of the same type as the metric that was used to create the corrupted matrix.
Supplementary Material: I skimmed the proof and the experimental results sections.
Relation To Broader Scientific Literature: The metric nearness problem is an important problem, and solutions to it help design robust and theoretically justified data analysis techniques.
However, solving the problem is quite computationally intensive. Hence advances in making solving the problem more tractable are important. Additionally, the paper uses neural networks. This makes it more tractable to integrate into the broader framework.
Essential References Not Discussed: The paper does a good work of providing references for the $\ell_2$ version of the metric nearness problem. That is when the loss function is the squared distance. However, if the paper chooses to do so, I think it could improve the paper to connect to broader literature for the $\ell_0$ version of the problem. I provide the necessary papers here [A,B,C,D].
Additionally, in relation to the downgrade of the performance when it is not a metrix (Line 19 Right Col). The experiments in the introduction are nice conrete examples [B,E]
There also have been targeted works, where the target is a tree metrix [G] and Euclidean metric [H]. There also have been older work that tried fixing the metric [I]
[A] A. C. Gilbert and L. Jain. If it ain’t broke, don’t fix it: Sparse metric repair. In 2017 55th Annual Allerton
Conference on Communication, Control, and Computing (Allerton), pages 612–619, Oct 2017.
[B] Chenglin Fan, Anna C .Gilbert, Benjamin Raichel, Rishi Sonthalia, and Gregory Van Buskirk. Generalized metric repair on graphs. In Susanne Albers, editor, 17th Scandinavian Symposium and Workshops on Algorithm Theory, SWAT 2020, June 22-24, 2020, Tórshavn, Faroe Islands, volume 162 of LIPIcs, pages 25:1–25:22. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2020.
[C] Vincent Cohen-Addad, Chenglin Fan, Euiwoong Lee, and Arnaud de Mesmay. Fitting metrics and ultrametrics with minimum disagreements. In 63rd IEEE Annual Symposium on Foundations of Computer Science (FOCS), pages 301–311. IEEE, 2022
[D] Chenglin Fan, Benjamin Raichel, and Gregory Van Buskirk. Metric Violation Distance: Hardness and Approximation. Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 196–209, 2018.
[E] Anna C. Gilbert and Rishi Sonthalia. Unsupervised metric learning in presence of missing data. 2018 56th Annual Allerton Conference on Communication, Control, and Computing (Allerton), pages 313– 321, 2018.
[G] Vincent Cohen-Addad, Debarati Das, Evangelos Kipouridis, Nikos Parotsidis, and Mikkel Thorup. Fitting distances by tree metrics minimizing the total error within a constant factor. In 62nd IEEE Annual Symposium on Foundations of Computer Science (FOCS), pages 468–479. IEEE, 2021.
[H] Rishi Sonthalia, Greg Van Buskirk, Benjamin Raichel, and Anna C. Gilbert. How can classical mul- tidimensional scaling go wrong? In Advances in Neural Information Processing Systems (NeurIPS), pages 12304–12315, 2021.
[I] Julian Laub,Klaus-Rober tMüller,Felix A Wichmann, and Jakob H Macke. Inducing metric violations in human similarity judgements. In Advances in neural information processing systems, pages 777–784, 2007.
Other Strengths And Weaknesses: The main weakness for me are sections 4.3, 4.4 and 4.6. If the authors clarify the concerns there. I am happy to increase my score.
Other Comments Or Suggestions: There are some typos such as missing spaces.
Questions For Authors: Please see the experimental design section. I have a variety of questions about Section 4.3, 4.4 and 4.6
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **Q1:Undefined error rate in Sec 4.3**
A1:Thank you for pointing this out. We agree that the evaluation metric in Section 4.3 should be clarified.
All experiments in Section 4.3 use **Normalized Mean Squared Error (NMSE)**, as defined in Section 4.2, to measure reconstruction quality. Specifically, we:
- Compute the ground-truth distance matrix using pairwise Euclidean distances;
- Apply noise or masking to simulate degradation;
- Use NMSE to quantify both pre- and post-recovery error;
- Set missing entries to zero to ensure consistent NMSE calculation.
For example, in the M30 setting, NMSE drops from 35.50% (before recovery) to 13.85% (after), showing a clear performance gain.
Our method leverages the assumption that mildly corrupted metric-consistent matrices retain latent structure, and that enforcing metric constraints during recovery improves reconstruction accuracy.
**Q2:Data handling ambiguity (Sec 4.4)**
A2:Thank you for your comment. Section 4.4 demonstrates the **online update capability** of our method—an aspect largely overlooked in prior work.
We simulate an online setting where the model is incrementally updated as new data arrives, without access to future entries. Starting from a partially observed distance matrix (e.g., 20% revealed), we iteratively expose 20% new entries at each timestep. The model is updated using only these newly observed entries.
This setup respects causality and highlights our method’s ability to **refine predictions progressively**. Traditional convex optimization methods require full data access and incur high computational costs (e.g., \(O(N^3)\)), making them impractical for real-time updates.
In contrast, our method uses **efficient, localized gradient-based updates** via backpropagation, avoiding full retraining. As demonstrated in our experiments, this approach achieves **10×–1000× speedups** over baseline methods while maintaining or improving accuracy. These properties make our method well-suited for **real-time applications** that demand continuous adaptation.
**Q3:Error undefined, metric mismatch (Sec 4.6)**
Thank you for your comment.
Regarding the evaluation metric in Section 4.6, we again adopt the Normalized Mean Squared Error (NMSE) to quantify how well the compressed representation preserves the original distances.
Regarding the assumption about metric nearness, we agree that the closest metric under a given norm is not necessarily of the same type as the one used to generate the original distances (e.g., Euclidean, cosine, etc.). However, the goal of this experiment is not to recover the exact generating metric, but rather to demonstrate that our method can serve as a general-purpose, low-rank, metric-preserving approximation of distance matrices. Specifically A fully specified distance matrix typically requires $O(n^2)$ parameters. Our method seeks to compress this representation to O(nk) parameters, while:
- (a) Guaranteeing that the reconstructed matrix satisfies the metric properties, and
- (b) Maintaining low computational complexity, suitable for large-scale applications.
Traditional methods such as SVD or spectral decomposition offer low-rank approximations but do not guarantee metric validity, and
often incur $ O(n^3)$ complexity. In contrast, our method can be viewed as a tropical analogue of SVD, which:
- Operates in a multiplication-free space,
- Enforces metric constraints by design,
- And achieves efficient approximation using only O(n·k) parameters.
In this experiment, we evaluate performance in a controlled setting where the original distance matrices are generated using known distance functions . Our method does not assume knowledge of the underlying metric but still achieves high-fidelity reconstructions, as evidenced by low NMSE and strong performance in downstream tasks (e.g., 0.79 accuracy on the Cora dataset).
**Q4:Uncovered case: bullet 3 violation (Sec 4.2)**
A4:Thank you for your comment. We will revise **Theorem 8** to clarify that satisfying both **Bullet 2** and **Bullet 3** implies the **triangle inequality**.
Our experiments deliberately use matrices with heavily violated metric properties—e.g., **99.7%** of entries in the graph-t1 dataset (\(N=1000\)) violate **Bullet 3**.
Q5: Unnecessary Theorems 1–4
A5: Thank you. We agree that Theorems 1–4 are standard and will streamline or move them to the appendix to improve clarity without losing completeness.
Q6: Essential References Not Discussed
A6: Thank you for the suggestion. We have reviewed references [A–I] and will revise the related sections accordingly. Detailed updates will be provided in the next rebuttal stage.
Q7:PAF
For \(N = 2000\), our method (Tropical) achieves an NMSE of **0.18** in **34s** with **0 violations**, while PAF yields a lower NMSE (**0.068**) but takes **3133.91s**, produces **3.7e7 violations**, and fails to converge—highlighting our method’s superior efficiency, and suitability for large-scale applications.
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarification.
With the clarifications and adding PAF to Table 1, I am willing to increase my score.
The authors' comment on PAF taking 3k seconds for N = 2000 implies that it is faster than TRF and HLWB. Also, while 3.7e7 sounds large when dividing by $\binom{2000}{3}$, it is approximately 2.8%. Hence, this has fewer violations than TRF. Hence, PAF is a good baseline and should be included in a complete comparison.
---
Reply to Comment 1.1.1:
Comment: **Q1:PAF**
Thank you for the helpful suggestion. We agree that PAF is an important baseline and we will add comprehensive comparisons across datasets, including large-scale scenarios. Our revised experiments address prior omissions and introduce a new noisy MNIST dataset. Metrics include NMSE and convergence time, with convergence defined as stabilization of both NMSE and violation rate.
**The Tropical method demonstrates several key advantages:**
- **Efficiency:** Achieves up to 50–60× speedup over PAF on large datasets (N=1000)
- **Constraint Satisfaction:** Guarantees 0% metric violation across all settings, as ensured by Theorem 6.
- **Online Updates:** Supports real-time updates, enabled by the short per-epoch computation time.
In terms of **accuracy**, the slightly higher NMSE compared to PAF is mainly due to the limited optimization space and susceptibility to local minima. This issue can be alleviated through a multi-start strategy that explores more diverse solutions.
Tropical’s efficiency, constraint guarantees, and scalability make it well-suited for time-sensitive metric nearness tasks. We appreciate your emphasis on PAF and have included thorough comparisons to ensure completeness.
### Table 1. Comparison of Methods for Different Matrix Sizes.
| Matrix Size (N) | Method | Computation Time (s) | NMSE (Ratio) | Triangle Inequality Violations (%) |
|------------------|---------------------|----------------------------|---------------|-------------------------------------|
| 100 | Ours | 0.69 | 0.084 | 0% |
| | HLWB | 14.80 | 0.072 | 0% |
| | TRF | 12.09 | 0.059 | 4.71% |
| | PAF | 21.844 | 0.071 | 0% |
| 500 | Ours | 16.39 | 0.099 | 0% |
| | HLWB | 1291.61 | 0.069 | 0% |
| | TRF | 1120.73 | 0.058 | 4.53% |
| | PAF | 266 | 0.069 | 0% |
| 1000 | Ours | 26.73 | 0.136 | 0% |
| | HLWB | >2000 | 0.068 | 0% |
| | TRF | >2000 | 0.058 | 4.85% |
| | PAF | 1619.68 | 0.068 | 0% |
### Table 2. Experimental Results on Noisy MNIST Distance Matrix
| Matrix Size (N) | Method | Computation Time (s) | NMSE (Ratio) | Triangle Inequality Violations (%) |
|------------------|------------|-----------------------|---------------|-------------------------------------|
| 100 | Ours | 0.53 | 0.063 | 0% | |
| | PAF | 4.91 | 0.055 | 0.05% |
| 500 | Ours | 5.73 | 0.086 | 0% |
| | PAF | 55.91 | 0.055 | 0.18% |
| 1000 | Ours | 7.87 | 0.117 | 0% |
| | PAF | 374.91 | 0.055 | 0.2% |
### Table 3. Per-Epoch Runtime Comparison (Same Setting as Table 1, N = 500)
| Matrix Size (N) | Method | Epochs | Total Time (s) | Avg Time per Epoch (s) |
|------------------|----------|--------|------------------|--------------------------|
| 500 | Tropical | 20 | 0.734 | 0.036 |
| | PAF | 20 | 60.34 | 3.02 |
**Q2:Essential References Not Discussed**
Thank you for your comments. We will expand the *Related Work* section to include discussions on sparse metric repair in Article [A], its graph extension in Article [B], inconsistency minimization in Article [C], "metric violation distance" complexity in Article [D], unsupervised metric learning with missing data in Article [E], tree metric fitting in Article [G], Euclidean metric challenges in Article [H], and early work on metric violations in human similarity judgment in Article [I]. This will provide important context for our work.
Regarding the $ L_0 $ norm, we cannot apply it due to the lack of gradients. However, we can extend our method to the $ L_1 $ norm and other differentiable norms, preserving sparsity and enabling gradient-based optimization. We plan to explore the $ L_0 $ norm version in future work. | Summary: The paper introduces MetricEmbedding, a novel approach using the ropical inner product (max-plus operation) to efficiently solve the Metric Nearness Problem (MNP) while ensuring metric properties like the triangle inequality. The authors showed the equivalence (up to diagonal elements) between the class of non-negative distance matrices and the set of matrices resulting from the tropical inner product of non-negative matrices. Using this observation, the authors proposed a continuous optimization task to efficiently solve the Metric Nearness Problem (MNP). The proposed method has been shown to significantly reduce computational complexity, achieving up to 1000× speed improvements over traditional approaches while scaling to large matrices (10^5 \times 10^5) with lower memory usage. Experimental results demonstrate its effectiveness in restoring metric properties, handling noisy and incomplete data, on synthetic datasets.
Claims And Evidence: Most claims are clear and convincing, namely, the investigation of the the relationship between tropical operations and metric matrices, presented in Section 3.1 and Appendix A.
The optimization strategy proposed in Section 3.2. potentially requires further theoretical evidence for its ability to achieve plausible minimum for the proposed optimization problem. Specifically, in Algorithm 1 - "Training Procedure for MetricEmbedding", and the accompanying text, the authors propose a two-step parameter update: an RMSProp step followed by changing W_i, b_i to be non-negative. It is unclear whether the proposed approach is guaranteed to achieve a plausible minimum and under which conditions.
Methods And Evaluation Criteria: The method was evaluated on synthetic datasets (per Section 4.1.). The metrics used to evaluate the performance make sense for MNP.
Lack of evaluation on non-synthetic datasets makes it unclear how applicable the method is for real world applications.
Theoretical Claims: I checked the theorems and proofs in Section 3.1 and Appendix A, for correctness.
See the description in the "Claims And Evidence" section above for an open question about convergence guarantees for the proposed optimization algorithm.
Experimental Designs Or Analyses: The experiments were performed using synthetic dataset. The experiments look sound and provide extensive evaluation of the proposed method. See my comment about lack of experiments with real world data, in the "Methods And Evaluation Criteria" section.
Supplementary Material: Yes, the appendices.
Relation To Broader Scientific Literature: I cannot comment because the paper is not in my area of expertise.
Essential References Not Discussed: I cannot comment because the paper is not in my area of expertise.
Other Strengths And Weaknesses: Strengths.
* The paper is very clearly written and easy to follow.
* Most claims have supporting proofs.
* Experiments validate significant improvement achieved by the proposed method over state of the art approaches, with comparable or better accuracy.
Other Comments Or Suggestions: See the above.
Questions For Authors: Please clarify whether the proposed training procedure for MetricEmbedding is guaranteed to achieve a plausible minimum and under which conditions.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: **Q1:Please clarify whether the proposed training procedure for MetricEmbedding is guaranteed to achieve a plausible minimum and under which conditions.**
A1:The short answer is **no**, because it is a **non-convex** optimization problem. Nevertheless, our algorithm is guaranteed to converge to a local minimum, as we solve the problem using a gradient descent approach.
We prove this by the following:
(1) consider one layer case,
$
\min_X \| X \odot_{\max} X^T - Y \|_F^2
$
Let:
$
A = [[1,3,3],[3,1,3][3,3,1]],
B = [[3,1,1][1,3,1][1,1,3]]
$
Taking \(\alpha = \frac{1}{2}\), then:
f(\frac{1}{2}A + \frac{1}{2}B) > \frac{1}{2}f(A) + \frac{1}{2}f(B)
This violates convexity, proving that even in a single-layer case, the problem is **non-convex**.
(2) Now consider the general multi-layer formulation,We can instead set all parameters of the network to zero except for those in the first layer, keeping the form consistent with the one-layer result described above.
Then the entire deep network reduces to the form \( W_0 \), and we recover exactly the same objective as in the single-layer case.
**end of proof**
Since global optima are hard to attain in non-convex problems, we instead focus on finding **good local optima**. Our approach achieves a **balance between optimization space and efficiency**.
The original problem is defined under standard metric constraints (triangle inequality and zero diagonals). Our method introduces a stricter condition (**Theorem 8 bullet point 3 **), which defines a narrower solution space we refer to as the **tropical metric space**.
While existing methods like HLWB optimize over the general metric space, our approach is more effective when the optimal solution lies close to this constrained space. However, we currently lack a formal proof of convergence.
To encourage convergence to good local minima, we adopt several practical strategies:
- Initialize outputs around the mean of the target matrix \(D\), ensuring triangle inequality via a constructed matrix \(M\).
- Add controlled randomness for better optimization dynamics.
- Use **parallel optimization** to avoid poor local minima, which significantly improves results.
**Q2:Lack of evaluation on non-synthetic datasets makes it unclear how applicable the method is for real world applications.**
A2: Thanks for your feedback. To address your concern about the real-world applicability of our method, we conducted additional experiments on the MNIST dataset and a real graph dataset.
To further validate our method for **metric matrix recovery**, we tested it on MNIST using two non-metric cases: noisy Euclidean distances following [1] and naturally non-metric cosine similarities following.
### Noisy Euclidean Distance (MNIST)
| N | Method | Used Time (s) | Result | Violation |
|------|-----------|----------------|--------------|-----------|
| 100 | HLWB | 35.04 | 0.052 | 0 |
| 100 | Tropical | 0.53 | 0.063 | 0 |
### Cosine Similarity Distance (MNIST)
| N | Method | Used Time (s) | Result | Violation |
|------|-----------|----------------|---------------|-----------|
| 100 | HLWB | 109.11 | 3.50e-06 | 0 |
| 100 | Tropical | 0.51 | 0.004 | 0 |
## Experiments on a Real Graph Dataset
To further validate the practicality of our approach, we applied **MetricPlug** (Section 3.3) to a real-world graph learning task.
### Task Setup
We used graph contrastive learning (based on GRACE [2]) for node classification, replacing standard similarity metrics with **MetricPlug** using the tropical inner product.
### Dataset and Baselines
We experimented on the Cora dataset (2,708 nodes), with train/val/test splits following GCN [3]. MetricPlug was compared against cosine, Hamming, Euclidean, and Manhattan distances.
### Experimental Settings
We used PyG’s `dropout_adj` to perturb edges (ratio = 0.1) for data augmentation. Training was run for 1,000 epochs with a learning rate of 0.01 and 64-dimensional hidden/projection layers. Accuracy was used for evaluation.
| Method | Validation Accuracy (%) | Test Accuracy (%) |
|--------------|-------------------------|-------------------|
| Cosine | 77.4 | 76.5 |
| Manhattan | 79.0 | 79.2 |
| Euclidean | 78.2 | 79.0 |
| MetricPlug | **79.2** | **79.4** |
MetricPlug outperformed other methods in both validation and test accuracy.
[1] Li W, et al. Metric nearness made practical. AAAI2023
[2] Zhu, Y. Deep graph contrastive representation learning. arXiv.
[3] Kipf, T. N. Semi-Supervised Classification with Graph Convolutional Networks. ICLR2016. | Summary: The authors propose the use of tropical algebra to frame the metric nearest problem within a continuous optimization framework. They first demonstrate that the set of non-negative matrices satisfying the triangle inequality can be fully represented using a combination of tropical algebraic representations. They then propose an MLP based on tropical algebraic operations, which they optimize using RMSprop to solve for the nearest valid metric distance matrix to the original matrix.
## Post-Rebuttal update:
Having reviewed the authors' responses to both my own review as well as that of other reviewers, I retain my original rating and would be happy to recommend this work for acceptance to the main conference.
Claims And Evidence: - Optimizing through an MLP-like structure is preferable to directly optimizing over a single starting matrix and avoids local minima
- There are certainly scale-related benefits to using the MLP-like structure, specifically the mini-batch setup, but one can also easily optimize over multiple starting matrices A in parallel as another means of avoiding local minima.
- I don't think I saw any evidence supporting the purported downstream task benefits of MetricPlug as proposed in section 3.3, specifically compared to existing approaches.
Methods And Evaluation Criteria: The proposed evaluation is sufficient for demonstrating the soundness of their technique. However I think what remains to be demonstrated is downstream task impact, as the authors claim existing methods may struggle to accurately capture complex relationships for contrastive learning setups.
Theoretical Claims: I went through the theorems but did not carefully validate each proof.
Experimental Designs Or Analyses: No issues in existing proposed experiments and analysis
Supplementary Material: Yes, went through sections C and D
Relation To Broader Scientific Literature: The authors compare against TRF and HLWB, which appear to be the most recent and likely state of the art solutions to the metric nearness problem. Unfortunately I am not intricately familiar with the prior literature on this topic, so I cannot comment on any other prior works.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths:
- Appears to be a theoretically sound and practical approach to the metric nearness problem -- specifically so at larger scales
- Proposed solution is fairly simple to implement
Weaknesses:
- Claims of downstream task impact need to be validated
Other Comments Or Suggestions: Some of the math notation can be cleaned up a bit.
- For example, it's a bit confusing to have A be set of matrix pairs in theorem 8, while also representing individual matrices in other theorems.
- X in L209 column 2 has not been previously defined
- The font for superscripts and subscripts such as off, max, min, is inconsistent.
Questions For Authors: - Can the miniabatch-based algorithm in appendix C.2 guarantee the triangle inequality property at a global scale across all pairs i,j? Or only up to its maximum output shape?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **Q1:but one can also easily optimize over multiple starting matrices A in parallel as another means of avoiding local minima.**
A1:Thanks for your valuable suggestion. We conducted experiments using the proposed strategy with a single-layer model, and indeed observed further performance improvements over the baseline approach.
In our experiments, we constructed a set of non-metric matrices using the MNIST dataset, following the approach described in [1].
We fixed the number of model iterations to **1000**. Under this configuration:
The convergence error improved from 0.0583 to **0.0536**.
The improved model required **3.58 seconds**, compared to **0.97 seconds** for the original method.
This demonstrates a clear **trade-off between convergence quality and computational cost**.
**Q2:I don't think I saw any evidence supporting the purported downstream task benefits of MetricPlug as proposed in section 3.3, specifically compared to existing approaches.**
A2: We conducted additional experiments on real-world tasks to demonstrate the applicability of our approach, specifically MetricPlug as described in Section 3.3.
- **Task Definition:**
We applied graph contrastive learning, an unsupervised method for learning node representations using contrastive loss. Specifically, we used the approach from GRACE [2], which involves perturbing edges to generate augmented views. Unlike traditional methods that use cosine or Euclidean similarity metrics, we replaced them with MetricPlug, which uses the tropical inner product to calculate node pair similarity, satisfying the triangle inequality. The evaluation task focuses on node classification.
- **Dataset and Baseline:**
We used the Cora dataset, which contains 2708 nodes, and follows the standard data split used in GCN [3]. We compared our MetricPlug method with existing methods based on cosine, Hamming, Euclidean, and Manhattan distances.
- **Experiment Settings:**
We utilized the dropout_adj function in PyG to randomly perturb edges with a perturbation ratio of 0.1 to generate augmented views. The configuration used a learning rate of 0.01, 1000 epochs, with hidden and projection dimensions set to 64. Accuracy was the evaluation metric. The results are summarized below:
| Method | Validation Accuracy (%) | Test Accuracy (%) |
|------------|-------------------------|-------------------|
| Cosine | 77.4 | 76.5 |
| Manhattan | 79.0 | 79.2 |
| Euclidean | 78.2 | 79.0 |
| Hamming | 78.8 | 79.0 |
| MetricPlug | **79.2** | **79.4** |
As shown in the table, the MetricPlug method outperforms other distance-based methods, achieving the best results on both validation and test sets.
**Q3:Can the miniabatch-based algorithm in appendix C.2 guarantee the triangle inequality property at a global scale across all pairs i,j? Or only up to its maximum output shape?
The answer to the first question is **affirmative**.**
Our proposed minibatch-based training method guarantees that, in any iteration, the model's predictions satisfy the triangle inequality for all index pairs \( (i, j) \). Specifically, as long as the two matrices involved in the final Tropical inner product are constrained to be non-negative, the resulting matrix—obtained via Tropical inner product—will satisfy the triangle inequality for any pair \( (i, j) \).
Although the minibatch algorithm only updates a subset of the model's weights based on the current batch, we explicitly enforce non-negativity constraints on all model parameters throughout training. As a result, regardless of the update stage, the forward pass of the model will always produce a full matrix \( H \) whose entries satisfy the triangle inequality across all index pairs \( (i, j) \).
**For example**, consider a \(1000 \times 1000\) matrix with a batch size of 16. In a single iteration, the model only computes the values for those 16 selected positions and updates the weights based on the corresponding loss. However, since all weights remain non-negative due to the imposed constraints, the full \(1000 \times 1000\) matrix outputted by the model at this point will still satisfy the triangle inequality for any pair \( (i, j) \). This property is guaranteed by **Theorem 6**.
Other cases follow by the same reasoning.
**Q4:Some of the math notation can be cleaned up a bit.**
A4:Thank you very much for pointing out this issue. We will address and correct it in the subsequent version of the manuscript
[1] Li W, et al. Metric nearness made practical. AAAI2023
[2] Zhu, Y et al. Deep graph contrastive representation learning. ArXiv.
[3] Kipf, T. N et al. Semi-Supervised Classification with Graph Convolutional Networks. ICLR2016. | null | null | null | null | null | null | null | null |
A General Representation-Based Approach to Multi-Source Domain Adaptation | Accept (poster) | Summary: This paper addresses the issue of multi-source domain adaptation with a focus on identifiability. It introduces a causal framework that avoids restrictive assumptions such as independent latent variables or invariant label distributions. The authors theoretically establish identifiability and validate the effectiveness of their method through experiments on datasets like Office-Home and PACS.
Claims And Evidence: All the claims are supported by clear and convincing evidence.
Methods And Evaluation Criteria: The methods and evaluation criteria are suitable for the problem.
Theoretical Claims: Due to limited time, I couldn't check every detail of the derivation. However, most of the theorems seem intuitive and appear to be correct.
Experimental Designs Or Analyses: 1. The primary concern with this work is that while it claims to "consider the most general scenario" and suggests adaptability to different data distribution shifts, the experiments are limited to datasets with covariate shifts. It would be more convincing if the authors also conducted experiments involving other types of distribution shifts, such as label shift or conditional shift.
2. The most recent domain adaptation (DA) baselines included in the study are from 2022. Including the latest baselines would enhance the rigor of the analysis.
Supplementary Material: I didn’t check the supplementary material.
Relation To Broader Scientific Literature: The works most closely related to this are [1] and [2]. However, [1] adopts more stringent assumptions about the independent latent variables, whereas [2] does not ensure identifiability.
[1] Kong, Lingjing, et al. "Partial disentanglement for domain adaptation." International conference on machine learning. PMLR, 2022.
[2] Zhang, Kun, et al. "Domain adaptation as a problem of inference on graphical models." Advances in neural information processing systems 33 (2020): 4965-4976.
Essential References Not Discussed: As far as I known, all the essential references are discussed in the paper.
Other Strengths And Weaknesses: **Strengths:**
1. The novel multi-source adaptation framework introduced in the paper guarantees identifiability under mild restrictions.
2. Effectiveness of the theorem and its implementation are verified through experiments conducted on datasets such as Office-Home and PACS.
3. The paper is well-written and easy to follow.
**Other Weaknesses:**
None. Most potential weaknesses are addressed in the "Experimental Designs" section.
Other Comments Or Suggestions: None
Questions For Authors: The theoretical framework proposed by the author suggests there are multiple ways to implement it in practice. Why did the authors choose a multi-VAE structure for implementation? Could the author provide more insights into their intuition behind this choice?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer's constructive comments, helpful feedback, and time devoted. Please see below for our response.
**Q1:** "the experiments are limited to datasets with covariate shifts" and "experiments involving other types of distribution shifts"
**A1:** Thanks for your comment. Notably, several existing works [1, 2, 3] on label shifts have also used Office-Home and PACS, highlighting the presence of label shifts in these datasets. A possible reason is that, to the best of our knowledge, there is no benchmark dataset specifically designed for label shift. If you are aware of any such datasets, we would greatly appreciate your suggestions and would be happy to incorporate them into our analysis.
To validate the presence of label shifts, we analyzed the label distributions of Office-Home and PACS. The visualizations are provided in Figures 1, 2, and 3 in https://anonymous.4open.science/r/icml-rebuttal-gama/rebuttal.pdf. Specifically, Figures 1 and 2 illustrate the label distributions across different domains for Office-Home and PACS, respectively, indicating notable variations in label probabilities across domains. To further quantify this, we computed the Jensen–Shannon divergence between label distributions for different domain pairs (Figure 3), confirming clear label shifts—especially in the Art sub-task of Office-Home and the Sketch sub-task of PACS. Thus, beyond covariate shift as you pointed out, both datasets also exhibit label shifts.
To further investigate our method’s robustness to severe label shift, we created a more extreme setting for Clipart sub-task (from Office-Home dataset ) by sampling data points from different labels according to a pre-defined label distribution; see Figure 4 in https://anonymous.4open.science/r/icml-rebuttal-gama/rebuttal.pdf, where the label distribution of Clipart domain differs substantially from other domains. The corresponding Jensen–Shannon divergence (Figure 5) verifies this and indicates a much more severe label shift for Clipart. For this sub-task, we compared our method against iMSDA, one of the strongest baselines in our experiments. Our method achieves an accuracy of $57.7$, outperforming iMSDA that achieves $57.1$, demonstrating the effectiveness of our approach under severe label shift.
We will incorporate this discussion and additional experiment into the revision. Hope this addresses your concern.
**Q2:** "Including the latest baselines would enhance the rigor of the analysis"
**A2:** In light of your suggestion, we will include 5 additional recent baselines (after 2022) in the revision, i.e., CASR [4], TFFN [5], SSD [6], GeNRT [7], and iLCC-LCS [8]. The results are provided in the tables below, which indicate that our method achieves superior performance.
- Office-Home dataset:
| Method | Ar | Cl | Pr | Rw | Avg |
|-------------|------|------|------|------|------|
| CASR [4] | 72.2 | 61.1 | 82.8 | 82.8 | 74.7 |
| TFFN [5] | 72.2 | 62.9 | 81.7 | 83.5 | 75.1 |
| SSD [6] | 72.5 | **64.5** | 81.2 | 83.2 | 75.4 |
| **GAMA (Ours)** | **76.6** | 62.6 | **84.9** | **84.9** | **77.3** |
- PACS dataset:
| Method | P | A | C | S | Avg |
|-------------|------|------|------|------|------|
| GeNRT [7] | 98.5 | 93.6 | 91.4 | 85.7 | 92.3 |
| iLCC-LCS [8] | 95.9 | 86.4 | 81.1 | 86.0 | 87.4 |
| **GAMA (Ours)** | **98.8** | **93.7** | **92.8** | **89.3** | **93.7** |
**Q3:** "Why did the authors choose a multi-VAE structure for implementation? Could the author provide more insights into their intuition behind this choice?"
**A3:** Thanks for your question, which helps clarify the motivations behind our implementation. First, VAE provides a convenient way to model the distribution of latent variables. Second, compared to other generative models, VAEs make it easier to incorporate prior structural information (e.g., parent-child relationships) into our method. Using multiple VAEs further facilitates the integration of such structural priors. We will clarify this in the revision.
---
**References:**
[1] Le et al., On Label Shift in Domain Adaptation via Wasserstein Distance. arXiv, 2022.
[2] Jang et al., Distribution Shift-Aware Prediction Refinement for Test-Time Adaptation. arXiv, 2024
[3] Liu et al., Domain Generalization under Conditional and Label Shifts via Variational Bayesian Inference. In IJCAI, 2021.
[4] Wang et al., Classaware sample reweighting optimal transport for multi-source domain adaptation. Neurocomputing, 2023.
[5] Li et al., Transferable feature filtration network for multi-source domain adaptation. Knowledge-Based Systems, 2023.
[6] Li et al., Multidomain adaptation with sample and source distillation. IEEE Transactions on Cybernetics, 2023.
[7] Deng et al., Generative model based noise robust training for unsupervised domain adaptation. arXiv, 2023.
[8] Liu et al., Identifiable latent causal content for domain adaptation under latent covariate shift. arXiv, 2024. | Summary: The manuscript presents a general representation-based approach for multi-source domain adaptation (GAMA). It aims to improve knowledge transfer across domains by leveraging theoretical identifiability results for latent variables and adapting a variational autoencoder (VAE)-based framework. The authors propose partitioning the Markov blanket into its parents, children, and spouses to enhance adaptation performance and validate their approach through theoretical guarantees and empirical evaluations on two benchmark datasets.
## update after rebuttal
The authors' answers clarified my doubts, so I confirm the positive assessment of the manuscript.
Claims And Evidence: The manuscript makes strong theoretical claims regarding identifiability and its impact on domain adaptation. The theoretical framework is supported by a series of assumptions and theorems, including results on subspace identifiability of the Markov blanket and its components. Empirical results on benchmark datasets substantiate the proposed method, demonstrating its effectiveness compared to existing approaches.
Methods And Evaluation Criteria: The authors employ a combination of variational autoencoders and deep neural networks to extract and learn latent representations that are invariant to domain shifts. Evaluation is conducted on standard datasets such as PACS and Office-Home, using accuracy as the primary metric. The paper also includes an ablation study to analyze the impact of different components of the model.
Theoretical Claims: The manuscript provides rigorous theoretical backing, introducing multiple theorems to establish the identifiability of the joint distribution in the target domain. However, some assumptions, such as linear independence of certain distributions, may be restrictive and require further discussion.
Experimental Designs Or Analyses: The experiments are well-structured, covering multiple datasets and baseline comparisons. However, additional baselines, particularly recent causal representation learning approaches, could further validate the method's effectiveness.
Supplementary Material: I did not review the supplementary material.
Relation To Broader Scientific Literature: The paper contextualizes its contributions well within domain adaptation and causal representation learning. It cites relevant works on domain adaptation, causal disentanglement, and latent variable identifiability.
Essential References Not Discussed: As far as I know, the authors have included relevant and recent references related to the subject matter.
Other Strengths And Weaknesses: Strengths
- Strong theoretical foundation for identifiability.
- Clear motivation for partitioning the Markov blanket.
- Well-structured experimental validation.
Weaknesses
- Some theoretical assumptions may be restrictive.
- Lack of discussion on computational efficiency.
- No discussion on potential failure cases or limitations.
Other Comments Or Suggestions: - Provide additional baselines from causal representation learning.
- Discuss the computational complexity of the approach.
- Clarify the generalizability of the method beyond the considered datasets.
- Page 7, line 372 first column. I think the authors would refer to Figure 2 instead of Figure 1.
Questions For Authors: 1. How does the approach handle highly imbalanced domain shifts?
2. Would the method still be effective with fewer source domains?
3. Are there any practical limitations regarding computational resources for training?
4. Could the theoretical results be extended to other types of unsupervised domain adaptation problems?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We greatly appreciate the reviewer's time and valuable comments, many of which will help improve the clarity of our paper. Our responses to these comments are given below.
**Q1:** "some assumptions, such as linear independence of certain distributions, may be restrictive and require further discussion"
**A1:** Thanks for this comment. Intuitively, the sufficient change assumption requires the existence of multiple environments where the causal mechanisms among latent variables change sufficiently. These distributional changes, combined with the invariant mixing function, provide information for recovering the latent variables and their causal relationships. Note that this type of sufficient change assumption has been commonly adopted in the literature of nonlinear ICA and causal representation learning to establish identifiability under various settings. We will include this discussion in the revision.
**Q2:** "Lack of discussion on computational efficiency" and "Are there any practical limitations regarding computational resources for training?"
**A2:** We train our model using a NVIDIA A100-SXM4-40GB GPU. For Office-Home dataset, the batch size is set to 32, and the model is trained for 70 epochs, which takes approximately 160 minutes. The peak memory usage is around 35 GB. The majority of the computational cost comes from the ResNet-50 backbone, as we only add several lightweight MLP layers after it. We will discuss this and report the running time for all datasets in the revision.
**Q3:** "No discussion on potential failure cases or limitations"
**A3:** A limitation, as discussed in Q2, is that our method may require a relatively long training time. However, we believe this trade-off is justified given the performance improvements it achieves. We will discuss this in the revision.
**Q4:** "Provide additional baselines from causal representation learning"
**A4:** There are only a few works from causal representation learning that address domain adaptation. The only work we are aware of is iMSDA (Kong et al., 2022), which we have included as a baseline. Please kindly let us know if there is other suitable causal representation learning baseline you think we could compare to.
**Q5:** "Clarify the generalizability of the method beyond the considered datasets"
**A5:** Our method is designed for general domain adaptation, effectively handling various types of distribution shifts by learning latent representations that are most relevant for adaptation and capturing distribution shifts through low-dimensional representations. The datasets we used—Office-Home and PACS—are widely adopted benchmarks that span diverse domains and distribution shifts, providing strong empirical support for its generalizability.
To further evaluate the generalizability, we have conducted additional experiments under more severe label shifts; see our response to Q1 for Reviewer d2cE. It is observed that our method continues to perform well, demonstrating its effectiveness even in more challenging settings.
**Q6:** "Page 7, line 372 first column. I think the authors would refer to Figure 2 instead of Figure 1"
**A6:** We will fix the typo in the revision.
**Q7:** "How does the approach handle highly imbalanced domain shifts?"
**A7:** Based on your question, we interpret "highly imbalanced domain shifts" as referring to large distribution shifts between domains. Under this interpretation, our approach remains applicable as long as the assumptions hold. Specifically, the more imbalanced the domain shifts are, the more domains may be required to satisfy our theoretical assumptions. We will incorporate this discussion in the revised manuscript. If you intended a different interpretation of "imbalanced domain shifts," we would greatly appreciate if you could kindly let us know.
**Q8:** "Would the method still be effective with fewer source domains?"
**A8:** While our theoretical result assumes multiple domains, our method remains highly effective even with only a limited number of source domains. This is demonstrated by the strong performance in our experiments, where only a limited number (i.e., three) of source domains are available. We will include this discussion in the revision.
**Q9:** "Could the theoretical results be extended to other types of unsupervised domain adaptation problems?"
**A9:** Our theoretical results are already quite general, as they accommodate (1) various types of distribution shifts, where changes may occur anywhere in the latent space, and (2) arbitrary relations among latent variables. To address your question, while our results assume a single target domain, **it can be naturally extended to multi-target domain adaptation** by learning distinct $P_{new}$ for each target domain. This extension is feasible because our framework learns compact latent representations that capture distribution shifts relative to the prediction task. We will include a discussion of this extension in Section 4.3.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their responses that clarified my doubts, so I confirm the positive judgment on the manuscript.
---
Reply to Comment 1.1.1:
Comment: Thanks for your recognition and constructive feedback. We will incorporate them into the manuscript. Please feel free to let us know if you have further questions. Thank you! | Summary: The paper proposes a multi-source domain adaptaion approach. It is a generative based approach which projects feature representation into a latent space using VAE. The latent space (Markov blanket) is then partitioned into the subspace of label’s parents, children and spouses. Next, two VAE are used to Z_pa and Z_sps to learn \theta_Y and \theta_ch. A classification loss is added to train on input labels.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: Glanced over them.
Experimental Designs Or Analyses: Yes
Supplementary Material: Yes, glanced over the theorems.
Relation To Broader Scientific Literature: The paper presents an incremental solution to multi-source domain adaptation problem.
Essential References Not Discussed: Yes
Other Strengths And Weaknesses: Strengths:
1. The method is novel and has a detailed theoretical background and explanation.
2. Approach achieves good results on Office Home and PACS datasets.
Weakness:
1. More experiments would be better. Proposed approach is evaluated on two datasets (std should be added to the results). It is suggested to include at least 3 datasets for reliability. Domainnet is a popular dataset for multi-source domain adaptation.
2. Limited analysis. Analysis performed on the approach is not enough. Some experiments related to the latent space of features and VAE would be nice to visualize.
3. The approach has too many hyper-parameters. How and what these values are set to is not available in the paper.
Other Comments Or Suggestions: NA
Questions For Authors: See weakness.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the time dedicated to reviewing our paper and the valuable comments. We have tried to address all the concerns in the following.
**Q1:** "More experiments would be better" and "Domainnet is a popular dataset for multi-source domain adaptation"
**A1:** Thank you for your insightful suggestion. Following your suggestion, we are currently conducting experiments for the DomainNet dataset. The dataset contains 6 different sub-tasks. Due to time constraints of the rebuttal period, some of these experiments are still ongoing, and we have finished the experiments for the Clipart sub-task. The results are provided in the table below, which indicate that our method outperforms all other baselines. Note that to save space for author's response, the references of the baselines, if not specified, can be found in our paper (e,g., Table 1). We will include the complete results for different sub-tasks in the revised manuscript. Hope this addresses your concern.
| Method | Clipart |
|-------------|------|
| Source Only | 52.1 |
| DANN | 60.6 |
| DCTN | 48.6 |
| MCD | 54.3 |
| M3SDA | 58.6 |
| CMSS | 64.2 |
| LtC-MSDA | 63.1 |
| ADDA [1] | 47.5 |
| ML_MSDA [2] | 61.4 |
| meta-MCD [3] | 62.8 |
| PFSA [4] | 64.5 |
| SSD [5] | 67.2 |
| **GAMA (Ours)** | **69.2** |
**Q2:** "std should be added to the results"
**A2:** Thanks for this suggestion. We will include the standard deviation for all datasets in the revised manuscript. Due to space constraints of author's response, let us provide the results with standard deviation for Office-Home dataset in the table below.
| Method | Ar | Cl | Pr | Rw | Avg |
|-------------|------|------|------|------|------|
| DAN | 68.3±0.5 | 57.9±0.7 | 78.5±0.1 | 81.9±0.4 | 71.6 |
| Source Only | 64.6±0.7 | 52.3±0.6 | 77.6±0.2 | 80.7±0.8 | 68.8 |
| DANN | 64.3±0.6 | 58.0±1.6 | 76.4±0.5 | 78.8±0.5 | 69.4 |
| DCTN | 66.9±0.6 | 61.8±0.5 | 79.2±0.6 | 77.8±0.6 | 71.4 |
| MCD | 67.8±0.4 | 59.9±0.6 | 79.2±0.6 | 80.9±0.2 | 72.0 |
| DANN+BSP | 66.1±0.3 | 61.0±0.4 | 78.1±0.3 | 79.9±0.1 | 71.3 |
| M3SDA | 66.2±0.5 | 58.6±0.6 | 79.5±0.5 | 81.4±0.2 | 71.4 |
| iMSDA | 75.4 ± 0.9 | 61.4 ± 0.7 | 83.5 ± 0.2 | 84.5 ± 0.4 | 76.2 |
| **GAMA (Ours)** | **76.6±0.1** | **62.6±0.6** | **84.9±0.1** | **84.9±0.1** | **77.3** |
**Q3:** "Limited analysis" and "Some experiments related to the latent space of features and VAE would be nice to visualize"
**A3:** We appreciate your helpful suggestion, which will aid our understanding of the method. In light of your suggestion, we have conducted visualizations of the latent space of features and VAE. Specifically, the t-SNE visualizations of the learned features on the Clipart task from the Office-Home dataset are available in Figure 6 in the anonymized repository: https://anonymous.4open.science/r/icml-rebuttal-gama/rebuttal.pdf, which demonstrate the effectiveness of our method at aligning the source and target domains while preserving discriminative structures. We will include the visualizations in the revised manuscript, and hope this addresses your concern.
**Q4:** "The approach has too many hyper-parameters. How and what these values are set to is not available in the paper."
**A4:** Thanks for your comment. We followed existing work [6] and selected hyperparameters that lead to optimal performance for each task. Due to space constraints of author's response, the values of these hyperparameters are provided in Table 1 of the anonymized repository: https://anonymous.4open.science/r/icml-rebuttal-gama/rebuttal.pdf. We will include these details in the revised manuscript for clarity.
**We want to thank the reviewer again for all the valuable feedback.**
---
**References:**
[1] Tzeng et al., Adversarial discriminative domain adaptation. In CVPR, 2017.
[2] Li et al., Mutual learning network for multi-source domain adaptation. arXiv preprint arXiv:2003.12944, 2020.
[3] Li et al., Online meta-learning for multi-source and semi-supervised domain adaptation. In ECCV, 2020.
[4] Fu et al., Partial feature selection and alignment for multi-source domain adaptation. In CVPR, 2021.
[5] Li et al., Multidomain adaptation with sample and source distillation. IEEE Transactions on Cybernetics, 2023.
[6] Kong et al., Partial disentanglement for domain adaptation. In ICML, 2022. | null | null | null | null | null | null | null | null |
Context Matters: Query-aware Dynamic Long Sequence Modeling of Gigapixel Images | Accept (poster) | Summary: The paper introduces Querent, a query-aware long contextual modeling framework for whole slide image (WSI) analysis, addressing the challenge of computational efficiency in gigapixel images. Unlike standard transformer architectures with quadratic complexity, Querent dynamically selects relevant regions for each patch using region-wise metadata summarization and importance estimation. This enables efficient self-attention while preserving long-range dependencies. The method outperforms existing approaches in biomarker prediction, gene mutation prediction, cancer subtyping, and survival analysis across multiple WSI datasets. Empirical results show that Querent achieves state-of-the-art accuracy while significantly reducing computational costs.
## update after rebuttal
I think the authors well address my concerns, so I will raise my score.
Claims And Evidence: The authors have demonstrated, through theoretical analysis and experimental validation, that their proposed query-aware attention mechanism possesses expressiveness comparable to that of full self-attention, while achieving greater computational efficiency. Moreover, the effectiveness of the region-level metadata summarization and importance estimation modules introduced by the authors has also been empirically substantiated.
Methods And Evaluation Criteria: I believe that the methods and evaluation criteria proposed by the authors are well-aligned with the problem at hand.
Theoretical Claims: Upon reviewing the authors' theoretical proofs, I have some reservations and cannot guarantee their complete accuracy.
Experimental Designs Or Analyses: The experimental design and analysis conducted by the authors are reasonably sound.
Supplementary Material: I have specifically reviewed Appendices B, C, and G.
Relation To Broader Scientific Literature: The paper builds on MIL and transformer-based WSI analysis, addressing efficiency challenges seen in TransMIL and HIPT. While prior work uses linear approximations (Shao et al., 2021) or local-global attention (Chen et al., 2022), Querent introduces query-aware attention, dynamically selecting relevant regions, inspired by context-dependent tissue relationships (Heindl et al., 2015). This improves efficiency while maintaining long-range modeling, advancing adaptive sparse attention in pathology AI.
Essential References Not Discussed: No, there aren’t.
Other Strengths And Weaknesses: ### Strengths:
1. The paper introduces query-aware attention, a novel approach to dynamically selecting relevant regions in WSIs, improving upon rigid local-global attention and linear approximations.
2. By significantly reducing computational costs while maintaining long-range dependencies, Querent advances scalable WSI analysis, impacting biomarker prediction, cancer subtyping, and survival analysis.
3. Strong performance across 11 datasets and multiple CPath tasks demonstrates robustness, outperforming state-of-the-art MIL and transformer-based models.
### Weaknesses:
1. The assumptions in the theoretical proofs may be challenging to satisfy in practical code implementation, which undermines their persuasiveness. For instance, the authors did not specify how to ensure that the neural networks $f_{min}$ and $f_{max}$ satisfy the L-Lipschitz continuity. Additionally, the fulfillment of the four conditions in Theorem B.6 during actual network training was not addressed.
2. Could you provide a comparison of training and inference times between this method and other networks? How does the training convergence speed of this method fare? In practical inference for WSI, the most time-consuming part is likely the extraction of patch features, with aggregation taking up a relatively small portion of the time. How much does this method improve the overall inference time compared to other methods during inference?
Other Comments Or Suggestions: It appears that there is an error in Equation 12 (NLL survival loss) in the appendix. The second and third terms on the right side of the equation should be preceded by a minus sign rather than a plus sign. Additionally, the second term should be $y^{(i)}_{j-1}$ instead of $y^{(i)}_j-1$.
Questions For Authors: 1. In the second set of ablation experiments, why does the Estimation Side Network perform worse than the Random Region Selection, which serves as the lower bound?
2. In the final step of Lemma B.4, since $q$ and $\hat{q}$ are not in the same feature space, how can they be combined to yield the final result? Could you provide a detailed derivation?
3. In the first set of ablation experiments and in Appendix G.1, it is mentioned that distance matrices are calculated. After summarization, each region has a feature vector, but before summarization, since a region contains multiple patches, it has multiple feature vectors. How, then, are the distance matrices calculated in this case? Could you elaborate?
4. Is the $\alpha$ in Theorem 3.1 the same as the $\alpha$ in Equation 8?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable suggestions. We answer the reviewer's questions in the following one by one:
`Weakness`
> The assumptions in the theoretical proofs may be challenging to satisfy ...
We appreciate the reviewer's concerns. Regarding the Lipschitz continuity of $f_{min}$ and $f_{max}$: while strict $L$-Lipschitz continuity is a common theoretical assumption used to bound approximation errors, in practice we implement these functions as single-layer perceptrons with ReLU activations. Moreover, we apply regularization techniques such as normalization and weight clipping during training to encourage Lipschitz-like behavior, which empirically ensures that the projections behave in a sufficiently smooth manner for our theoretical guarantees to hold approximately.
As for the four conditions in Theorem B.6, they are idealized guidelines to understand the behavior of our query-aware selective attention mechanism. In our implementation, we approximate these conditions by: (1) using a large hidden dimension (e.g., $d=512$) to meet JL lemma requirements; (2) selecting an appropriate number of regions based on spatial decay analysis; and (3) designing region sizes (e.g., $K=16,24$) that balance the need for a small diameter with computational efficiency. Although perfect adherence is challenging, our parameter choices, guided by validated performance, effectively control the approximation error and preserve the key theoretical properties, as supported by our empirical results.
> Could you provide a comparison of training and inference times ...
We appreciate this question on computational efficiency. In our experiments (see Figure 2 in the [anonymous link](https://anonymous.4open.science/r/ICML_PaperID10-A4EE/README.md)), Querent trains in 72.33s per batch — faster than more complex models like HIPT (473.49s) yet slightly slower than simpler baselines. It converges in a similar number of epochs as other transformer-based methods while achieving state-of-the-art accuracy. Notably, Querent has the lowest memory footprint (2286 MB) among the compared methods. For inference, Querent processes a slide in 0.1328s.
As the reviewer correctly points out, patch feature extraction (typically 2~3mins per slide) dominates the WSI processing pipeline, while feature aggregation accounts for less than 1% of the total time. We believe that the slight increase in computation time compared to simpler methods is a worthwhile trade-off given Querent's state-of-the-art performance across all tasks.
`Other Comments or Suggestions`
> It appears that there is an error in Equation 12 ...
We thank the reviewer for the careful review of Supplementary Materials. We will correct this equation in the revised manuscript to ensure mathematical accuracy.
`Questions`
> In the second set of ablation experiments, why does the Estimation Side Network ...
This counter-intuitive result stems from fundamental limitations in the Estimation Side Network approach. By attempting to predict region importance independently without considering query context, this network struggles with optimization challenges and fails to capture the relational information critical for accurate importance assessment. It also tends to overfit to region patterns seen during training. Random Region Selection, while simple, provides diverse contextual sampling that occasionally includes relevant regions by chance, avoiding biased selection. Our query-aware approach resolves these issues by dynamically assessing region importance relative to each specific query, leading to significantly better performance than both alternatives. We will clarify this explanation in the revised manuscript.
> In the final step of Lemma B.4, ...
We thank the reviewer for the insightful comment regarding the feature space transition in Lemma B.4. To clarify, although $q$ and $\hat{q}$ reside in different feature spaces, the projection functions are assumed to be Lipschitz continuous, which allows us to control the distortion when moving from the original space to the projected space, with detailed deviation shown in Figure 3 in this [anonymous link](https://anonymous.4open.science/r/ICML_PaperID10-A4EE/README.md).
> In the first set of ablation experiments and in Appendix G.1 ...
In our distance matrix calculations, pre-summarization distances between regions are computed by first flattening all patches in each region into a single vector, then calculating Euclidean distances between these region vectors. Post-summarization distances are simply the Euclidean distances between the metadata vectors.
> Is the $\alpha$ in Theorem 3.1 the same as ...
No, these are different quantities. In Theorem 3.1, $\alpha$ is the exponential decay rate parameter for attention scores with spatial distance. In Equation 8, $\alpha$ represents normalized attention weights for feature aggregation. We'll use distinct notation in our revision to prevent confusion.
---
Rebuttal Comment 1.1:
Comment: Thanks for the feedback. I think the authors well address my concerns, so I will raise my score.
---
Reply to Comment 1.1.1:
Comment: We appreciate the reviewer for raising the score and we are glad that our rebuttal effectively addressed concerns. Following the reviewer's constructive suggestions, we will further polish our manuscript and include all revisions mentioned above for better readability. | Summary: This paper introduces Querent, a framework for dynamic long-range contextual modeling of gigapixel WSIs through the adaptive determination of patch relationships. The key idea is to maintain the modeling power of full self-attention while achieving computational efficiency through dynamic sparsification. The method adaptively predicts which surrounding regions are most relevant for each patch, enabling focused yet unrestricted attention computation only with potentially important contexts. By using efficient region-wise metadata computation and importance estimation, their approach dramatically reduces computational overhead while preserving global perception to model fine-grained patch correlations. The effectiveness of the proposed method is validated on benchmark datasets, showing improvements over existing approaches.
Claims And Evidence: The authors claim that their method outperforms existing techniques in both efficiency and accuracy when analyzing WSIs. The experimental results presented partially support these claims, showing improvements in key metrics.
Methods And Evaluation Criteria: The proposed query-aware attention mechanism dynamically adapts to the unique context of each patch in gigapixel WSIs, preserving global attention while substantially reducing computational complexity. This results in enhanced computational efficiency, making it suitable for the intended application.
Theoretical Claims: The paper includes theoretical justifications for the proposed approach, particularly in the modeling techniques used. The proofs and derivations appear sound.
Experimental Designs Or Analyses: The experimental design and analyses are sound, offering lucid delineations of datasets, metrics, and methodologies. However, the comparative methods delineated in the paper appear to diverge from the tasks reported in the original studies. For instance, the performance of the RRT-MIL method, as reported on the TCGA-BRCA dataset, pertains to a sub-typing task, whereas this study utilizes the BRCA dataset for survival prediction. A comparative evaluation of the sub-typing task’s performance could enhance the persuasiveness of the experimental findings.
Supplementary Material: The supplementary material provides additional experimental results and technical details that complement the main text. This material enhances the paper's comprehensiveness and provides valuable insights for replication and further study.
Relation To Broader Scientific Literature: The paper builds upon existing work in sequence modeling, introducing novel adaptations for WSIs. It contributes to the literature by addressing specific challenges associated with WSI and proposing a method that integrates context-aware mechanisms.
Essential References Not Discussed: The paper covers relevant literature.
Other Strengths And Weaknesses: Strengths:
- The integration of query-aware mechanisms with dynamic sequence modeling in WSI analysis.
- The paper is clearly written and well-structured.
- The proposed methodology has yielded commendable performance across a diverse array of tasks and datasets.
Weaknesses:
- The primary contribution of the paper lies in its ability to reduce computational complexity while preserving global attention. As evidenced in Tables 1 and 2, the proposed methods outperform existing approaches; however, the analysis appears somewhat deficient. For instance, the absence of results derived from global attention computations raises questions about whether the superior performance of the proposed method stems predominantly from the MIL paradigm, the extraction of patch features via PLIP, or the novel Dynamic Attention mechanism introduced in the text.
Other Comments Or Suggestions: - Some sections, particularly the theoretical derivations, could be elaborated for better clarity. For instance, the meaning of B in Theorem 3.1 should be promptly elucidated.
- Exploring the integration of the proposed method with other advanced models, such as vision transformers, could be a valuable direction.
Questions For Authors: - Should a comprehensive global attention mechanism be employed instead of this approximate variant, what impact might it have on performance?
- Would the adoption of alternative patch feature extractors—such as CHIEF, UNI/UNI2, Virchow/Virchow2, PRISM, or GigaPath—yield analogous conclusions?
Ethical Review Concerns: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for these constructive comments. We answer the reviewer's questions in the following one by one:
`Experimental Designs or Analyses`
> However, the comparative methods delineated in the paper appear to diverge from the tasks ...
We appreciate the reviewer's concern regarding task alignment between our evaluation and original studies. We want to clarify that our evaluation framework was intentionally designed to be comprehensive, spanning multiple computational pathology tasks (biomarker prediction, gene mutation prediction, cancer subtyping, and survival prediction) to demonstrate the robustness and generalizability of our method across diverse clinical applications. While some baseline methods like RRT-MIL were originally evaluated on specific tasks, we adapted all methods for multiple tasks using standardized feature extraction and training protocols to ensure fair comparison. This approach provides stronger evidence of our method's versatility than limiting evaluation to a single task type would, as demonstrated by Querent's consistent performance advantages across all tasks in Tables 1 and 2. We believe our comprehensive evaluation strategy enhances rather than diminishes the persuasiveness of our experimental findings by showing our approach's effectiveness across the spectrum of computational pathology applications.
`Weakness`
> however, the analysis appears somewhat deficient. For instance, the absence of results derived from global attention ...
We appreciate the reviewer's concern about determining the source of our method's superior performance. It's worth noting that directly applying global self-attention to tens of thousands of WSI patches leads to out-of-memory problems, which explains why existing methods use alternatives like local-global or linear attention approximations. To respond to the reviewer's comment directly, we implemented a full global attention approach using FlashAttention (which achieves linear memory complexity) and compared it with our method with the same PLIP feature extractor and same training protocol (see Table 1 in this [anonymous link](https://anonymous.4open.science/r/ICML_PaperID10-A4EE/README.md)). The experimental results show that Querent consistently outperforms the global attention approach (FlashMIL). This confirms that our performance improvements stem specifically from the proposed query-aware dynamic attention mechanism rather than other components, as these elements were identical across both compared methods.
`Other Comments or suggestions`
> Some sections, particularly the theoretical derivations, could be elaborated ...
In response to this comment, we acknowledge that some elements of Theorem 3.1 would benefit from additional clarification. Specifically, the parameter B in Theorem 3.1 represents the bound on input norms ($||q_i||$, $||K_j||$ $\leq$ $B$), which is critical for establishing the error bounds of our query-aware attention approximation. This parameter is properly defined in Lemma B.4 of the appendix but should have been explicitly introduced in the main text for clarity. We will ensure this and other theoretical elements are more thoroughly explained in the revised version to enhance readability and comprehension of our technical contributions.
> Exploring the integration of the proposed method with other advanced models ...
We agree with the reviewer that integrating our query-aware dynamic attention mechanism with advanced vision transformer architectures represents a promising direction for future work. Our current implementation focuses on efficient modeling of long-range dependencies in gigapixel images, but the core principles of our approach — dynamic region-level metadata summarization and importance-based selective attention — could be readily adapted to enhance various vision transformer frameworks. We appreciate this valuable suggestion and plan to explore such integrations in our future research.
`Questions`
> Should a comprehensive global attention mechanism be employed ...
We have responded to this comment with detailed interpretation in the `Weakness` section.
> Would the adoption of alternative patch feature extractors—such ...
We appreciate this insight. As demonstrated in Table 2 in this [anonymous link](https://anonymous.4open.science/r/ICML_PaperID10-A4EE/README.md), we conducted additional experiments using state-of-the-art foundation models (Virchow and CHIEF). Results show that Querent consistently outperforms other methods with these advanced feature extractors, confirming that our method's superiority stems from its long contextual modeling capability rather than the choice of feature extractor. This indicates Querent's contribution is complementary to advances in foundation models and will continue to provide advantages as foundation models evolve.
**We welcome any further questions or clarifications regarding our rebuttal and are happy to provide additional details if needed.**
---
Rebuttal Comment 1.1:
Comment: Thanks for the responses, which have addressed most of my concerns. After carefully reading all the comments and responses, I decide to raise the score to weak accept.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for raising the score. Following the reviewer's constructive suggestions, we will polish our manuscript further and include all revisions mentioned above for a better presentation of our method. | Summary: This paper introduces Querent, a query-aware dynamic modeling framework for analyzing whole-slide images in computational pathology. To address the computational inefficiency of standard transformer architectures, which struggle with the quadratic complexity of self-attention in large-scale WSI analysis, the authors propose a novel approach that dynamically adapts attention computation to the most relevant regions for each query patch. The framework includes: 1) Region-Level Metadata Summarization, 2) Query-Aware Attention Mechanism and 3)Efficient Importance Estimation. Experiments on biomarker prediction, gene mutation prediction, cancer subtyping, and survival analysis demonstrate that Querent achieves state-of-the-art performance while significantly reducing computational overhead, enabling efficient processing of gigapixel WSIs.
Claims And Evidence: Yes, the claims made in the manuscript are largely supported by clear and convincing evidence.
Methods And Evaluation Criteria: Yes, the proposed method(s) and/or evaluation criteria (e.g., benchmark datasets) are appropriately justified for the current problem or application.
Theoretical Claims: Yes, I reviewed the principles of the method proposed in the paper, primarily focusing on Query-Aware Attention Approximation. I examined the detailed derivation provided in the paper, and after referencing the cited literature (Kaban et al., 2015), I found the methodological derivation to be reasonable.
Experimental Designs Or Analyses: Yes, I have checked the validity of experimental designs. The experimental design and analysis conducted for the proposed method in the article are methodologically sound and empirically valid.
Supplementary Material: I have reviewed the supplementary material in Appendix A.
Relation To Broader Scientific Literature: The paper introduces query-aware sparse attention, which dynamically selects relevant regions for each query patch, maintaining the expressive power of full self-attention while achieving near-linear computational complexity. This aligns with findings from prior studies (e.g., HIPT, Chen et al., 2022) that proposed various attention mechanisms and region selection strategies to improve computational efficiency and model performance in WSI analysis.
Essential References Not Discussed: To the best of my knowledge, I think the authors have already provided sufficient explanation and discussion.
Other Strengths And Weaknesses: Strengths:
1.The paper introduces a novel query-aware attention mechanism that dynamically adapts to the context of each patch, addressing the computational bottleneck of standard transformers in large-scale WSI analysis.
2.The paper provides theoretical guarantees for the query-aware attention mechanism, proving its error bounds in approximating full self-attention.
Weaknesses:
The performance of the model depends on the quality of the region-level metadata. In the computation of region-level metadata, using min/max/mean/mean-std feature to summarize the patch features within a region may lead to the loss of important local information, especially in regions with high tissue heterogeneity or significant noise.
Other Comments Or Suggestions: Some minor issues:
1.The experiments did not leverage features from latest foundation models (e.g., UNI and CONCH). Incorporating these advanced features could potentially reduce the performance disparity between existing MIL methods with Querent.
2.The region size in Querent significantly impacts performance, potentially limiting its generalizability across diverse WSI datasets. When applied to new datasets, it may require careful tuning, posing challenges for real-world applications.
Questions For Authors: 1.The method can visualize the original WSI corresponding to the K regions selected by the Querent and their min/max feature to prove the accuracy of the method. The author could provide some visualization results to demonstrate the effectiveness of the proposed method. For example, visualizing the metadata feature score for each region could help illustrate that the model indeed selects influential patches.
2.It is not clear how the method can avoid the situation where the top K regions miss the regions containing key information.
3.It is uncertain what the advantages of the proposed method are compared with the latest methods that can also achieve efficient and fast classification through Mamba (such as MambaMIL, MamMIL) in existing research. MamMIL can also perceive the topological structures among the instances and incorporate short-range feature interactions.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive comments. We respond to the comments one by one as follows:
`Weakness`
> The performance of the model depends on the quality of the region-level metadata ...
We acknowledge that any approach to summarizing region-level metadata, whether it be min/max, mean, or mean-std, will inherently lose some local information. This trade-off is necessary to achieve computational efficiency when handling gigapixel whole slide images. In our work, the min-max strategy was specifically chosen because it captures the extreme values of feature distributions, which are critical for preserving discriminative patterns — especially in heterogeneous tissues. Our additional analysis (see Figure 1 in this [anonymous link](https://anonymous.4open.science/r/ICML_PaperID10-A4EE/README.md)), using the "Adjusted Average Distance to Summary" metric, shows that the min-max approach outperforms other summarization methods, particularly in high-heterogeneity regions, where it achieves significantly lower error compared to mean or mean-std methods.
Furthermore, our query-aware attention mechanism complements the summarization by dynamically selecting and weighting the most relevant patches, which helps mitigate the loss of local information and filters out noise. Although some information loss is unavoidable with any summarization method, our experimental results demonstrate that the min-max approach, in combination with our selective attention, provides an effective and robust representation that leads to superior performance across all tasks, even in challenging heterogeneous and noisy scenarios.
`Other Comments or Suggestions`
> The experiments did not leverage features from latest foundation models ...
We appreciate this suggestion. As demonstrated in Table 2 in this [anonymous link](https://anonymous.4open.science/r/ICML_PaperID10-A4EE/README.md), we conducted additional experiments using state-of-the-art foundation models (CHIEF and Virchow). Results show that Querent consistently outperforms other methods with these advanced feature extractors, confirming that our method's superiority stems from its long contextual modeling capability rather than the choice of feature extractor. This indicates Querent's contribution is complementary to advances in foundation models and will continue to provide advantages as foundation models evolve.
> The region size in Querent significantly impacts performance, potentially ...
We appreciate the reviewer's comment on region size impact. While region size is a hyperparameter, our ablation studies (Figure 5) show that moderate-sized regions (16-24) consistently deliver strong performance across diverse datasets. This pattern aligns with pathological intuition, *i.e.*, region size should capture meaningful local context without diluting distinctive tissue patterns. This provides a reliable starting point that significantly narrows the hyperparameter search space, enhancing Querent's practical applicability without extensive tuning.
`Questions`
> The method can visualize the original WSI corresponding to the K regions selected ...
We thank the reviewer for this great suggestion and will include visualizations in the revised version.
> It is not clear how the method can avoid the situation where the top K ...
We address this important concern through two mechanisms: (1) our region importance estimation algorithm provides theoretical guarantees (Theorem 3.1) that selected regions are at most 2$\epsilon$1-suboptimal compared to the true top-K regions, ensuring minimal information loss; and (2) our min-max summarization strategy (superior in Fig. 4, p<0.005) effectively captures extreme feature distributions, making the metadata highly discriminative for identifying diagnostically relevant areas. Our ablation studies confirm that our approach significantly outperforms random region selection (Table 3, 8.9% accuracy improvement on UBC-OCEAN), demonstrating that our method reliably identifies regions containing key diagnostic information.
> It is uncertain what the advantages of the proposed method are compared with the latest methods ... through Mamba ...
We directly compare with both MambaMIL and MamMIL (see Table 3 in this [anonymous link](https://anonymous.4open.science/r/ICML_PaperID10-A4EE/README.md)) and Querent consistently outperforms these methods across all metrics and datasets.
The fundamental difference between Querent and these methods lies in our query-aware dynamic modeling approach. While MambaMIL uses sequence reordering and MamMIL employs graph-based representations with MST, both still process all patches with predetermined patterns. In contrast, Querent adaptively determines which surrounding regions are most relevant for each patch based on content, focusing computational resources only where needed.
**We welcome any further questions or clarifications regarding our rebuttal and are happy to provide additional details if needed.**
---
Rebuttal Comment 1.1:
Comment: The authors address my concerns in the rebuttal and I'll retain my score.
---
Reply to Comment 1.1.1:
Comment: We appreciate the reviewer for these constructive suggestions and we are glad that our rebuttal effectively addressed concerns. We will further polish our manuscript and include all revisions/discussions mentioned above for better readability. | Summary: To alleviate the self-attention o(n^2) complexity when modeling WSI, this paper introduces query-based lager-region pruning method to replace linear-attention and local-global attention mechanisms. By ignoring the irrelevant regions to current patches, all the computational cost between current patch and all patches in these regions can be pruned. The evaluations in experiments demonstrate the computational efficiency and performance effectiveness.
Claims And Evidence: The claim in line 023-025 (abstract) 'the query-aware long contextual dynamic modeling framework, which maintains the expressive power of full self-attention while achieving practical efficiency' is not evident.'
If the full self-attention is maintained, how to speed up. I think this expression should be refined
Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense.
Theoretical Claims: I have reviewed all the proofs and found no issues.
Experimental Designs Or Analyses: I have checked the experimental design. An issue of experiment in this paper is that the performance of full self-attention is not included. To the best of my knowledge, the full self-attention can be implemented via FlashAttention to avoid out-of-memory problem in WSI tasks.
Supplementary Material: I have reviewed the computational complexity part of supp.
Relation To Broader Scientific Literature: The method of this paper may also be applied in other tasks with long-sequence modeling using Transformer, e.g. document-level language understanding and AI4Science tasks with long-sequence DNA, RNA.
Essential References Not Discussed: I find that all the essential references are discussed.
Other Strengths And Weaknesses: Strengths: The proposed method is novel and can highly speed up Transformer WSI modeling.
Weakness:
1) The claim on the relationship between 'Querent' and 'full self-attention' should be further discussed.
2) The Flash-attention (with full self-attention) but linear-memory and quadratic time cost should be compared.
3) A very important issue: There seems no explanation or motivation on why the method can improve the results. If the motivation is just like weakness 1), the weakness 2) should be validate in rebuttal. If full self-attention (implemented via Flash-attention) cannot reach the results just like Querent, how to explain it?
Other Comments Or Suggestions: The author should provide more discussion on Flash-attention.
The Flash-attention is of linear-memory cost and quadratic-time cost, but the time cost or speed was accelerated by their hardware operations optimization.
Does your method can be combined with it? How much of your method surpass it?
Questions For Authors: No further question.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable comments. We respond with detailed interpretations as follows:
`Claims and Evidences`
> The claim in line 023-025 (abstract) 'the query-aware ...
We should clarify that Querent provides a theoretically bounded approximation of full self-attention rather than claiming it maintains the exact same expressive power. As demonstrated in Theorem 3.1, our approach maintains the expressiveness within a small constant bound of full self-attention while significantly reducing computational complexity. We will revise this wording in the final version to be more precise: "Querent achieves a theoretically bounded approximation of full self-attention and meanwhile delivers practical computational efficiency for modeling gigapixel images."
`Relation with Full Self-Attn (FlashAttn)`
Since the reviewer's major concern is the necessity of developing a new dynamic attention pattern instead of directly using Flash Attention for full self-attention implementation, here we address these concerns step by step.
**1. Comparison with Full Self-Attention via FlashAttention**\
Following the reviewer's suggestion, we have implemented a full self-attention-based MIL model (FlashMIL) using FlashAttention to enable the processing of long sequences without memory limitations. Specifically, our implementation applied 4 flash-attn layers to model the WSI patch sequence, followed by a mean operation to obtain the slide-level representation for prediction. The comparison results on our three classification datasets are shown in Table 1 in this [anonymous link](https://anonymous.4open.science/r/ICML_PaperID10-A4EE/README.md). While FlashMIL achieves comparable performance on the BCNB-ER dataset, Querent significantly outperforms it on the more complex TCGA-LUAD TP53 and UBC-OCEAN datasets. This demonstrates that our method provides benefits beyond just addressing memory efficiency.
**2. Relationship Between Querent and Flash Attention**\
While both Querent and FlashAttention address the computational challenges of self-attention, they do so through fundamentally different approaches. FlashAttention optimizes the implementation of full self-attention through IO-aware algorithms and hardware optimizations, but still computes attention between all pairs of patches. In contrast, Querent introduces a context-dependent attention mechanism that dynamically identifies and focuses only on the most relevant regions for each query patch.
The superior performance of Querent over FlashMIL can be explained by this contextual selectivity, which serves as an implicit regularization mechanism. By focusing only on relevant regions, Querent filters out noise and irrelevant information that could potentially confuse the model, especially in highly heterogeneous WSIs. This is particularly important in computational pathology, where diagnostically relevant features may be sparsely distributed across the gigapixel image.
**3. Why Querent Improves Results Beyond Memory Efficiency**\
The improved performance of Querent over Flash Attnetion-based implementation can be attributed to several factors. First, the context-aware attention dynamically determines which surrounding regions are most relevant for each patch, allowing Querent to adapt to the heterogeneous nature of WSIs, where different tissue types require different contextual considerations. Second, selective attention acts as a form of implicit regularization by reducing the influence of irrelevant or noisy patches, which is particularly beneficial in weakly-supervised settings with limited training data. Third, while reducing computational overhead, our min-max region metadata approach ensures that potentially important long-range dependencies are still captured, unlike fixed local-global attention patterns that make strong assumptions about which spatial relationships matter. These advantages explain why Querent outperforms Flash-Attention-optimized self-attention.
**4. Compatibility with FlashAttention**\
Regarding the potential combination of Querent with FlashAttention: Yes, our method is compatible with and complementary to FlashAttention. While FlashAttention optimizes how attention is computed through IO-aware algorithms, Querent determines which attention computations are most valuable to perform. A combined approach could leverage FlashAttention's efficiency for computing the selected region attention in our Step 3 (Query-Aware Selective Attention), potentially providing even greater computational benefits. This represents an interesting direction for future work.
Our current implementation already demonstrates significant efficiency gains over standard attention (as shown in Figure 6), requiring only ~1% of the memory and ~5% of the computational cost for 100k patches. Even compared to FlashAttention, Querent offers advantages in computational complexity (near-linear vs. quadratic time complexity) while achieving superior performance on CPath tasks.
---
Rebuttal Comment 1.1:
Comment: The rebuttal has resolved all my issues, and I will keep the initial rating. I believe that the authors will include these discussions in their final version.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for these valuable comments and we are glad that our rebuttal effectively addressed concerns. We will polish our manuscript further and include all revisions mentioned above for better readability. | null | null | null | null | null | null |
PolyConf: Unlocking Polymer Conformation Generation through Hierarchical Generative Models | Accept (poster) | Summary: In this work, we proposed PolyConf, a pioneering tailored polymer conformation generation method that leverages hierarchical generative models to unlock new possibilities for this task. The authors decompose the polymer conformation into a series of local conformations generating these local conformations through an autoregressive model. The Polymers contain multiple repeats, PolyConf first generates one unit and then generates a series of transformations to locate each repeat. Experimental results demonstrate that PolyConf can generate high-quality, physically reliable polymer conformations, facilitating advancements in polymer modeling and simulation.
## update after rebuttal
The authors successfully addressed my concerns. I suggested the authors should clearly pointed out this model is designed specific to the material science. The overall contribution from the machine learning aspect is limited but there are couple of technical improvements for this new task. Therefore, I decide to maintain my overall ranking as 'weak accept'.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: Yes
Experimental Designs Or Analyses: Yes. I check all the results in the main article and appendix
Supplementary Material: Yes. I review all the supplementary materials.
Relation To Broader Scientific Literature: This part I am not quite sure. I am asking the authors to provide such evidence. I am not quite aware any biological and chemical applications that the prediction of 3D structure of a polymer is necessary.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: First, I would like to thank the authors for contributing a new dataset of polymer 3D structures to the field. In terms of the generated 3D structures, PolyConf clearly has significant advantages over other methods. However, in terms of model construction, there isn’t much innovation in this work. The authors split the generation of a repeat polymer into two parts: the first step is to generate a small molecule’s 3D structure, which has been extensively studied, and the authors should compare different methods for predicting small molecule 3D structures. The second step involves generating the repeat units, which, as far as I know, is new. Therefore, from a purely modeling perspective, the contribution of this work is limited.
Additionally, during the generation of repeats, each repeat is created independently, at least according to equations (9) and (10). Why would this modeling approach, which seems to lack global information, lead to a reduction in overall RMSD? My guess is that if a repeat is stretched into a straight line, the RMSD won't be too high. Therefore, the second step of the model generation does not necessarily need to learn any special patterns. The authors could provide different 3D structures to address this concern.
Other Comments Or Suggestions: Related to the previous question, does the polymer have a stable 3D structure? Is there any biochemistry or application related to it? Could you provide some specific citations for its applications?
Questions For Authors: 1.What is the meaning of ‘To generate a random subset of unknown repeating unit conformations based on known/predicted repeating unit conformations iteratively (expressed in Eq. (3)) ‘ what is the exact meaning of ‘unknown’. Does it mean we do not know the 3D structure or we do not know the 2D structure?
2.Is each R-i independent? If they are all independent, could there be collisions between different repeats when the number of repeats is large?
3. What is the RMSD between different conformations in the MD simulation? One possibility is that these polymers themselves do not have a stable 3D structure, and there is a lot of structural variation in the MD simulations. For example, if the RMSD among the MD simulated structures is ~100, then an RMSD of 30 obtained from one computational method would be meaningless.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks for your comments. Below, we try to resolve your concerns one by one.
**W1: Clarification of model construction and contribution**
As shown in Figure 2, we design PolyConf as a hierarchical generative framework with a two-phase generating process.
As described in Section 3.2, in the first phase, we leverage the masked autoregressive model (MAR) to generate the conformation of each repeating unit within the given polymer in random order. **Please note that the conformation of each repeating unit within the given polymer is not generated independently. They share the same global 2D polymer graph information and can be influenced by each other through the MAR module.** The code is provided in `./polyconf/models/polyconf_phase1.py` within the anonymous repository (https://anonymous.4open.science/r/PolyConf).
As described in Section 3.3, in the second phase, we employ an SO(3) diffusion model to generate the corresponding orientation transformations of repeating units within the given polymer, thereby assembling those repeating unit conformations generated by the previous phase into the complete polymer conformation. **Please note that the corresponding orientation transformations of repeating units within the given polymer are obtained together through the diffusion processes on $SO(3)^{Nu}$.** The code is provided in `./polyconf/models/polyconf_phase2.py` within the anonymous repository (https://anonymous.4open.science/r/PolyConf).
Here, as mentioned in lines 79-100, PolyConf is specifically designed based on the unique characteristics of polymers, not a simple application of the existing methods. Besides, as shown in our response to Reviewer HPGV (https://openreview.net/forum?id=BsTLUx38qV¬eId=AdOXTd7M9Y), PolyConf can achieve even better performance than the SOTA polymer property prediction method, further demonstrating its potential. In addition, we think PolyConf also has significant potential for other macromolecules composed of building blocks, such as proteins (amino acids) and RNA (nucleotides), thereby driving progress in these related fields.
**W2&Q2: Clarification of modeling independence**
As mentioned in our last response, both conformation and orientation transformation of each repeating unit within the given polymer **are not generated independently**. They share the same global information provided by the 2D polymer graph and can be influenced by each other.
**W3&Q3: Stable 3D structures and their applications**
We analyzed the energy changes of polymers during our MD simulations, and the results show that most simulations achieve convergence within 1 ns, proving that polymer conformations obtained through our MD simulations are low-energy states (i.e., stable).
As shown in the following Table, we further calculate the RMSD between conformation obtained in 2ns/3ns/4ns and the final conformation obtained in 5ns within the same MD tracjory, demonstrating that the polymer has a stable 3D structure.
| RMSD | 2ns | 3ns | 4ns |
|---|---|---|---|
| 5ns | 2.15 ± 1.22 | 2.01 ± 1.09 | 1.84 ± 0.97 |
These stable 3D structures are critical for various applications. For example, the work in [1] has revealed the relation between the polymer conformation and the elastic modulus of the crystalline region. The works in [2][3] have tried to apply them to calculate glass transition temperatures through experiments and MD simulations. The work in [4] has incorporated polymer 3D structural information into property prediction. In addition, as shown in our response to Reviewer HPGV (https://openreview.net/forum?id=BsTLUx38qV¬eId=AdOXTd7M9Y), PolyConf can achieve even better performance than the SOTA polymer property prediction method, further demonstrating the importance and potential of polymer conformation.
**Q1: Clarification of Eq.3**
As mentioned in our first response, we leverage the masked autoregressive model to generate the conformation of each repeating unit within the given polymer in random order during the first phase. Here, the 2D structure information is known, and we aim to generate the 3D structures of the unpredicted repeating units based on the predicted repeating unit iteratively, thereby obtaining the 3D structures of all repeating units within the given polymer.
We hope the above responses can resolve your concerns. Since it's a relatively unexplored research area, we kindly request your understanding of the challenges and complexities involved in our work.
[1] Relation between the polymer conformation and the elastic modulus of the crystalline region of polymer. Journal of Polymer Science Part C, 1970.
[2] Prediction of polymer properties. cRc Press, 2002.
[3] High-throughput molecular dynamics simulations and validation of thermophysical properties of polymers for various applications. ACS Applied Polymer Materials, 2020.
[4] MMPolymer: A multimodal multitask pretraining framework for polymer property prediction. CIKM2024.
---
Rebuttal Comment 1.1:
Comment: Thanks for clarifying the technical details. I now understand the entire generation process. The generation of repeat units is new.
On the other hand, you mentioned further applications such as proteins, RNAs and DNAs. There are already tons of algorithms which generate 3D structures (w/o sequences). To motivate this work, it is better to show up real biological applications. Therefore, I would like to main my score as weak accept.
---
Reply to Comment 1.1.1:
Comment: Thanks for your timely feedback. **Here, we want to further clarify the main contributions of this work**.
As we have discussed in the whole paper, **our work focuses on polymers in material science rather than proteins in biology**, aiming to explore polymer conformation generation, **an important yet unexplored research area in material science**. As shown in lines 80-98, compared with proteins and other bio-complex, polymers have their unique challenges, e.g., greater structural flexibility. For this goal, we have devoted considerable time and resources to developing PolyBench, **the first benchmark** for polymer conformation generation, to address the scarcity of polymer conformation datasets. Furthermore, we propose PolyConf, **the first tailored method** for polymer conformation generation, which can consistently generate high-quality, physically reliable polymer conformations, facilitating advancements in polymer modeling and simulation. Besides, **the whole work, including code, model, and data, will all be publicly available** to boost subsequent studies. In addition, as shown in our response to Reviewer HPGV (https://openreview.net/forum?id=BsTLUx38qV¬eId=AdOXTd7M9Y), our work can achieve even **better polymer property prediction performance** than the SOTA method, further demonstrating its potential in material modeling and design.
Although we mentioned in the rebuttal that our method has the potential for modeling conformations of proteins, RNAs, and DNAs, these applications are not our core contributions. **Exploring these applications can be our future work, but they are out of the scope of this work.**
According to ICML 2025 Reviewer Instructions, the discussion between authors and reviewers is restricted to at most one additional round of back-and-forth, which means that we might no longer have the opportunity to respond to any further feedback from you. **We hope the above response helps to further clarify the focus and main contributions of our work and enhances your confidence to further support our work. We would greatly appreciate it if you could consider raising your score based on our contributions.** Thanks in advance for your consideration.
Regards,
The authors of PolyConf | Summary: Deep learning for polymer design is a severely underexplored area, and this work attempts to address two major challenges: the lack of methodology and the scarcity of high-quality data. In PolyConf, the authors generate polymers autoregressively, generating conformers for individual building blocks and linking them using a diffusion model. This study introduces the first benchmark with a (supposedly) high-quality polymer conformation dataset derived from MD simulations, aiming to advance research in this area. The work is novel and if reproducible would be highly impactful, however, I have quite a few questions and concerns, which I outline below.
The work aims to address the lack of polymer conformer data by publishing a new dataset; however, I have been unable to review the dataset as it was not made available with the paper (even anonymously), making it difficult to validate its quality. Another limitation of the current evaluation is that all methods compared in the paper are designed for small molecules rather than polymers; however, as far as I understand, there are no deep generative models available for polymers that also handle 3D information, so this may be acceptable. While the contributions are promising and the figures are well-designed and informative, I have currently rated this paper marginally below the publication threshold. I would be willing to increase my score if the authors provide anonymized code for review and address the points raised below.
Feedback and Questions
- **Code availability:** No code is provided, which is a significant drawback. Releasing code (even anonymously for review) would greatly improve reproducibility and impact, especially considering that one of the contributions if the new dataset, but I cannot evaluate it if it is not presented for review. Lack of reproducibility is one of the main reasons I have ranked this work below the acceptance threshold, despite its significant novelty. Without code, it is difficult to determine if the work is fully reproducible, and I am not sure there are enough details in the text to accurately reproduce the results.
- **Baseline comparisons:** Are there any non-deep-learning baselines for comparison? For instance, a simpler approach that links building block conformers could serve as a useful reference, more-so than comparing to deep generative models designed for small molecules. What would the RMSD be for a method that simply linked the conformers generated by RDKit for the same SMILES in the dataset? This could serve as an interesting dummy model for comparison.
- **Metric clarity:**
- The units for the S-MAT-R and S-MAT-P metrics (both structural and energy-related) should be explicitly stated, I have no idea currently what the units are for the tables in the paper which makes it hard to assess how bad/good the values are.
- It is also unclear if the metrics reported in the paper (in the various tables) are for the training, validation, or test set, nor how the metrics differed for the different sets. This should be clarified and unambigious. Have the authors demonstrated, furthermore, that their model is not overfitting?
- **Dataset details:**
- The authors mention sourcing data from three different sources. How much data came from each?
- PolyInfo is known to prohibit web scraping, so I am confused as to how was data obtained from this source?
- A dimensionality reduction method (e.g., UMAP) could help visualize the overlap in building block SMILES from each source, and would make for an interesting analysis to tell us how different the building blocks from each source are.
- **Validation of MD simulations:** How were the MD simulations used for dataset construction validated? It would be useful to understand how reliable they are for training deep learning models.
- **Masked autoencoder experiments:** Was there any analysis on how different levels of masking (e.g., percent of masked bits) affect performance? Understanding the limits of masking would be valuable, especially since I assume that increased masking in the latent space would improve inference efficiency. If this is not the case, some clarification would be helpful.
- **RMSD calculations:**
- Were generated and reference polymer structures aligned before computing RMSD?
- If so, the reported RMSD values appear high. However, without units provided, it is difficult to assess this properly.
- **Failure mode analysis:** Can the authors highlight any failure cases? For instance, can the model currently only handle linear polymers, or does it also support other topologies? Are there specific polymers or building blocks where the model struggles?
- **Objective clarification:** Is the goal to generate low-energy conformers or to match reference structures? If the latter, have the authors verified that the reference conformers are indeed low-energy states?
- **Future directions:** Could the learned embedding from the masked autoencoder-decoder framework be useful for polymer property prediction? This might be an interesting avenue for future work.
Claims And Evidence: I do not believe the claims are justified by the current presented results. See my detailed review above.
Methods And Evaluation Criteria: Partially. See my detailed review above.
Theoretical Claims: N/A
Experimental Designs Or Analyses: The analyses in this work are partially sound but require significant clarification. See my detailed review above.
Supplementary Material: Yes, the appendix. Would have loved to see some code/data as well as those are touted as major contributions of the work but not made available.
Relation To Broader Scientific Literature: This is a novel work that addresses a key gap in polymer design via deep generative models.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Significance is high and the method novel and creative, but the clarity is weak and the experiments/analysis lack rigour. This can be potentially improved.
Other Comments Or Suggestions: See my detailed review above.
Questions For Authors: See my detailed review above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks for your comments and suggestions. Below are our responses to your questions.
**Q1&Q5: Code availability and Validation of MD**
The code, data, and various scripts for our PolyConf and MD simulations are available in this anonymous link (https://anonymous.4open.science/r/PolyConf). It has provided enough details to reproduce our work. Due to the repository size limitation, we can only provide a small subset of the complete dataset, but the complete dataset will be provided after acceptance.
As mentioned in our response to Reviewer 9jKE (https://openreview.net/forum?id=BsTLUx38qV¬eId=KDYQcy2SLb), we have invested significant effort to validate our MD simulations, including seeking guidance from experienced experts, examining the energy convergence, calculating the density values of typical polymers, and comparing them with experiment density values.
**Q2: Baseline comparisons**
As shown in the following Table, we have constructed such a dummy model following your suggestion, and the results demonstrate that our PolyConf can still achieve SOTA performance.
| Model | S-MAT-R | E-MAT-R |
|----|---|---|
| Dummy Model | 68.403 | 18.735 |
| PolyConf | **35.021** | **0.933** |
**Q3: Metric clarity**
The units for the S-MAT-R and S-MAT-P are Å, and the metrics reported are all for the test set. We will explicitly state them in the revised paper.
In addition, we train PolyConf on the training set and select the best checkpoint based on the validation set. It is a widely used practice to avoid overfitting.
**Q4: Dataset details**
The details can be found in Appendix A, with the majority derived from PI1M [1]. Please note that we only need to collect polymer SMILES strings, and all these strings are publicly available from previous works [1]-[5].
**Q6: Masked autoencoder experiments**
In our implementation, the mask rate is randomly sampled from a pre-defined truncated normal distribution, ensuring balanced randomness, avoiding extreme values, and enhancing both robustness and generalization. Details can be found in `./polyconf/models/polyconf_phase1.py` within the provided anonymous repository.
**Q7: RMSD**
The generated and reference polymer structures have been aligned before computing RMSD. While PolyConf has achieved SOTA performance, there is still significant room for improvement in terms of the RMSD. In the future, we are willing to collaborate with researchers to further explore this challenging task.
**Q8: Failure mode analysis**
Our method currently only handles linear polymers, as modeling other topologies (e.g., cross-linked polymers) involves significantly greater complexities beyond data modeling.
**Q9: Objective clarification**
We have analyzed the energy changes of polymers during our MD simulations, and the results show that most simulations achieve convergence within 1 ns, while we run the simulations for 5 ns to ensure the reference conformers are low-energy states. Here, we have provided some raw outputs of our MD simulations in the `./MD` folder within the provided anonymous repository for validation.
Therefore, the objectives you mentioned are fundamentally the same: we aim to train the model to match reference structures, thereby enabling it to generate low-energy conformers.
**Q10: Future directions**
As shown in the following Table, directly based on the learned embedding, PolyConf can achieve even better performance than the SOTA polymer property prediction method MMPolymer [5], further demonstrating its great potential.
| Method | Egc | Egb | Eea | Ei | Xc | EPS | Nc | Eat |
|---|---|---|---|---|---|---|---|---|
| MMPolymer | **0.924 ± 0.006** | 0.934 ± 0.008 | 0.925 ± 0.025 | **0.836 ± 0.053** | **0.488 ± 0.072** | 0.779 ± 0.052 | 0.864 ± 0.036 | **0.961 ± 0.018** |
| PolyConf | 0.916 ± 0.006 | **0.937 ± 0.010** | **0.926 ± 0.018** | 0.822 ± 0.052 | 0.422 ± 0.096 | **0.811 ± 0.049** | **0.868 ± 0.041** | **0.961 ± 0.030** |
We hope the above responses help you re-evaluate our work. Since polymer conformation generation remains a relatively unexplored research area, we have invested significant time and effort into developing and refining our PolyConf and PolyBench. We kindly request your understanding of the challenges and complexities involved in pioneering work. We would greatly appreciate it if you could raise your score.
[1] PI1M: a benchmark database for polymer informatics. Journal of Chemical Information and Modeling, 2020.
[2] Graph rationalization with environment-based augmentations. KDD2022.
[3] Polymer informatics at scale with multitask graph neural networks. Chemistry of Materials, 2023.
[4] PolyNC: a natural and chemical language model for the prediction of unified polymer properties. Chemical Science, 2024.
[5] MMPolymer: A multimodal multitask pretraining framework for polymer property prediction. CIKM2024.
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for the thoughtful response and for including a link to the anonymized repo. While I understand the size limitations, the code is missing a README and general documentation (e.g., set-up instructions) that limits its utility significantly as well as the reproducibility (if there are no instructions for how to reproduce results from the paper, then one cannot really count it as "reproducible"). I recommend including these in the revisions. Furthermore, for data from MD simulations, this does not belong in a Git repo, as it is not made to handle large files, but in a separate Zenodo (or similar platform) with also suitable accompanying documentation.
Thank you also for the additional details, it is interesting. If the additional analysis and discussion discussed above in the author rebuttal is also incorporated into the manuscript in a structured way, and all my original questions are answered (some, especially around the data curation and visualization, were ignored) I will consider increasing my score to a 3.
---
Reply to Comment 1.1.1:
Comment: Thanks for your timely feedback. Below, we try to resolve your remaining concerns one by one.
> **1. For the anonymized repo:**
**According to the ICML 2025 Peer Review FAQ, it's forbidden to include additional text in the code.**
That's why we have to remove the README.md and general documentation from our anonymized repo.
Here, we will provide a brief introduction to reproduce our work:
- For environment set-up, the list of dependencies has been provided in the accompanying `requirements.txt` file.
- For Phase-1 Training, please run `bash train_phase1.sh`.
- For Phase-2 Training, please run `bash train_phase2.sh`.
- For inference, please run `bash inference.sh`.
- For generated conformations extraction, please run `python extract_confs.py`
**We will release our code with detailed documentation in the public version, as we have always done before.**
> **2. For MD simulation data:**
Thanks for your suggestion.
**Since the total size of the MD data is as large as around 5TB, it is impractical for us to release such a dataset at the current stage.** To solve your concern with our best effort under this context, we have randomly sampled 50 cases from our MD data. Please access these sample data at this link (https://drive.google.com/file/d/1kZJDR_oIJq98xa7TZuuhkAhjTD89Q1px/view?usp=drive_link), and we will release the whole MD data with detailed documentation after acceptance.
In addition, **we kindly remind you that the corresponding scripts of our MD simulations have already been provided in the `./MD` folder of our anonymized repo** (https://anonymous.4open.science/r/PolyConf).
After installing AmberTools and GROMACS according to the corresponding official documentation, you can easily reproduce the pipeline of our MD simulations through `python prepare_md.py` and `python run_nvt_md.py`.
Therefore, we believe that the delay in releasing the whole MD data is not strong evidence for rejecting our work.
> **3. For manuscript revision:**
**According to the ICML 2025 Peer Review FAQ, it's also forbidden to update the original submission (PDF and supplemental material) during the discussion period.** Here, we promise that the additional analysis and discussion in the rebuttal will all be incorporated into our camera-ready version in a structured way.
> **4. For data curation and visualization (Q4. Dataset details):**
**4.1 The authors mention sourcing data from three different sources. How much data came from each?**
As we have responded in our rebuttal, the statistics information of our dataset can be found in Appendix A, with the majority (i.e., training set) derived from [1] and others (i.e., validation and test set) derived from [6][7]. Here, as described in lines 650-654, the training set has 46,230 polymers, the validation set has 4,709 polymers, and the test set has 2,088 polymers.
**4.2 PolyInfo is known to prohibit web scraping, so I am confused as to how was data obtained from this source?**
As we have responded in our rebuttal, we only need polymer SMILES strings to run MD, and all these strings (including training, validation and test sets) we used are publicly available from previous works [1]-[5].
**4.3 A dimensionality reduction method (e.g., UMAP) could help visualize the overlap in building block SMILES from each source, and would make for an interesting analysis to tell us how different the building blocks from each source are.**
According to your suggestion, we have visualized the polymer SMILES strings from the training/validation/test set using the UMAP in this link (https://anonymous.4open.science/r/PolyConf/dataset/UMAP.png). Please note that our PolyConf is trained on the training set, while the best checkpoint is chosen based on the validation set.
> **5. For MD validation (Q5. Validation of MD simulations):**
As we have responded in our rebuttal, we invest significant effort to validate our MD simulations, including seeking guidance from experienced experts, examining the energy convergence, calculating the density values of typical polymers, and comparing them with experiment density values. The details can be found in our response to Reviewer 9jKE (https://openreview.net/forum?id=BsTLUx38qV¬eId=KDYQcy2SLb).
**According to ICML 2025 Reviewer Instructions, the discussion between authors and reviewers is restricted to at most one additional round of back-and-forth, which means that we might no longer have the opportunity to respond to any further feedback from you. In summary, we have done our best to develop and refine our work, and answer all your questions. We hope the above responses can resolve your remaining concerns and enhance your confidence to increase the score. Thanks in advance for your consideration.**
Regards,
The authors of PolyConf
[6] Polyinfo: Polymer database for polymeric materials design. EIDWT2011.
[7] Transferring a molecular foundation model for polymer property predictions. Journal of Chemical Information and Modeling, 2023. | Summary: This paper introduces PolyConf, a novel hierarchical generative framework for polymer conformation generation. Addressing the unique challenges of polymers—such as high flexibility, large chemical space, and lack of prior datasets—PolyConf decomposes the task into two phases: Repeating Unit Conformation Generation and Orientation Transformation Generation
The authors also present PolyBench, the first benchmark dataset for polymer conformation generation, containing over 50,000 polymer conformations derived from molecular dynamics simulations.
Claims And Evidence: **Strength**
- Superior Performance Over Baselines: The structural (S-MAT-R/P) and energy (E-MAT-R/P) metrics demonstrate PolyConf’s significant improvements over methods like TorsionalDiff (e.g., 35.02 vs. 53.21 in S-MAT-R mean).
Results align with the hierarchical design’s intent to address polymer-specific challenges (flexibility, lack of rigid backbones).
- Efficiency: Timing comparisons (Figure 5) validate PolyConf’s speed (0.4 minutes vs. 3.54 minutes for GeoDiff).
- Scalability: Tests on doubled polymer sizes (4,000 atoms) show consistent performance (e.g., 65.04 S-MAT-R mean vs. 119.29 for TorsionalDiff), supporting scalability claims.
- Dataset: PolyBench’s size (50k+ conformations) and diversity (20–100+ repeating units) address the polymer data scarcity problem.
**Weakness**
- It would be beneficial for the paper to also test the proposed model on protein data, as proteins are a specific type of polymer. Given the abundance of available data and the well-established baselines, it is not essential for the proposed method to outperform existing folding models that leverage evolutionary information. However, it would be intriguing to assess where the method’s capabilities stand in comparison.
- It is unclear whether the dataset includes branched or cross-linked systems, or if it is limited to linear polymers only.
- PolyBench conformations derive from a single force field (AMBER). Force-field inaccuracies or parameterization biases may propagate into the dataset.
Reproducibility:
- Missing Details: Training hyperparameters, computational resources, and force-field settings are not fully disclosed.
- While PolyBench is large, its quality hinges on force-field accuracy. The lack of experimental validation weakens claims about its representativeness.
- The core claims (performance, efficiency, scalability) are supported by internal benchmarks, but external validation (experimental data, broader baselines) is needed for robustness. The dataset’s reliance on simulations and omission of complex polymer types limit its universality. While PolyConf advances polymer informatics, claims about physical reliability and generalization require further evidence.
Methods And Evaluation Criteria: The methods and evaluation criteria are well-designed for the stated problem
Theoretical Claims: There are no theorems to verify.
Experimental Designs Or Analyses: Yes, but I don't identify critical issues.
Supplementary Material: Yes, I have gone through the appendix of the paper
Relation To Broader Scientific Literature: -
Essential References Not Discussed: -
Other Strengths And Weaknesses: -
Other Comments Or Suggestions: -
Questions For Authors: **Q1:** Why was the method not tested on proteins, given the structural similarities between polymers and proteins ?
**Q2:** Does the PolyBench dataset include branched/cross-linked polymers, or is it limited to linear chain polymers?
**Q3:** How was the quality of the PolyBench dataset validated beyond force-field-generated conformations?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks for your comments. Below, we categorize and resolve your concerns.
**W1&Q1: Why not test on proteins?**
Our work focuses on polymers, aiming to explore polymer conformation generation (all-atom conformation).
Polymers present unique challenges compared to proteins, such as greater structural flexibility (see lines 80-98 for a detailed explanation). In this context, our PolyConf is specifically designed to address and adapt to the distinct structural characteristics of polymers, making it difficult to apply it directly to proteins.
In addition, while testing on proteins is out-of-the-scope of this work, we believe that PolyConf holds significant potential for application to other macromolecules composed of building blocks, such as proteins (amino acids) and RNA (nucleotides). Due to our limited experience with proteins and the time constraints, it is difficult for us to include them at this stage. We would, however, welcome the opportunity to collaborate with researchers in related fields to further explore and expand the potential applications of PolyConf.
**W2&Q2: Details of PolyBench**
As described in Appendix A.1, our molecular dynamics simulations are based on polymer SMILES strings. Since most publicly available polymer SMILES strings represent linear polymers, the current PolyBench dataset is also limited to linear polymers. Even though, to the best of our knowledge, PolyBench is the first benchmark for polymer conformation generation.
In the future, we will continuously maintain and expand PolyBench to include a broader range of polymer data, especially branched and cross-linked polymers.
**W3&W4: Reproducibility**
The code, data, and various scripts for our PolyConf and molecular dynamics simulations are available in this anonymous link (https://anonymous.4open.science/r/PolyConf). We are confident that it has provided enough details to reproduce our work. Due to the repository size limitation, we can only provide a small subset of the complete dataset, but the complete dataset will be provided after acceptance.
In particular, the training hyperparameters are provided in the corresponding training scripts, all experiments are implemented on eight A100 80G GPUs, and the force-field settings are provided in the `./MD/utils` folder of the provided anonymous repository. We will also explicitly state these details in the revised paper.
**W3&W5&W6&Q3: Validation of MD simulations and Quality of PolyBench**
We have invested significant effort to ensure the reliability of our molecular dynamics simulations, thereby guaranteeing the high-quality of the PolyBench dataset:
* Under the guidance of highly experienced experts, we design our molecular dynamics simulations using standard pipelines widely adopted in previous works [1]. All scripts and settings related to the molecular dynamics simulations are available in the `./MD` folder of the provided anonymous repository, ensuring that our molecular dynamics simulations are fully transparent and reproducible.
* We have analyzed the energy changes of various polymers during our molecular dynamics simulations, and the results show that most simulations achieve convergence within 1 ns, while we run the simulations for 5 ns to ensure the reliability and robustness of the PolyBench dataset.
* We have calculated the density values of typical polymers through our molecular dynamics simulations and compared them with those experimental values provided in [1]. As shown in the following Table, the density values obtained via our molecular dynamics simulations are very close to those experimental values, further supporting the reliability of our molecular dynamics simulations and PolyBench dataset.
| Polymer | Experiment density (g/cc) | MD density (g/cc) |
|---|---|---|
| [\*]CC(C)[\*] | 0.850 | 0.837 |
| [\*]CC(C)O[\*] | 1.125 | 1.019 |
| [\*]CC(Cl)(Cl)[\*] | 1.630 | 1.570 |
| [\*]CCCCCCO[\*] | 0.932 | 0.944 |
| [\*]CCCCCCCCO[\*] | 0.906 | 0.927 |
* In addition, we provide some raw outputs of our molecular dynamics simulations (i.e., `NPT_Cases.tar.xz` and `NVT_Cases.tar.xz`) in the `./MD` folder of the provided anonymous repository for validation.
We hope the above responses can resolve your concerns. Since polymer conformation generation remains a relatively unexplored research area, we have invested significant time and effort into developing and refining our PolyConf and PolyBench to boost subsequent studies. Therefore, we kindly request your understanding of the challenges and complexities involved in pioneering work in this field.
[1] High-throughput molecular dynamics simulations and validation of thermophysical properties of polymers for various applications. ACS Applied Polymer Materials, 2020. | null | null | null | null | null | null | null | null |
Demystifying the Paradox of Importance Sampling with an Estimated History-Dependent Behavior Policy in Off-Policy Evaluation | Accept (poster) | Summary: The paper provides a theoretical analysis of why estimating a history-dependent behavior policy in off-policy evaluation (OPE) can reduce mean squared error (MSE). The authors derive a bias-variance decomposition for OPE estimators and show that history-dependent behavior policy estimation reduces variance at the cost of increasing finite-sample bias. Theoretical results establish that the variance reduction is monotonic as history length increases, except for the Marginalized Importance Sampling (MIS) estimator, which worsens with more history. Empirical results on CartPole validate these theoretical findings.
## update after rebuttal
The authors have addressed my concerns in the rebuttal. I appreciate the additional clarifications and intuitions provided. I no longer have major concerns with the submission and have updated my score accordingly.
Claims And Evidence: 1. The paper claims that history-dependent behavior policy estimation leads to variance reduction in OPE estimators, which is well-supported by both theoretical derivations and empirical results.
2. The claim that this variance reduction comes at the cost of increased finite-sample bias is also well-grounded, as the paper provides a bias-variance decomposition to justify this trade-off.
3. However, the paper does not formally establish when the increased bias outweighs variance reduction, making it unclear how to choose an optimal history length in practical applications.
Methods And Evaluation Criteria: 1. The paper does not introduce a new method or algorithm but instead provides an analytical perspective on existing estimators.
2. The choice of CartPole as an evaluation environment is somewhat limited, as prior OPE work typically includes MuJoCo tasks to assess generalization.
Theoretical Claims: 1. The variance reduction property of history-dependent estimation is well-supported by the derived bias-variance decomposition.
2. The paper introduces a projection-based interpretation, which is conceptually interesting but follows naturally from standard variance-reduction techniques.
3. While the theoretical results explain when variance is reduced, they do not explicitly analyze how estimation errors in the learned behavior policy affect bia.
Experimental Designs Or Analyses: 1. The empirical results confirm the theoretical findings, showing that variance decreases and bias increases as history length grows.
2. The lack of details on experimental setup makes reproducibility difficult—there is no appendix detailing implementation choices, hyperparameters, or sampling strategies.
3. There are no novel insights provided in the experiment section.
Supplementary Material: The supplementary material provides extended theoretical proofs, enhancing clarity on derivations.
However, there is no additional information on experimental details, which makes replication difficult.
Relation To Broader Scientific Literature: 1. The paper builds on prior work in off-policy evaluation (OPE) and importance sampling, particularly extending prior bias-variance analysis in OPE settings.
2. There is a line of research on non-parametric behavior policy estimation that has already been demonstrated to outperform parametric methods in various environments, yet this is not acknowledged.
Essential References Not Discussed: Liu and Zhang (2024) published at ICML studies offline-informed behavior policy selection and is directly related to this paper’s topic. This work is a representative of the various non-parametric behavior policy estimation methods.
Other Strengths And Weaknesses: Strengths:
1.The projection-based interpretation offers a useful mathematical lens on the variance reduction effect.
Weaknesses:
1. The paper does not introduce a new method, and the analysis itself is not particularly novel, as the variance reduction effect is well understood in the context of importance sampling.
Other Comments Or Suggestions: 1. The literature review should include non-parametric behavior policy estimation in a more structured way.
2. A more detailed discussion on practical implications—such as guidance on choosing history length—would make the results more practical.
Questions For Authors: See above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **Choice of History Length** Excellent comment. We fully agree that optimal selection of history length is crucial for applying our theory to practice. In response, we have **developed a method during the rebuttal, supported by promising simulation results**. Our approach is motivated by the bias-variance trade-off revealed in our theory: while increasing history length reduces asymptotic variance for OIS/SIS/DR estimators, it might increase finite-sample bias. We therefore propose to select the history length that minimizes $h^* = \arg\min_h [2n\widehat{\text{Var}}(h) - h\log(n)]$ where:
- $\widehat{\text{Var}}(h)$ denotes variance estimator computed via the sampling variance formula or bootstrap;
- $k\log(n)$ is the BIC penalty (Schwarz, 1978) preventing selecting long history without substantial reduction of the variance.
[Results](https://www.dropbox.com/scl/fi/02eppzq8qpjygt4cxmc28/SelectKBIC.png?rlkey=k30z87thebot3la7apnvsob11&st=wvta6lho&dl=0) show that in all cases, OIS estimators with our adaptively selected history achieve the lowest MSE compared to those using fixed history.
**No novel insights from experiments**. The primary aim of our paper is to provide a rigorous theoretical analysis of how history-dependent behavior policy estimation affects the bias and variance of OPE estimators. Our main contribution lies in establishing these theories, through the derived bias-variance trade-offs. Accordingly, our experiments are to empirically verify these theories rather than to derive new insights. Indeed, our experimental results align with the theory.
**MuJoCo**. As suggested, we conducted simulations in MuJoCo Inverted Pendulum environment during rebuttal. [Results](https://www.dropbox.com/scl/fi/egiz1bjz3pztlqs5p1lzb/mujoco.jpg?rlkey=qrg7wjltut5tcpbhlurspligd&st=vmssm274&dl=0) again, align with our theory.
**No new method**. First, as noted in the ICML 2025 Call for Papers, "Theory of Machine Learning" is a core research area -- in parallel to RL, deep learning, and optimization. Our paper falls within this category.
Second, while not introducing new method, our theoretical analysis offers useful guidelines to practitioners. Table 1 shows that history-dependent behavior policy estimation should be used with OIS, SIS, DR estimators with misspecified Q-function, but may be unnecessary for DR with correct Q-functions or MIS estimators.
Third, we did develop a new method for history length selection during rebuttal and obtained promising empirical results (see our response #1). We are happy to use the extra page to present this method shall our paper be accepted.
**Variance reduction effect in IS**. We respectfully clarify a potential misunderstanding regarding our theoretical contributions. While the benefits of **designing** optimal proposal distributions for IS (pre data collection) are indeed well-established (Liu & Zhang, 2014) and connected to the literature on optimal experimental design for policy learning (Agarwal et al., 2019) and policy evaluation (Hanna et al., 2017; Mukherjee, 2022; Li et al., 2023), our work addresses a different problem: the theoretical benefits of **estimating** such distributions (post data collection). In other words, we did not consider policy design, but study how history-dependent behavior policy estimation impacts OPE — a question only empirically explored (and solely for OIS estimators) in prior work (Hanna, 2019, 2021). To our knowledge, our analysis provides the first theoretical foundation for these empirical observations.
**The lack of experimental details**. We would like to make some clarifications:
* We detailed the DGP and the episode length in Appendix A.1. All data were generated by this DGP **without additional sampling strategies**.
* As for implementation, we mentioned the use of logistic regression for behavior policy estimation in Appendix A.1. To be more specific, we employed **scikit-learn’s LogisticRegression with all hyperparameters kept at their default values** (no custom tuning).
* During rebuttal, we have created an [anonymous repository](https://www.dropbox.com/scl/fi/8hxbqti3t9yu1boeb7u7e/code.zip?rlkey=yq1adjgw84cmc20va59rsc1o1&st=5022v5pc&dl=0) containing all the code for implementation.
**Essential reference**. While we are happy to include the paper by Liu and Zhang (2024), we respectfully disagree that it is an essential reference. This paper is about the **design** of behavior policies whereas we study **estimating** such policies. While the reviewer describes their behavior policy as "non-parametrically estimated," we find no discussion of nonparametric estimation in this paper.
Regarding nonparametric methods more broadly: while potentially relevant and worthwhile to cite, they are not central to our focus on history-dependent estimation. We included Kallus & Uehara (2020) who employed history-dependent behavior policy estimation to handle history-dependent target policies. Happy to include other references
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed and thoughtful rebuttal, as well as the additional experiments and clarifications. I appreciate the effort you put into addressing the concerns raised.
That said, I believe some issues remain. In particular, I believe the omission of recent work such as Liu & Zhang (ICML 2024) is an oversight. While I understand your distinction between policy estimation and design, both works fundamentally tackle the question of learning behavior policies from offline data to improve OPE. A discussion of this connection would have helped better situate your contribution within the broader literature.
Additionally, while the theoretical analysis is rigorous and clearly presented, the key insight regarding the bias-variance trade-off is relatively intuitive and builds upon prior empirical observations. The additional method for selecting history length and the extended experiments provided during the rebuttal are helpful and strengthen the work.
Overall, I believe the paper would benefit from a clearer positioning within related literature and a more thorough empirical evaluation to complement the theoretical analysis. I hope the authors will consider these suggestions to further improve the paper.
---
Reply to Comment 1.1.1:
Comment: Thank you for recognizing the value of our additional experiments and our newly proposed methodology for history selection, as well as for acknowledging the difference between policy design and estimation. We greatly appreciate the opportunity to respond again and address your remaining comments.
**Liu & Zhang (ICML 2024)**. As mentioned in our response, we are happy to include the paper. Specifically, we plan to include the following discussion in the Discussion Section:
> We note that a separate line of research (Hanna et al., 2017; Mukherjee, 2022; Li et al., 2023; Liu & Zhang, 2024) investigates optimal experimental design for off-policy evaluation (OPE). These works focus on **designing** optimal behavior policies prior to data collection to improve OPE accuracy whereas our proposal considers **estimating** behavior policies after data collection for the same purpose. The work of Liu & Zhang is particularly related as the behavior policy is computed from offline data before being run to collect more data. Both approaches share the most fundamental goal of enhancing OPE by learning behavior policies — whether for data collection or retrospective estimation.
We hope this addresses your comment.
**Theoretical insights**. We respectfully argue that our work provides novel theoretical insights beyond what has been empirically observed in the existing literature. While prior empirical studies (Hanna, 2019, 2021) demonstrated variance reduction through history-dependent behavior policy estimation for OIS, we systematically study three other estimators in addition to OIS, corresponding to SIS, DR, MIS.
More importantly, our analysis reveals that history-dependent behavior policy estimation yields fundamentally different effects across the three estimators:
1. For SIS, it reduces the variance;
2. For DR, variance reduction occurs when the Q-function is misspecified, while the variance remains unchanged under correct Q-function specification;
3. For MIS, it inflates the variance.
These findings have not been systematically documented in prior empirical studies, nor have they been theoretically analyzed in existing literature.
**Empirical evaluation**. We greatly appreciate this comment. Although our paper is primarily theoretical, we have conducted extensive empirical studies in response to your comment. These newly obtained results are organized into three parts:
1. Investigation of the performance of adaptive history selection (refer to our response during rebuttal);
2. Evaluation of OIS across three MuJoCo environments (results reported in this [figure](https://www.dropbox.com/scl/fi/myea6xplhzi6irv1r0hto/OIS.pdf?rlkey=pnwmg3zb459f9f4n4ned60za2&st=o9v82yjk&dl=0)):
- (i) Inverted Pendulum (with continuous action space);
- (ii) Inverted Double Pendulum (with a higher state dimension than (i));
- (iii) Swimmer (a substantially different environment from both (i) and (ii));
3. Evaluation of SIS, DR, and MIS in the Swimmer environment (results reported in this [figure](https://www.dropbox.com/scl/fi/wfvsqhgktok69i6tojlmq/swimmer.pdf?rlkey=irmok3tppls6fm8nx23bableu&st=6l4j7n2v&dl=0)).
We are happy to include these experiments, as well as any additional experiments the reviewer may suggest, in the final version of the paper should it be accepted, to directly address your comment. | Summary: The paper discusses a paradox in offline policy evaluation through importance sampling, where the performance of the target policy is estimated from a weighted average of the reward value by the ratio of the target policy and the behavioral policy. The paper suggests that the mean-squared error of the said estimator can be improved if the behavioral policy is estimated in a broader family of models. For example, if the true behavior probability is known, the paper suggests that people should replace it with the empirically estimated behavior probability. If the true behavior policy is context-independent, people should estimate it as if it were context-dependent. Finally, if the true behavior policy is Markovian, people should estimate it as if it were a higher-order Markovian function. The authors made an analogy to doubly robust estimator, though I could not fully understand the details.
Claims And Evidence: I am not convinced with the proofs. I cannot follow the proof of Lemma 1, let alone the rest of the paper.
Methods And Evaluation Criteria: The experiments are not clearly presented. The authors gave a numerical example in Section 3.1 with additional details in Appendix A, but they left out key details regarding the numerical values of the mean-squared errors of the estimator.
Theoretical Claims: I cannot follow the proof of Lemma 1 in Appendix B.
Experimental Designs Or Analyses: No.
Supplementary Material: Yes, I reviewed Appendix A and B.
Relation To Broader Scientific Literature: I did see papers suggesting that approximating the behavior policy offers empirical advantages over using the true behavior policy. However, I have always speculated that it has to do with clipping effects, where the estimated behavioral policies are regularized to prevent extreme values. The authors seem to have other intuitions that I am not familiar with.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: I cannot follow Lemma 1 as the conclusions appear counter-intuitive. I would appreciate it if the authors could elaborate on the proof of Lemma 1. Can the authors prove it from first principles without using advanced methods like Neyman orthogonality?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their feedback regarding the proof of Lemma 1. We acknowledge that its reliance on Neyman orthogonality may not be immediately familiar to general audience. Apart from this concept, the proof employs standard techniques, including basic calculus such as Taylor expansions. We highlight that our original proof is clear and mathematically sound. The reviewer’s concerns arise from the misunderstanding of these technical tools rather from any technical flaw in the proof itself. Having said that, we have provided an alternative proof without Neyman orthogonality below.
**Proof of Lemma 1**. Define $n(a)$ as the number of times action $a$ is taken, $n(s,a)$ as the number $(s,a)$ pairs in the dataset, and $n(s)=\sum_a n(s,a)$. We first prove the second inequality $\textrm{MSE}_A(\widehat{v}\_{\text{IS}}^{\text{CA}})\le \textrm{MSE}_A(\widehat{v}\_{\text{IS}}^{\dagger})$. For any estimator $\widehat{v}$, it follows from the law of total variance that $$\text{Var}(\widehat{v}) = \underbrace{\mathbb{E}(\text{Var}(\widehat{v}| \\{n(a)\\}\_{a} ))}\_{I} + \underbrace{\text{Var}(\mathbb{E}(\widehat{v}|\\{n(a)\\}\_{a} ))}\_{II}.$$ In the following, we will show that:
(i) **Term I**: The difference between $\widehat{v}\_{\text{IS}}^{\dagger}$ and $\widehat{v}\_{\text{IS}}^{\text{CA}}$ is negligible;
(ii) **Term II**: $\widehat{v}\_{\text{IS}}^{\dagger}$ achieves a larger value than $\widehat{v}\_{\text{IS}}^{\text{CA}}$.
Combining (i) and (ii) yields that $\text{Var}\_A(\widehat{v}\_{\text{IS}}^{\dagger})\ge \text{Var}\_A(\widehat{v}\_{\text{IS}}^{\text{CA}})$. As both estimators are asymptotically unbiased, we obtain $\text{MSE}\_A(\widehat{v}\_{\text{IS}}^{\dagger})\ge \text{MSE}\_A(\widehat{v}\_{\text{IS}}^{\text{CA}})$.
Specifically, by direct calculation, $$\mathbb{E}\\{\text{Var}(\widehat{v}\_{\text{IS}}^{\dagger}|\\{n(a)\\}\_a )\\}=\mathbb{E}\Big(\frac{1}{n}\sum_{a\in\mathcal{A}} \frac{\pi_e^2(a)}{\pi_b^2(a)}\frac{n(a)}{n}\sigma_a^2\Big),\quad \mathbb{E}\\{\text{Var}(\widehat{v}\_{\text{IS}}^{\text{CA}}|\\{n(a)\\}\_{a} )\\} = \mathbb{E}\Big(\frac{1}{n}\sum_{a\in\mathcal{A}} \frac{\pi_e^2(a)}{(n(a)/n)^2}\frac{n(a)}{n}\sigma_a^2\Big).$$
According to the law of large numbers, $n(a)/n$ is to converge in probability to $\pi_b(a)$. It follows that
$\mathbb{E}\{\text{Var}(\widehat{v}\_{\text{IS}}^{\dagger}|\\{n(a)\\}\_{a} )\} - \mathbb{E}\{\text{Var}(\widehat{v}_{\text{IS}}^{\text{CA}}|\\{n(a)\\}\_{a} )\} = o(1/n)$. This proves (i). On the other hand,
$$\mathbb{E}(\widehat{v}\_{\text{IS}}^{\text{CA}}|\\{n(a)\\}\_{a}) = \frac{1}{n}\mathbb{E} \Big(\sum_{a\in\mathcal{A}} \frac{\pi_e(a)}{n(a)/n}\cdot n(a)\mathbb{E}[R|A=a] \Big) = \mathbb{E}\Big(\sum_{a\in\mathcal{A}} \pi_e(a)\mathbb{E}[R|A=a] \Big)$$
is independent of $n(a)$. Consequently, we have $\text{Var}\\{\mathbb{E}(\widehat{v}\_{\text{IS}}^{\text{CA}}|\\{n(a)\\}\_{a})\\}=0$.
However,
$$\text{Var}\Big(\mathbb{E} \\{\widehat{v}\_{\text{IS}}^{\dagger}|\\{n(a)\\}\_{a} \\}\Big) = \text{Var}\left(\frac{1}{n}\sum_{a\in\mathcal{A}}\frac{\pi_e(a)}{\pi_b(a)}n(a)\mathbb{E}[R|A=a]\right)$$
is dependent of $n(a)$. This verifies (ii). Meanwhile, notice that $\text{MSE}\_A(\widehat{v}\_{\text{IS}}^{\dagger})\ge \text{MSE}\_A(\widehat{v}\_{\text{IS}}^{\text{CA}})$ if and only if $ \text{Var}\\{\mathbb{E}(\widehat{v}\_{\text{IS}}^{\dagger}|\\{n(a)\\}_{a}) \\}=0$, which indicates that $\mathbb{E}(R|A)=0$ almost surely.
The first inequality $\textrm{MSE}\_A(\widehat{v}\_{\text{IS}}^{\text{CD}})\le \textrm{MSE}\_A(\widehat{v}\_{\text{IS}}^{\text{CA}})$ can be similarly proven. Due to space constraints, we present only a proof sketch. Applying the law of total variance again, we obtain
$$\text{Var}(\widehat{v}) = \mathbb{E}(\text{Var}(\widehat{v}| \\{n(s,a)\\}\_{s,a} )) + \mathbb{E}(\text{Var}(\mathbb{E}(\widehat{v}| \\{n(s,a)\\}\_{s,a} ) | \\{n(a)\\}\_{a} )) + \text{Var}(\mathbb{E}(\widehat{v}|\\{n(a)\\}\_{a} )).$$
Similarly, we can show that
(i) The difference in the first term between the two estimators is asymptotically negligible.
(ii) $\widehat{v}\_{\text{IS}}^{\text{CD}}$ achieves a smaller second term (zero), as its conditional expectation is independent of $\\{n(s,a)\\}\_{s,a}$.
(iii) The conditional expectations of both estimators are independent of $\\{n(a)\\}\_{a}$, so the last term is zero for both estimators.
Consequently, $\widehat{v}\_{\text{IS}}^{\text{CD}}$ achieves a smaller asymptotic variance, and equivalently, MSE.
**Clipping effects**. Our theory is not about clipping making the estimation of the behavior policy desirable for OPE. We dedicated an entire section (Section 3) to build intuition. As can be seen in the first two equations on Page 4, estimating the behavior policy effectively transforms the original IS estimator into a doubly robust estimator, which is known to outperform standard IS estimators when the Q-/reward function is well-approximated. This explains the benefits of such an estimation.
---
Rebuttal Comment 1.1:
Comment: The authors addressed my concerns. I appreciate the intuitions in the proofs and have no major concerns. On a minor side, how is the proof connected with Neyman orthogonality?
---
Reply to Comment 1.1.1:
Comment: We are delighted to hear that our responses have addressed your comments and sincerely appreciate your increase in our score.
The proof we provided during the rebuttal is a version without Neyman orthogonality, while our original proof involves Neyman orthogonality. To elaborate how it connects to Neyman orthogonality, we remind you that there are three key steps in our original proof:
- The first step is to show that the IS estimators $\widehat{v}\_{\text{IS}}^{\text{CA}}$ and $\widehat{v}_{\text{IS}}^{\text{CD}}$ with estimated IS ratios are equivalent to the DR estimators with estimated IS ratios and reward functions; see the last two equations on Page 12. This follows directly from basic calculus, leveraging the fact that both $\widehat{r}$ and $\widehat{\pi}_b$ are derived using tabular methods.
- The second step is to show that these DR estimators, although involving **estimated** IS ratios and reward functions, are asymptotically equivalent to DR estimators with **oracle** IS ratios and reward functions. This is where Neyman orthogonality comes into play.
- The last step is to directly compare the MSEs of these DR estimators with oracle IS ratios and reward functions against $\widehat{v}_{\text{IS}}^{\dagger}$ to demonstrate the advantages of (context-dependent) behavior policy estimation. This step, again, follows directly from basic calculus.
We provided a detailed discussion of Neyman orthogonality, including its definition, usage, and the mathematical details of the second step, in our response to Referee qCNN (see both our rebuttal and post-rebuttal responses). In summary, when applied to our setting, Neyman orthogonality can be understood as a property of OPE estimators. An OPE estimator achieves this property if its expected value is robust to small perturbations in the estimation errors of the reward and behavior policy near their oracle values. Specifically, these estimation errors affect the OPE estimator’s mean only in second order. This ensures that the OPE estimator’s bias decays much faster than the estimation errors in the reward and behavior policy, making it asymptotically negligible.
According to the proofs of Theorems 5.1 & 5.2 in Chernozhukov et al. (2018), the DR estimator satisfies Neyman orthogonality. As a result, it is safe to replace the estimated IS ratios and reward functions in DR with their oracle values without introducing significant bias. Meanwhile, the asymptotic variance of DR remains unchanged, as long as the estimated reward and behavior policy are consistent. Consequently, its MSE also remains asymptotically unchanged. | Summary: The paper investigates the bias-variance trade-off and the MSE in IS-based off-policy evaluation. A first part of the paper investigates bandits and shows that context-conditioning and visitation-count based approximation of the behavior policy can reduce the MSE (even though the policy is context-independent). They then evaluate different estimators for RL policies with their estimated non-Markovian behavior policy, and show this leads to reduced variance compared to non-Markovian behavior policy (longer history length leading to lower variance but exponentially growing bias) and improved asymptotic MSE. They show this principle works across different base-estimators, including OIS, Sequential IS, Doubly Robust and MIS estimators.
# update after rebuttal
The authors have clarified some concepts and done additional experiments. Therefore I keep my score of accept (4).
Claims And Evidence: The authors provide proof for all their claims, although some more background can be discussed for the reader to appreciate these.
Methods And Evaluation Criteria: The authors design a technique which is applicable to a wide variety of estimators and clearly demonstrate its benefit in reducing the MSE in the asymptotic case while also discussing limitations, e.g. the finite sample bias.
Theoretical Claims: The theoretical claims are backed up by extensive, well-presented proofs. I unfortunately, could not check them all. The theorems are additionally confirmed in the experiments.
Experimental Designs Or Analyses: The analysis uses a cartpole domain. to verify the bias and the MSE. The results are in line with the theoretical claims.
The implementations of the algorithms in the theory and especially the experiments could be more clearly described with additional details and more specific references. For instance, the implementation of the base-algorithms, any specific hyperparameters, and the details of the non-Markovian behavior policy estimation.
Supplementary Material: there is no supplementary material
Relation To Broader Scientific Literature: The work seems to be closely related, but complementary to, other techniques for variance reduction in OPE, such as per-decision IS, weighted IS, incremental IS, conditional IS, and state-based IS. The difference to these techniques is that here the estimator's variance reduction is based on using an estimated non-Markovian behavior policy. It seems related to Sequential IS methods (e.g. PDIS and INCRIS), depending on the time window used, when combined with OIS as the backbone, but this relation is not expanded on in the text. However, the authors do show that their algorithm applied to MIS can make it comparable to SIS for $k=T$.
Essential References Not Discussed: In general, there are not that many references from recent years, e.g. in the sequential IS and doubly robust techniques. The paper also does not expand much on the relation to these techniques. While the technique is a plugin for a variety of estimators, it does make sense to discuss the relation to these other works. In particular, a more in depth discussion of the related techniques would clarify the contribution.
Conditional IS [1] conditions on random variables in the trajectory. Techniques which modify the importance ratio are even more directly related to your work, since modifying the behavior policy changes the importance ratio. In this category, it makes sense to discuss State-based IS [2], which is similar in spirit that it is a plugin that can be applied to (the same) variety of estimator classes (OIS, Sequential IS, MIS, DR). It modifies the ratio based on whether the states in the history contribute to the return or not. Similarly, the already included PDIS and INCRIS references can be discussed in more depth, as I would argue in some cases these are equivalent to your technique (see questions for authors).
[1] Rowland, M., Harutyunyan, A., Hasselt, H., Borsa, D., Schaul, T., Munos, R. & Dabney, W.. (2020). Conditional Importance Sampling for Off-Policy Learning. Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics (AISTATS 2020) 108:45-55.
[2] Bossens, D.M., & Thomas, P.S. (2024). Low Variance Off-policy Evaluation with State-based Importance Sampling. IEEE Conference on Artificial Intelligence (CAI 2024), 871-883.
Other Strengths And Weaknesses: Strengths:
- The problem setting is interesting
- The paper is rigorous and well-presented
- The objectives are clear
- The theoretical and empirical results align
Weaknesses:
- In some cases, a bit more background of the techniques may need to be provided for a general audience.
- The related works discussion could be expanded on, and maybe some more recent works could be included.
Other Comments Or Suggestions: “The re-weighted returns are then averaged to produce an unbiased estimator of the target policy’s value.” This is not true for all IS based estimators. Perhaps you can stress it is ordinary importance sampling.
$\pi_e / \pi_b \leq C$ --> forget to mention for all $(s,a)$ ?
I presume in Eq. 4, it should be $\hat{v}_{\text{SIS}}$.
in discussion, “can increases” --> can increase
Questions For Authors: Appendix B.: “According to Neyman orthogonality, both the estimated reward and estimated behavior policy can be asymptotically replaced by its oracle value (Chernozhukov et al., 2018) without changing the OPE estimator’s asymptotic MSE.” Can the authors provide explain this step? Is the Neyman orthogonality always valid?
What is the relation between history dependent behavior policies and history-dependent importance ratios (sequential importance sampling)? I believe that your technique can be considered as a generalisation (or special case, depending how we look at it), of sequential importance sampling techniques. OIS has history length 1, incremental importance-sampling has history length k (for chosen k), and per-decision importance sampling has all past time steps until current time t as history. In your framework, the importance ratio is also history dependent. It will be interesting to see what the intuition is for the finding that a lower number of time steps of the ratio is beneficial for reducing the variance while a higher number of time steps in the behavior policy is beneficial for reducing the variance, and what is actually the key difference if any, between these two formulations. Is it fair to say that when applying a non-Markovian behavior policy to OIS, it is equivalent to per-decision importance sampling in case using all past time steps of the episode, and equivalent to incremental importance sampling when using past $k$ times steps of the episode? In short, a non-Markovian behavior policy and the importance ratio definition in sequential IS techniques seems highly related, and possibly interchangeable in the OIS case, but this relation is currently not exactly clear.
"Alternatively, the k-step history $H_{t−k:t}$ can be used
to construct a history-dependent MIS ratio $wt (k) = E(λt |H_{t−k:t} , A_t )$ ..To appreciate why Theorem 8 holds, notice that by setting
k to the horizon $T$ , $w_t(k)$ is reduced to the $λ_t$ , and the
resulting estimator is reduced to SIS, which suffers from
the curse of horizon and is known to be less efficient than
MIS." Following this and my above remark, perhaps a similar analysis can be done to derive the relation between the OIS/SIS variants with non-Markovian $\pi_b$ and the sequential IS algorithms.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Many thanks for your excellent comments and positive assessment of our paper. We will include these references, use the extra page to expand the related work and address all minor comments. In the following, we focus on clarifying your questions.
**Neyman Orthogonality**. This property enables us to establish the asymptotic equivalence between an OPE estimator that uses estimated reward and behavior policy, and the one that employs their oracle values. Specifically, the difference between the two estimators can be decomposed into the following three terms:
$$\mathbb{E}_n\Big( \sum_a \pi_e(a|S)[\widehat{r}(S,a)-r(S,a)]- \frac{\pi_e(A|S)}{\pi_b(A|S)}[\widehat{r}(S,A)-r(S,A)] \Big)+\mathbb{E}\_n\Big[ \Big(\frac{\pi\_e(S,A)}{\widehat{\pi}_b(S,A)} - \frac{\pi_e(S,A)}{\pi_b(S,A)}\Big)[R- r(S,A)] \Big]+\mathbb{E}_n \Big(\frac{\pi\_e(S,A)}{\widehat{\pi}_b(S,A)} - \frac{\pi_e(S,A)}{\pi_b(S,A)}\Big)[\widehat{r}(S,A)-r(S,A)] ,$$
where $\widehat{r}$ and $\widehat{\pi}_b$ denote the estimated reward and behavior policy, respectively. For the moment, let us assume these estimators are computed based on some external dataset. Then:
* The first two terms are of zero mean. They are of the order $o_p(n^{-1/2})$ provided that $\widehat{r}$ and $\widehat{\pi}_b$ converge to their oracle values.
* The last term is of the order $\|\|\widehat{r}-r\|\| \times \|\|\widehat{b}-b\|\|$ where $\|\|\widehat{r}-r\|\|$ and $\|\|\widehat{b}-b\|\|$ denote the root MSEs (RMSEs) between $\widehat{r}(S,A)$ and $r(S,A)$, and between $\widehat{b}(S,A)$ and $b(S,A)$, respectively. Crucially, the order is the product of the two RMSEs. Consequently, as they decay to zero at a rate of $o_p(n^{-1/4})$ -- which is much slower than the parametric rate $O_p(n^{-1/2})$ -- this term becomes $o_p(n^{-1/2})$ as well.
Consequently, the difference is $o_p(n^{-1/2})$, which establishes the asymptotic equivalence between the two estimators.
When $\widehat{r}$ and $\widehat{\pi}_b$ are estimated from the same dataset, the orders of the three terms additionally depend on the VC indices measuring the complexity of the reward and behavior policy models. Returning to our bandit example, we employ tabular methods to estimate both nuisance functions. Given the finite state and action spaces, their VC dimensions are finite as well. Additionally, both RMSEs achieve the parametric convergence rate $O_p(n^{-1/2})$. This validates our claim.
**History-dependent behavior policy & history-dependent IS ratios**. Our opinion is that (i) reducing the time horizon in the IS ratio and (ii) increasing the history length in the estimation of the behavior policy are two generally different approaches to improving OPE performance, although these methods are related when considering MIS (which we will discuss in detail later). Specifically:
* Approach (i) is more aggressive (in certain cases), aiming to reduce the **order** of the variance and address the curse of horizon by using shorter-history IS ratios. For example, the incremental IS (IIS) estimator you mentioned uses a fixed history length $k$. A small $k$ can reduce the estimator's variance from exponential in $T$ to polynomial ($O(T^k)$). Conceptually, this shifts the original per-decision IS (PDIS) estimator toward the MIS estimator, with IIS representing an intermediate point between PDIS and MIS. However, this comes at the cost of increased bias due to the ignored long-term dependencies.
* Approach (ii) is more conservative, targeting the **magnitude** (rather than order) of the variance by incorporating longer history in behavior policy estimation. As our theory shows, the variance preserves the same order (and thus the curse of horizon remains). According to our bandit example, this approach effectively converts a standard IS into a DR. In contrast to approach (i), it introduces minimal bias, as proven in our theory.
We also remark that these approaches can be combined to doubly reduce IS estimator's variance. First, one may specify a history length k for the IS ratio to obtain the IIS estimator. Next, the IIS ratio can be estimated using history-dependent behavior policy estimation to further reduce the variance magnitude.
However, the two approaches do interact when it comes to MIS, which incorporates the IS ratio of both the state and action, rather than actions alone. Consequently, using history-dependent behavior policy estimates ratios over complete histories rather than individual states alone, which becomes equivalent to employing history-dependent IS ratios.
During the rebuttal, we have created a [figure](https://www.dropbox.com/scl/fi/pl5j53c60z7im0inbkj9j/ISRelation.jpg?rlkey=gydxh3bpo3hew8h7zuhg4kzox&st=htnlbaix&dl=0) to visualize their interactions. Specifically, applying history-dependent behavior policy estimation to MIS can yield IIS (when ignoring state ratios beyond k steps), OIS, or PDIS. To the contrary, reducing the history-dependence in IS ratios converts OIS to PDIS and PDIS to IIS
---
Rebuttal Comment 1.1:
Comment: Actually, reducing the time horizon does not change the order of the estimator's variance to polynomial. It just reduces the exponent from $T$ to $k < T$. In [3], there is no analysis of INCRIS that shows a change to polynomial order. There is however, an analysis of the options framework (Corollary 1 and 2), which is not the same but also reduces the number of timesteps, but which reduces the exponent rather than making it polynomial. In Eq.8 of [2], reducing the number of time steps reduces the exponent too, not making it polynomial. I also add the reference for PDIS [4]. What is the authors' statement based on?
So both the order and the technique seem to be closely related. I do think it is possible, as the authors mention, that both types of approaches can be applied together.
I find the authors' explanation for Neyman orthogonality and the proof incomprehensible:
- The property's effect is explained but the property itself is not explained.
- The factor b is not introduced.
- Multiplying $n^{-1/2}*n^{-1/2}$ does not give me $n^{-1/4}$ but $n^{-1}$
[3] Guo, Thomas, & Brunskill (2017). Using Options and Covariance Testing for Long Horizon Off-Policy Policy Evaluation. NeurIPS 2017.
[4] Precup,Sutton, & Singh (2000), “Eligibility Traces for Off-
Policy Policy Evaluation,” ICML 2000.
Edit: Additionally, I was wondering the following. It is not so surprising to obtain similar results with the Inverted Pendulum vs Cartpole. As shown in https://gymnasium.farama.org/environments/mujoco/inverted_pendulum/, they are the same except that the action space is now continuous. They have a very specific reward structure not shared by other environments, Is there any reason why two very similar problems are chosen?
---
Reply to Comment 1.1.1:
Comment: We apologize for any lack of clarity, typos and the space constraint in our initial response that may have caused confusion. We sincerely appreciate the opportunity to respond again and provide further clarification.
**Neyman orthogonality (in response to your first bullet point)**. Let us start by clarifying Neyman orthogonality, introduced by Chernozhukov et al. (2018, The Econometrics Journal). This property is named after the famous statistician Jerzy Neyman, who used such properties in constructing efficient estimators and hypothesis tests (Neyman, 1959, Probability and Statistics: The Harald Cramér Volume). Since its introduction, it has been widely employed for robust and efficient estimation and inference in econometrics, statistics and machine learning, with applications ranging from the estimation of heterogeneous treatment effect (Oprescu et al., 2019, ICML), off-policy evaluation in RL (Kallus and Uehara, 2022, Operations Research), and variable selection (Quinzan et al., 2023, ICML).
The original definition involves two parameters: a primary parameter of interest $\theta$ and a nuisance parameter $\eta$. Suppose we have an estimating equation $\phi(O,\eta,\theta)$ with $O$ being the data observation, such that $\theta$ can be estimated by solving $\mathbb{E}_n[\phi(O,\eta,\theta)] = 0$. The Neyman orthogonality ensures that $\phi$ is robust to small perturbations in $\eta$ near the true value $\eta_0$. Mathematically, it requires:
$$
\nabla_{\eta} \mathbb{E} \phi(W; \theta_0, \eta) \Big|_{\eta = \eta_0} = 0,
$$
where $\nabla_{\eta}$ represents the Gateaux derivative evaluated at the true parameters $\eta_0$ and $\theta_0$. This implies that estimation errors in $\eta$ affect the expectation of $\phi$ only through second-order terms.
Back to our bandit example, $\theta$ corresponds to the target policy's value $v(\pi_e)$, $\eta$ corresponds to the reward function $r$ and the behavior policy $\pi_b$ that need to be estimated, $O$ represents the context-action-reward triplet $(S, A, R)$, and $\phi$ reduces to the difference between the estimating function $\psi(O; \eta)$ and $v(\pi_e)$. For instance, when we use DR for estimation,
$$
\psi_{\text{DR}}(O; r, \pi_b) = \sum_a \pi_e(a|S) r(a, S) + [R - r(A, S)] \frac{\pi_e(A|S)}{\pi_b(A|S)}.
$$
By definition, requiring $\phi$ to satisfy Neyman orthogonality is equivalent to requiring $\psi$ to satisfy such condition. This condition indeed holds for the doubly robust estimating equation $\psi_{\text{DR}}$; see, e.g., Step 1 of the proofs of Theorems 5.1 & 5.2 in Chernozhukov et al. (2018, The Econometrics Journal). This is precisely why in our response, the impact of estimation error on the last term can be expressed as a product $\\|\widehat{r} - r\\| \times \\|\widehat{\pi}_b - \pi_b\\|$, as the estimation error in $r$ and $\pi_b$ affects the policy value only at second order.
**Rate clarification (in response to your third bullet point)**. To establish the asymptotic equivalence, the DR with estimated reward and behavior policy must be close to the estimator using oracle reward and behavior policy, up to an error of $o_p(n^{-1/2})$. Due to the second-order effect, we typically require both $\widehat{r}$ and $\widehat{b}$ to converge at a rate of $o_p(n^{-1/4})$.
In the tabular settings, these estimators converge at a faster rate of $O_p(n^{-1/2})$. Consequently, as you mentioned, the resulting error is $O_p(n^{-1})$, which is sufficiently small to meet our $o_p(n^{-1/2})$ requirement. In contrast, without Neyman orthogonality, the estimator would be sensitive to first-order errors in the nuisance estimators. In that case, the final error would be $O_p(n^{-1/2})$ (big-$O_p$), which does not meet the required $o_p(n^{-1/2})$ (little-$o_p$) condition.
**Notation clarification (in response to your second bullet point)**. Apologize for the typo. The symbol $b$ shall be replaced with the behavior policy $\pi_b$.
**Variance reduction**. We completely agree that the variance of PDIS remains exponential in $k$. In our response, we referred to the use of a substantially small $k$ — specifically, $k$ chosen proportional to $K \log(T)$ — so that the overall error is reduced to polynomial order in $T$, i.e., $O(T^K)$. We apologize for not making this point clearer.
**Empirical Evaluation**. In the rebuttal, we used the Inverted Pendulum environment as Reviewer KUNq requested experiments within the MuJoCo framework. While we acknowledge that this environment is similar to CartPole, it serves to examine settings with continuous actions. Though our primary contribution is theoretical analysis, in this round, we have expanded our empirical analysis to include a wider range of more complex MuJoCo environments (e.g., Swimmer). Refer to our response to Reviewer KUNq for details. | null | null | null | null | null | null | null | null |
ADDQ: Adaptive distributional double Q-learning | Accept (poster) | Summary: Based on double Q-learning, this paper proposes a set of theoretical and practical solutions to reduce the bias of Q value estimation. The paper conducts some experiments in Atari, mujoco and table environments. Some results have demonstrate their effectiveness.
## Update after rebuttal
During the rebuttal, the additional experiments greatly increase the persuasiveness of the paper and address my main concern, so I am very happy to update my evaluation/rate to weak accept.
Claims And Evidence: The paper mentions that it combines Q-learning and Double Q-learning through direct linear weighting, proves its convergence, and conducts experiments in environments such as tables and Atari to prove its effectiveness.
In the experiment, the paper demonstrates the effectiveness of the proposed method with DDQN and SAC. It is worth mentioning that it only considers a small number of selected tasks on the Atari task, which may not be convincing enough. It is also difficult to distinguish the performance from the baseline method on the Mujoco task, which may greatly reduce the persuasiveness of the paper.
Methods And Evaluation Criteria: The metrics used in the experiment part of the paper are final score, final reward, etc. However, the metrics commonly used in Atari tasks such as IQM and human normliazed score are missing, making it difficult to evaluate the overall performance of the algorithm.
Theoretical Claims: The paper seems to have some theoretical motivations that are not well validated in the experiment. In addition, the paper may benefit from more specific discussion and proof of convergence.
Experimental Designs Or Analyses: The overall design of the experimental part is reasonable, but at least on the Atari task, it lacks sufficient comprehensive analysis metrics, such as IQM, HNS, etc. In addition, the paper also lacks discussion and comparison on the core contribution (i.e., reducing the evaluation variance of Q) in the experiment.
Supplementary Material: I carefully checked the supplementary experiments and related experimental settings in the appendix.
Relation To Broader Scientific Literature: Bias control in Q-learning is an issue worth discussing. The paper also applies its improvements to existing methods such as SAC and DDQN, but due to the lack of sufficiently convincing experiments and the fact that in some experiments (such as mujoco) the proposed method is even far below the baseline method, the effectiveness of proposed method is thus less convincing.
Essential References Not Discussed: I think the paper would at least benefit from comparison with some recent SOTA methods on Atari tasks, such as MEME, EfficientZero, etc.
Other Strengths And Weaknesses: The core contribution of the paper is to propose a simple and easy-to-use solution to reduce Q value over-estimation.
The main disadvantage of the paper is that the experimental part does not prove the effectiveness of the proposed method. First, the number of Atari tasks is small, and there are no comprehensive metrics such as IQM and Mean HNS. In addition, the proposed method on the Mujoco task is even far surpassed by the baseline method, which makes it difficult to prove the effectiveness of the algorithm.
Other Comments Or Suggestions: The paper may benefit from discussing the more experimental and theoretical proof of the proposed method on the convergent behavior and convergence speed of the algorithm. In addition, more Atari tasks and more comprehensive evaluation metrics may enhance the persuasiveness of the paper.
Questions For Authors: 1. Have the authors considered showing more performance on Atari tasks, such as more tasks, more comprehensive metrics such as IQM[1], etc.
2. Could the author give a detailed introduction on how the proposed method improves the efficiency of the algorithm and why the performance on the mujoco task is not good?
[1] Deep Reinforcement Learning at the Edge of the Statistical Precipice
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear reviewer, thank you very much for your careful reading and thougths on our article!
**Please check the anonymous repository https://anonymous.4open.science/r/ADDQ-B776 for figures addressing some of your thoughts.**
**Methods and evaluation criteria:**
* The metrics that you claimed to be missing are given in the Appendix, please have another look at Appendix D.
**Theoretical claims:**
* The experiments are not designed to validate theoretical claims. In opposite, the theoretical results motivate the design of the ADDQ algorithm. Nonetheless, since it is an interesting remark, to answer your question we now provide simulations for the bandit MDP as well. Please check the linked repository above for the plots.
* A full convergence proof is indeed provided in the tabular setup, in the RL literature there seems generally little hope to provide convergence proofs for function approximation with neural networks in non-trivial settings.
**Relation to literature:**
* We now added comparisons (see the link above for the plots) with several algorithms: ensemble bootstrapped Q-learning, maxmin Q-learning, and random ensemble double Q learing. Note that we only provide tabular comparisons as there is no benchmark implementation available for distributional EBQL/maxmin/REDQ.
* Our presentation of MuJoCo was not the smartest move from an advertisement point of view, we tried to keep the highest scientific standards and compare the same algorithmic idea over all experiments. For MuJoCo both Q and DQ estimators are clearly inferior to the clipped estimator (in contrast to Atari). The reason is that DQ does not substract enough positive bias, the main finding of the TD3 paper. The purpose of the experiment was to show that cleverly combining Q and DQ estimators (ADDQ) beates both ingredients Q and DQ. Of course, the combination of both won't beat clipped. Here is the but: Our main idea (using sample variances to locally adjust overestimation) is not restricted to combine Q and DQ, but this is the most natural setting for a presentation. In the same way one could use a locally adaptive mixing of Q (or double Q) and clipped Q to avoid too extreme underestimation of the clipped estimator.
* This paper contributes to a fundamental method in RL, Q-learning. The goal is not specifically to solve Atari or MuJoCo problems, these examples serve for illustration purposes as we cannot prove much in deep RL. Comparing to different methods can be interesting for general curiosity, but does not serve the main purpose of this article which is to improve QL/DQL with very little extra effort.
**Weaknesses:**
* Again, please have a quick look at Appendix D for the metrics and for MuJoCo to our comment above.
**Comments:**
* We agree, more theory on convergence rates, etc. would be very desirable. Our paper (just as the entire RL literature away from unrealistic simple situations) does not provide insight.
* Environmental rules of our research team do not allow to run all Atari examples. In fact, there is not much to learn from more Atari examples and the computational/environmental effort is quite big. That's why we used the RLiable metrics to get as much information as possible from the runs. We know that QL and DQL work reasonably well, so there is no reason to expect ADDQ to perform weaker. It would be much more interesting to attack complicated environments on which (distributional) deep Q-learning fails. But here we run in the usual problem of our community to have too few very different benchmark examples.
**Questions:**
* Question 1: See Appendix D
* Question 2: The performance on MuJoCo is not bad. As expected, the algorithm improves the two base algorithms using Q or double Q estimators. It cannot be expected to beat the clipped estimator and we included the example for completeness.
---
Rebuttal Comment 1.1:
Comment: Thank you for your reply. The additional experiments to greatly increase the persuasiveness of the article and address my main concern, so I am very happy to update my evaluation. | Summary: The authors propose ADDQ, a distributional reinforcement learning (DRL) algorithm which adaptively combines two RL algorithms to combat overestimation. Specifically, sample variances from DRL are used to adaptively balance the updating scheme of an algorithm with a tendency to overestimation (e.g. Q-Learning) with one with a tendency to under-estimation (e.g. Double Q-Learning). The authors provide theoretical analysis on a simple bandit MDP as well as an tabular illustrative practical analysis in a gridworld setting in comparison to distributional QL and distributional DQN, as well a comparison to C51 and QRSAC on Atari and MuJoCo benchmarks.
Claims And Evidence: The authors claim that they “show theoretically how distributional RL helps the agent identify the need for overestimation control”. This claim, or rather the connection between the utilized sample variance (from distributional RL) and the actual overestimation is crucial for the method, as “overestimation regularization” is applied whenever sample variance is high for a given action wrt. the other actions in a state. Thus this connection should, in my opinion, be shown to the reader clearly and optimally illustrative.
The main body of text does contain Proposition 2.1 and 2.2 related to this claim.
Proposition 2.1 states that the overestimation (lower bound of bias) is connected, among other things, to σ and N.
The Proposition 2.2. states that, among other things, the sample variance is decreasing with number of updates N to a given action and is proportional to σ. Thus the sample variance and the overestimation are both connected to the variance σ, and via that to each other.
However, corresponding proof for Proposition 2.2., contained in A.2, makes the assumptions that “all actions in s1 are explored N times before bootstrapping the estimates to s0” and that “the standard 1 /#visits -step-size schedule” is used, which appears rather unrealistic. Further, in practice, bootstrapping and corresponding overestimation is typically propagated over multiple states.
I suggest making this relation more apparent and clear, and perhaps include a practical analysis on the bandit task which shows the overestimation bias in relation to the sample variance and how ADDQ, by adapting β, alleviates the problem at hand.
Methods And Evaluation Criteria: The proposed method and evaluation of said method appears reasonable and convincing, assuming that the sample variance can be used to identify the need for overestimation control.
However, I feel that a broader comparison, especially to uncertainty-driven RL, would paint a clearer picture with respect of to the algorithms performance. More on that in “Experimental Designs or Analyses”.
Theoretical Claims: I had a look at the proofs provided in the appendix but did not check all of them.
Experimental Designs Or Analyses: The experimental design and analysis is sound and I found no issues. However, I’d suggest adding more related algorithms for comparison.
There has been a large body of literature regarding uncertainty estimation in (offline) Reinforcement Learning, often using ensemble methods.
The authors state that “Ensemble methods are promising in theory (assuming independent ensembles) but more problematic for deep RL as storage problems force ensembles to be parametrized by the same neural network.”. However, there have been ensemble methods with hundreds of ensembles with distinct networks [1].
As it appears quite related, integrating an uncertainty-based method into the comparison could help giving an insight into the performance of the proposed method.
For example [2] tries to find a balance between over and underestimation using uncertainty estimates.
Further, Maxmin Q-learning [3], which the authors also mention in the introduction, combines the update rule of QL with DQL an should be a simple baseline to add.
[1] An, G., Moon, S., Kim, J. H., & Song, H. O. (2021). Uncertainty-based offline reinforcement learning with diversified q-ensemble. Advances in neural information processing systems, 34, 7436-7447.
[2] Li, S., Tang, Q., Pang, Y., Ma, X., & Wang, G. (2021). Balancing value underestimation and overestimation with realistic actor-critic. arXiv preprint arXiv:2110.09712.
[3] Lan, Q., Pan, Y., Fyshe, A., & White, M. Maxmin Q-learning: Controlling the Estimation Bias of Q-learning. In International Conference on Learning Representations.
Supplementary Material: I did review the appendix section B, C, and D but did not check all the proofs in appendix A.
Relation To Broader Scientific Literature: For me, this paper is related to many works in the field of uncertainty-based Reinforcement Learning. While it is also mentioned in the introduction that “overestimation should be addressed particularly in state-action regions with high uncertainty”, the authors disregard the field of uncertainty-based RL and instead only put a focus on ensemble methods.
In the field of uncertainty-based RL many works try to reduce Q-values for uncertain states [1, 2, 3, 4], which is conceptually similar to “putting more weight on DQN” for high variance samples as proposed here.
[1] Wu, Yue, et al. "Uncertainty Weighted Actor-Critic for Offline Reinforcement Learning." International Conference on Machine Learning. PMLR, 2021.
[2] Ghasemipour, K., Gu, S. S., & Nachum, O. (2022). Why so pessimistic? estimating uncertainties for offline rl through ensembles, and why their independence matters. Advances in Neural Information Processing Systems, 35, 18267-18281.
[3] Bai, C., Wang, L., Yang, Z., Deng, Z., Garg, A., Liu, P., & Wang, Z. (2022). Pessimistic bootstrapping for uncertainty-driven offline reinforcement learning. arXiv preprint arXiv:2202.11566.
[4] Li, S., Tang, Q., Pang, Y., Ma, X., & Wang, G. (2021). Balancing value underestimation and overestimation with realistic actor-critic. arXiv preprint arXiv:2110.09712.
Essential References Not Discussed: In [1] an adaptive balance (ACC), using a balance controlling parameter β, between over- and under-estimation in TQC is proposed. Instead of deriving β from the sample variance as in this work, Monte-Carlo Rollouts area used. The strong similarity between ACC and ADDQ in my opinion requires discussing said similarity. Further, ACC is evaluated on MuJoCo which indicates that it could be also used for comparison in this work's experiments. Especially since a combination with TQC is listed as promising future work, it appears especially important to discuss and compare to [1].
[1] Dorka, N., Welschehold, T., Bödecker, J., & Burgard, W. (2022). Adaptively calibrated critic estimates for deep reinforcement learning. IEEE Robotics and Automation Letters, 8(2), 624-631.
Further, many related works in “Relation to Broader Scientific Literature” would be suitable for discussion, however I would not refer to those as essential.
Other Strengths And Weaknesses: Weaknesses as discussed above.
Strengths:
- Mostly self-contained
- General writing style
- Extensive experiments
- Use of RLiable library for probability of improvement
Other Comments Or Suggestions: - Line 159, “[(“ brackets can be removed.
- Function class projection, introduced in line 128 (right) and Algorithm 1 could benefit from a proper introduction to be more self-contained.
- Line 187 left, could use an inline citation.
- Formatting issues with page 18, 24,26 in Appendix, Figures too large.
- In Figure 4 and 5, the color coding of the RLiable plots does not match the color coding of the evaluation progress plot. Setting the colors to match could ease readability.
- Perhaps figure 2. could benefit from a graph which shows what fraction of updates used which beta values over time, such that the adaptivity of the proposed method can be observed.
Questions For Authors: Could you illustrate your intuition for the connection between sample variance and the overestimation bias, also for more realistic settings than the one used for the proofs? As is, it remains unclear whether sample variance is a good proxy for measuring overestimation, which is the main contribution of this paper. If this can be justified by more extensive analysis and experiments, I am willing to increase my score.
--
The rebuttal addressed most of my concerns and I increased my score accordingly.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear reviewer, thank you very much for your careful reading and thougths on our article!
**Please check the anonymous repository https://anonymous.4open.science/r/ADDQ-B776 for figures addressing some of your thoughts.**
**Bandit example:**
* We agree with your summary about the main contribution of the article and the discussion of the bandit example. We would like to stress that this simplified theoretical contribution is more reasonable than most of what appeared in the past DQL literature. In past theoretical contributions the max estimator is typically studied for IID random variables or for "chain MDPs" without any actions. Also the step-sizes are not very unrealistic even though practitioners like to use other schedules. The Robbins-Monroe conditions require asynchronous (that's why number of visits) step-sizes of the order $1/n^p$, for $p\in (1/2,1]$ so choosing $p=1$ is not particularly unrealistic. Our Lemma A.1 sheds light on the general situation - the simplified Q-learning analysed is a lower bound mechanism, the general situation is even worse. What is behind the Gaussian estimate is the following thought: The overestimation error of the max is governed by the worst single overestimation, thus, distributing the exploration evenly makes equally good estimators or, equivalently, minimises the worst single overestimation. We actually believe, and we are currently working on it, that the bandit MDP computation combined with Lemma A.1 can also shed light on the overestimation of general MDPs. A backwards induction (dynamic programming) from terminal states for stochastic shortest paths problems (or the random geometric time-horizon for discounted MDPs) should allow to derive lower bounds for the overestimation also for general Q-learning exploration on general MDPs. It might be interesting to highlight that our simplified computation shows clearly where the difficulties of computations in distributional Q-learning come from. The update mechanism naturally combines sums and maxima of random variables, so it lives in the intersection of extremal value theory and the central limit theorem. Since there are no distributions that are sum-stable and max-stable at the same time, there is no shortcut in exact probabilistic computations.
* We now performed a simulation of the bandit example to make our theoretical point clearer.
**Experimental design:**
* We now compare ADDQ to ensemble bootstrapped Q-learning, maxmin Q-learning, and random ensemble double Q learning. The difficulty of the example (very different local randomness) shows clearly the local overestimation control of ADDQ over all other methods. Note that we only provide tabular comparisons on the delicate grid world example as there is no benchmark implementation available for distributional EBQL/maxmin/REDQ.
We also plotted the relative sample variances (and, thus, the choice of $\beta$) for the non-trivial grid world example in order to make the point clear: our algorithm automatically spots the problematic state-action pairs to mitigate the overestimation.
**References:**
* Thank you very much for providing additional literature, which we will certainly include in a revised version! While we believe that our approach is not directly related to approaches using uncertainty Bellmann, we agree that ACC is much closer in spirit. This is a very nice paper, thanks for sharing. As you point out, the approach resembles of what we sketched under future research for TQC. The way ACC is formulated (using the replay buffer), ACC does not control overestimation locally on state-action level but globally (averaging over state-action pairs from the replay buffer). This way the algorithm would struggle for instance on our delicate grid world example just as much as maxmin and ensemble methods. In the tabular situation this might be adjusted, with function approximation probably not. For ADDQ we use that the distributions (thus variances) are learned on state-action level.
**Other comments:**
* We will improve your minor comments, thanks!
* We included plots to visualize the choices made by $\beta$, thanks!
**Questions:**
Thanks for reading carefull, yes, this can be seen as the core contribution. From statistical theory the observation is somewhat obvious (large variance means large sample variance, means large overestimation bias for max estimator) and this was essentially worked out for the bandit MDP and then translated in $\beta$ for ADDQ. Our intuition then uses dynamic programming (backwards induction) to bootstrap the idea up the decision tree. We added plots for our delicate grid world example for a state-action pair facing towards the stochastic region. The plots show that relative sample variance and overestimation are both large and decrease together (very slowly for Q-learning, much faster for ADDQ). In contrast, for less vulnerable state-action pairs both overestimation and relative sample variances are much smaller.
---
Rebuttal Comment 1.1:
Comment: The rebuttal addressed most of my concerns and I increased my score accordingly. | Summary: This paper proposes ADDQ, an adaptive distributional double Q-learning method that mitigates Q-value overestimation bias by locally adjusting update weights based on distributional uncertainty estimates. Built upon distributional RL frameworks (e.g., C51, QRDQN), ADDQ dynamically combines Q-learning and double Q-learning updates using sample variance information from return distributions. Theoretical analysis in a tabular bandit MDP quantifies overestimation bounds, and experiments across tabular, Atari, and MuJoCo environments demonstrate improved stability and performance compared to baseline methods.
Claims And Evidence: Claims:
- Overestimation bias in Q-learning can be mitigated by locally adapting updates using distributional variance.
- ADDQ integrates seamlessly into existing distributional RL algorithms with minimal code changes.
- The method converges theoretically and outperforms QL/DQL in stochastic and high-uncertainty environments.
Evidence:
- Proposition 2.1 derives a lower bound for QL overestimation in a bandit MDP, linking bias to reward variance and sample size.
- Theorem 3.1 proves ADDQ’s convergence under Robbins-Monro conditions.
- Experiments on grid worlds, Atari, and MuJoCo show ADDQ reduces bias and achieves higher scores than QL, DQL, and clipped variants (Figures 2, 4-5).
Methods And Evaluation Criteria: Methods:
- Distributional RL: Uses return distribution variances to identify uncertain state-action pairs.
- Adaptive Weighting: Adjusts interpolation weights ($\beta$) between QL and DQL updates based on relative sample variances.
- Algorithm Integration: Modifies C51 and QRDQN with dual networks and adaptive targets.
Evaluation Criteria:
- Bias Reduction: Measured via Q-value deviations in tabular settings (Figure 2).
- Performance: Normalized scores and probability of improvement (RLiable plots) across 10 Atari and 5 MuJoCo environments.
- Stability: Comparison of failure rates and learning curve variances.
Theoretical Claims: - Proposition 2.1 provides a tight lower bound for QL overestimation in Gaussian bandits, highlighting the role of variance and action count. While insightful, the analysis assumes cyclic exploration and ignores function approximation.
- Theorem 3.1 guarantees convergence under symmetric $\beta$ schedules. The proof leverages stochastic approximation theory but does not address deep RL settings with neural networks.
Experimental Designs Or Analyses: Strength:
- Comprehensive evaluation across tabular, Atari, and MuJoCo benchmarks.
- Inclusion of RLiable metrics (e.g., interquartile mean, probability of improvement) enhances statistical rigor.
Weakness:
- MuJoCo experiments show limited gains (even worse) compared to clipped QRSAC , but this is not deeply analyzed.
- Ablation studies for $\beta$ thresholds (Section B.2) are preliminary; sensitivity to hyperparameters is unclear.
Supplementary Material: Yes, I briefly went through the proof sketch and attached experimental details. It seems complete and solid.
Relation To Broader Scientific Literature: The main results of this paper could contribute to applications and methodologies in robust RL learning and improve the performance of learning.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: NA
Other Comments Or Suggestions: NA
Questions For Authors: - I suggest to add detailed discussion on the choice of $\beta$ considering its essential role in this paper , the threshold to determine $\beta$ seems careless and randomly. You must tell readers how to set the threshold by some criterion instead of using some fixed value.
- I believe the $b^{A/B}(s,a)$ in left panel of line 247 is aligned with that in left panel of line 234, but you also have claimed $b^{A/B}(s,a)$ in left panel line 239, which would cause a confusion for readers.
- I wonder the $\nu \leftarrow \beta\eta^{B}(s^{\prime,a^*})+(1-\beta)\eta^{A}(s^{\prime},a^*)$ in Algorithm 2 is correct? It seems not akin to $\beta$ defined in equation (1).
- It is not clear how $(\beta_t^A)_{t\in\mathbb{N}}$ and its counterpart
in Therorem 3.1 are defined and what is the relationship between them and $\beta$ in Algorithm 2. I also wonder how to guarantee the two $\beta$ would meet in limit behavior in your algorithm.
- Compile error occurs in line 1537.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Dear reviewer, thank you very much for your careful reading and thoughts on our article!
**Please check the anonymous repository https://anonymous.4open.science/r/ADDQ-B776 for figures addressing some of your thoughts.**
**Weaknesses:**
* **Ablation study**: We included a more comprehensive ablation study for $\beta$, plotting many choices of $\beta$ showing that the choice is relatively irrelevant. We also compare ADDQ to ensemble bootstrapped Q-learning, maxmin Q-learning, random ensemble double Q learning. We hope that is more convincing. Note that we only provide tabular comparisons as there is no benchmark implementation available for distributional EBQL/maxmin/REDQ.
* **MuJoCo**: Our presentation of MuJoCo was not the smartest move from an advertisement point of view, we tried to keep the highest scientific standards and compare the same algorithmic idea over all experiments. For MuJoCo both Q and DQ estimators are clearly inferior to the clipped estimator (in contrast to Atari). The reason is that DQ does not substract enough positive bias, the main finding of the TD3 paper. The purpose of the experiment was to show that cleverly combining Q and DQ estimators (ADDQ) beates both ingredients Q and DQ. Of course, the combination of both won't beat clipped. Here is the but: Our main idea (using sample variances to locally adjust overestimation) is not restricted to combine Q and DQ, but this is the most natural setting for a presentation. In the same way one could use a locally adaptive mixing of Q (or double Q) and clipped Q to avoid too extreme underestimation of the clipped estimator.
**Questions:**
* You are totally right that the suggested **choice of $\beta$** is somewhat arbitrary. There are plenty of stochastic approximation algorithms that converge to the optimal Q-matrix. In some sense one could argue that all of them (double Q, clipped, maxmin, averaged, weighted, ADDQ, etc.) are somewhat arbitrary choices from the set of all converging algorithms, none is the result of a rigorous derivation. Only Q-learning itself might be seen natural as it is the direct (but inefficient) stochastic counterpart of value iteration. There are no convincing theoretical arguments in the literature, and also no variants that are equally convincing over different environments. Clipped is perhaps a good example, it performs well in combination with SAC on MuJoCo but not on Atari - Q/DQL the opposite. For a new environment, the RL researcher has no other choice but to compare different variants plus some educated guessing. We have no better answer than saying that we add a new family of animals to the zoo of algorithms which is more flexible and combines the advantages of two other algorithms. The form of $\beta$ we propose is the natural choice based on our theoretical observation, only the choice of constants (hyper parameters) is somewhat artificial. We added a much more extensive ablation study (see the repository) to show that the choice off hyperparameters in $\beta$ is actually pretty harmless. Interestingly enough, the same choice of $\beta$ improves QL/DQL on diverse settings as tabular, Atari, MuJoCo.
* **$b^{A/B}(s,a)$ notation**: Thanks for your comment, we will improve the notation to make readability easier.
* **$\nu$ from Algorithm 2**: Thanks for your careful reading! This is a typo, and was realised only one hour after submission. The implementation is correct.
* **Theorem 3.1**: The theorem holds for **arbitrary** such sequences, we will add the word "arbitrary". You are right about your question, it is unclear how to check equality in the limit. The easiest way (and this is what we do) is to chose $\beta^A_t=\beta^B_t$ for all $t$ so the condition is trivially satisfied.
* **Compilation error**: Thanks for spotting! | Summary: The paper introduces ADDQ (Adaptive Distributional Double Q-learning), a novel reinforcement learning (RL) algorithm that addresses the overestimation bias in Q-learning by leveraging distributional reinforcement learning (DRL). The key claim is that ADDQ provides a flexible and computationally efficient way to mitigate overestimation, improving learning stability and efficiency.
Claims And Evidence: The paper makes several strong claims regarding ADDQ's advantages:
- Reduction of Overestimation Bias – Supported by a theoretical analysis using probability bounds.
- Better Stability than QL and DQL – Demonstrated through experiments in various environments.
- Improved Sample Efficiency – Shown via empirical comparisons in Atari and MuJoCo environments.
While the theoretical analysis is compelling, some claims (e.g., optimality of the chosen weighting function) could be more rigorously justified with additional ablation studies.
Methods And Evaluation Criteria: The methodology is well-designed and aligns with the problem at hand:
- The use of sample variance as an indicator of uncertainty is well-motivated.
- The local weighting mechanism between QL and DQL is clearly explained.
- The evaluation benchmarks (Atari, MuJoCo) are appropriate for demonstrating generalization.
However, the choice of hyperparameters for β (the weighting factor) is somewhat heuristic. It would be valuable to analyze different settings of β to ensure robustness across environments.
Theoretical Claims: The paper provides proofs of convergence for the ADDQ algorithm in tabular settings. The theoretical analysis is rigorous, but the following concerns arise:
- The impact of function approximation (i.e., deep RL settings) is not fully addressed in the theoretical framework.
- Some assumptions (e.g., independence of updates in DQL) may not hold in practical scenarios with deep learning.
A discussion on these limitations and potential extensions would strengthen the theoretical contribution.
Experimental Designs Or Analyses: Strengths:
- Baseline comparisons include standard QL, DQL, and clipped QL, ensuring a fair evaluation.
- RLiable evaluation metrics (probability of improvement) provide a robust statistical comparison.
Weaknesses:
- No ablation studies on the choice of β and its sensitivity to different environments.
- No analysis of computational overhead (Does ADDQ introduce additional costs?).
Supplementary Material: The supplementary material contains:
- Extended proofs (good for rigor).
- Additional experimental results (helpful for replication).
- Implementation details (well-documented).
However, it would be helpful if:
- More details were provided on the computational complexity of ADDQ in deep RL.
- More hyperparameter tuning results were included.
Relation To Broader Scientific Literature: The paper is well-positioned in the reinforcement learning literature. The paper clearly distinguishes ADDQ from previous approaches.
Essential References Not Discussed: While the paper covers the key references, it could benefit from discussing:
- More recent advances in Bayesian RL, which also address uncertainty estimation.
- Work on uncertainty-aware RL methods beyond variance-based heuristics, such as Bootstrapped DQN (Osband et al., 2016).
Other Strengths And Weaknesses: Strengths:
- Theoretical grounding: Provides mathematical insight into overestimation bias.
- Practical implementation: Easily adaptable to existing DRL frameworks.
- Empirical validation: Strong benchmark comparisons with standard methods.
Weaknesses:
- Limited ablation studies: The choice of β is not fully justified.
- No computational cost analysis: Would ADDQ increase training time or memory usage?
- Limited discussion on failure cases: When does ADDQ not work well?
Other Comments Or Suggestions: - Provide more discussion on practical implementation (e.g., how easy is it to integrate ADDQ into existing RL libraries?).
- Include an analysis of computational efficiency.
Questions For Authors: - How does ADDQ compare in terms of computational cost? Since it requires computing sample variances, does it introduce a significant overhead?
- Why was β chosen heuristically rather than learned adaptively? Would an adaptive β (e.g., learned via meta-learning) improve performance?
- How does ADDQ perform with function approximation errors? The theoretical results focus on tabular settings—how well do they generalize to deep RL?
- Could ADDQ be combined with other overestimation reduction methods? For instance, can it be integrated with ensemble-based RL (e.g., Averaged-DQN)?
Ethical Review Flag: Flag this paper for an ethics review.
Ethics Expertise Needed: ['Other expertise']
Ethical Review Concerns: Lack of *Impact Statements* section.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear reviewer, thank you very much for your careful reading and thougths on our article!
**Please check the anonymous repository https://anonymous.4open.science/r/ADDQ-B776 for figures addressing some of your thoughts.**
* You are totally right that the suggested **choice of $\beta$** is somewhat arbitrary. There are plenty of stochastic approximation algorithms that converge to the optimal Q-matrix. In some sense one could argue that all of them (double Q, clipped, maxmin, averaged, weighted, ADDQ, etc.) are somewhat arbitrary choices from the set of all converging algorithms, none is the result of a rigorous derivation. Only Q-learning itself might be seen natural as it is the direct (but inefficient) stochastic counterpart of value iteration. There are no convincing theoretical arguments in the literature, and also no variants that are equally convincing over different environments. Clipped is perhaps a good example, it performs well in combination with SAC on MuJoCo but not on Atari - Q/DQL the opposite. For a new environment, the RL researcher has no other choice but to compare different variants plus some educated guessing. We have no better answer than saying that we add a new family of animals to the zoo of algorithms which is more flexible and combines the advantages of two other algorithms. The form of $\beta$ we propose is the natural choice based on our theoretical observation, only the choice of constants (hyper parameters) is somewhat artificial. We added a much more extensive ablation study (see the repository) to show that the choice of hyperparameters in $\beta$ is actually pretty harmless. Interestingly enough, the same choice of $\beta$ improves QL/DQL on diverse settings as tabular, Atari, MuJoCo.
* **Hyperparameter tuning**: To ensure fair comparison, we did not tune any hyperparameter and stick to the choices from stable baselines 3. We did not even try to optimise the choices of $\beta$ and used our first choice for all experiments. Usually researchers tune their methods before publication, but our entire point is that the method is very robust. It is a harmless trick to improve distributional algorithms without effort.
* **Deep Theory**: We would love to provide theory in the deep setting. But to be honest, that seems pretty hope less. Not only for our article, but we are not aware of any convincing result in the RL literature.
* **Computational overhead**: ADDQ has no computational overhead compared to double distributional QL (the runtime is almost identical). The only difference is to compute $\beta$ (a small finite sum), which compared to evaluating large NN is almost nothing. The computational effort mainly comes from distributional deep Q-learning, which is significantly more expansive (but more sample efficient) than ordinarly deep Q-learning.
* **Failure cases**: We did not encounter failure cases (even though failure might be hard to judge). With the choice of $\beta$, ADDQ works stably if either Q or double Q works reasonably well. In contrast, in situations with diverse randomness as our grid world example, ADDQ works much better than Q and double Q. One could interprete the MuJoCo examples as a failure. In this case clipping is much more effective than both Q and double Q (the negative bias of double Q is not enough) so a combination of Q and double Q has no chance to beat clipping. MuJoCo is more a failure of Q and double Q (ADDQ still improves both!) compared to clipping. From an advertisement perspective it might have been smarter to locally combine for MuJoCo the Q and the clipped estimator (algorithmically this is almost identical) but the current presentation is scientifically more honest. After all, in this paper we wanted to show how to improve Q and DQ by a combination of both.
* **Implementation**: The implementation overhead is adding one line and changing two lines of code to existing (here: stable baselines 3) distributional code: Compute $\beta$ from the distributions (i.e. compute a finite sum) and then change the double Q updates. That's it.
* **Integration in other algorithms**: Yes, the main idea can be combined with other approaches in which an algorithm has a parameter that steers the over/underestimation *and* the parameter can be adapted on the fly. In that case we would suggest to locally change the parameter according to the sample variance. Here are two examples: (i) In TQC a number of top atoms is truncated. The number is a hyperparameter and must be chosen for every environment. We would suggest to chose the number of truncated atoms locally according to the sample variance. (ii) In randomized ensemble DQL we would suggest to make the number of chosen ensemble members for the updates locally dependent on the sample variance. | null | null | null | null | null | null |
Sparse Training from Random Initialization: Aligning Lottery Ticket Masks using Weight Symmetry | Accept (poster) | Summary: The authors investigate the problem of LTH masks not being compatible with random initializations. They find that by aligning the loss basins via a matching permutation, an LTH mask can be used with a random initialization, not associated with the mask.
This work shows that LTH masks can be reused with random initializations via permutation matching (to some extent).
Claims And Evidence: The claims made in the paper are supported empirically via experiments on CIFAR10, CIFAR100 and also the ImageNet datasets.
Methods And Evaluation Criteria: Yes, the evaluation methods (generalization performance) and criteria (image classification datasets) are appropriate for this problem setting.
Theoretical Claims: There are no theoretical claims in this paper.
Experimental Designs Or Analyses: The conclusions drawn from the ensemble experiments are a bit unclear to me (see questions below).
Supplementary Material: Yes, I have read the supplementary material (additional experiments, training settings and computational overhead).
Relation To Broader Scientific Literature: This paper addresses the problem of making LTH masks useful for other random initializations, which is a step in the direction to understand how sparse networks can be trained from scratch. This is an active area of research.
Essential References Not Discussed: The authors discuss the relevant literature.
Other Strengths And Weaknesses: Strengths
1. Simply permuting the mask can help match the LTH mask to any random initialization.
2. This improves the performance of the random initialization with the random mask.
Weaknesses (and questions)
1. It seems that the permutation can only be identified by training two dense networks, is this necessary or can this be done on a sparse network as well?
2. The hypothesis that the difference between a random initialization and the LTH is due to the misalignment of the loss basin seems limited. Because via permutation matching, the random init with the LTH mask is only slightly better than the naive baseline. Significant improvements are only seen on wider networks, even in the case of CIFAR10 and CIFAR100. This suggests that there might be more than a misalignment of basins or the permutation matching is not good enough, maybe the authors could investigate this?
3. What is the difference between IMP and LTH in Table 1, is IMP trained over 5 different seeds and LTH uses the same mask and init over 5 different runs? This will naturally have a smaller ensemble effect since the only randomness is the stochastic noise compared to different initializations in the case of IMP. The permutations however, have different random initializations, which would help ensembling. In spite of this, the permuted solutions are similar or worse than IMP. Can the authors explain the experimental setup in more detail to highlight the differences between the permuted ensemble and the LTH ensemble. Otherwise, the functional diversity of permutations is unclear to me.
4. Low width networks observe a smaller improvement with permutation, is there a reason for this? Is there a tradeoff between the alignment of the mask and initialization (via permutation) and the amount of overparameterization (width).
Other Comments Or Suggestions: I appreciate the authors’ insight on using permutations to make LTH masks flexible and reusable with random initializations; the improvements in performance seem limited to me. I believe the paper will benefit from an explanation of why the performance gains are limited (is it due to the limitation of permutation matching or due to the fact that the hypothesis of misaligned basins is not enough). Hence I lean towards a reject at the moment, but I am happy to increase my score after a discussion with the authors during the rebuttal.
Questions For Authors: See weaknesses above.
Ethical Review Concerns: No ethical concerns.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely appreciate the detailed feedback provided; we provide more details below:
1.
In our work, we used 2 trained dense models to find permutation (perm) mapping, as primary aim of the paper was to understand why winning tickets don't work with different random init.
One can also find perm mapping early in training, as noted by [Sharma, 2024]. We have added an additional experiment on early matching with CIFAR-10, which shows that models can be matched earlier in training, thus reducing computational cost of our method. (https://imgur.com/a/BgiE4W3)
We don't think sparse models can be used for finding perm mapping as the mask projects the model into a different sub-space, thus cannot be matched with a perm.
2.
Your observation is indeed correct! As noted in manuscript (L188), the perm matching algo uses a greedy approach and thus finds an approximate solution, i.e., the perm matching is not good enough (as you noted). As discussed in Sec 4.3, perm matching works better for wider models [2].
This can be observed in the LMC plot (Fig 3.), where the loss barrier decreases when the model width is increased, showing that the perm matching finds a better solution as we increase the width.
Our experiments in Sec 4.3 show that as we increase the model width, gap between LTH and permuted mask decreases, which suggests that the permuted solution will closely match LTH performance if given an accurate permutation mapping.
3.
>What is the difference between IMP and LTH in Table 1? Is IMP trained over 5 different seeds and LTH uses the same mask and init over 5 different run
Yes, IMP is trained independently over 5 different seeds with iterative pruning to obtain 5 different sparse/pruned solutions with different sparse masks/topologies ($M_0$, $M_1$, $M_2$, $M_3$, $M_4$, $M_5$).
LTH ensemble is trained using the same mask ($M_0$) and init ($w_0$) over 5 different runs (with different data order). Random init $w_0$ defines the winning ticket for mask $M_0$.
The permuted ensemble is trained using 5 different permutations ($\pi_1$, $\pi_2$, $\pi_3$, $\pi_4$, $\pi_5$) of the same mask ($M_0$) with five different random inits ($w_1$, $w_2$, $w_3$, $w_4$, $w_5$).
> The permutations however, have different random initializations. In spite of this, the permuted solutions are similar or worse than IMP.
This is an interesting question! Our intuition is that in the case of IMP, models are trained with different random initializations, and they discover different sparse topologies, which helps in learning more diverse solutions but is computationally expensive and not practical to create ensembles. The IMP baselines serve as an upper bound for diversity metrics as both training from different random initializations and learning different topologies introduce high randomness/stochasticity to learn very diverse solutions.
LTH, as noted in prior work, does not learn diverse solutions as they are trained using the same mask and init but over different runs [1]. Thus, LTH is not suitable for generating ensembles.
In contrast, our proposed method allows us to reuse the LTH mask with different random inits, introducing more source of randomness than the LTH baseline and thus improving the diversity of the solution. However, since we reuse the mask to train permuted ensembles, the diversity will be less than IMP (which uses both different init and sparse topologies) but good enough for making ensembles. This can be observed in Table 1., where the LTH ensemble does not improve the accuracy compared to a single model, while the permuted ensemble significantly improves the performance compared to a single model. It is worth noting that for CIFAR-100 datasets, the permuted ensemble (77.85%) surpasses the LTH ensemble (75.99%), demonstrating that the permuting mask can help train more diverse solutions with less computational cost incurred as compared to IMP.
We will add more explanations and define LTH, IMP, and Permuted ensembles for an easier understanding.
4.
> "Low width networks observe a smaller improvement with permutation"
The perm matching algorithm doesn't work well with lower widths, which can be observed in the loss-barrier plots (high barrier --> poor perm matching). We still observe smaller but statistically significant improvements with the permuted mask, as discussed below. As we increase the width, perm matching becomes better, and the loss barrier decreases (fig 3.); our proposed method with permuted mask becomes closer to LTH.
Even at width=1, we can see significant improvements at 97% sparsity:
**CIFAR-10**: +1%
**CIFAR-100**: +3.5%
**Imagenet** (at 95% sparsity, width=1): +2% (top-1)
We have added more experiments in our reply to reviewer **oZBY**, which you may also find interesting.
If you're satisfied with our explanation, we'd greatly appreciate you updating your score.
[1] Evci et al, Gradient Flow in Sparse Neural Networks
[2]. Ainsworth et al., Git Re-Basin
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the detailed response.
**Permutations matching via sparse network**
From the additional experiments, it seems that the permutation can be identified already midway through dense training. Given that the subsequent sparse networks in IMP are linearly connected to this dense net, it should still be possible to find the permutation with the sparse network? Possibly at a higher sparsity this is harder. Or does the permutation matching also become worse for a sparser network?
In order to clarify the reasons for the limited performance gains of the LTH with a random init, in spite of permutation matching, I would urge the authors to add the discussion regarding performance of the matching algorithm and the effect of width on the overall performance.
The authors have addressed my questions sufficiently and I have updated my score accordingly.
---
Reply to Comment 1.1.1:
Comment: Thank you for taking the time to reply to our rebuttal and raising the score. We really appreciate it!
**Matching via sparse network**:
>subsequent sparse networks in IMP are linearly connected to this dense net, it should still be possible to find the permutation with the sparse network?
It is indeed possible to match sparse networks found using the IMP, as shown by Sharma et al.. (https://imgur.com/a/mIpXNuv)
However, obtaining the second sparse network with IMP would be computationally more expensive than early matching with a new dense model.
> Possibly at a higher sparsity this is harder. Or does the permutation matching also become worse for a sparser network?
Your intuition is indeed correct! The loss barrier increases as we increase the sparsity, indicating that it is difficult to find the permutation at higher sparsities.
> In order to clarify the reasons for the limited performance gains of the LTH with a random init, in spite of permutation matching, I would urge the authors to add the discussion regarding the performance of the matching algorithm and the effect of width on the overall performance.
We will surely add more discussion on the performance of weight/activation matching the width of the model.
We hope we have answered all your questions; please let us know if you have more follow-up questions. We thank you for your valuable insights and for improving our paper.
1. Sharma et al., Simultaneous linear connectivity of neural networks modulo permutation | Summary: This paper hypothesizes that Iterative Magnitude Pruning (IMP) fails to generalize its sparse mask to other random initialization because the basin in which other random initialization resides does not match the basin constructed by the IMP sparse mask.
To address this, the authors propose to permute the IMP mask to align with the basin of other random initialization.
The authors evaluate the proposed method on CIFAR10/100 and ImageNet datasets with VGGNet and ResNet.
## Update after rebuttal
The authors' response did not effectively address the raised concerns, particularly about the gap between the authors' claim and the proposed method. Thus, the reviewer still hesitates to accept this work because the current version is confusing and unconvincing. However, the reviewer has decided to raise the original rating to 'weak reject', expecting the authors to accept the suggestions detailed in the comments.
Claims And Evidence: The authors claim that the IMP sparse mask may not match the basin in which other random initialization resides.
The evidence for this claim is just the analysis in Figure 1 with the assumption of a single layer with two parameters case.
However, previous work [1] discovered that an initialized model often exhibits insufficient stability to SGD noise, meaning that it is not in a basin of attraction.
Thus, the assumption that an initialized model is supposed to be in a specific basin may not be true.
Also, it is difficult to agree that a sparse mask determines a specific basin of attraction according to the findings in [1].
Therefore, the reviewer is not convinced by the claims of this paper.
[1] Frankle et al, "Linear mode connectivity and the lottery ticket hypothesis", ICML2020.
Methods And Evaluation Criteria: The authors propose permutation matching to match the basin of an initialized model and a sparse model.
However, this method is just adopted from previous works without any significant modification.
Thus, this paper seems to lack technical novelty.
For evaluation, the authors use CIFAR10/100 and ImageNet datasets, which are commonly used benchmark datasets.
Theoretical Claims: There is no theoretical claim.
Experimental Designs Or Analyses: The authors show the experimental results with varying widths of networks, showing that the proposed method is more effective with large widths.
This claim seems to be contrary to the goal of network sparsification.
If the proposed sparsification method is effective with larger models, it would be meaningless.
Supplementary Material: The supplementary material contains codes for implementing the proposed method.
Relation To Broader Scientific Literature: The contribution of this paper seems to be limited.
The authors' assumption is not convincing and the experimental results are also not significant.
Essential References Not Discussed: None.
Other Strengths And Weaknesses: All are mentioned in other sections.
Other Comments Or Suggestions: In the main manuscript, the authors use 'LTH mask' repeatedly.
However, it is difficult to find the details of which methods the authors use in the main manuscript and I managed to find the details in the Appendix.
Similarly, in experimental sections, It is difficult to find what 'Naive' refers to.
Overall, a clearer and more reader-friendly presentation seems to be required.
Questions For Authors: None.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: >The authors claim that the IMP sparse mask may not match the basin in which other random init resides. evidence for this claim is just analysis in Figure 1 with assumption of a single layer with two parameters case.
We validate our hypothesis through comprehensive experiments conducted across multiple datasets and model architectures. Our findings are substantiated by similar observation in [3], which demonstrated that IMP sparse masks only work when dense and sparse models are within the same loss basin and linearly mode-connected.
Our hypothesis is intuitive: by matching loss basins of two models with different initializations, we can permute and reuse the IMP mask from a rewind point.
Our experimental results provide empirical evidence supporting this claim, showing consistent improvement across different model arch and datasets.
> Thus, the assumption that an initialized model is supposed to be in a specific basin may not be true.
We agree that initialization alone does not determine the basin. However, **we do not claim this anywhere in the manuscript**. Models trained from the same initialization can end up in different basins [1]. That’s why using a rewind point for Lottery Tickets is necessary, and we use a rewind point with our method as well. We will add a note on this in final version of manuscript to make it clearer.
Our method builds upon the fact that models trained using LTH mask (with rewind points) always land in the same basin, as the authors observed in [2]. Our key claim is that the winning mask can be used with a rewind point obtained from different init, provided we account for weight symmetry to align the basins. We empirically demonstrate this through extensive experiments across multiple datasets and model architectures.
> If the proposed sparsification method is effective with larger models, it would be meaningless.
We *respectfully* disagree with this statement. Even with width=1, we can observe statistically significant improvement for different datasets as shown in Tables 5, 9, 10, 11.
* **CIFAR-10** (at 97% sparsity, width=1) - Improvement of **1%**
* **CIFAR-100** (at 97% sparsity, width=1) - Improvement of **3.5%**
* **Imagenet** (at 95% sparsity, width=1) - Improvement of **2%** (top-1)
These improvements are statistically significant and demonstrate the efficacy of our method. In the manuscripts, we added experiments with varying widths to get more insights about the accuracy of permutation matching and show that once permutation matching gets better on increasing the width, the gap between LTH and permuted mask (our method) reduces. This experiment provides more insight into the role of permutation matching for our proposed method. You can find more details in **Sec 4.3**.
We would also like to highlight **our work aims at better understanding of lottery tickets and winning masks, not just improving the accuracy**. As noted by reviewer **oZBY**, our work "provides novel insights into the relationship between winning tickets and their original dense networks.” We believe our findings will be useful for the sparse training research community.
> In the main manuscript, the authors use 'LTH mask' repeatedly. However, it is difficult to find the details of which methods the authors use in the main manuscript and I managed to find the details in the Appendix. Similarly, in experimental sections, It is difficult to find what 'Naive' refers to. Overall, a clearer and more reader-friendly presentation seems to be required.
We appreciate the suggestion; we will add a separate paragraph to define naive, LTH and permuted masks for an easier understanding.
However, we would like to point out that we have defined the *LTH*, *naive*, and *permuted mask* at multiple places in the manuscript (**first one at L94-96**; see more below). Moreover, **Figure 2** in the manuscript explains the differences between LTH, naive and permuted baselines.
* **Line 94-96**: "Permuting the LTH sparse mask to align with
the new random initialization improves the performance
of the trained model (**permuted**), compared to the model
trained without permuting the sparse mask (**naive**)."
* **Line 190-192**: "We denote training with the permuted mask, $\pi(\textbf{m}_A)$
as **permuted** and with the non-permuted mask, $\textbf{m}_A$ as **naive**"
* **Line 212-214**: "To evaluate the transferability of the permuted LTH mask we train, a different random initialization $\textbf{w}_B^{t=0}$, the LTH sparse mask $\textbf{m}_A$ and permuted LTH mask $\pi(\textbf{m}_A)$, which we denote the **naive** and **permuted** solution
respectively."
If you're satisfied with our reply, we'd appreciate if you can update your score.
[1]. Frankle et al., Linear mode connectivity and the lottery ticket hypothesis
[2] Evci et al, Gradient Flow in Sparse Neural Networks and How Lottery Tickets Win
[3]. Paul et al., Unmasking the Lottery Ticket Hypothesis: What's Encoded in a Winning Ticket's Mask
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' detailed responses.
The responses resolve several concerns about model widths and the unclear definition of terms.
However, the other concerns remain unsolved.
L87-L90 (left column) and L153-L157 (right column) say the misalignment between the basin corresponding to LTH mask and **the basin of new random initialization**.
These sentences likely lead a reader to believe that the authors claim there exists an expected basin corresponding to a pruning mask and a random initialization, and the authors want to make any random initialization a winning ticket using a single LTH mask.
What these sentences mean is quite different from the authors' claim that "the winning mask can be used with a rewind point obtained from a different initialization.".
That's why the reviewer is confused by the authors' claim.
The reviewer finds that the current version is likely to mislead readers and that clearer and more accurate expressions are needed.
Moreover, *weight rewinding* [1] was proposed not to find a winning ticket but to better understand why the original IMP fails.
Thus, improving *weight rewinding* by making a single LTH mask generalizable to any rewind point, rather than any initialization, does not seem to offer a significant contribution.
Also, given the concern about technical novelty—which the authors did not address—it is difficult to support the acceptance of this paper to ICML.
[1]. Frankle et al., Linear mode connectivity and the lottery ticket hypothesis
---
Reply to Comment 1.1.1:
Comment: We think there is some misunderstanding in this discussion about original LTH work by Frankle et al., the paper you cited [3], and our work. We briefly review the original LTH papers and the paper you cited to set the motivation for our contribution and to highlight how our work is important for better understanding of LTH and sparse training, as noted by other reviewers.
**History of LTH**
1. LTH was introduced by Frankle et al. [1], who hypothesized the existence of winning tickets at initialization. However, in this paper, they only experimented with small models and datasets and found that LTH from random init doesn’t work for larger models.
2. Quoting directly from paper: "We only consider vision-centric classification task on smaller dataset (MNIST, CIFAR10). We do not investigate larger dataset."
3. Follow-up paper [2] from Frankle et al. proposed that for LTH to work on larger models, we need to apply the mask at the rewind point (not at init) and linked this to SGD instability: "In this paper, we demonstrate that there exist subnetworks of deeper networks at early points in training" (page 2).
4. Frankle et al. added more analysis in subsequent version of the paper—which you cited [3]—and studied SGD stability by analyzing linear mode connectivity:
"We introduce instability analysis to determine whether the outcome of optimizing a neural network is stable to SGD noise, and we suggest linear mode connectivity for making this determination."
"We show that IMP subnetworks become stable and matching when set to weights from early in training, making it possible to extend the lottery ticket observations to larger scales."
**Summary**: The LTH mask only works at the rewind point and not at the init. The paper you cited suggests this is due to the stability of SGD noise.
**Limitations of LTH**: The LTH mask can't be used with a new init, making it impractical. It is not clear why the LTH mask can't be reused to a new random init. Ensembles trained with LTH do not work well because LTH learns similar functions [4].
**Our contribution**: We show how we can use the LTH mask with a new random init by leveraging weight symmetry.
We also show that our method can help improve diversity of sparse models, which help in improving ensemble significantly as reviewer oZBY noted.
**We now address your comments below**:
> Moreover, *weight rewinding* [1] was proposed not to find winning ticket but to better understand why the original IMP fails.
This is incorrect, weight rewinding was proposed to allow LTH to work for larger models/datasets as cited above. The paper you cited tried to understand why the mask from IMP cannot be applied at init directly and why it only works with a bit of pre-training, aka, at the rewind point.
> Thus, improving *weight rewinding* by making a single LTH mask generalizable to any rewind point, rather than any init, does not seem to offer a significant contribution
There is again some misunderstanding here. Our work precisely aims to make the LTH mask more generalizable to new random inits, as noted by other reviewers. We do this over a range of rewind points to maintain existing LTH methodology. We take a new random init and do a little training up to rewind point. Our method allows us to reuse the same LTH mask with arbitrary random inits. As you said, making the LTH mask generalizable is important, and **our work does precisely that**. Other reviewers have the same understanding of our contribution:
Reviewer **oZBY** :
"The authors build upon those findings and show that **winning ticket masks can be reused for different weight initialization**."
Reviewer **yfeq**:
"I appreciate the authors' insight on **using permutations to make LTH masks flexible and reusable with random initialization.**"
"Strengths: Simply **permuting the mask can help match the LTH mask to any random initialization**. This improves the performance of random initialization with the random mask."
> Technical novelty:
Our work provides novel insight about why the LTH mask does not generalize to new init from weight symmetry perspective. As reviewer **oZBY** noted:
"The paper provides novel insights into the relationship between winning tickets and their original dense networks."
"This paper addresses the problem of making LTH masks useful for other random initializations, which is a step in the direction to understand how sparse networks can be trained from scratch. This is an active area of research."
We thank you for the feedback; we’ll add more explanation to make it easier for readers to understand our work. We hope we’ve clarified your confusion about our contribution, and we’d greatly appreciate it if you could consider updating the score.
[1]. The LTH: Finding Sparse, Trainable Neural Network
[2] Stabilizing Lottery Ticket Hypothesis
[3] Linear Mode Connectivity and the LTH
[4]. Evci et al., Gradient Flow in Sparse Neural Network | Summary: The paper studies the property of winning tickets, where the combination of the sparse mask and initial weight values determines eventual generalization performance. When these networks are trained using the same sparse mask but different weight initializations, their performance deteriorates. This is a well known test for whether a sparse network at initialization is a winning ticket or not.
The authors hypothesize that this is due to the mismatch of the basins between different weight initialization and the winning ticket mask. This hypothesis was motivated by the recent discovery that dense networks trained from different random initializations find solutions within the same loss basin modulo permutation. The authors then test the hypothesis by finding a permutation function between two trained dense networks, and show that the permuted winning ticket mask corresponding to the first network can be applied to the second network at a rewind point with better performance as compared to naive application of the mask.
The analysis conducted demonstrates that sparse networks found in each iteration of the IMP are linearly connected in the loss landscape to the dense network when the variance collapse is accounted for. This allows for finding a permutation function after the dense networks are fully trained and not when both are fully pruned. The authors validate the hypothesis on a range of networks and datasets. The results show that the performance degradation of sparse networks trained with the permuted masks decreases with the width of the networks. Finally, it is also shown that the diversity of the solutions discovered is greater than the winning tickets.
Claims And Evidence: The claims regarding misalignment between weight initialization-based optimization basins and winning ticket masks are well-supported by empirical evidence. The paper provides novel insights into the relationship between winning tickets and their original dense networks.
Methods And Evaluation Criteria: The presented methods and evaluation criteria are well-suited for testing the hypothesis.
Theoretical Claims: NA
Experimental Designs Or Analyses: The experiments are very well set-up with clear conclusions. However, there is a lack of clarity regarding the iterative magnitude pruning process used to identify the winning ticket mask. In the appendix the authors specify the use of IMP-FT which does not include a weight rewinding step.
Supplementary Material: Yes, the appendix for additional details and results.
Relation To Broader Scientific Literature: The key contribution of the work is showing that winning ticket masks can be used for networks with different weight initialization provided a suitable permutation function is available. The results provide insights into winning ticket sparse masks and the correlation between the specific weights at initialization. This has been previously used as a test for other pruning at initialization methods [Lee2019, Wang2020, Tanaka2020], without a degradation in performance.
The idea that the entire SGD trajectory and the sparse networks obtained can be aligned for two networks via the same permutation has been explored previously [Sharma2024]. The authors build upon those findings and show that winning ticket masks can be reused for different weight initializations.
Essential References Not Discussed: The paper is quite extensive in its discussions of relevant literature and it is one of the strengths of the paper.
Other Strengths And Weaknesses: Strengths:
* The paper is very well written, the arguments made and evidence provided is very convincing. Please see the response to previous sections
Weaknesses:
* The contributions, while valuable, are primarily extensions of closely related previous work.
* Permuted masks are obtained through a function that is optimized between two trained dense networks and is computationally expensive. The authors acknowledge this, however, a natural question that arises is whether the permutation function can be obtained early in training eliminating the requirement for fully training the new model ?
* To what degree the permuted sparse masks can be reused is unclear. For instance can those masks be reused across datasets ?
Other Comments Or Suggestions: Please see the Questions section below and Weaknesses section above.
Questions For Authors: * Can the randomly initialized neural network be permuted instead of the mask to obtain the same results, given that a permutation is found that transforms model B to match model A ?
* Can the authors clarify what they mean by rewind points? Is it simply the training iteration at which the masks were applied to various dense networks ? Is it also the point at which the weights were rewinded during IMP while obtaining the original mask (m_A) resulting in different masks for different rewind points ?
* In Figure 4, is the IMP used along with weight rewinding or is IMP-FT used ? If IMP-FT was used then does the linear mode connectivity (accounting for variance collapse) exist between the dense network and the sequential winning tickets of lower density?
* The results show a clear trend as the width of the networks increases. The authors show that this is due to better LMC. Can this be simply because the permutation matching is better because each unit has a larger pool of units to match from ? Additionally, the variance in the unit activations can be studied, larger width may result in lower variance and better match.
## Update after rebuttal
I thank the authors for their response and for generating the requested results. I believe the paper makes several significant contributions, I have improved the rating. However, I request the authors to make the requested changes in the next version of the paper.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for the detailed feedback and helpful insights. We have added new experiments based on your insights/questions and added our response below:
> a natural question that arises is whether the permutation function can be obtained early in training eliminating the requirement for fully training the new model?
In our work, we used 2 trained dense models to find the permutation mapping, but we can also find the perm mapping early in the training, as noted by [Sharma, 2024]. We have added an additional experiment on early matching with the CIFAR-10 dataset, which shows that models can be matched earlier in the training, thus reducing the computational cost of our method.
(https://imgur.com/a/BgiE4W3)
> For instance can those masks be reused across datasets ?
We conducted an additional experiment where we obtained a mask for the CIFAR-10 dataset and reused the mask with a new init on SVHN datasets. Permuted mask outperforms unpermuted mask (naive) at all levels of sparsities. Thank you for this interesting direction, we will add more experiments in the final version of the paper. (https://imgur.com/a/UDiMQHs)
> Can the randomly initialized neural network be permuted
Yes, indeed, we can also apply the permutation, $\pi$, to the random initialization instead of the mask; the resulting network remains functionally equivalent. In our work, we chose to permute the mask as we did not want to modify the new random init.
> Can the authors clarify what they mean by rewind points? Is it simply the training iteration at which the masks were applied to various dense networks ?
That's correct! Rewind points is the training epoch at which sparse mask is applied to the dense model. We will make it more clear in the paper.
> Is it also the point at which the weights were rewinded during IMP while obtaining the original mask ($\textbf{m}_A$) resulting in different masks for different rewind points?
We used IMP-FT, which does not use weight rewind to obtain sparse masks. We preferred IMP-FT over IMP with weight rewinding because IMP-FT is computationally less expensive, which allowed us to conduct extensive experiments. In our experiments, we obtain a mask from IMP-FT and apply the mask to the dense model at different rewind points.
> in Figure 4, is the IMP used along with weight rewinding or is IMP-FT used?
Since Paul et al. used IMP (with weight rewinding) for their analysis [1], which suggested only successive IMP iterations are linearly mode-connected, we used IMP (with weight rewinding) in Fig 4. to show that all sparse networks found in each iteration of IMP and dense networks are linearly mode-connected once the variance collapse is taken into account. We decided to use IMP with weight rewinding in Fig 4. for a fair and direct comparison to observations made in [1].
> If IMP-FT was used then does the linear mode connectivity exist between the dense network and the sequential winning tickets of lower density?
We conducted an additional experiment to confirm that linear mode connectivity exists between the dense network and sparse models at each iteration of IMP-FT. We appreciate your attention to detail; this additional analysis provides valuable insights that strengthen our paper's findings. (https://imgur.com/a/2l7khkN)
> The results show a clear trend as the width of the networks increases. The authors show that this is due to better LMC. Can this be simply because the permutation matching is better because each unit has a larger pool of units to match from ?
Yes, that’s correct. We have the same intuition as yours that wider models (with $\ell$ layers, each of width $n$) have more possible permutations (up to $(n!)^{\ell}$), which makes matching two models more accurate. We show this by increasing the model width and comparing the LMC. As observed in Fig.3, on expanding the model width, LMC becomes better, suggesting permutation matching is better for wider models. We will add this explanation/intuition to the final manuscript.
> Additionally, the variance in the unit activations can be studied, larger width may result in lower variance and better match.
This is an interesting insight that could explain why activation (permutation) matching works better for wider models. We analyzed the variance across all layers and observed that the variance for each layer significantly decreases with increasing the width. For example, variance for the *second conv layer in third block* decreases from *0.012* to *0.0001* on increasing width from 1 to 4. Other layers follow the same trend. We will add plots for all the layers in the manuscript.
We hope that we have answered all your questions/concerns; we would really appreciate it if you could update your final score!
[1] Paul et al., Unmasking the Lottery Ticket Hypothesis: What's Encoded in a Winning Ticket's Mask
---
Rebuttal Comment 1.1:
Comment: I thanks the authors for the detailed response and for generating additional results. Many of my concerns were addressed.
For the mask reuse across the datasets, I request the authors to also add results for naive mask application to the original model at initialization and trained on the new dataset. This may provide an upper bound and allow for clarity regarding the methods use.
---
Reply to Comment 1.1.1:
Comment: Thank you for this suggestion! We agree that having an upper bound will provide more insight. We have added results on LTH, which will serve as an upper bound (https://imgur.com/a/PUKmoiQ). We will continue running experiments and plan to add more results on mask transfer (masks obtained on ImageNet and tested on the Places365 dataset) to provide more extensive results for the final version of the manuscript.
We appreciate all of your insights. The additional experiments present more evidence for the effectiveness of our method and further strengthen the contribution of our paper. We thank you for your thorough review process and believe we have now addressed all of your concerns, including the addition of this final experiment. Our work provides novel insights into sparse training from scratch, as you noted, which would be valuable to the efficient ML community, particularly as the field continues to seek methods to reduce computational costs in training large models.
We would greatly appreciate it if you could update your score if you are satisfied with our rebuttal and the additional experiments. Thank you! | null | null | null | null | null | null | null | null |
A Online Statistical Framework for Out-of-Distribution Detection | Accept (poster) | Summary: The paper focuses on the out-of-distribution (OOD) detection task. Unlike previous research that primarily focuses on designing powerful score functions, this paper introduces a novel perspective by framing OOD detection as a online multiple hypothesis testing problem. The authors propose a Generalized LOND (g-LOND) algorithm with both rigorous theoretical guarantees and strong empirical performance. The g-LOND algorithm enables to control the false discovery rate (FDR). Besides, its false positive rate (FPR) converges to zero in probability. Experimental results show that the g-LOND algorithm outperforms traditional threshold-based methods across various OOD detection benchmarks.
## update after rebuttal
I am willing to champion this paper. I have read all reviews and rebuttals. To my best knowledge, this paper may be the first to propose a novel online hypothesis testing framework for OOD detection with a strong theoretical guarantee. Extensive experiments demonstrate the effectiveness of this framework. Besides, the authors also provide the new experiment results in the rebuttal, which further enhance the claims of this paper. So, I recommend the acceptance of the paper.
Claims And Evidence: Yes. The claims made in the submission is supported by many theoretical results and extensive experiments.
Methods And Evaluation Criteria: Yes. The proposed method is based on the statistical hypothesis testing framework, and the evaluation criteria are appropriate for OOD detection task.
Theoretical Claims: Yes. I check the proofs of Theorem 4.5, Theorem 4.6 and Theorem 5. These proofs are sound.
Experimental Designs Or Analyses: Yes. The experimental designs and analyses are sound and well-executed, with a clear framework for evaluating the proposed method.
Supplementary Material: Yes, I review all the appendices, with particular attention to the proof of Theorem 4.6, as it provides theoretical foundations for the g-LOND algorithm.
Relation To Broader Scientific Literature: This paper is based on the previous multiple hypothesis testing literature. The authors modify the traditional LOND algorithm and extend its theoretical results such that the proposed method can be applied to OOD detection task.
Essential References Not Discussed: No, the paper includes essential references. It adequately discusses prior works that are crucial for understanding the context of the key contributions.
Other Strengths And Weaknesses: Strengths
- The paper introduces a novel perspective on OOD detection by framing it as an online multiple hypothesis testing problem. This departure from traditional approaches adds originality and encourages new thinking in the field.
- The proposed g-LOND algorithm is innovative and theory-inspired, with good interpretability. This methodology represents a departure from conventional threshold-based methods.
- The techniques of the proofs in the Appendix are sound and detailed.
- The proposed method is distribution-free and easy to implement. Besides, extensive experiments are conducted to validate the proposed g-LOND procedure, demonstrating its superiority over traditional methods.
Weaknesses
- In Algorithm 1, the proposed method utilizes a calibrated set for hypothesis testing. I think a detailed explanation for it would be helpful.
- The authors need to provide a simple discussion of the motivation.
Other Comments Or Suggestions: See weaknesses.
Questions For Authors: In multiple hypothesis testing, do other evaluation criteria exist similar to FDR?
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: __W1__. In Algorithm 1, the proposed method utilizes a calibrated set for hypothesis testing. I think a detailed explanation for it would be helpful.
__Ans-W1__. In practice, we just need to randomly sample a small number of examples from the ID training set as the calibrated set, without any special operations. In our experiments, for CIFAR-100 as ID data, the calibrated set contains 2000 ID examples; for ImageNet-200 as ID data, the calibrated set contains 10000 ID examples.
__w2__. The authors need to provide a simple discussion of the motivation.
__Ans-W2__. Usually, the traditional threshold-based decision rule in Eq.(1) performs well on ID data but performs poorly on OOD data. Because the selection of its threshold just considers a high TPR on ID validation set. By contrast, our method considers the performance on both ID data and OOD data simultaneously. Specifically, our method can control FDR. Intuitively, to control FDR, the g-BH tends to reject more null hypotheses while maintaining small false rejections of the null hypothesis. Small false rejections means to control the number of falsely classifying the ID as the OOD (maintaining a high TPR) and more rejection of null hypotheses means to classify more testing examples as OOD (leading to a low FPR on OOD data).
__Q1__. In multiple hypothesis testing, do other evaluation criteria exist similar to FDR?
__Ans-Q1__. Family-wise error rate (FWER) and marginal FDR (mFDR) are two similar evaluation metrics. The FWER can be expressed as
$$ FWER = P(R\cap H _0 > 0). $$
The mFDR can be expressed as
$$ mFDR = \frac{E(R\cap H_0)}{E(R)}. $$
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. After reviewing the rebuttal addressed to me and those for other reviewers, I am willing to maintain my score for acceptance. | Summary: this paper thinks the OOD detection task from an perspective of online multiple hypothesis testing.
the g-LOND algorithm controls false discovery rate (FDR) at pre-specified level without the consideration for the dependence between the p-values.
Along with thorecticla analysis, the empirical effectiveness of g-LOND is evluated on cifar-100 and imagenet-200.
Claims And Evidence: yes
Methods And Evaluation Criteria: yes
Theoretical Claims: yes
Experimental Designs Or Analyses: yes
Supplementary Material: I have reviewed all parts of the supplementary material
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: [a] Out-of-distribution detection based on in-distribution data patterns memorization with modern hopfield energy
[b] LINe: Out-of-Distribution Detection by Leveraging Important Neurons
[c] Extremely simple activation shaping for out-of-distribution detection
[d] NECO: NEural Collapse Based Out-of-distribution detection
[e] Optimal Parameter and Neuron Pruning for Out-of-Distribution Detection
[f] VRA: Variational Rectified Activation for Out-of-distribution Detection
[h] Neuron Activation Coverage: Rethinking Out-of-distribution Detection and Generalization
[i] Tractable Density Estimation for Out-of-Distribution Detection
Other Strengths And Weaknesses: **Strength**
1. this paper investigates OOD detection from a fresh perspective
2. this paper is well written
3. the evalution, which is based on 6 metrics, is comprehensive
**Weakness**
1. the evaluiation on CIFAR-10 and ImageNet-1k lacks
2. the comparision with the mostly recent baselines [a,b,c,d,e,f,g,h,i] lacks
3. due to the rapid development of neural networks, CLIP-based models should be also considered.
*While I am happy to rasie my rating if the authors can address my concerns, i will also consider reviews from other reviewers regarding the technological nolvety of hypothesis testing used in this paper before making my final decison.*
Other Comments Or Suggestions: see weakness
Questions For Authors: no
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: __Q1__: about the reference.
__Ans-Q1__: Thank you for providing these meaningful references [a]-[i]. We will discuss these papers in related work section.
__W1,2__. the evaluiation on CIFAR-10 and ImageNet-1k lacks. the comparision with the mostly recent baselines [a,b,c,d,e,f,g,h,i] lacks.
__Ans-W1,2__. According to your suggestions, we conduct corresponding experiments on CIFAR-10 and ImageNet-1k using the methods LINe[b], NECO[d],VAR[f], NAC[h] and CONJNORM[i]. We use the code of OpenOOD [j].
It should be noted that reference [g] in your Review is missing. Besides, the methods SHE in [a] and ASH in [c] have been used our baselines (see Section 6.1 baselines). Since [e] does not open their code and time of rebuttal period is limited, we do not implement the method in [e]. The experimental results are presented in Tables 2-5 of PDF (see https://anonymous.4open.science/r/gLOND-BBCE/Experimental%20Results%20for%20Rebuttal.pdf ), which demonstrate the superiority of our proposed g-LOND algorithm over these methods in [a][b][c][d][f][h][i].
__W3__. Due to the rapid development of neural networks, CLIP-based models should be also considered.
__Ans-W3__. Because of the limitation of time, we just choose four methods based on CLIP architecture as our baselines, including MCM[k], GLMCM[l], SeTAR-MCM and SeTAR-GLMCM[m]. We use ImageNet-1k as ID data, and use iNaturalist, Places, Sun and Texture as the OOD data. Our code is based on [m]. The experimental results are presented in Table 1 of PDF (see https://anonymous.4open.science/r/gLOND-BBCE/Experimental%20Results%20for%20Rebuttal.pdf), which demonstrate the superiority of our proposed g-LOND algorithm over these CLIP-based methods in [k]-[m].
[a] Out-of-distribution detection based on in-distribution data patterns memorization with modern hopfield energy
[b] LINe: Out-of-Distribution Detection by Leveraging Important Neurons
[c] Extremely simple activation shaping for out-of-distribution detection
[d] NECO: NEural Collapse Based Out-of-distribution detection
[e] Optimal Parameter and Neuron Pruning for Out-of-Distribution Detection
[f] VRA: Variational Rectified Activation for Out-of-distribution Detection
[h] Neuron Activation Coverage: Rethinking Out-of-distribution Detection and Generalization
[i] Tractable Density Estimation for Out-of-Distribution Detection
[j] OpenOOD: Benchmarking Generalized
Out-of-Distribution Detection
[k] Delving into out-of-distribution detection with vision-language representations.
[l] Zero-shot in-distribution detection in multi-object settings using vision-language foundation models.
[m] SeTAR: Out-of-Distribution Detection with Selective Low-Rank Approximation. | Summary: This paper studies the OOD detection task as an online multiple hypothesis testing problem. It presents a new algorithm, called the generalized LOND algorithm (g-LOND), built upon the well-known LOND algorithm. They provide theoretical results about the false discovery rate (FDR) and false positive rate (FPR) for their algorithm. The paper also provides many experiments on OOD detection, comparing to several baselines on several datasets. Overall, the proposed method offers a systematic and theoretically grounded solution to the OOD detection problem, enhancing the reliability of sensitive applications.
Claims And Evidence: Yes. The authors conduct extensive experiments to demonstrate the effectiveness of the g-LOND algorithm.
Methods And Evaluation Criteria: Yes. The proposed g-LOND algorithm makes sense for the OOD detection problem.
Theoretical Claims: Yes. I have checked the correctness of the proofs in Appendix, which are both clear and solid.
Experimental Designs Or Analyses: Yes. The experimental designs and analyses are sound. This paper evaluates the proposed method using practical and classical criteria, making its conclusions convincing.
Supplementary Material: No.
Relation To Broader Scientific Literature: This paper clearly explains how it is related to previous work. The authors establish the connection between OOD detection and online multiple hypothesis testing, and then proposes novel g-LOND algorithm with strong statistical guarantee.
Essential References Not Discussed: To the best of my knowledge, the essential and relevant references are discussed
Other Strengths And Weaknesses: (1) Overall, this paper is well-motivated and has a clear organization. Besides, its notations and the definitions are clear, and the ideas are easy to follow.
(2) Different from previous literature which mainly focuses on designing or training score function, this paper study the OOD detection problem based hypothesis testing framework and proposes a novel g-LOND algorithm to solve it.
(3) Under some conditions, the authors establish the asymptotic theories about FPR, which remains underexplored in previous literature.
(4) Extensive experimental results on multiple benchmarks (including large-scale and high-resolution ImageNet) can support the proposed method.
Weaknesses: I have not identified major weaknesses of this paper, while I do have some minor concerns that are listed in the "Questions For Authors" part.
Other Comments Or Suggestions: No
Questions For Authors: (1) While the paper rigorously develops its theoretical framework, it would be beneficial to outline any underlying assumptions made in this paper.
(2) There are some typos in the paper in Appendix. The authors should check the text.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: __Q1__. While the paper rigorously develops its theoretical framework, it would be beneficial to outline any underlying assumptions made in this paper.
__Ans-Q1__. In Theorem 4.5 and 4.6, we have no underlying assumptions. In Theorem 5.3, we just assume that the testing statistic follows generalized Gaussian-like distribution family in Definition 5.1.
__Q2__. There are some typos in the paper in Appendix. The authors should check the text.
__Ans-Q2__. Thanks for your careful review. We have checked the typos in Appendix. We will fix these typos in new version.
---
Rebuttal Comment 1.1:
Comment: My questions have been addressed. Thanks for the reply. | Summary: This work investigates out-of-distribution (OOD) detection from the perspective of online multiple hypothesis testing. This paper proposes a generalized LOND algorithm that controls the false discovery rate even under dependent p-values. This work also derives the asymptotic false positive rate of the g-LOND algorithm under a generalized Gaussian-like distribution family. Experiments demonstrate the effectiveness of g-LOND.
Claims And Evidence: The main claim that the generalized LOND algorithm controls the false discovery rate even under dependent p-values is supported by formal theorem and detailed proofs in the appendix. Furthermore, extensive experiments demonstrate its empirical improvements.
Methods And Evaluation Criteria: The proposed method frames OOD detection as an online multiple hypothesis testing problem, leveraging FDR-control procedures to handle dependence among p-values, which is a reasonable approach for OOD tasks. The use of public benchmarks such as TinyImageNet and SVHN is standard, and TPR, FPR, and F1 provides appropriate metrics for evaluation.
Theoretical Claims: I check the theoretical claims and proofs, which looks logically consistent. The generalized Gaussian assumption may not match the complexity of real-world data.
Experimental Designs Or Analyses: The experimental design, which uses commonly accepted OOD datasets like SVHN, Places365, and iNaturalist, supports the paper’s claims. The authors rely on ResNet18 and ResNet50 as backbone models. However, using more advanced architectures such as CLIP could support a broader analysis.
Supplementary Material: I reviewed the proofs in supplementary material that support the main theorem, as well as additional theoretical results examining how p-value dependence affects Fisher’s combination test. These findings further support the paper’s theoretical claims.
Relation To Broader Scientific Literature: This work connects the existing statistical framework of LOND to OOD detection, which is an online multiple hypothesis testing method. This provides a rigorous theoretical underpinning for OOD methods, which are important for ensuring AI safety.
Essential References Not Discussed: A few works appear essential for understanding the paper’s key contributions but are not currently discussed in the paper. For example:
[1] GEN: Pushing the Limits of Softmax-Based Out-of-Distribution Detection, CVPR 2023
[2] POEM: Out-of-Distribution Detection with Posterior Sampling, ICML 2022
[3] How Does Unlabeled Data Provably Help Out-of-Distribution Detection? ICLR 2024
Other Strengths And Weaknesses: Strengths
1. The paper leverages FDR control and provides a theoretical analysis of the false positive rate.
2. Extensive experiments on several public benchmarks demonstrate the method’s effectiveness.
Weaknesses
1. The proposed method may be sensitive to both the size and quality of the calibration set.
2. Evaluation is limited to ResNet models; exploring and validating the approach on more advanced architectures (e.g., Transformers, CLIP) would be beneficial.
Other Comments Or Suggestions: None
Questions For Authors: Refer to the Weaknesses section for the detailed questions.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: __Q1__: about the reference.
__Ans-Q1__: Thank you for providing these meaningful references [1]-[3]. We will discuss these papers in related work section.
__Weakness 1__. The proposed method may be sensitive to both the size and quality of the calibration set.
__Ans-w1__. In practice, we just need to randomly sample a small number of examples from the ID training set as the calibrated set, without any special operations. For example, when using CIFAR-10 as ID data, the calibrated set contains 2000 ID examples. According to your comments, we use CIFAR-10 as ID data, and SVHN and Place365 as OOD data to study the performance of TPR, FPR and F1-score with variant size of calibrated set. The experimental results are presented in Figure 1 of PDF (see https://anonymous.4open.science/r/gLOND-BBCE/Experimental%20Results%20for%20Rebuttal.pdf). The results show that the performance of our method does not significantly vary as the size of the calibration set increases.
__Weakness 2__. Evaluation is limited to ResNet models; exploring and validating the approach on more advanced architectures (e.g., Transformers, CLIP) would be beneficial.
__Ans-w2__. Because of the limitation of time, we just choose four methods based on CLIP architecture as our baselines, including MCM[4], GLMCM[5], SeTAR-MCM and SeTAR-GLMCM[6]. We use ImageNet-1k as ID data, and use iNaturalist, Places, Sun and Texture as the OOD data. Our code is based on [6]. The experimental results are presented in Table 1 of PDF (see https://anonymous.4open.science/r/gLOND-BBCE/Experimental%20Results%20for%20Rebuttal.pdf), which demonstrate the superiority of our proposed g-LOND algorithm over these CLIP-based methods in [4]-[6].
[1] GEN: Pushing the Limits of Softmax-Based Out-of-Distribution Detection, CVPR 2023
[2] POEM: Out-of-Distribution Detection with Posterior Sampling, ICML 2022
[3] How Does Unlabeled Data Provably Help Out-of-Distribution Detection? ICLR 2024
[4] Delving into out-of-distribution detection with vision-language representations.
[5] Zero-shot in-distribution detection in multi-object settings using vision-language foundation models.
[6] SeTAR: Out-of-Distribution Detection with Selective Low-Rank Approximation. | null | null | null | null | null | null |
Diffusion-based Adversarial Purification from the Perspective of the Frequency Domain | Accept (spotlight poster) | Summary: This paper explores a novel method for adversarial purification by analyzing the impact of adversarial perturbations on images in the frequency domain. The authors propose a method that selectively preserves low-frequency components of images during the purification process to minimize damage to semantic information while effectively removing adversarial perturbations. Theoretical analysis and experimental validation on CIFAR-10, SVHN, and ImageNet demonstrate the effectiveness of this method.
Claims And Evidence: I think that the claims of this paper are well-supported both theoretically and experimentally.
In particular, the paper provides comprehensive theoretical proofs and visual explanations in the appendix.
Methods And Evaluation Criteria: This method is highly meaningful, as adversarial attacks pose a serious threat to visual neural networks. In particular, many recent vision-language models also exhibit significant vulnerability to adversarial examples, and directly training them for robustness is computationally expensive. Therefore, developing effective adversarial purification methods is also of great importance.
Theoretical Claims: The authors argue that from a frequency perspective, an image can be decomposed into its amplitude spectrum and phase spectrum. For both types of spectra, the damage caused by adversarial perturbations increases monotonically with frequency. This suggests that we can extract the content and structural information of the original clean sample from the frequency components that are less affected by perturbations.
I conduct a preliminary review of the paper’s proofs and did not find any obvious errors.
Experimental Designs Or Analyses: The experiments, including those in the main text and the appendix, provide an in-depth analysis across multiple datasets. I believe these experiments sufficiently demonstrate the effectiveness of the proposed method.
Supplementary Material: I review the theoretical proofs in the supplementary materials as well as the additional experiments, which further enhance the completeness of the study.
Relation To Broader Scientific Literature: The related work section clearly establishes the position of this study within the field and provides readers with insights into recent advancements in the area.
However, I notice that some references are not cited correctly. For example, when a reference itself serves as the subject of a sentence, it should be written as “Song et al. (2018) empirically demonstrates” rather than “(Song et al., 2018) empirically demonstrates.”
Essential References Not Discussed: I believe the main relevant works are all discussed in the paper.
Other Strengths And Weaknesses: Strengths:
1. The experimental results show a significant improvements.
2. The writing of this paper is very clear.
Weaknesses:
The citations need to follow a more standardized format.
Other Comments Or Suggestions: If preliminary experimental results on CLIP were included, it could enhance the broader applicability of this paper.
Questions For Authors: I have a positive overall opinion of this paper, with no obvious issues.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for recognizing our paper. To the best of our knowledge, our paper is the first to improve the purification effect of diffusion models from the perspective of the frequency domain. Compared to pixel space, the frequency domain makes it easier to decouple the perturbed components from the unperturbed components. This significantly enhances the purification effect
## Format of references
Thank you for noticing this issue. We will change the format uniformly as per your request to facilitate reading.
## Experimental results on CLIP
We conduct a simple experiment on CLIP. We randomly select two classes from the ImageNet dataset, totaling 100 images, and use ['a photo of a xxx'] as the text to implement a simple zero-shot classification task with CLIP. The results are as follows:
| Method| Lee&Kim,2023| Baietal.,2024|Nieetal.,2022|Ours|
| :---: | :---: | :---: |:---: |:---:|
| Standard Acc(%)| 66 | 90 | 74 | 93 |
| Robust Acc(%) | 65 | 86 | 70 | 91 |
Our method still performs the best, which demonstrates the generalizability of our approach. | Summary: The paper proposes FreqPure, a frequency-aware adversarial purification method that addresses the limitations of existing diffusion-based approaches by preserving critical semantic information during purification. Through frequency domain analysis, the authors demonstrate that adversarial perturbations disproportionately damage high-frequency components of both amplitude and phase spectra, while low-frequency components remain relatively intact. They theoretically prove that standard diffusion purification indiscriminately disrupts all frequencies, leading to excessive semantic loss. FreqPure mitigates this by (1) replacing low-frequency amplitude components of the estimated clean image with those from the adversarial input to retain content information, and (2) projecting low-frequency phase spectra into a perturbation-resistant range to preserve structural features. Extensive experiments on CIFAR-10, SVHN, and ImageNet show FreqPure outperforms state-of-the-art methods, achieving 31.44% higher robust accuracy against PGD attacks and 13.35% improvement against AutoAttack while maintaining superior visual fidelity, validated through DINO/CLIP similarity metrics. The work establishes frequency-domain manipulation as an effective strategy for balancing adversarial robustness and semantic preservation.
Claims And Evidence: The claims in the submission are largely supported by evidence, though some aspects warrant further scrutiny. The theoretical analysis (Theorems 3.2 and 3.4) rigorously demonstrates that diffusion-based purification disrupts all frequency components monotonically, aligning with their critique of existing methods. Empirical validation across datasets (CIFAR-10/ImageNet) and attacks (PGD, AutoAttack) shows FreqPure’s superiority in robust accuracy (e.g., +31.44% over baselines), supported by ablation studies confirming the contributions of amplitude replacement and phase projection. However, the phase spectrum projection’s effectiveness is less conclusively proven: while low-frequency phase alignment is motivated intuitively, the paper lacks a direct causal link between phase manipulation and structural preservation. Additionally, sensitivity analysis for hyperparameters (e.g., $D_A$, $D_P$) is limited to fixed values, leaving robustness to parameter choices unclear. While visualizations and DINO/CLIP metrics suggest semantic preservation, quantitative measures of perceptual quality (e.g., LPIPS, FID) are absent. Overall, the core claims hold, but finer-grained evidence for phase-related mechanisms and parameter robustness would strengthen the argument.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are well-aligned with the problem of adversarial purification. The focus on frequency-domain manipulation (amplitude replacement and phase projection) directly addresses the core challenge of preserving low-frequency semantic content while removing high-frequency adversarial perturbations, which aligns with the paper’s theoretical insights about perturbation distribution. Benchmark datasets (CIFAR-10, ImageNet) and attacks (PGD, AutoAttack) are standard in adversarial robustness research, ensuring comparability. Metrics like robust accuracy and DINO/CLIP similarity appropriately measure defense effectiveness and semantic preservation. However, two limitations exist: (1) The evaluation lacks perceptual quality metrics (e.g., LPIPS, FID) to quantify visual fidelity beyond feature-space similarity; (2) While adaptive attacks (BPDA+EOT) are included, the paper does not fully address potential vulnerabilities to stronger frequency-aware attacks explicitly targeting the proposed components. Overall, the methodology and evaluation framework are sensible but could be strengthened with additional metrics and attack scenarios.
Theoretical Claims: The paper’s Theorem 3.2 (amplitude spectrum disruption) and Theorem 3.4 (phase spectrum disruption) were reviewed for correctness. Theorem 3.2 derives a lower bound for the variance of amplitude differences, $Var(\delta A_t)$, using inequalities (e.g., $E(∣x_t|\leq\sqrt{\bar{a}_t}|x_0|+\sqrt{2/\pi}$) and monotonicity arguments. While the steps are mathematically sound under the assumption that $|x_0(u,v)|\leq\sqrt{(1+4\bar{a}_t)/(8\pi\bar{a}_t)}-\sqrt{1/(8\pi\bar{a}_t)$, this constraint is neither empirically validated nor guaranteed to hold for real images, potentially limiting the theorem’s practical relevance. Theorem 3.4 approximates phase variance via small-angle linearization ($arctan(z)\approx z$) and integrates over uniformly distributed noise phase, yielding $Var(\delta \theta_t)\approx 1/\sqrt{1-1/SNR_t^2}-1$. While valid for high SNR ($SNR_t>1$), this approximation breaks down for larger perturbations (low SNR), which are common in adversarial settings. Key issues: (1) The bounded-amplitude assumption in Theorem 3.2 lacks empirical verification; (2) Theorem 3.4’s SNR-dependent validity is not experimentally tested for adversarial noise regimes. While the proofs are technically correct under their assumptions, their practical applicability to adversarial examples (which may violate these assumptions) remains partially unproven.
Experimental Designs Or Analyses: The experimental design is largely sound but has notable limitations:
- Attack Coverage: While evaluated against adaptive attacks (BPDA+EOT, PGD, AutoAttack), the paper does not test frequency-specific attacks that could exploit FreqPure’s reliance on low-frequency components, leaving a critical robustness gap.
- Perceptual Metrics: DINO/CLIP similarity validates semantic preservation but omits human-aligned metrics (e.g., LPIPS, FID) to assess visual quality, which is crucial for purification tasks.
- Ablation Study: The phase projection (PSP) is compared to phase exchange (PSE), but the rationale for choosing projection over other phase alignment strategies (e.g., regularization) is underexplored.
- Hyperparameter Sensitivity: Analysis is limited to fixed $D_A=3,D_P=2,\delta=0.2$ without testing robustness to parameter shifts across datasets or threat models.
- Theoretical-Experimental Link: While theorems claim monotonic frequency disruption, experiments in Fig. 1 only show trends for adversarial (not purified) images, weakening the connection to FreqPure’s mechanism.
- Sample Size: ImageNet evaluations use an unspecified subset size (common in robustness benchmarks), but reproducibility depends on clarifying this.
Overall, the experiments support the core claims but leave open questions about generalization to frequency-aware attacks and perceptual quality.
Supplementary Material: All Parts
Relation To Broader Scientific Literature: The paper’s contributions advance adversarial purification literature by bridging frequency-domain insights with diffusion-based defense mechanisms, building on three key prior findings:
- Frequency Vulnerability of Adversarial Examples: Extending work by Chen et al. (2022) and Maiya et al. (2021), which showed adversarial perturbations disproportionately affect high-frequency components, the authors formalize this observation through quantitative amplitude/phase analysis and link it to diffusion purification’s limitations.
- Diffusion-Based Purification: Improving upon Nie et al. (2022) and Wang et al. (2022), who used diffusion models without frequency awareness, FreqPure introduces explicit frequency constraints to mitigate semantic damage, aligning with broader efforts to incorporate domain-specific priors (e.g., image structure in SSP by Naseer et al., 2020).
- Phase Spectrum Importance: While prior work (e.g., Oppenheim & Lim, 1981) established phase’s role in structural integrity, the paper innovatively applies this to adversarial defense by projecting low-frequency phase, akin to Zhou et al. (2021)’s invariant feature learning but in the frequency domain.
By unifying these threads, the work demonstrates how frequency-domain signal processing principles can address a core challenge in adversarial robustness—preserving semantics during purification—offering a new paradigm for defense mechanisms beyond pixel-space heuristics.
Essential References Not Discussed: None.
Other Strengths And Weaknesses: The paper’s originality lies in its novel integration of frequency-domain analysis with diffusion-based purification, a significant departure from pixel-space methods. The theoretical grounding (linking diffusion noise to frequency disruption) and practical innovation (amplitude replacement/phase projection) address a critical gap in adversarial defense: semantic preservation. Results demonstrate substantial empirical significance, with robust accuracy gains (+31% over SOTA) and high DINO/CLIP similarity, validating both defense strength and content retention. The clarity of the frequency-domain framework and ablation studies strengthens reproducibility.
Other Comments Or Suggestions: None.
Questions For Authors: 1. Assumption Validation for Theorems:
- Theorem 3.2 assumes $|x_0(u,v)|\leq\sqrt{(1+4\bar{a}_t)/(8\pi\bar{a}_t)}-\sqrt{1/(8\pi\bar{a}_t)$. Do empirical measurements of $|x_0(u,v)|$ in natural images (e.g., CIFAR-10/ImageNet) confirm this bound holds? If not, does the theorem’s conclusion still hold under practical conditions?
- Theorem 3.4 assumes $SNR_t>1$. How does this hold for adversarial examples, where perturbations are designed to maximize damage with minimal $\ell_p$-norm? If adversarial noise violates this, does the phase variance approximation remain valid?
2. Frequency-Aware Attack Resilience: The evaluation excludes attacks explicitly targeting frequency components. Would FreqPure remain robust if adversaries perturb low-frequency amplitude/phase intentionally?
3. Phase Projection vs. Alternatives: The phase projection (Eq. 13) restricts low-frequency phase to $P_L+\delta$. Why is projection preferable to direct replacement (as done for amplitude), and how does $\delta$ balance robustness vs. overfitting to adversarial phase?
4. Hyperparameter Generalization:
The hyperparameters $D_A$, $D_P, \delta$ are fixed across datasets . Are these settings universally optimal, or do they require per-dataset tuning? How does performance degrade with suboptimal choices?
5. Perceptual Quality Metrics: The paper uses DINO/CLIP similarity but omits perceptual metrics like LPIPS or FID. Do FreqPure’s purified images preserve human-aligned visual quality, or do they introduce artifacts (e.g., blurring) despite high feature similarity?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We greatly appreciate your responsible and meticulous review. Your valuable feedback will improve our work greatly.
## Q1.1: Assumption of Theorem 3.2
This assumption is indeed somewhat strong, especially for the amplitude spectrum of low frequencies. Therefore, to eliminate this assumption, we re-derive the first moment of the amplitude spectrum, as detailed below:
$$
\mathbf{x}_t(u,v) = \sqrt{\overline{\alpha}_t}\mathbf{x}_0(u,v)+\sqrt{1-\overline{\alpha}_t}\mathbf{\epsilon}(u,v)=
\underbrace{\mathfrak{R}\mathfrak{e}(\sqrt{\overline{\alpha}_t}\mathbf{x}_0(u,v))+\mathfrak{R}\mathfrak{e}(\sqrt{1-\overline{\alpha}_t}\mathbf{\epsilon}(u,v))}_R+i\underbrace{(\mathfrak{I}\mathfrak{m}(\sqrt{\overline{\alpha}_t}\mathbf{x}_0(u,v))+\mathfrak{I}\mathfrak{m}(\sqrt{1-\overline{\alpha}_t}\mathbf{\epsilon}(u,v)))}_I
$$
$R\sim\mathcal{N}(\mathfrak{R}\mathfrak{e}(\sqrt{\overline{\alpha}_t}\mathbf{x}_0(u,v)),\frac{1-\overline{\alpha}_t}{2})$ and $I\sim\mathcal{N}(\mathfrak{I}\mathfrak{m}(\sqrt{\overline{\alpha}_t}\mathbf{x}_0(u,v)),\frac{1-\overline{\alpha}_t}{2})$.
We can see that the means of the real part and the imaginary part are different, the variances are the same, and they are independent of each other. Therefore, the amplitude $|\mathbf{x}_t(u,v)|$ follows a **Rice distribution**. "Therefore, we can utilize some known conclusions.With the assumption of $SNR_t>1$ in the Theorem 3.4. We can obtain the following conclusions:
$$
\mathbb{E}(|\mathbf{x}_t(u,v)|)\approx \nu+\frac{\sigma^2}{2\nu}=\sqrt{\overline{\alpha}_t}|\mathbf{x}_0(u,v)| + \frac{1-\overline{\alpha}_t}{4\sqrt{\overline{\alpha}_t}|\mathbf{x}_0(u,v)|}
$$
For $\mathbb{E}(|\mathbf{x}_t(u,v)|^2)$ we still use the conclusion derived from Equation 40. We re-derive $Var(\Delta A_t(u,v))$as follows:
$$
\begin{aligned}
Var(\Delta A_t(u,v))
&\approx \overline{\alpha}_t |\mathbf{x}_0(u,v)|^2+(1-\overline{\alpha}_t) + (\sqrt{\overline{\alpha}_t}|\mathbf{x}_0(u,v)| + \frac{1-\overline{\alpha}_t}{4\sqrt{\overline{\alpha}_t}|\mathbf{x}_0(u,v)|})^2\\
&=\frac{1-\overline{\alpha}_t}{2} -\frac{(1-\overline{\alpha}_t)^2}{16|\mathbf{x}_0(u,v)|^2\overline{\alpha}_t}
\end{aligned}
$$
Therefore, our paper only has one assumption $SNR_t>1$.
## Q1.2: Assumption of Theorem 3.4
The assumption $SNR_t>1$ is not strong. We plot a graph (https://bashify.io/i/Qn7ZX7) showing how $SNR_t$ changes with $t$, and it can be observed that the condition $SNR_t>1$ only occurs when $t=500$. In the experiment, for $l_{\infty}=\frac{8}{255}$ we choose
$t=100$. This means that for attacks with a larger radius, this assumption can still be satisfied.
## Q2: Frequency-Aware Attack Resilience
The attack methods used in our paper are all strong adaptive attacks, meaning that the attacker calculates the complete gradient of the defense system. This white-box attack has incorporated our strategy of retaining low frequencies into the gradient calculation. We test the generalization of our method using the code provided in paper: Frequency-driven Imperceptible Adversarial Attack on Semantic Similarity. The result is as follows (https://bashify.io/i/dj9GYO)
## Q3: Phase Projection vs. Alternatives
Compared to the amplitude spectrum, the phase spectrum is more significantly affected by adversarial perturbations. Therefore, performing a projection operation can extract coarse-grained low-frequency phase information, while using a diffusion model can generate phase information that best fits the natural distribution within a specified range. Additionally, ablation experiments also demonstrate that directly replacing the phase spectrum is a suboptimal choice. To balance robustness vs overfitting, we use hyperparameter search to find the optimal parameter.
## Q4: Hyperparameter Generalization
The hyperparameters exhibit a certain degree of generalization across datasets of the same size; for example, the parameters from the CIFAR-10 dataset can be transferred to the SVHN dataset. As for the ImageNet dataset, due to its larger size, the preserved frequencies must also be increased. We nearly double the values for $D_A$ and $D_P$, but we keep$\delta$ unchanged because the range of phase spectrum variation is$[0,2\pi]$, which is independent of the image size. Figure 5 shows the performance of suboptimal choices on the CIFAR-10 dataset; even with suboptimal results, our method still surpasses the state-of-the-art.
## Q5: Perceptual Quality Metrics
Adversarial purification is strictly a classification task and does not concern itself with whether it will introduce artifacts. Previous methods only evaluated experimental metrics such as Standard Accuracy and Robust Accuracy. We also calculate FID and LPIPS (https://bashify.io/i/ZYZsP5).
## Q6: Unspecified Subset Size
The size of the subset is fixed and remains consistent with previous methods. Considering that the subset may vary, we select multiple subsets for the experiments and report the experimental errors. | Summary: This study discovered that the damage caused by adversarial perturbations tends to increase monotonically with the rise in frequency. Nevertheless, existing purification efforts impact both low-frequency and high-frequency components. Based on this finding, this study retains the low-frequency information of the input image in the frequency domain of x0|t in the reverse phase and restores x_{t-1} using it. Experiments have confirmed that this approach can effectively enhance the performance of current purification.
Claims And Evidence: The work provides sufficient theoretical or experimental support for each theory and viewpoint. Among them, I have some doubts about the experiment in Figure 1. How does the vertical axis of this graph reflect this damage? Is it the Var calculated in Section 3? If they use aligned Var for evaluation, can they plot the final purification damage and attack damage of diffusion at the same time? This will prove the author's point more intuitively than theoretical arguments. If it is not an aligned evaluation criterion, can the author make the comparison as described above?
Methods And Evaluation Criteria: The evaluation method used in this work is a commonly used evaluation strategy for purification and meets the experimental requirements.
Theoretical Claims: 1. Regarding the author's assertion that adversarial perturbations “increase monotonically” in the frequency domain, it is noted that the article [1] advances a perspective: “We demonstrate that adversarial examples are neither high frequency nor low frequency phenomena.” Does This present a contradiction to this work's view. How does the author reconcile these disparate views and phenomena?
[1] Maiya S R, Ehrlich M, Agarwal V, et al. A frequency perspective of adversarial robustness[J]. arXiv preprint arXiv:2111.00861, 2021.
2. Var in Equation 3 appears to be monotonically decreasing with respect to t, because it has a linear relationship with sqrt(alpha_t) and alpha_t, and alpha_t monotonically decreases with respect to t. This does not match the author's relationship, so please check and explain.
Experimental Designs Or Analyses: The experimental analysis of this paper is effective and reasonable
Supplementary Material: I reviewed the paper's supplementary experiments on more datasets and the proof of the theorems in the paper.
Relation To Broader Scientific Literature: The authors' findings on the characteristics of adversarial attacks in the frequency domain may be generalizable and may guide a wide range of purification or AT learning processes.
Essential References Not Discussed: There are some missing references on both diffusion-model based adversarial robustness:
[1] Zhang J, Dong P, Chen Y, et al. Random Sampling for Diffusion-based Adversarial Purification[J]. arXiv preprint arXiv:2411.18956, 2024.
[2] Maiya S R, Ehrlich M, Agarwal V, et al. A frequency perspective of adversarial robustness[J]. arXiv preprint arXiv:2111.00861, 2021.
[3] Chen H, Dong Y, Wang Z, et al. Robust classification via a single diffusion model[J]. arXiv preprint arXiv:2305.15241, 2023.
[4] Mei H, Dong M, Xu C. Efficient Image-to-Image Diffusion Classifier for Adversarial Robustness[J]. arXiv preprint arXiv:2408.08502, 2024.
Among them, paper [1] is also an optimization algorithm for diffusion purification, and paper [2] also discusses the characteristics of adversarial attacks in the frequency domain. Papers [3] and [4] show another possibility of using diffusion models to improve adversarial robustness. Adding these works can more comprehensively demonstrate the research progress in this field.
Other Strengths And Weaknesses: The paper's discussion of the ideas, method and its structure are clear.
This method has a significant improvement in performance on various data sets.
Other Comments Or Suggestions: No other comments
Questions For Authors: A more detailed discussion of the frequency domain characteristics of adversarial perturbations and a comparison with diffusion perturbations.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your constructive feedback, which will enhance the completeness and persuasiveness of our article.
## Claims And Evidence
Figure 1 shows the extent of damage caused by adversarial perturbations to the phase spectrum and amplitude spectrum of images at different frequency components. The vertical axis represents the difference between the amplitude spectrum and phase spectrum of original clean samples and adversarial samples. It is not the Var mentioned in Section 3. To illustrate our theory in Section 3 more intuitively, we conduct experiments using Var as the vertical coordinate, and we provide the experimental results from low frequency to high frequency: (https://bashify.io/i/tBXTqh). These results are consistent with our theoretical analysis. Additionally, we find that the phase spectrum is more easily damaged compared to the amplitude spectrum, indicating the importance of preserving the phase spectrum during the purification process. Regarding final purification damage and attack damage of diffusion, we calculate the mean of different frequency variations for some samples. The experimental results are as follows: (https://bashify.io/i/CQpvZD). We observe that the trends of attack damage of diffusion and purification damage are consistent, which also supports our claims.
## Different experimental conclusions
We cite this paper in the second paragraph of the Introduction and briefly claim the differences between these methods and ours. Here, we elaborate on the distinctions between this paper and our method. First, there is a difference in the measurement approach. This paper defines a Perturbation Gradients $\frac{dy}{d\delta}$ and observes its variation with frequency using the **DCT** decomposition. Therefore, more accurately, the conclusion of this paper should be that Perturbation Gradients are neither high frequency nor low frequency phenomena, while we directly measure the differences between the amplitude and phase spectrum values of adversarial samples in the frequency domain and those of normal samples. Additionally, the decomposition method we use is the **DFT**, which can directly compute the phase spectrum of images. The importance of the phase spectrum has been validated in many papers, such as (Chen et al., CVPR2021) and (Zhou et al., ICML2023).
## Monotonicity of Equation 3
The RHS being monotonically decreasing requires both coefficients to be greater than 0. However, under the assumptions we derived, the coefficient of the first term is less than 0. However, this assumption is somewhat strong, so we re-derive part of the conclusions in the proof. Specifically, we re-derive the first moment of the amplitude spectrum:
$$
\mathbf{x}_t(u,v) = \sqrt{\overline{\alpha}_t}\mathbf{x}_0(u,v)+\sqrt{1-\overline{\alpha}_t}\mathbf{\epsilon}(u,v)=
\underbrace{\mathfrak{R}\mathfrak{e}(\sqrt{\overline{\alpha}_t}\mathbf{x}_0(u,v))+\mathfrak{R}\mathfrak{e}(\sqrt{1-\overline{\alpha}_t}\mathbf{\epsilon}(u,v))}_R+i\underbrace{(\mathfrak{I}\mathfrak{m}(\sqrt{\overline{\alpha}_t}\mathbf{x}_0(u,v))+\mathfrak{I}\mathfrak{m}(\sqrt{1-\overline{\alpha}_t}\mathbf{\epsilon}(u,v)))}_I
$$
$R\sim\mathcal{N}(\mathfrak{R}\mathfrak{e}(\sqrt{\overline{\alpha}_t}\mathbf{x}_0(u,v)),\frac{1-\overline{\alpha}_t}{2})$ and $I\sim\mathcal{N}(\mathfrak{I}\mathfrak{m}(\sqrt{\overline{\alpha}_t}\mathbf{x}_0(u,v)),\frac{1-\overline{\alpha}_t}{2})$.
We can see that the means of the real part and the imaginary part are different, the variances are the same, and they are independent of each other. Therefore, the amplitude $|\mathbf{x}_t(u,v)|$ follows a **Rice distribution**. "Therefore, we can utilize some known conclusions.With the assumption of $SNR_t>1$ in the Theorem 3.4. We can obtain the following conclusions:
$$
\mathbb{E}(|\mathbf{x}_t(u,v)|)\approx \nu+\frac{\sigma^2}{2\nu}=\sqrt{\overline{\alpha}_t}|\mathbf{x}_0(u,v)| + \frac{1-\overline{\alpha}_t}{4\sqrt{\overline{\alpha}_t}|\mathbf{x}_0(u,v)|}
$$
For $\mathbb{E}(|\mathbf{x}_t(u,v)|^2)$ we still use the conclusion derived from Equation 40. We re-derive $Var(\Delta A_t(u,v))$as follows:
$$
\begin{aligned}
Var(\Delta A_t(u,v))
&\approx \overline{\alpha}_t |\mathbf{x}_0(u,v)|^2+(1-\overline{\alpha}_t) + (\sqrt{\overline{\alpha}_t}|\mathbf{x}_0(u,v)| + \frac{1-\overline{\alpha}_t}{4\sqrt{\overline{\alpha}_t}|\mathbf{x}_0(u,v)|})^2\\
&=\frac{1-\overline{\alpha}_t}{2} -\frac{(1-\overline{\alpha}_t)^2}{16|\mathbf{x}_0(u,v)|^2\overline{\alpha}_t}
\end{aligned}
$$
It is clear that the RHS is monotonically increasing with respect to $t$.
## Discussion
The characteristics of adversarial perturbations in the frequency domain are that the values of high-frequency components are greater than those of low-frequency components. Diffusion perturbations are normal Gaussian noise, and their amplitude and phase spectrum distributions in frequency domain conform to a normal distribution.
## Missing Reference
We will add them in our paper. | Summary: The paper focuses on adversarial defense methods, particularly addressing challenges in accurately and quickly calculating gradients, which is crucial for evaluating the effectiveness of defense mechanisms. The authors propose a method that significantly outperforms other approaches in terms of both standard and robust accuracy. The paper also explores the sensitivity of their defense method to the number of denoising steps in the surrogate process, providing experimental analysis to support their findings.
Claims And Evidence: Yes, the claims made in the submission appear to be supported by clear and convincing evidence.
Methods And Evaluation Criteria: Yes
Theoretical Claims: Yes
Experimental Designs Or Analyses: Yes
Supplementary Material: Yes
Relation To Broader Scientific Literature: The paper extends the scientific literature by combining insights from training-based and diffusion-based purification methods while introducing a frequency domain perspective to enhance robustness against adversarial attacks. This aligns with and advances prior findings in the field.
Essential References Not Discussed: The references cited in the paper appear to be adequate for supporting the key contributions and findings.
Other Strengths And Weaknesses: ## Strenghts
1. This paper addresses a critical challenge in adversarial defense: improving both standard accuracy and robust accuracy against adversarial attacks. The experimental results show substantial improvements, such as a 15.04% increase in standard accuracy and a 41.01% increase in robust accuracy on the SVHN dataset, which is a notable advancement over existing methods.
2. The paper is well-structured and clearly presents its methodology, experiments, and results.
3. The paper provides extensive experimental validation across multiple datasets and attack scenarios, demonstrating the robustness and generalizability of the proposed method.
## Weaknesses
1. The paper demonstrates improvements in adversarial robustness, but there is limited discussion on computational efficiency. Since diffusion-based models are already computationally expensive, adding frequency-domain modifications might introduce further overhead.
2. Some theoretical claims, while backed by empirical observations, could benefit from more formal proofs (e.g., The assumption that low-frequency components are less affected by adversarial perturbations is based on empirical findings, but no rigorous theoretical proof is provided).
Other Comments Or Suggestions: See weaknesses.
Questions For Authors: 1. Could the authors elaborate on the computational efficiency of the proposed method, particularly in terms of training and inference time compared to existing methods?
2. Can the authors provide a principled way to select optimal hyperparameters for different datasets or attack settings?
3. Does the method generalize well to real-world scenarios, such as adversarial examples crafted under distribution shifts or physical-world attacks (e.g., adversarial patches)?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback, which will enhance the integrity of our paper. To address your concerns, we have provided additional theoretical proofs and experiments. We sincerely hope our response resolves the concerns raised, and we would greatly appreciate reconsideration of the score.
## W1&Q1 Computational Efficiency
Adversarial purification is a test-time defense that involves no training process, only an inference process. For example, for the ImageNet and CIFAR-10 datasets, we directly use the pre-trained weights of the corresponding diffusion models. In terms of inference time, our approach requires just one additional DFT/IDFT pair per time step, with minimal computational impact as the Fourier transform operations contribute negligible overhead. The inference time for various methods are as follows:
| Method| Lee&Kim,2023| Baietal.,2024|Nieetal.,2022|Ours|
| :---: | :---: | :---: |:---: |:---:|
| Time(s) | 19.09±0.04 |6.43±0.13 |4.26±0.14 |4.29±0.09|
## W2 Theoretical Proof
We provide a simple proof regarding how adversarial perturbations primarily disrupt the high-frequency components of an image.
The original image is denoted as $x$, the adversarial perturbation as $\delta$ and $F$ represents the Fast Fourier Transform.
The power spectrum characteristics of natural images follow the distribution as follows, where $\alpha>1$:
$$
|F_x(w)|^2\propto \frac{1}{w^{\alpha}}\\
$$
The definition of the noise to signal ratio (NSR) is as follows:
$$
NSR(w)=\frac{|F_{\delta(w)}|^2}{|F_x(w)|^2}
$$
Combining the above two formulas leads to the following relation:
$$
NSR(w)\propto w^{\alpha}|F_{\delta}(w)|^2
$$
We choose $\alpha=2$ , and we assume that the attack objective is to maximize the NSR:
$$
\max\sum_{w}NSR(w)=\max\sum_{w} w^{2}|F_{\delta}(w)|^2
$$
The above optimization objective should be satisfied when it is maximized:
$$
|F_{\delta}(w)|^2\propto w^2
$$
We find that the power of the perturbation in the frequency domain is proportional to the square of the frequency, meaning that the perturbation tends to disrupt the high-frequency information of the image.
## Q2 A principled way to select optimal hyperparameters
We first observe Figure 1 to determine the approximate range where adversarial perturbations cause minimal damage. Within this range, for smaller datasets, we tend to use grid search to find the optimal hyperparameters. For larger datasets, such as ImageNet, we proportionally expand the optimal hyperparameters found on the smaller datasets to search for the optimal hyperparameters.
Specifically, directly multiply by 2 or 3 at the same time and then perform a grid search around it.
## Q3 Generalization to real-world scenarios
To demonstrate the generalizability of our method, we conduct relevant experiments in the context of adversarial patch. The method for constructing adversarial patch is described in [1]. The dataset is ImageNet, and the classifier is ResNet50. Since different attack methods do not affect standard accuracy, we only test robust accuracy. The results are as follows:
| Method| Lee&Kim,2023| Baietal.,2024|Nieetal.,2022|Ours|
| :---: | :---: | :---: |:---: |:---:|
| Robust Acc(%) |74.219 |82.812 |78.906 |83.594|
[1] Brown T B, Mané D, Roy A, et al. Adversarial patch[J]. arXiv preprint arXiv:1712.09665, 2017.
---
Rebuttal Comment 1.1:
Comment: The authors address my concerns. I am raising my rating to 3.
---
Reply to Comment 1.1.1:
Comment: We are glad that our response address your concerns. Thank you for your review and recognition. | Summary: The paper proposes a novel adversarial purification method called FreqPure through frequency domain analysis and theoretical proof. The core idea of the method is to provide effective prior guidance for image purification by selectively retaining low-frequency spectral information. Experimental results demonstrate its significant advantages in eliminating adversarial perturbations while preserving semantic information.
Claims And Evidence: The article conducts comparative experiments with other methods, providing quantitative experimental data and visualization results.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria can effectively address the current problem and provide a reasonable basis for measuring related applications.
Theoretical Claims: The overall proof logic of the paper is rigorous, and the derivation process is reasonable. Specifically, for Theorem 3.2, "the variance of the difference of amplitude at time-step t between the clean image x0 and the noisy image xt" is provided with a complete derivation process in the appendix. This indicates that the authors have provided a detailed and rigorous explanation of the proof for Theorem 3.2, ensuring its correctness and reliability.
Experimental Designs Or Analyses: I reviewed the main experimental results, ablation experiments, and supplementary experiments in the appendix. The experimental setup follows the settings of previous work. However, in the ablation experiments, I am puzzled as to why the results of ASE+PSE without PSP are not included.
Supplementary Material: I have read the entire supplementary material. The appendix provides detailed supplementary explanations and clarifications for the content of the main text, enabling readers to better understand the entire work.
Relation To Broader Scientific Literature: The method proposed in this paper decomposes images into amplitude and phase spectra and explores how to perform image restoration on adversarial images. It represents another approach to improving model robustness besides adversarial training.
Essential References Not Discussed: The key contribution of the paper is proposing a purification method that can eliminate adversarial perturbations while maximizing the preservation of the content and structure of the original image. The approach from the frequency domain is a novel perspective, and the authors provide mathematical proofs for its rationality and effectiveness.
Other Strengths And Weaknesses: Strengths:
(1) This paper analyzes the gap between adversarial images and original images from the frequency domain perspective, and provides derivations and proofs of the related formulas.
(2) In the experiments, the proposed method shows significant improvements over the baselines, demonstrating its effectiveness.
(3) Extensive visualization results effectively confirm that the purified images are closer to the original images.
Weaknesses:
Some details in the experimental setup and ablation studies are not clearly explained.
Other Comments Or Suggestions: In line 256, you intended to reference Algorithm 1 in Section 4.3, but the PDF displays 4.3. Please verify whether the reference number is correct.
Questions For Authors: (1) In the ablation study, the results for ASE+PSE without PSP are not included.
To strengthen the validity of the conclusions, the ablation study should include a comparison of ASE+PSE without PSP.
(2) Regarding the experimental setup, in Algorithm 1, t = t*, …, 1 is mentioned, while in the experiments, values like t* = 0.2, t* = 0.3, and t* = 0.4 are used. This creates confusion for readers about the specific meaning of t*.
Additionally, the evaluation metrics, Standard Acc and Robust Acc, are not clearly defined. While it is mentioned that "standard accuracy is calculated on clean images, and robust accuracy is assessed on adversarial examples," a more detailed explanation of how these metrics are computed and their significance would improve clarity.
(3) In the experimental tables, one baseline method uses only half the number of iterations due to computational overhead. However, the number of iterations is a critical factor for PGD, and reducing it by half may weaken the strength of the PGD attack, leading to an unfair comparison of Robust Acc.
To address this, either a reasonable explanation should be provided for the reduced iterations, or experimental data with the same number of iterations should be included to ensure a fair comparison.
(4) While the method’s effectiveness is demonstrated in terms of Standard and Robust Acc, it is also important to evaluate the efficiency of the proposed method.
A comparison with other methods in terms of memory and computational overhead would be a valuable addition to the experimental section.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your careful review and valuable feedback. We have made every effort to address your concerns. We believe that investigating diffusion model-based adversarial purification from a frequency-domain perspective enables further research.
## Q1 Results for ASE+PSE without PSP
Thank you for your careful review. We indeed overlook this situation, and we have added the relevant experiments. The complete ablation experiment table is as follows:
|ASE |PSP |PSE |Standard| Robust|
| :---: | :---: | :---:|:---: |:---:|
|x |x |x | 87.89 |53.52|
|✓ |x |x |90.82 |87.11|
|x |x |✓ | 94.14 |79.30|
|x |✓ | x | 94.53 |80.47|
|✓ |x |✓ |93.36|87.50|
|✓ |✓ |x | 94.53 |88.28|
The last line, ASE+PSE, represents our complete method.
## Q2 Meaning of t*
In our paper, $t^*$ does not refer to the optimal value but rather to a hyperparameter. The adversarial purification based on diffusion models can be roughly divided into two stages. The first stage is the forward process, where noise is added; the role of $t^*$ is to control the intensity of the noise added. The second stage is the denoising process. Here, we select three different values for DiffPure on the ImageNet dataset to explore the best performance of the method under different hyperparameters for comparison.
## Q2 A detailed explanation of evaluation metrics
The purpose of adversarial purification is to remove the adversarial perturbations from adversarial samples so that these samples can be classified correctly as much as possible, while minimizing the impact on the classification of normal samples.
Let normal samples be represented by $x$ and adversarial samples by $x_{\text{adv}}$. The purification method is denoted as $\text{AP}$, and the classifier is represented by $f$.
For **Standard Accuracy**, we select a batch of normal samples and calculate whether the predicted labels $f(x)$ and $f(\text{AP}(x))$ are consistent. The Standard Accuracy is then computed as the number of consistent predictions divided by the total number of normal samples.
For **Robust Accuracy**, we select a batch of adversarial samples and calculate whether the predicted labels $f(x_{\text{adv}})$ and $f(\text{AP}(x_{\text{adv}}))$ are consistent. The Robust Accuracy is computed as the number of consistent predictions divided by the total number of adversarial samples.
The number of samples we select is consistent with previous methods, and we conduct multiple rounds of experiments while calculating the errors.
## Q3 Number of iterations about Baietal.,2024
The more iterations there are, the stronger the attack effect becomes, which leads to a decrease in robust accuracy. For the method proposed by Bai et al. (2024), the robust accuracy with half the number of iterations is lower than that of other methods. Therefore, the robust accuracy with the complete number of iterations will be even lower. In Table 1, the robust accuracy decreases from 49.22% to 48.92% under the full number of iterations.
## Q4 Memory and computational overhead
Compared to other methods, our approach introduces an additional discrete Fourier transform and inverse discrete Fourier transform only once at each time step, and the time introduced by the Fourier transform is almost negligible. The inference times and memory used for various methods are as follows:
| Method| Lee&Kim,2023| Baietal.,2024|Nieetal.,2022|Ours|
| :---: | :---: | :---: |:---: |:---: |
| Time(s) | 19.09±0.04 |6.43±0.13 |4.26±0.14 |4.29±0.09|
| Memory(GB)| 0.56| 0.70 | 0.56 | 0.56|
## Suggestions: false reference number
Thank you for noticing this issue. We have made corrections and will check and correct other typos. | null | null | null | null |
Not All Wrong is Bad: Using Adversarial Examples for Unlearning | Accept (spotlight poster) | Summary: This paper proposes an algorithm for machine unlearning with an interesting finding.
The authors observe that fine-tuning models on adversarial examples closest to the corresponding forget samples can
avoid drastic changes to the global behavior of the model.
Experimental results on CIFAR-10 show promising performance compared to previous methods, like l1-Sparse, and SalUn.
Claims And Evidence: The claim is empirically supported.
Methods And Evaluation Criteria: (1) Fine-tuning on adversarial examples can benefit unlearning performance. This is an interesting finding. However, there is no theoretical analysis, and the experiments only focus on small data, like CIFAR-10.
It would be better to validate the generality of the finding with large-scale data, like CIFAR-100 and ImageNet1K.
Theoretical Claims: N/A
Experimental Designs Or Analyses: The ablation is sound.
Supplementary Material: There is no supplementary material.
Relation To Broader Scientific Literature: The paper presents an interesting finding that fine-tuning models on adversarial examples benefits the unlearning performance. It is possible to extend the idea into multi-modal models.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths:
(1) The idea of using adversarial examples for unlearning is interesting.
(2) The proposed algorithm is simple to implement.
Other Weaknesses:
(1) The paper shows that the proposed method can also work well for the adversarially robust models which are trained with controlled Lipschitz constant.
Currently, adversarial training is the most effective method for adversarial robustness. It uses adversarial examples as additional training data while the proposed method finetunes models on adversarial examples.
It is interesting to know if the proposed method can work well on these models, like [ref1] and [ref2], regarding adversarial robustness with auto-attack.
[ref1] Decoupled Kullback-Leibler Divergence Loss. NeurIPS 2024.
[ref2] Better Diffusion Models Further Improve Adversarial Training. ICML 2023.
Other Comments Or Suggestions: N/A
Questions For Authors: See above weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their insightful comments. We are excited about the reviewer’s acknowledgment of our interesting approach toward unlearning. Below are our responses to their questions and concerns:
**Theoretical guarantees:** Although most prior SOTA methods in approximate unlearning are not accompanied by theoretical guarantees, we prove a theorem (https://shorturl.at/ChU0s) that derives an upper-bound on the 2-norm of the difference of the parameters of the unlearned model and the retrained model (which are gold-standard for unlearning). To prove this theorem, we make assumptions that are common in the certified unlearning literature. Our derived upper-bound implies enhanced effectiveness of our method when:
1. The distance between the forget sample and its corresponding adversarial example becomes smaller.
2. The Lipschitz constant of the model becomes smaller.
3. The quality of the adversarial example becomes stronger (causes a larger loss for the correct label).
4. The adversarial example transfers better to the retrained model.
5. The retrained model generalizes better to the (clean) unseen samples.
Hence, the proved theorem also justifies our earlier intuitions about the need for good quality adversarial examples that are as close as possible to the original samples (which is the goal of Algorithm 1), and also justifies that by fine-tuning the model on these adversarial examples, we can derive an upper-bound on the distance between the retrained model and the unlearned one. We believe that the presented empirical results, along with the provided theorem, will motivate further theoretical studies in future work.
**Larger model and dataset:** we have performed our experiments on VGG19 models (12 times larger than ResNet-18) trained on the Tiny ImageNet dataset (200 classes). We evaluated our prior observations similar to Figure 1 in our manuscript that shows fine-tuning the trained models on their adversarial examples does not lead to catastrophic forgetting (https://tinyurl.com/5n6f6pxr). We also compared the unlearning methods and created the tables (https://tinyurl.com/2wwwbbfp) corresponding to Tables 1 and 2 in our manuscript.
In addition, as requested by another reviewer, we performed a comparison to a SOTA certified unlearning method (Zhang et al. (ICML 24)). The comparison is done only on the setting where $D_R$ is available because this method does not work when there is no access to $D_R$. Our results (https://shorturl.at/Q19RQ) show that certified unlearning methods such as this, though accompanied with theoretical guarantees, are not capable of outperforming SOTA in approximate unlearning, including AMUN. We believe that this is the case due to their assumptions not holding for deep learning models used in practice.
**Adversarially-trained models:** To evaluate whether our unlearning method works with models trained using adversarial training, we performed our analysis on ResNet-18 models trained using TRADES (as it was more convenient for us to use in the rebuttal period) on CIFAR-10. We performed the experiments for unlearning 10% of the dataset in both cases where $D_R$ is accessible and not. As the results (https://tinyurl.com/43bxcafb) show, in both settings AMUN is effective in unlearning the forget samples and achieving a low gap with the retrained models. This gap is obviously smaller when there is access to $D_R$.
We hope you find our responses satisfactory, and consider raising your score towards acceptance. We are happy to engage during the rebuttal period, and thank you again for your valuable comments and suggestions in improving our paper! | Summary: This paper proposes the Adversarial Machine UNlearning (AMUN) method, which reduces the prediction confidence of the model for the forget samples by fine-tuning the model on adversarial examples, while maintaining the accuracy of the model on test samples. Experimental results demonstrate that AMUN outperforms previous state-of-the-art methods in image classification tasks and performs remarkably well even in the face of membership inference attacks.
Claims And Evidence: The authors clearly expound the core finding of this paper through two observations and corresponding experiments, that is, fine-tuning the trained models on adversarial examples corresponding to a subset of the training data does not lead to significant deterioration of the model's accuracy. Based on these findings, the authors propose the AMUN method, with clear logical expression and sufficient persuasiveness.
Methods And Evaluation Criteria: Overall, th eproposed methods are supported by empirical data to a certain extent. The authors conducted a series of experiments on the CIFAR10 dataset for ResNet18 in the task of image classification.
Theoretical Claims: The method proposed in this paper is based on a heuristic methodology, and no theoretical claims are provided.
Experimental Designs Or Analyses: The experimental designs can demonstrate the effectiveness of the proposed method in image classification tasks. However, as the ICML is a top-tier conference in the field of machine learning, it is recommended that the authors supplement investigations on more tasks, such as text classification, to further verify the broad effectiveness of the proposed adversarial-example-based machine learning unlearning technique.
Supplementary Material: The supplementary material presents additional experimental results and findings, all referenced within the main paper, offering further evidence to support the conclusions.
Relation To Broader Scientific Literature: N.A.
Essential References Not Discussed: N.A.
Other Strengths And Weaknesses: Strengths:
1、The proposed AMUN method is innovative. It achieves machine unlearning by fine-tuning the model with adversarial examples, opening up a new way to solve this problem. Different from previous methods, it cleverly exploits the relationship between adversarial examples and the model's decision boundary. It can not only reduce the prediction confidence of forget samples but also avoid drastic changes to the model's global behavior.
2、This paper introduces the proposed AMUN method through two observations, which is highly persuasive.
3、This paper is easy to follow.
Weaknesses:
1、The experimental designs in this paper can demonstrate the effectiveness of the proposed AMUN method in image classification tasks on the CIFAR10 dataset. It is recommended that the authors supplement the experimental results on more backbone networks (such as VGG and ViT) and more datasets (such as ImageNet). Meanwhile, as the ICML is a top-tier conference in the field of machine learning, it is advisable for the authors to conduct additional research on more tasks, such as text classification, to further verify the broad effectiveness of the proposed adversarial-example-based machine learning unlearning technique.
2、The proposed method does not seem to exhibit obvious superiority in the results presented in Tables 1&2. In particular, it is significantly weaker than the existing methods in terms of the two metrics of UNLEARN ACC and RETAIN ACC.
Other Comments Or Suggestions: Minor: In Tables 1&2, the expressions of "SALUN" and "Salun" are inconsistent.
Questions For Authors: See Weakness
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their insightful comments. We are excited about the reviewer’s acknowledgment of our interesting approach toward unlearning. Below are our responses to their questions and concerns:
**W1. Additional experiments:** We have performed new experiments on VGG19 models (12 times larger than ResNet-18) trained on the Tiny ImageNet dataset (200 classes). We evaluated our prior observations similar to Figure 1 in our manuscript that shows fine-tuning the trained models on their adversarial examples does not lead to catastrophic forgetting (https://tinyurl.com/5n6f6pxr). We also compared the unlearning methods and created the tables (https://tinyurl.com/2wwwbbfp) corresponding to Tables 1 and 2 in our manuscript. We also performed a new experiment on the effectiveness of unlearning methods on the models trained with adversarial training, which, in summary (https://tinyurl.com/43bxcafb), shows that our method is even effective for unlearning in models trained with adversarial training. Please see the response to reviewer qDdb for details.
In addition, as requested by another reviewer, we performed a comparison to a SOTA certified unlearning method (Zhang et al. (ICML 24)). The comparison is done only on the setting where $D_R$ is available because this method does not work when there is no access to $D_R$. Our results (https://shorturl.at/Q19RQ) show that certified unlearning methods such as this, though accompanied with theoretical guarantees, are not capable of outperforming SOTA in approximate unlearning, including AMUN. We believe that this is the case due to their assumptions not holding for deep learning models used in practice.
**W2. Interpreting our results:** Please note that for UNLEARN ACC the goal is to minimize the difference with the corresponding value from the Retrained models. As tables 1&2 show, AMUN achieves the smallest difference in all scenarios. Similarly, for RETAIN acc in table 1 AMUN achieves the smallest difference with the corresponding value for the retrained models.
For table 2, the reason that some other methods achieve smaller gap for RETAIN acc is the following: when other methods perform poorly in the absence of $D_R$, they choose the smallest available learning rate during the hyper-parameter search which basically allows them to do nothing during the fine-tuning phase. Therefore, they almost always return the same model as the original model, *without performing any unlearning*, and hence achieve an accuracy of 100% on both $D_R$ and $D_F$. But notice that in these cases, the MIA score does not decrease, reiterating that no unlearning has occurred. We will make this point clear in our future revisions.
---
Rebuttal Comment 1.1:
Comment: Thanks for the author's response. After carefully reviewing the author's rebuttal, most of my concerns have been addressed. I decide to maintain my score.
---
Reply to Comment 1.1.1:
Comment: We would like to thank the reviewer for reviewing our rebuttal and supporting our work. | Summary: The paper proposes Adversarial Machine UNlearning (AMUN), a novel method for efficient machine unlearning in classification models. The core idea is to leverage adversarial examples corresponding to the forget set to fine-tune the model, thereby reducing its confidence on $D_F$ while preserving test accuracy. By fine-tuning on adversarial examples of the forget set (with incorrect labels), AMUN avoids global model degradation. Moreover, AMUN mimics the behavior of models retrained from scratch on the retain set to achieve comparable unlearn/retain/test accuracy and resistance to membership inference attacks (MIAs). The method also be generalized to adversarially robust models and handles continuous unlearning scenarios effectively.
Claims And Evidence: "No catastrophic forgetting": The claim relies on limited datasets (CIFAR-10/ResNet-18); larger-scale experiments (e.g., Tiny-ImageNet or ImageNet) are needed for broader validation. Furthermore, it lacks theoretical guarantees.
Methods And Evaluation Criteria: (1) While PGD-50 with $l_2$ norm bounds in Algorithm 1 is logical, adaptive $\epsilon$ selection lacks theoretical justification. And the reliance on PGD-50 increases computational cost.
(2) The fine-tuning strategy is intuitive but sensitive to hyperparameters (e.g., learning rate, epochs).
Theoretical Claims: The paper does not provide formal theoretical analysis and guarantees. One of the key assumptions (e.g., adversarial examples belonging to the model’s "natural distribution") is empirically validated but lacks theoretical grounding. The relationship between adversarial example strength ($\epsilon$) and unlearning efficacy seems empirical, not theoretical.
Experimental Designs Or Analyses: (1) The impact of $\epsilon_{init}$ in Algorithm 1 and fine-tuning epochs is not well explored.
(2) Limited to CIFAR-10/ResNet-18; no cross-dataset/architecture validation (e.g., ImageNet, ViTs).
Supplementary Material: I have reviewed the entire supplementary material.
Relation To Broader Scientific Literature: This work extends approximate unlearning methods by incorporating adversarial examples. The connections to adversarial training and Lipschitz-constrained models shows AMUN’s compatibility with robust models. Additionally, leveraging RMIA rather than MIA leads to more rigorous evaluations.
Essential References Not Discussed: Chen, M., et al. "Boundary Unlearning." (CVPR 2023): Machine unlearning is achieved by the shift decision space of the DNN model, which is quite related to adversarial example training and forgetting.
Other Strengths And Weaknesses: **Strengths:**
AMUN provides an alternative to machine unlearning with clear explanations of the method and its motivations.
**Weaknesses:**
The experiments are restricted to CIFAR-10 and ResNet-18 in classification tasks, raising questions about the method's applicability to larger datasets, other architectures and tasks (like Diffusion or LLM Generation). Furthermore, the paper lacks formal theoretical guarantees and thorough explorations on how variations in adversarial attack parameters affect the results.
Other Comments Or Suggestions: N/A
Questions For Authors: Following the previouly mentioned Strengths And Weaknesses, the questions are:
(1) Can AMUN maintain its efficacy on larger architectures (e.g., ViTs) or datasets like ImageNet?
(2) Could AMUN be combined with differential privacy or influence functions to provide certified unlearning guarantees?
(3) How does AMUN compare to certified unlearning methods (e.g., Sekhari et al., 2021) in terms of privacy-utility tradeoffs?
(4) How do choices of attack step (PGD-50 vs. PGD-10 or PGD-20), and fine-tuning epochs affect results? Are there optimal settings for different datasets or architectures?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank the reviewer for their insightful comments. We are excited about the reviewer’s acknowledgment of the novel connections our work makes with adversarial training and Lipschitz-constrained models, in addition to a more rigorous analysis by leveraging RMIA rather than MIA. Below are our responses to their questions and concerns:
**Q1 + W1. Additional experiments:** We have performed our experiments on VGG19 models (12 times larger than ResNet-18) trained on the Tiny ImageNet dataset (200 classes). We evaluated our prior observations similar to Figure 1 in our manuscript that shows fine-tuning the trained models on their adversarial examples does not lead to catastrophic forgetting (https://tinyurl.com/5n6f6pxr). We also compared the unlearning methods and created tables (https://tinyurl.com/2wwwbbfp) corresponding to Tables 1 and 2 in our manuscript. We also performed a new experiment on the effectiveness of unlearning methods on the models trained with adversarial training (https://tinyurl.com/43bxcafb). Please see the response to reviewer qDdb for details.
**W2. Theoretical guarantees:** Although most prior SOTA methods in approximate unlearning are not accompanied by theoretical guarantees, we proved a theorem (https://shorturl.at/ChU0s) that derives an upper-bound on the difference between the retrained models and the models unlearned using AMUN by making assumptions that are common in the certified unlearning literature. The proved theorem justifies our earlier intuitions about the need for good quality adversarial examples that are as close as possible to the original samples. For more discussions on the proved theorem, please refer to the discussion with reviewer qDdb.
**Missing related work:** The work of Chen et al. is included among our baseline methods in all the experiments (BS for Boundary Shrink). We also have a thorough discussion of the differences in our approach in Appendix A.
**Q2. Combination with DP:** We do not see any obvious reason why AMUN can not be combined with approaches like DP. However, AMUN exploits the robustness of the model, and this is known to have tensions with privacy (as studied in several works such as https://tinyurl.com/3573vmks, https://tinyurl.com/2n6vu3tz, https://tinyurl.com/2cdptafr). We leave a detailed investigation of this to future research.
**Q3. Comparison to certified methods:** While the work of Sekhari et al. is an interesting addition to our discussion of certified unlearning methods in the related works, it is important to note that this method (similar to most other certified methods) makes many assumptions that do not hold in general deep learning models. For example, Assumption 1 in Section 4 states that “for any (input) z, the function f(w, z) is strongly convex, L-Lipschitz and M-Hessian Lipschitz with respect to w” – this is not practical. Instead, we performed a comparison to another SOTA certified unlearning method (Zhang et al ICML 24) with milder assumptions. This method does not work without access to $D_R$. Our results (https://shorturl.at/Q19RQ) show that certified unlearning methods such as this, though accompanied with theoretical guarantees, are not capable of outperforming SOTA in approximate unlearning, including AMUN, due to their assumptions not holding for deep learning models used in practice.
**Q4. Choices of attack steps:** For the presented results on ResNet-18 and CIFAR-10, we used PGD-50. For our new results on Tiny Imagenet, we performed the experiments with both PGD-10 and PGD-20 and did not observe noticeable changes.
**Q4. Instability of fine-tuning:** Please note that all the prior methods include fine-tuning steps as part of their procedure.
1. RL fine-tunes on forget samples with a random label and the remaining samples ($D_R$).
2. Salun does the same fine-tuning as RL, but on a subset of model parameters.
3. FT and l1-sparse fine-tune the model on $D_R$.
4. GA fine-tunes on the forget samples in the reverse direction of gradient and fine-tunes on $D_R$.
5. BS fine-tunes on $D_R$ and an augmented set of samples for forget samples.
In our experiments, we did not observe more susceptibility to hyper-parameters compared to other methods. We also performed a fair comparison by choosing the hyper-parameters on a separate set of forget-samples and models and performed the experiments on different random seeds. To further investigate the stability of results to the number of epochs, we prepared two plots that show Avg. Gap for various number of epochs (https://shorturl.at/VSLzK). We concluded that AMUN stabilises after a few epochs and the number of epochs could even decrease, but we followed the same number of epochs that were used in prior works for fair comparisons.
We hope you find our responses satisfactory, and consider raising your score towards acceptance. We are happy to engage during the rebuttal period, and thank you again for your valuable comments and suggestions in improving our paper!
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' efforts in providing comprehensive analyses and additional experimental results to address the raised questions. From an empirical standpoint, the effectiveness and generalization of AMUN have been demonstrated. However, I still hold some concerns about theoretical claims and proofs. The theorem derives an upper bound on the parameter difference between the unlearned model and and the retrained model, where the bound explicitly incorporates adversarial example strength and model properties. But two issues may exist in the current theoretical part:
(1) The derivation relies on the convexity assumption of the loss landscape, whereas neural networks inherently exhibit non-convex optimization surfaces. Such non-convexity might induce parameter convergence to local minima rather than global optima to potentially violate the theoretical guarantees.
(2) The theorem does not theoretically justify whether adversarial examples belong to the model’s **"natural distribution"**. If adversarial examples lay outside the training data distribution, its effectiveness for unlearning may degrade due to distribution shift.
If authors could solve my concerns, I would be very pleased to increase my score.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for reviewing our rebuttal in detail and raising points that could lead to further clarification of our method. Below are our responses to their questions and concerns:
**Q1:** The assumption of convexity is common in prior works related to certified unlearning:
1. The work mentioned by the reviewer (Sekhari, A. et al. (NeurIPS 2021)), makes an even stronger assumption that the model is strongly convex (assumption 1 in Section 4).
2. Later works (Chien, E., (ICLR 2023)) make a similar assumption to bound the inverse Hessian of the loss with respect to the parameters (proof of Theorem 4.3 in Appendix A.7).
3. The SOTA method in certified unlearning (Zhang, B., et al. (ICML 24)), which we used in our comparisons (https://shorturl.at/Q19RQ), uses similar assumptions on bounding the inverse Hessian matrix, but they exploit the local convex approximation to derive the bound (Lemma 3.3 in Section 3). Their theoretical guarantees are based on other assumptions as well, such as the Lipschitz continuity of Hessian of the loss (Assumption 3.2 in Section 3) , which does not hold for neural networks, and also the size of the model parameters (Theorem 3.4 in Section 3).
That being said, our work is focusing on proposing an approximate unlearning method. Most prior methods in approximate unlearning, including the work mentioned by the reviewer (Chen, M., et al. (CVPR 2023)) and the current SOTA (Fan, C., et al. (ICLR 2024)) *do not provide any form of theoretical guarantees*. They only rely on empirical evaluation using (weaker, non-SOTA) membership inference attacks to verify the effectiveness of their method. It is noteworthy to say that although these methods lack theoretical guarantees they lead to better results than the certified methods in practice.
Not only are our results superior to those reported here, we also provide some analysis theoretically. This in itself exceeds the contributions made in several published papers, some of which have received spotlights.
While there is a mismatch between the theoretical frameworks used for analyses and practical deployments, these guarantees still provide useful information and hypotheses about the relevant factors influencing the quality of the unlearning methods in simpler settings. We believe that they will also motivate future research on extending the theoretical guarantees to the more general settings.
**Q2:** First, we would like to clarify that by mentioning that *"the adversarial examples belong to the natural distribution learned by the trained model"*, we do not mean that they belong to the distribution of the training data (which itself is an ongoing research thread on characterizing adversarial examples that are on-manifold or off-manifold for the underlying data distribution, e.g, https://tinyurl.com/2sw3vbcn). The adversarial examples that we compute are specific to the model. Once a model $M$ is trained on the training data, it imposes a distribution on the set of all possible samples. For a sample $(x,y)$ that belongs to the forgetset we find a perturbed version of $x$ i.e., $x’$, for which the model makes the prediction $(y’ \neq y)$. Therefore, from the model’s perspective (learned distribution by the model), the correct label for $x’$ is $y’$, which means that $(x’, y’)$ belongs to the *distribution that the model has imposed on the set of all possible samples*. Hence, although $y’$ is the wrong prediction for the sample $x’$, it matches the distribution learned by the model. However, this does not necessarily hold for a separately trained model because the sample $(x’, y’)$ is specifically crafted for model $M$.
To empirically evaluate that fine-tuning the model on adversarial examples does not lead to a distribution shift in the distribution that the original model imposes on the input space, we have plotted the confidence values on the test set and the remaining set before and after unlearning with AMUN. The results can be found here: https://tinyurl.com/4fp87wnj . As the resulting violin plot shows, after using AMUN the distribution of the confidence values for these subsets of the input space barely changes.
Thanks again, for your questions and engagement. We are happy to answer any more questions you have. | Summary: This article introduces AMUN, an unlearning method that uses adversarial examples to remove the influence of specific training samples from a trained model while preserving overall model accuracy. The key insight behind AMUN is that fine-tuning a model on adversarially modified versions of the forget set (DF) enables effective unlearning without requiring full retraining.
Claims And Evidence: The authors claim that AMUN Effectively Unlearns Data Without Significant Accuracy Drop
This is supported by: Tables 1 & 2 (AMUN achieves lower Avg. Gap than baselines). Still no comparison of computational efficiency.
Claim: AMUN Works Even Without Access to Retained Data (DR)
This is supported by: Table 2 (AMUN+SalUn achieves lowest Avg. Gap). But no experiments on larger datasets or different data types is performed.
Claim: AMUN Works for Adversarially Robust Models
This statement is supported by: Table 3 (AMUN achieves similar performance to retraining). But no analysis is shown on how robustness constraints are affecting the unlearning process.
Claim: AMUN Supports Continuous Unlearning
This is supported by: Figure 2 (AMUN-A outperforms other methods across multiple steps). But no discussion on computational cost of sequential unlearning is given by the authors.
Problematic or Unclear Claims
1. Computational efficiency: Claimed but not directly compared to baselines.
2. Scalability: No experiments on large models/datasets.
3. Generalization to other architectures: Only tested on ResNet-18 (CIFAR-10).
Methods And Evaluation Criteria: 1. Proposed Method (AMUN) uses Adversarial Examples for unlearning few samples.
2. Effectiveness Measured by Key Metrics
Test Accuracy (Test Acc) – Ensures model retains performance.
Forget AUC (FT AUC) – Measures how well forgotten data is removed.
Avg. Gap – Lower values indicate better unlearning.
3. Comprehensive Baseline Comparisons
Evaluated against Retrain, FT, RL, GA, BS, SALUN, AMUN+SalUn.
4. Tested on CIFAR-10 Dataset but authors should try experimenting with CIFAR-100 and other segmentation datasets as well.
5. Authors should try evaluation on deep architectures like ViTs or large-scale models.
Theoretical Claims: 1. This article lacks a formal proof of complete data removal.
2. No proof linking AMUN directly to better generalization. I suspect this is of utmost importance and the authors should consider showing this proof. This is an intuitive claim, but no theoretical analysis of effectiveness.
3. Observation 1 is very trivial.
Experimental Designs Or Analyses: 1. The article benchmarks AMUN against various unlearning baselines (e.g., Retrain, FT, RL, GA, BS, SALUN).
The results show AMUN’s advantage, statistical significance tests (e.g., hypothesis testing) are not reported, making it unclear if improvements are meaningful beyond noise.
2. The article evaluates performance in different settings (random forget 10% & 50%, adversarially robust models, continuous unlearning).
The continuous unlearning scenario (AMUN-A) is promising but lacks a long-term stability analysis—effects of multiple iterations on overall model performance are not well examined.
Supplementary Material: Yes, I have covered.
Relation To Broader Scientific Literature: This article builds on prior works in machine unlearning, particularly efficient unlearning methods that avoid full retraining.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: Strengths:
1. Authors introduce adversarial fine-tuning as an effective unlearning method. This is unique perspective in this field.
2. AMUN has an edge over existing unlearning methods in Avg. Gap and MIA resistance across various settings.
3. Avoids costly full retraining while maintaining competitive accuracy.
4. Demonstrates effectiveness in iterative unlearning.
Weaknesses:
1. The paper lacks rigorous theoretical guarantees for why adversarial fine-tuning effectively unlearns data.
2. Performance relies on the quality of adversarial examples, which can be computationally expensive and can go over several iterations. How does this guarantee that an adversarial example of a given sample will really unlearn? Have the authors conducted an analysis on gradient based maps or loss landscapes?
3. Results are primarily shown for CIFAR-10; effectiveness on larger, more complex datasets is unclear.
4. While various unlearning methods are tested, more recent and advanced SOTA techniques could have been included.
5. Authors should discuss about the limitations of this method.
6. A good block diagram is missing.
7. Some tables are too small to be seen properly. I can understand the space issues but a good adjustment of important tables should be done (eg. Table 3)
8. 5.2. Evaluation Metrics para can be shortened to save space.
Other Comments Or Suggestions: My Suggestions:
1. The paper lacks rigorous theoretical guarantees for why adversarial fine-tuning effectively unlearns data.
2. Performance relies on the quality of adversarial examples, which can be computationally expensive and can go over several iterations. How does this guarantee that an adversarial example of a given sample will really unlearn? Have the authors conducted an analysis on gradient based maps or loss landscapes?
3. Results are primarily shown for CIFAR-10; effectiveness on larger, more complex datasets is unclear.
4. While various unlearning methods are tested, more recent and advanced SOTA techniques could have been included.
5. Authors should discuss about the limitations of this method.
6. A good block diagram is missing.
7. Some tables are too small to be seen properly. I can understand the space issues but a good adjustment of important tables should be done (eg. Table 3)
8. 5.2. Evaluation Metrics para can be shortened to save space.
Questions For Authors: Concerns:
1. Your method relies on adversarial fine-tuning to approximate unlearning. Do you have any theoretical guarantees that AMUN effectively removes information from the model rather than just masking it? If not, how do you ensure true unlearning?
2. Given the computational cost of generating adversarial examples, how does AMUN scale to large-scale datasets like ImageNet? Have you tested its efficiency and effectiveness in such settings?
3. The results suggest AMUN reduces MIAs, have you tested it against stronger, adaptive attacks designed specifically to bypass adversarial fine-tuning? If so, how does it perform compared to retraining?
4. Your comparisons include various unlearning techniques, but how does AMUN perform against more recent methods such as certified removal techniques or data augmentation-based approaches? Would AMUN still hold its advantage in Avg. Gap and AUC metrics?
5. How will the users determine the forget set?
6. The results in the table 1 show incremental pattern and the proposed method does not show any significant accuracy change.
7. Performance relies on the quality of adversarial examples, which can be computationally expensive and can go over several iterations. How does this guarantee that an adversarial example of a given sample will really unlearn? Have the authors conducted an analysis on gradient based maps or loss landscapes?
Authors can discuss about some of the above points rather than experimenting further.
Ethical Review Concerns: No
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed and insightful comments. We are excited about the reviewer’s acknowledgment of the uniqueness of our approach for unlearning. Below are our responses to their questions:
**Q1+S1+W1. Theoretical guarantees:** Although most prior SOTA methods in approximate unlearning are not accompanied by theoretical guarantees, we proved a theorem (https://shorturl.at/ChU0s) that guarantees moving toward the retrained models by making assumptions that are common in the certified unlearning literature. Please refer to the discussion with reviewer qDdb for more details.
**Q2+S3+W3. Larger model/dataset:** we have performed our experiments on VGG19 models (12 times larger than ResNet-18) trained on the Tiny ImageNet dataset (200 classes). We evaluated our prior observations similar to Figure 1 in our manuscript that shows fine-tuning the trained models on their adversarial examples does not lead to catastrophic forgetting (https://tinyurl.com/5n6f6pxr). We also compared the unlearning methods and created the tables (https://tinyurl.com/2wwwbbfp) corresponding to Tables 1 and 2 in our manuscript. Please see the response to reviewer qDdb for more details.
**Q4. Certified baseline:** We performed a comparison to a SOTA certified unlearning method (Zhang et al ICML 24). This method does not work when there is no access to DR. Our results (https://shorturl.at/Q19RQ) show that certified unlearning methods such as this, though accompanied with theoretical guarantees, are not capable of outperforming SOTA in approximate unlearning, including AMUN. We believe that this is the case due to their assumptions not holding for deep learning models used in practice.
**Q3. Stronger attacks:** We wish to clarify that the objective of our algorithm is to use adversarial attacks to find "neighbors" for those samples in the unlearning set, and use this neighbor set for fine-tuning to ensure that the resulting (unlearned) model lacks confidence on these samples. We are unsure of what the reviewer means to consider attacks to bypass adversarial fine-tuning, as the objective of the attack we consider is to "augment" the sample (to be unlearned) in a manner so as to ensure that it's confidence is low post fine-tuning.
**Q5. Forget set:** Upon receiving an unlearning request, the set of samples to be forgotten are specified. This is the common setting used in prior works in unlearning for classification models, such as SulUn and l1-sparse.
**Q6. Table 1:** The setting of Table 1 (access to DR) is much easier than Table 2. Still, we want to point out the fact that even in that scenario, our method achieves the smallest Avg Gap (based on the average over 27 runs). Moreover, once we make things more difficult, by either moving to a larger model and dataset, or revoking access to DR, the advantage of our method over prior works becomes apparent. The capability of performing unlearning without access to DR is much more desirable as it will be more practical in many real-world use-cases.
**Q7+S2+W2. Computational cost:** The time comparison: https://shorturl.at/AXVB7 . Note that, we only need to run Algorithm 1 on the samples that are requested to be forgotten. For our experiments, we choose a small sub-sample of the corresponding dataset and evaluate their final values of $\epsilon$; based on these values, we set the initial value of $\epsilon$ and run PGD attack on the samples. Then we only keep the samples for which an adversarial example is not found, and run another round of PGD with the updated $\epsilon$ value. We proceed until all the samples find one corresponding adversarial example. These histograms (https://shorturl.at/fMQDC) show the number of samples for each $\epsilon$ value. Based on our analysis, for both CIFAR10 and tiny imagenet, finding the adversarial example for all the samples is equivalent to less than 3 runs of PGD50 and PGD20, respectively.
**W4+S4. More recent SOTA:** SalUn is a SOTA method in approximate unlearning published in ICLR 24. In our experiments we added all the baseline methods from that work. In addition, we also performed experiments using the SOTA in certified unlearning (Zhang et al ICML 24), which has made our baselines comprehensive and up-to-date.
**W5+S5. Limitations:** We have tried various settings, such as Lipschitz-bounded models and adversarially-trained models to ensure our approach is not limited to only regular training paradigms, but there is no guarantee on compatibility with all training paradigms. Also, as with every new method, the introduction of this new methodology for unlearning might invite a certain line of attacks specifically targeted for this approach. Also future approaches are required to avoid the slight degradation in adaptive setting.
We hope you find our responses satisfactory, and consider raising your score towards acceptance. We are happy to engage further, and thank you again for your valuable suggestions in improving our paper!
---
Rebuttal Comment 1.1:
Comment: I am happy with the clarifications given by the authors. I raise the score to 3.
---
Reply to Comment 1.1.1:
Comment: We would like to thank the reviewer for reviewing our rebuttal and supporting our work by updating their score. | null | null | null | null | null | null |
TLLC: Transfer Learning-based Label Completion for Crowdsourcing | Accept (spotlight poster) | Summary: To complete the missing labels, this paper proposes a novel label completion method for crowdsourcing by utilizing transfer learning. All high-confidence instances from the original data are selected as the source domain, and a Siamese network is pretrained based on the instances coming from the source domain. After transferring the pretrained network to the target domain, some fine-tunings are applied to obtain the unique characteristics of each annotator, also called worker modeling. Corresponding theorems are provided to prove that the proposed transfer learning-based method can reduce the generalization error. Experimental results and related analysis also demonstrate the effectiveness of the proposal.
## update after rebuttal
The author's response satisfactorily resolves my concerns, and upon considering the feedback from the other reviewers, I support the acceptance of this paper. Thus, I keep my initial rating unchanged.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Yes. The theoretical claims are all about reducing the generalization error, and the proofs are correct.
Experimental Designs Or Analyses: Yes. This paper designs the related experiments to show the effectiveness of the proposal, and gives some analysis to explain the advantages or disadvantages on different situations.
Supplementary Material: Yes. The code looks correct.
Relation To Broader Scientific Literature: This paper discusses the effectiveness of transfer learning for crowdsourcing problem, and proposes a method of how to construct source domain data, which can theoretically guarantee the reduction of generalization error. The source domain construction method and theoretical analysis can be easily applied to other transfer learning-based application areas.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths:
1 This paper studies the label completion problem in crowdsourcing area, which is a very important issue as usually the missing proportion is very high in real-world scenarios.
2 This paper proposes a label completion method based on transfer learning. The comparative experiments demonstrate that the idea is simple yet efficient.
3 This paper theoretically discusses the generalization error reduction by utilizing transfer learning into label completion problem.
4 This paper constructs a novel source domain algorithm, which can be easily extended to various application scenarios. The theoretical analysis guarantees its effectiveness.
Weaknesses:
1 There appears to be an error in Equation (11). In the MSE loss, there should be a minus sign ‘-’ instead of a comma ‘,’. Additionally, the notation for $x_{i1}$ and $x_{i2}$ is somewhat confusing to me, as it could easily be interpreted as the values of two attributes for the same instance $x_i$.
2 Some parts of the description are not clear enough. For example, what is the ‘g’ of the time complexity O(g) refer to? If g is not a kind of quantity, O(N^2g) is not a correct expression.
3 In the appendix, it is shown that the missing rates for all three datasets are very high, exceeding 0.85. Do the different missing rates have an impact on the method proposed in this paper? What kind of impact do they have?
Other Comments Or Suggestions: 1 In Line 211, ‘we’ should be ‘We’.
2 In Equation (3), it would be better to change the representation of the average value P, since the overline has been used to denote the complement set.
Questions For Authors: 1 Some parts of the description are not clear enough. For example, what is the ‘g’ of the time complexity O(g) refer to? If g is not a kind of quantity, O(N^2g) is not a correct expression.
2 In the appendix, it is shown that the missing rates for all three datasets are very high, exceeding 0.85. Do the different missing rates have an impact on the method proposed in this paper? What kind of impact do they have?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: **Q1:** There appears to be an error in Equation (11). In the MSE loss, there should be a minus sign ‘-’ instead of a comma ‘,’. Additionally, the notation for $x_{i1}$ and $x_{i2}$ is somewhat confusing to me, as it could easily be interpreted as the values of two attributes for the same instance $x_{i}$.
**Author Response to Q1:** Thanks for your valuable comments. In the MSE loss, it should indeed be a minus sign instead of a comma. Meanwhile, to address the reviewer’s concerns regarding notation, we will revise $x\_{i1}$ and $x\_{i2}$ to $x\_{i}^1$ and $x\_{i}^2$. Accordingly, in the final version of the paper, we will update Equation (11) as follows:
$$
\mathcal{L}\_{mse} = \frac{1}{|\widetilde{\mathbf{X}}|^2}\sum\_{i=1}^{|\widetilde{\mathbf{X}}|^2}(\widetilde{f}\_{d}(\widetilde{f}\_{g}(\mathbf{x}\_{i}^1), \widetilde{f}\_{g}(\mathbf{x}\_{i}^2)) - y\_i')^2.
$$
Thanks again for your valuable comments.
**Q2:** Some parts of the description are not clear enough. For example, what is the ‘g’ of the time complexity O(g) refer to? If g is not a kind of quantity, O(N^2g) is not a correct expression.
**Author Response to Q2:** Thanks for your valuable comments. In our paper, $\widetilde{f}\_{g}$ is explicitly defined as the network structure used for learning new embeddings in a Siamese network. The time complexity of $\widetilde{f}\_{g}$ is not determined by Algorithm 2 but is instead related to the scale of $\widetilde{f}\_{g}$. Therefore, we denote its time complexity as $O(g)$. In the final version of the paper, we will explicitly clarify that $g$ represents the scale of $\widetilde{f}\_{g}$. Additionally, we will carefully review and refine other descriptions to improve clarity and precision. Thanks again for your valuable comments.
**Q3:** In the appendix, it is shown that the missing rates for all three datasets are very high, exceeding 0.85. Do the different missing rates have an impact on the method proposed in this paper? What kind of impact do they have?
**Author Response to Q3:** Thanks for your valuable comments. The missing rates significantly impact TLLC’s performance. Specifically, TLLC improves label completion by addressing insufficient worker modeling. A higher missing rate increases the likelihood of insufficient modeling, making TLLC’s advantages more pronounced. Conversely, as the missing rate decreases, TLLC’s effectiveness relative to WSLC gradually diminishes. To validate this analysis, we conduct simulated experiments on the Income dataset. We simulate 40 workers annotating the dataset, where each worker’s annotation quality is randomly generated from a uniform distribution of [0.55, 0.75]. The missing rate is controlled by adjusting workers’ annotation probabilities, ensuring it varies from 0.9 to 0.1 in intervals of 0.2. When the label aggregation algorithm is fixed as MV, the label completion performance of WSLC and TLLC is as follows:
|Missing Rates|0.9|0.7|0.5|0.3|0.1|
|--|--|--|--|--|--|
|WSLC|70.17%|80.33%|81.67%|**92.67%**|**94.83%**|
|TLLC|**71.16%**|**81.16%**|**82.33%**|92.33%|94.00%|
||
These results confirm our analysis: when the missing rate exceeds 0.5, TLLC outperforms WSLC. However, as the missing rate decreases further, WSLC becomes more effective than TLLC. In the final version of the paper, we will thoroughly describe and discuss these experiments about missing rates. Thanks again for your valuable comments.
**Q4:** In Line 211, ‘we’ should be ‘We’. In Equation (3), it would be better to change the representation of the average value P, since the overline has been used to denote the complement set.
**Author Response to Q4:** Thanks for your valuable comments. In the final version of the paper, we will revise the statement in line 211 from "we set $y_{ij}'$ to 0 if $l_{i} = l_{j}$" to "We set $y_{ij}'$ to 0 if $l_{i} = l_{j}$". Additionally, to avoid ambiguity caused by using the overline to represent both the average value and the complement set, we will consistently use $\mu$ to denote the average value. Accordingly, Equation (3) will be revised as follows:
$$
\mu\_{c\_q} = \frac{\sum\_{i=1}^{N}\delta(\hat{y}\_i, c\_q)P(\hat{y}\_i|\mathbf{L}\_i)}{\sum\_{i=1}^{N}\delta(\hat{y}\_i, c\_q)}.
$$
Equations (4) and Table 2 will also be updated accordingly. Thanks again for your valuable comments. | Summary: Existing worker modeling-based label completion methods have successfully improved the performance of label completion, but they remain constrained by the insufficient annotated instances per worker. To address this issue, this paper proposes a transfer learning-based label completion (TLLC) method. TLLC begins by identifying all high-confidence instances from the whole crowdsourced data as a source domain to pretrain a Siamese network. Next, TLLC transfers the pretrained network to target domains, where it is fine-tuned using the instances annotated by each worker individually. Finally, TLLC utilizes the new embeddings learned by the transferred network to complete the missing labels for each worker. Experimental results validate the effectiveness and rationality of TLLC.
Claims And Evidence: There are three important claims made in the paper:
1) Worker modeling has been proved to be a powerful strategy to improve the performance of label completion.
2) Workers typically annotate only a few instances, which leads to insufficient worker modeling and thus limiting the improvement of label completion.
3) The proposed transfer learning-based label completion method helps alleviate the issue of insufficient worker modeling.
In response to these claims, this paper provides the corresponding evidence as follows:
1) For Claim 1, this paper summarizes and discusses existing label completion methods in the introduction and related work sections. The latest label completion methods have indeed achieved impressive results by leveraging worker modeling.
2) For Claim 2, the paper cites an existing work (Jung & Lease, 2012) to emphasize that, in real-world scenarios, each worker typically annotates only a few instances. Furthermore, the description of real-world datasets in Section 4.1 similarly supports this phenomenon. Based on this phenomenon, existing worker modeling-based label completion methods are indeed constrained by the insufficient annotated instances per worker.
3) For Claim 3, the experimental results presented in Section 4 demonstrate the effectiveness of the proposed method. Specifically, Figures 1 and 2 validate the effectiveness of TLLC for improving the performance of label completion, while Figure 4 independently validates the effectiveness of transfer learning for insufficient worker modeling.
Methods And Evaluation Criteria: Yes. In the proposed method, using transfer learning to address the issue of insufficient worker modeling is reasonable. Regarding evaluation criteria, this paper adopts the aggregation accuracy, which is commonly used in other label completion studies.
Theoretical Claims: Yes. The theories and corresponding proofs presented in the paper support the effectiveness and rationality of the proposed method.
Experimental Designs Or Analyses: Yes. The experimental section first validates the effectiveness of TLLC through comparative experiments and significance tests. Then, the rationality of TLLC is verified through ablation studies on each strategy. Finally, potential limitations of TLLC are analyzed by discussing its abnormality.
Supplementary Material: Yes. The supplementary materials include the code and datasets, which can be run correctly. Additionally, the summary of symbols, dataset descriptions, and more experimental results are also provided in the attached Appendixes.
Relation To Broader Scientific Literature: This paper is the first work to introduce transfer learning into label completion, addressing the impact of insufficient worker modeling on label completion, thereby further improving the performance of label completion.
Essential References Not Discussed: No. All essential references have been cited/discussed in the paper.
Other Strengths And Weaknesses: Strengths:
1) This paper reveals a critical limitation of existing label completion methods: insufficient worker modeling due to insufficient annotated instances per worker. By introducing transfer learning, the proposed method effectively addresses this issue.
2) The use of a Siamese network for both pretraining and fine-tuning is innovative in worker modeling. The idea of constructing source and target domains from the same crowdsourced data and leveraging high-confidence instances for pretraining adds robustness to the method.
3) The paper provides theoretical proofs to support the claims of the paper (Theorems 3.6, 3.7, and 3.8). These theorems and proofs strengthen the claims about reduced generalization error and robustness against noise.
4) This paper provides extensive experiments to validate the effectiveness and rationality of TLLC. The paper first validates the effectiveness of TLLC through comparative experiments and significance tests. Then, the rationality of TLLC is validated through ablation studies on each strategy. Finally, potential limitations of TLLC are analyzed by discussing its abnormality.
Weaknesses:
1) This paper uses three algorithms to describe the construction of source and target domains, worker modeling, and label completion, respectively. However, how these three algorithms are combined to form the complete TLLC remains unclear. A framework diagram is needed to provide a complete introduction of TLLC.
2) The current ablation study is not comprehensive. Although the paper validates the rationality of each strategy in TLLC from multiple perspectives, this is not directly reflected in the evaluation criteria of label completion. In my opinion, it is necessary to construct a complete ablation study based on aggregation accuracy.
Other Comments Or Suggestions: I have found few typos as follows:
1) In Equation 11, there should be a minus sign before ${y}_i’$ instead of a comma.
2) On line 366 in page 7, “dataset Music_genre dataset” should be “dataset Music_genre”.
Questions For Authors: See the Weaknesses parts:
1) This paper uses three algorithms to describe the construction of source and target domains, worker modeling, and label completion, respectively. However, how these three algorithms are combined to form the complete TLLC remains unclear. A framework diagram is needed to provide a complete introduction of TLLC.
2) The current ablation study is not comprehensive. Although the paper validates the rationality of each strategy in TLLC from multiple perspectives, this is not directly reflected in the evaluation criteria of label completion. In my opinion, it is necessary to construct a complete ablation study based on aggregation accuracy.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: **Q1:** This paper uses three algorithms to describe the construction of source and target domains, worker modeling, and label completion, respectively. However, how these three algorithms are combined to form the complete TLLC remains unclear. A framework diagram is needed to provide a complete introduction of TLLC.
**Author Response to Q1:** Thanks for your valuable comments. The complete process of TLLC is as follows: Given a crowdsourced dataset, we first construct the source and target domains. Next, we pretrain a Siamese network on the source domain. Subsequently, for each worker's corresponding target domain, we individually transfer the pretrained network. Finally, we use the transferred network to learn new embeddings for each worker and infer the missing labels based on these embeddings. In essence, this process corresponds to the sequential execution of Algorithm 1, Algorithm 2, and Algorithm 3. In the final version of the paper, according to the reviewer’s comments, we will incorporate a framework diagram to clearly present the complete process of TLLC. Thanks again for your valuable comments.
**Q2:** The current ablation study is not comprehensive. Although the paper validates the rationality of each strategy in TLLC from multiple perspectives, this is not directly reflected in the evaluation criteria of label completion. In my opinion, it is necessary to construct a complete ablation study based on aggregation accuracy.
**Author Response to Q2:** Thanks for your valuable comments. To address the reviewer’s concerns, we conduct an ablation study on the Income dataset for TLLC and its variants (using MV as the label aggregation method). The experimental results are as follows:
| |TLLC|TLLC1|TLLC2|TLLC3|
|--|--|--|--|--|
|Aggregation Accuracy|**74.97%**|71.83%|74.16%|71.66%|
||
Here, TLLC1, TLLC2, and TLLC3 represent the variants of TLLC without instance filtering, pretraining, and transfer training, respectively. Considering that the aggregation accuracy of MV before completion is 71.17% (as shown in Figure 1), it can be observed that all TLLC variants outperform MV. Meanwhile, each variant performs worse than the complete TLLC, further indicating the superior performance and rationality of TLLC. In the final version of the paper, we will provide a detailed description and discussion of the setup and results of this ablation study. Thanks again for your valuable comments.
**Q3:** In Equation 11, there should be a minus sign before $y_i’$ instead of a comma.
**Author Response to Q3:** Thanks for your valuable comments. In the MSE loss, it should indeed be a minus sign instead of a comma. Meanwhile, to address the reviewer **3DPZ’s** concerns regarding notation, we will revise $x\_{i1}$ and $x\_{i2}$ to $x\_{i}^1$ and $x\_{i}^2$. Accordingly, in the final version of the paper, we will update Equation (11) as follows:
$$
\mathcal{L}\_{mse} = \frac{1}{|\widetilde{\mathbf{X}}|^2}\sum\_{i=1}^{|\widetilde{\mathbf{X}}|^2}(\widetilde{f}\_{d}(\widetilde{f}\_{g}(\mathbf{x}\_{i}^1), \widetilde{f}\_{g}(\mathbf{x}\_{i}^2)) - y\_i')^2.
$$
Thanks again for your valuable comments.
**Q4:** On line 366 in page 7, “dataset Music_genre dataset” should be “dataset Music_genre”.
**Author Response to Q4:** Thanks for your valuable comments. In the final version of the paper, we will correct "dataset Music_genre dataset" to "dataset Music_genre" on line 366 in page 7. Additionally, we will double-check and improve the writing of our paper. Thanks again for your valuable comments.
---
Rebuttal Comment 1.1:
Comment: Thank you for the author's response. After considering the other reviewers' comments, I have decided to maintain my original rating. | Summary: This paper at first reveals the limitations of existing methods that leverage worker modeling to improve label completion for Crowdsourcing and then proposes a novel transfer learning-based label completion (TLLC) method, which introduces transfer learning to avoid insufficient worker modeling and leverages the new embeddings learned by the transferred network to complete missing labels.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: Yes
Experimental Designs Or Analyses: Yes
Supplementary Material: YES. Code and Datasets.
Relation To Broader Scientific Literature: This paper proposes a novel transfer learning-based label completion (TLLC) method, which is the first work to introduce transfer learning to avoid insufficient worker modeling and then leverages the new embeddings learned by the transferred network to complete missing labels.
Essential References Not Discussed: NO
Other Strengths And Weaknesses: Strengths:
1. The authors reveal the limitations of existing methods that leverage worker modeling to improve label completion for Crowdsourcing.
2. To address this issue, the authors propose a novel transfer learning-based label completion (TLLC) method, which is the first work to introduce transfer learning to avoid insufficient worker modeling and then leverages the new embeddings learned by the transferred network to complete missing labels.
3. The authors conduct extensive experiments to validate the effectiveness, rationality and abnormality of the proposed TLLC on the widely used real-world datasets.
4. The organization of the paper is quite good and it is easy to follow the topic and the proposed method.
Weaknesses:
1. The proposed TLLC transfers the pretrained Siamese network to the target domain. In the paper, the authors just said: “Specifically, we set up both $f_S$ and $f_T$ as Siamese networks with the same structure (Li et al., 2022).” What are the detailed network structure and parameter settings?
2. Although the authors have already provided a deeper analysis of TLLC to validate its underlying rationality: 1) With and without instance filtering; 2) With and without transfer learning, a group of thorough ablation experiments are needed.
3. On page 8, Figure 6 illustrates the relationship between the number of annotated instances and annotation quality for each worker in dataset Music genre. Why are the axes named Aggregation accuracy (%) and Number of aggregated instances?
Other Comments Or Suggestions: NO
Questions For Authors: The proposed TLLC transfers the pretrained Siamese network to the target domain. In the paper, the authors just said: “Specifically, we set up both $f_S$ and $f_T$ as Siamese networks with the same structure (Li et al., 2022).” What are the detailed network structure and parameter settings?
Although the authors have already provided a deeper analysis of TLLC to validate its underlying rationality: 1) With and without instance filtering; 2) With and without transfer learning, a group of thorough ablation experiments are needed.
On page 8, Figure 6 illustrates the relationship between the number of annotated instances and annotation quality for each worker in dataset Music genre. Why are the axes named Aggregation accuracy (%) and Number of aggregated instances?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: **Q1:** The proposed TLLC transfers the pretrained Siamese network to the target domain. In the paper, the authors just said: “Specifically, we set up both and as Siamese networks with the same structure (Li et al., 2022).” What are the detailed network structure and parameter settings?
**Author Response to Q1:** Thanks for your valuable comments. In TLLC, the Siamese network is used to model each worker, and the new embeddings it learns are ultimately used to complete the worker’s missing labels. Since each worker annotates only a few instances, we set the network to a small scale to ensure convergence. The detailed network structure and parameter settings are as follows:
|Layer Type|Output Dimension|Activation Function|
|--|:--:|:--:|
|Input Layer|128|ReLU|
|Fully Connected Layer|64|ReLU|
|Output Layer|2|-|
||
To address the reviewer’s concerns, we will include the above information in the final version of the paper. Additionally, we have already submitted our code and datasets in Supplementary Material. At the same time, we will also open-source our code to facilitate the reproduction of our results once our paper is accepted. Thanks again for your valuable comments.
**Q2:** Although the authors have already provided a deeper analysis of TLLC to validate its underlying rationality: 1) With and without instance filtering; 2) With and without transfer learning, a group of thorough ablation experiments are needed.
**Author Response to Q2:** Thanks for your valuable comments. To address the reviewer’s concerns, we conduct an ablation study on the Income dataset for TLLC and its variants (using MV as the label aggregation method). The experimental results are as follows:
| |TLLC|TLLC1|TLLC2|TLLC3|
|--|--|--|--|--|
|Aggregation Accuracy|**74.97%**|71.83%|74.16%|71.66%|
||
Here, TLLC1, TLLC2, and TLLC3 represent the variants of TLLC without instance filtering, pretraining, and transfer training, respectively. Considering that the aggregation accuracy of MV before completion is 71.17% (as shown in Figure 1), it can be observed that all TLLC variants outperform MV. Meanwhile, each variant performs worse than the complete TLLC, further indicating the superior performance and rationality of TLLC. In the final version of the paper, we will provide a detailed description and discussion of the setup and results of this ablation study. Thanks again for your valuable comments.
**Q3:** On page 8, Figure 6 illustrates the relationship between the number of annotated instances and annotation quality for each worker in dataset Music genre. Why are the axes named Aggregation accuracy (%) and Number of aggregated instances?
**Author Response to Q3:** Thanks for your valuable comments, and we apologize for our typos in Figure 6. In the final version of the paper, we will revise the titles of the horizontal and vertical axes in Figure 6 to "Number of annotated instances" and "Annotation accuracy (%)". Similarly, Figure 8 will be adjusted accordingly. Additionally, we will double-check and improve the writing of our paper. Thanks again for your valuable comments.
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarifications. My concerns have been addressed. After considering other reviewers' feedback, I will maintain my positive recommendation. | Summary: The paper proposes a Transfer Learning-based Label Completion (TLLC) method for crowdsourcing scenarios. The authors address the issue of sparse label matrices, where individual workers annotate only a few instances, leading to insufficient worker modeling and poor label completion. The key idea of TLLC is to pre-train a Siamese network on high-confidence instances (source domain) and transfer it to model individual workers (target domain), thereby improving label completion. The method is evaluated against WSLC across three real-world datasets.
Claims And Evidence: Most of the claims in this paper are well demonstrated. However, I still have some concerns:
1) While Theorem 3.8 suggests that TLLC is resistant to i.i.d. Gaussian noise, real-world crowdsourcing noise is often adversarial (workers deliberately provide incorrect labels). Figure 6 shows that TLLC fails to handle adversarial workers, leading to poor performance on the Music Genre dataset.
2) The paper claims that TLLC’s complexity is O(N²Rg), but lacks the theoretical and empirical comparison with WSLC.
Methods And Evaluation Criteria: The use of transfer learning to improve worker modeling makes sense given the sparse crowdsourced label matrix problem. However, there are some issues that should be further resolved:
1) Only WSLC is used as a label completion baseline and more label completion baselines are needed to demonstrate to effectiveness of proposed TLLC.
2) The paper does not provide a runtime comparison between TLLC and other methods. Given the O(N²Rg) complexity, TLLC may be computationally expensive, but this is not empirically analyzed.
Theoretical Claims: The paper presents three key theoretical claims, and I have checked the logic and correctness of these proofs and identified the following issues:
The derivation of Theorem 3.8 is mathematically correct for i.i.d. Gaussian noise. However, real-world crowdsourcing noise is not i.i.d. Workers can introduce systematic bias rather than random Gaussian noise.
Experimental Designs Or Analyses: I have checked the experimental designs and analyses of this paper. Experimental design is mostly valid, with realistic datasets and strong statistical tests. However, the evaluation has notable limitations:
1) It lacks comparisons with other label completion baselines beyond WSLC, making it unclear how TLLC performs against broader alternatives.
2) The experimental analysis lacks depth in several aspects. While the paper provides basic performance comparisons and statistical tests, it does not thoroughly investigate why TLLC performs well in some cases but struggles in others.
Supplementary Material: I have reviewed the Appendix in the manuscript and source code in the supplementary material.
Relation To Broader Scientific Literature: TLLC builds on well-established ideas in transfer learning, worker modeling, and label completion. However, its novelty is limited because similar techniques exist in prior literature. Stronger comparisons with alternative label completion methods are needed to clarify its contribution.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths:
1) The paper provides a well-structured introduction to the problem of sparse crowdsourced label matrices and explains why worker modeling is essential for label completion.
2) The analysis of worker annotation quality before and after label completion (Figure 5) is an insightful addition.
Weaknesses:
1) The core techniques—transfer learning for label completion, worker modeling, and label filtering—are all adaptations of existing methods, rather than fundamentally new contributions.
2) Lack of benchmarks against more label completion methods. It remains unclear if TLLC truly outperforms all alternatives.
3) The paper does not analyze where TLLC makes mistakes, nor does it investigate failure cases in depth.
4) No sensitivity analysis is conducted on hyperparameters, which could impact model stability.
Other Comments Or Suggestions: Please see the weakness.
Questions For Authors: Please refer to the concerns above.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thanks a lot for your comments. Please find our detailed responses to your concerns as follows.
**Author Response to Contributions:** This paper is the first to identify and address the limitation from insufficient worker modeling. Moreover, based on our review of related work, this paper is also the first to introduce transfer learning into label completion. While transfer learning is a well-established technique, its application in crowdsourcing—particularly when only a single crowdsourced dataset is available—poses a significant challenge in constructing source and target domains. Our paper proposes a novel algorithm to address this issue, achieving impressive results. In the final version of the paper, we will expand on these contributions in detail.
**Author Response to Benchmarks:** Among existing label completion methods, apart from WSLC, only PG-TAC (Zhou & He, 2016) can handle both binary and multi-class classification problems (consistent with TLLC). Therefore, we compare TLLC with PG-TAC on the Income and Leaves datasets. The results are as follows:
| |MV|GTIC|DEWSMV|MNLDP|AALI|LAGNN|
|--|--|--|--|--|--|--|
|PG-TAC (Income)|72.00%|71.67%|72.17%|**73.00%**|73.17%|71.83%|
|TLLC (Income)|**74.97%**|**74.67%**|**74.80%**|72.67%|**75.37%**|**74.67%**|
||
|PG-TAC (Leaves)|63.54%|62.76%|63.54%|64.32%|64.58%|63.54%|
|TLLC (Leaves)|**68.88%**|**67.19%**|**69.01%**|**69.79%**|**72.40%**|**68.75%**|
||
These results further demonstrate the superior performance of TLLC.
**Author Response to Experimental Analysis:** In our current paper, we analyze TLLC’s rationality and abnormality in the Discussion and Analysis subsection. For rationality, we explain why TLLC performs well by analyzing the effectiveness of each component in TLLC (see the Rationality paragraph on page 7). For abnormality, we reveal that TLLC is not robust to adversarial workers providing numerous labels, and we discuss the reasons behind this phenomenon (see the Abnormality paragraph on page 8). Additionally, to address the reviewer **ksp6’s Q2**, we conduct an ablation study on the Income dataset. The results of the ablation study further indicate the rationality of TLLC. In the final version of the paper, we will provide a more in-depth analysis based on the ablation study.
**Author Response to Sensitivity Analysis:** The hyperparameters in TLLC include the new embedding dimension ($K$), the number of epochs, and the batch size. We conduct parameter sensitivity analysis experiments on the Income dataset (using MV as the label aggregation method) to observe TLLC’s performance. In each experiment, we fix two hyperparameters and vary the remaining one. The results are as follows:
|$K$|2|4|6|8|10|
|--|:--:|:--:|:--:|:--:|:--:|
|Income|**74.94%**|71.83%|71.66%|73.33%|72.66%|
||
|Epochs|2($Q$)|4|6|8|10|
|--|:--:|:--:|:--:|:--:|:--:|
|Income|**74.94%**|72.33%|73.00%|72.16%|72.83%|
||
|Batch Size|8|16|32|64|128|
|--|:--:|:--:|:--:|:--:|:--:|
|Income|71.83%|72.50%|**74.94%**|73.33%|73.16%|
||
These results show that TLLC’s performance varies slightly with changes in hyperparameter values. Considering that the aggregation accuracy of MV before label completion is 71.17%, it is evident that TLLC’s effectiveness is not highly sensitive to hyperparameter settings. In the final version of the paper, we will provide a detailed description and discussion of these parameter sensitivity experiments.
**Author Response to Theoretical Claims:** As far as we know, adversarial labels are a type of noise but not the most dominant noise. In reality, workers hired from the general public usually lack expertise, and thus the noisy labels they provide are often random, satisfying the i.i.d. assumption. Moreover, Figure 6 does not indicate that TLLC fails to handle adversarial workers, but rather that it struggles with adversarial workers who annotate a large number of labels. In the final version of the paper, we will provide more explanations regarding Figure 6 and the characteristics of noisy labels to clarify these points.
**Author Response to Complexity:** We conduct a new experiment on the Income dataset to compare the runtime of WSLC and TLLC. This experiment is conducted on a Windows 10 machine with an AMD Athlon(tm) X4 860K Quad Core Processor @ 3.70GHz and 16 GB of RAM. The runtime required to complete the Income dataset for WSLC and TLLC is as follows:
|Dataset|WSLC|TLLC|
|--|:--:|:--:|
|Income|0.87s|150.33s|
||
The results show that TLLC requires more runtime compared to WSLC. However, the primary computational cost of TLLC arises from transfer learning to train Siamese networks. Both transfer learning and Siamese networks are widely adopted techniques and are not computationally expensive to use in practical scenarios. Therefore, TLLC remains efficient and applicable in real-world scenarios. In the final version of the paper, we will include a detailed explanation of the computational cost of TLLC to further clarify this point. | null | null | null | null | null | null |
A standard transformer and attention with linear biases for molecular conformer generation | Reject | Summary: In this work, the authors introduce a transformer architecture with a new positional encoding scheme and training method for molecular conformer generation (MCG), achieving comparable performance as MCF, a prevailing non-equivariant MCG architecture, using a fraction of MCF's number of parameters. The proposed improvements include (1) positional encodings take inspiration from ALiBi and are subtractive rather than additive, (2) various coordinate encoding methods, and (3) chirality tagging for fair comparison with previous methods.
Claims And Evidence: **Strengths**
- This is a highly empirical paper, and the authors do a good job of showcasing their results with comprehensive experiments.
- The authors cite efficiency as a highlight of MCF, and the sampling latency sudy in Appendix B gives good evidence of the efficacy of their
**Weaknesses**
- There is no analysis of the training efficiency of the proposed method versus baselines.
- While the proposed method outperforms other non-equivariant MCG architectures in terms of inference speed per diffusion step, there is no analysis on the convergence rate of the method versus other methods. While all models are evaluated with 300 diffusion steps, it's possible certain models achieve high-quality samples sooner than others. Measuring metrics against diffusion steps would be helpful in more comprehensively evaluating efficiency.
Methods And Evaluation Criteria: **Strengths**
- The benchmark datasets and metrics are all standard in the MCG literature.
**Weaknesses**
- It's unclear why the attentional linear biases are subtractive rather than additive as they are for Graphormer and other transformer variants with linear biases.
- Are competing baselines also trained using data augmentation? To my knowledge, MCF doesn't use any data augmentation, which could weaken the efficiency of the proposed method.
Theoretical Claims: The paper makes no theoretical claims.
Experimental Designs Or Analyses: - What are the details regarding data augmentation (number of augmentations per sample, any post-processing or filtering, etc.)?
- I found no core issues with the experimental design for Section 4.
Supplementary Material: I read all the supplementary material.
Relation To Broader Scientific Literature: This paper falls in line with recent work on non-equviariant MCG models [2] and scaling studies for molecular and geometric models [3, 4].
[3] Qu, Eric and Krishnapriyan, Aditi S. The Importance of Being Scalable: Improving the Speed and Accuracy of Neural Network Interatomic Potentials Across Chemical Domains. NeurIPS 2024.
[4] Brehmer et al. Does equivariance matter at scale? Preprint.
Essential References Not Discussed: This work is relatively focused in the research problem it is solving. To my knowledge, the authors discuss the most relevant references.
Other Strengths And Weaknesses: The proposed method showcases comparable non-equivariant MCG performance with much fewer parameters than MCF. While the claims made are relatively modest, with the paper's main contributions being what is essentially a thorough search over architecture choices, I think it marks good progress towards more efficient and simpler methods for a complicated task.
Other Comments Or Suggestions: Some typos:
- "_non-equivariant models_ can outperform _non-equivariant_ networks at MCG"
- "_stereochemestry_"
- "ET-Flow, _utlizing_ an equivariant transformer"
- “Inference for all _models_ variants"
Questions For Authors: Please address the weaknesses and questions posed in the preceding sections.
[1] Jing et al. Torsional Diffusion for Molecular Conformer Generation. NeurIPS 2022.
[2] Wang et al. Swallowing the Bitter Pill: Simplified Scalable Conformer Generation. ICML 2024.
[3] Qu, Eric and Krishnapriyan, Aditi S. The Importance of Being Scalable: Improving the Speed and Accuracy of Neural Network Interatomic Potentials Across Chemical Domains. NeurIPS 2024.
[4] Brehmer et al. Does equivariance matter at scale? Preprint.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you to the reviewer for the insightful comments. We have performed additional analyses, which we hope will address the reviewer's concerns.
> *"...no analysis of the training efficiency of the proposed method versus baselines"*
Training time should be roughly proportional to single step inference time provided in Figure A2.
Additionally, in the response to Reviewer qodJ we provide detail from our two-stage training in the form of the metrics achieved by model variants after 100 training epochs with no hydrogens.
> *"Measuring metrics against diffusion steps would be helpful..."*
The number of diffusion steps to achieve high quality samples depends on the type of diffusion or flow-matching model. We chose a diffusion model to make comparison with MCF, but in our future work we anticipate that our model will work well with flow matching and a harmonic prior, as implemented in ET-Flow, and would require fewer iterations.
Nevertheless, we demonstrated DDIM sampling using S23D-B-1/13 (S) in Table A4 of the Appendix. At 50 steps we observed only a small drop in metrics compared with 300 with default sampling. In terms of convergence rate compared with the MCF model, we noted that at 10 DDIM steps our 25M parameter model outperforms the 13M MCF model, which used 1000 diffusion steps. DDIM sampling for MCF was reported in Figure 6 of the MCF paper, showing approximately comparable results with our Table A4 between the 64M MCF-B model and 25M S23D-B model, both surpassing 80% COV-R between 5-10 DDIM steps. The MCF-B result was recapitulated in Figure 4 of the ET-Flow paper, where they also demonstrate that ET-Flow does not suffer from reducing the number of steps and can use just 5 steps for inference.
To compare convergence rate across different methods choices for the S23D-S model, we have performed new inference runs using Euler–Maruyama at 50, 100, and 200 inference steps for S23D-S, -M, and -C variants. The results, shown in the table below, indicate that performance is similar at 300, 200, or 100 inference steps, and drops by a similar amount for all metrics across all model variants at 50 steps. Our initial choice to report results at 300 steps was cautious, and we could have obtained similar results using 100-200.
|Model|Steps|Recall||||Precision||||
|-|-|-|-|-|-|-|-|-|-|
|||COV||AMR||COV||AMR
|||Mean|Med|Mean|Med|Mean|Med|Mean|Med
|S23D-S-1/9 (S)|300|80.7|89.9|0.483|0.458|57.5|57.5|0.757|0.700
||200|81.1|89.8|0.483|0.455|57.7|57.1|0.755|0.701
||100|80.9|88.9|0.495|0.470|57.4|56.7|0.767|0.720
||50|78.9|87.1|0.551|0.517|54.8|53.9|0.817|0.765
|S23D-S-1/9 (M)|300|81.5|90.0|0.473|0.444|58.1|57.8|0.751|0.702
||200|81.7|89.5|0.476|0.451|57.6|56.9|0.753|0.706
||100|81.4|89.3|0.488|0.461|57.2|56.3|0.764|0.713
||50|79.2|87.1|0.546|0.518|54.9|53.1|0.817|0.758
|S23D-S-1/9 (P)|300|81.9|88.9|0.462|0.442|58.8|58.4|0.740|0.697
||200|81.2|88.3|0.474|0.445|57.8|57.2|0.753|0.703
||100|81.5|88.9|0.485|0.457|58.2|57.0|0.759|0.702
||50|79.2|86.7|0.546|0.517|55.6|54.6|0.809|0.743
> *"It's unclear why the attentional linear biases are subtractive..."*
Graphormer uses additive bias because the bias is learnable and any sign could be used. However, we configured the bias to decrease attention weights with the distance between nodes, which is likely to be what is learned by Graphormer.
> *"Are competing baselines also trained using data augmentation?"*
MCF did test the impact of data augmentation (see Figure 5 of (Wang et al., 2024)), but found augmentation applying a different rotation to each conformer during the training epoch (our augmentation strategy, labelled "Random" in their Figure 5) to be highly detrimental due to a bias in the dataset. This could indicate that we are, in fact, reducing the performance of our method on the current benchmark by including data augmentation. However, augmentation is critical for better model generalization on other datasets. Also, in the MCF code they include a fixed non-symmetric normalization of coordinates for the three different axes, which could worsen performance if a dataset changes. For instance for GEOM-DRUGS, they use a MinMax scaler with min_x=-16.8, max_x=16.6, min_y=-10.5, max_y=10.7, min_z=-7.1, max_z=7.4.
> *"What are the details regarding data augmentation (number of augmentations per sample, any post-processing or filtering, etc.)?"*
We did online augmentation for every sample in each batch. We used the same dataset from the MCF study, with no post-processing or filtering. To calculate RMSD we used the Posebusters package which calls the "getBestRMS" function from RDKit.
> *"...recent work on non-equviariant MCG models [2] and scaling studies for molecular and geometric models [3, 4]"*
We will add references Qu & Krishnapriyan, 2024 and Brehmer et al., 2024.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. I am overall satisfied with the replies, especially with the study on the impact of the number of steps on performance. I retain my assessment of this work and my rating.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewers for their valuable feedback. We hope that our clarifications and the additional analyses and ablation experiments suggested by all the reviewers significantly strengthen the final manuscript. | Summary: This paper presents "S23D", a molecular conformer generation approach based on a standard transformer augmented with linear attention biases (similar to ALiBi). Unlike specialized equivariant models, S23D relies on graph-relative positional encodings and achieves competitive results at smaller parameter counts. The authors demonstrate state-of-the-art (SOTA) performance for recall metrics on the GEOM-DRUGS benchmark with a relatively small transformer (25M parameters vs. 64M in prior models). Key ideas include relative positional encoding using shortest-path distances as attention biases, a two-stage training strategy (first without hydrogen atoms, then with hydrogens reintroduced), and a post-hoc chirality correction step.
Claims And Evidence: Main claims—improved performance with smaller transformer due to positional encoding—are only partially supported. While S23D slightly surpasses MCF-B (64M parameters) in recall metrics (84.5% vs 84.0% mean coverage), precision metrics lag slightly behind without additional chirality handling (Table 1). The paper suggests this encoding alone addresses size limitations, but lacks direct ablation experiments clearly isolating positional encoding contributions from other differences (e.g., transformer backbone, training setup).
Methods And Evaluation Criteria: Methods are appropriate and standard for molecular conformer generation: a diffusion-based generative model using GEOM-DRUGS, and standard RMSD-based coverage metrics (COV, AMR). The attention bias approach (ALiBi-like linear bias on shortest-path distances) is reasonable but not fully justified; no rigorous ablation is done to confirm its necessity or optimality compared to simpler encodings. Additionally, statistical rigor (e.g., multiple seeds, confidence intervals) is not provided, leaving uncertainty around minor metric improvements reported in Table 1.
Theoretical Claims: The paper does not contain explicit theoretical claims or proofs. Implicit assumptions about the sufficiency of rotational augmentation (to compensate for lack of equivariance) and linear bias effectiveness are heuristic. The theoretical grounding of why linear bias specifically improves transformer performance is not provided, limiting theoretical insights.
Experimental Designs Or Analyses: Experiments are comprehensive, clearly comparing S23D variants to prior methods (GeoMol, MCF, ET-Flow) using established GEOM splits and metrics (Table 1). However, the evaluation lacks explicit comparisons with classical baseline methods (e.g., OMEGA, RDKit) to contextualize gains clearly. Also missing is deeper qualitative or error-mode analysis—e.g., identifying specific molecules or conformations where the proposed method struggles.
Supplementary Material: Yes for most parts pointed out by the main text.
Relation To Broader Scientific Literature: The paper adequately cites key recent works on molecular conformer generation (MCF, ET-Flow, Torsional Diffusion) and graph transformers (Graphormer, ALiBi). However, important foundational references like [Molecular Geometry Prediction Using a Deep Generative Graph Neural Network, Mansimov 2019] or early generative works (G-SchNet, CVGAE) are omitted, which are relevant for context.
Essential References Not Discussed: Several important prior works are not cited or discussed adequately:
- Molecular Geometry Prediction Using a Deep Generative Graph Neural Network [Mansimov 2019]
- Symmetry-adapted generation of 3d point sets for the targeted discovery of molecules[Gebauer et al. 2019]
- Classical approaches (OMEGA, RDKit ETKDG) performance on GEOM benchmarks, which provide useful performance baselines.
Other Strengths And Weaknesses: Strengths:
- Clearly organized experiments, and use of standard metrics and splits
- reasonable model design (transformer with relative bias).
Weaknesses:
- Incremental novelty (combination of existing known techniques without deep novelty
- Superficial theoretical justification
- Lack of clear ablations (e.g., importance of positional bias) with limited qualitative/error analysis.
Although the paper reports good empirical results, particularly regarding smaller model size, the contributions are primarily incremental—combining known techniques without deep novelty or rigorous theoretical justification. The lack of essential ablation studies, incomplete discussion of prior foundational work, and limited qualitative analysis reduce its significance. The work is solid empirically but insufficiently novel or insightful to clearly justify acceptance at ICML.
Other Comments Or Suggestions: Minor issues:
- several typos (e.g., "seperated" → "separated" in Table 1 caption, "utlizing" → "utilizing").
- A direct ablation study removing positional bias entirely would strongly improve the paper's clarity and rigor.
Questions For Authors: - Have you conducted direct ablations showing performance without any positional biases? How critical is the ALiBi-style attention bias?
- How would explicitly adding chirality information during training (as ET-Flow did) affect your results? Could it address multi-center chirality better?
- What are the limitations regarding molecule size (due to quadratic attention cost)? Did you encounter computational or scaling issues with larger molecules?
- Have you assessed generalization beyond GEOM-DRUGS (e.g., natural products, macrocycles)? Does your method generalize or require retraining?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for their constructive critique. We made additional analyses including ablation experiments that will be added to the manuscript to address the comments.
We made 3 runs with ablation on PE for the S23D-S-1/9 (S) model using 1-stage training (with hydrogen) for 100 epochs (no chirality correction):
1. ALiBi PE
2. MCF Eigenvector (EV) encodings, added as new features in our Eq. 5; k=28 as in MCF (28 vectors)
3. PE used in Graphormer, using our Eq. 1 updated for multiple heads as in Graphormer (with max. shortest path threshold of 20, as in the default Graphormer settings)
|Model|Recall||||Precision||||
|-|-|-|-|-|-|-|-|-|
|| COV||AMR||COV||AMR||
||Mean|Med|Mean|Med|Mean|Med|Mean|Med|
|S23D-S-1/9 (ALiBi)|81.4|89.0|0.472|0.449|57.5|57.0|0.756|0.709|
|S23D-S-1/9 (EV) |79.5|86.9|0.507|0.479|55.6|54.5|0.789|0.739|
|S23D-S-1/9 (Learnable Bias)|81.5|88.9|0.468|0.441|58.6|59.5|0.744|0.693|
Run 1 (default run with S23D-S-1/9 (S) with hydrogens) was required to provide a new baseline for the other ablation runs, as the eigenvectors we used from the MCF dataset were computed for graphs with hydrogens. Run 2 (EV) recall results are consistent with the MCF-S model. In Run 3 results were comparable with our PE. However, the training was 35% slower than run 1 on A100 GPUs due to inefficient lookup operations, as discussed in our methods section. In the future, it is possible to explore a variant of our PE with learnable slopes for each head that would combine advantages of Graphormer PE and ALiBi. The training dynamics of mean COV-R for the 3 runs are shown in the table:
|Model|Number of Epochs||||
|-|-|-|-|-|
||25|50|75|100|
|S23D-S-1/9 (ALiBi)|75.4|79.4|80.0|81.4|
|S23D-S-1/9 (EV) |70.3|75.9|79.3|79.5|
|S23D-S-1/9 (Learnable Bias)|77.0|80.5|81.4|81.5|
Additionally, we trained the base model for 85 epochs (with hydrogens) using the eigenvector encoding scheme, which roughly corresponds to our two-stage training (see discussion in response to reviewer qodJ), which gave worse metric results than using ALiBi:
|Model|Recall||||Precision||||
|-|-|-|-|-|-|-|-|-|
||COV||AMR||COV||AMR||
||Mean|Med|Mean|Med|Mean|Med|Mean|Med|
|S23D-B-1/13 (EV)|82.7|90.0|0.449|0.422|60.0|60.1|0.725|0.664|
We will add these PE ablation results into the experiments section of the manuscript.
> *"...prior works are not cited..."*
We will add discussion of the Mansimov et al. 2019 (CVGAE) and Gebauer et al. 2019 (G-SchNet) references.
> *"...explicit comparisons with classical baseline methods (e.g., OMEGA, RDKit)"*
OMEGA and RDKit have been tested in this task on the GEOM datasets in previous publications, for example GeoMol and Torsional Diffusion, with repeated findings that these methods perform poorly at this task so we did not recreate the analysis here.
> *"Have you conducted direct ablations showing performance without any positional biases? How critical is the ALiBi-style attention bias?"*
We did not run ablations without any positional biases. The setup requires some form of PE to define edges of a molecular graph. We could use RDKit to reconstruct bonds, but there is no guarantee that resulting molecules will be the same as the source molecules, and it is not clear how to test results.
> *"How would explicitly adding chirality information during training (as ET-Flow did) affect your results?"*
After paper submission we made attempts to incorporate chirality features, but our model was not able to capture correct chiralities. Additionally, we checked conformations generated by ET-Flow and found that their model also produces conformations with incorrect multi-center chirality.
> *"What are the limitations regarding molecule size (due to quadratic attention cost)?"*
Quadratic complexity is not an issue for our model. Scaled dot product attention even without FlashAttention is highly optimized in PyTorch. That is one of the benefits of using a standard transformer.
> *"Have you assessed generalization beyond GEOM-DRUGS..."*
No, we did not. We anticipate that the model would require finetuning in this case.
> *"...the contributions are primarily incremental—combining known techniques without deep novelty..."*
We would argue against the reviewer’s description of the work as primarily incremental. Although the model combines known components, creating an illusion of simplicity, it was not trivial to create the simplest and fastest transformer model that successfully competes with equivariant counterparts at small model sizes. For instance, compare our model with the complex ET-Flow transformer or recent non-equivariant networks. The presented positional encoding is simple in implementation, efficient in computations and could be further optimized with Triton kernels. These factors are critical for practical model applications in MCG. | Summary: The paper introduces a novel relative positional encoding technique similar to the ALiBi technique found in NLP for non-equivariant Transformer diffusion models for generating molecular conformations. The proposed approach allows scaling down non-equivariant Transformer, which typically requires a large model size to compensate for the lack of equivariant bias. The authors show that the proposed method allows a standard Transformer with 25M parameters to outperform previous state-of-the-art non-equivariant models with 64M parameters on the GEOM-DRUGS benchmark.
Claims And Evidence: The main contribution of the paper is the introduction of a relative positional encoding technique, which the authors claim allows for scaling down non-equivariant Transformer models for generating molecular conformations. The claim is supported by convincing evidence on the GEOM-DRUGS benchmark.
The additional claim that the proposed two-stage training protocol (without and with hydrogen atoms) "makes more efficient use of a limited computational budget when scaling models" is not supported by evidence. The authors provide the inference speed with respect to the number of atoms in the batch, but do not show the performance degradation or improvement resulting from the two-stage approach. While efficiency is improved, the performance degradation might be such that one would be better off training with the hydrogen atoms from the start. I would be convinced by a comparison of the performance of two identical models trained using and without the proposed two-stage approach, with the same number of FLOPS.
Methods And Evaluation Criteria: The authors rely on the GEOM-DRUGS benchmark, following the same splits and metrics as previous works in the field. I believe the evaluation methodology is suitable for evaluating the quality of the conformation generated by the proposed approach, as it has previously been used by several previous works.
Theoretical Claims: N/A
Experimental Designs Or Analyses: The experimental design seems fair and to take into account all essential experimental design that would influence the performance on the benchmark: the model size, the number of conformations considered, as well as the presence of a mechanism to address the chirality.
Supplementary Material: Yes, I briefly went over the appendix but did not verify the accuracy of the diffusion equations.
Relation To Broader Scientific Literature: The paper makes a significant effort to introduce the relevant works and to position themselves in the broader scientific literature. They augment the existing Variance Preserving SDE diffusion model with their proposed relative positional encoding, and they highlight the importance of including the chirality information, which was previously known.
Essential References Not Discussed: Not to my knowledge, although I am not familiar with the literature on molecular conformation generation.
Other Strengths And Weaknesses: The introduction and related work sections are particularly well written and extensive. They provide a great introduction into the field of molecular conformation generation.
However, the methodology section could benefit from more details for readers who might not be familiar with the Variance Preserving SDE. I appreciate the description in the appendix, and the methodology doesn't need to be fully self-contained since it relies on an existing model. Still, I believe that providing more high-level details would improve the reader’s understanding. Notably, some of the design choices are not explained (or mentioned that they were taken from previous work). For instance, why only considering chemical elements with frequent occurrences in the vocabulary? Why masking 10% of atoms only for 1% of the molecules?
Other Comments Or Suggestions: The left-side of Figure 1 was a bit confusing on the first pass. For instance, I did not understand where the coordinates on the middle-left came from, and I did not know what the "atom tokens" inputs were. I suggest you add additional arrows and explicitly write the input. You could also add light colors to better separate the different parts and mention in the caption that "The first transformer blocks (blue) encode...", "Coordinate encodings (orange) are projected...", etc.
Questions For Authors: - Can you provide a more comprehensive description of the models and design choices?
- Can you add the RMSD for the examples provided in Figure 1?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for reviewing the manuscript and providing valuable comments. We have performed additional analyses, described below, in light of these comments, and hope that they alleviate the reviewer's concerns.
> *"I would be convinced by a comparison of the performance of two identical models trained using and without the proposed two-stage approach, with the same number of FLOPS."*
We initially had two motivations for our two-stage training protocol, first to make efficient use of computational resources during early testing of multiple model configurations, and second to test the necessity of including hydrogen atoms during training of a model for MCG, due to the possibility of using fast, rule-based approaches to add hydrogens after generation of heavy atom coordinates (e.g., Chem.AddHs in RDKit). We achieved SOTA metric values after the first stage of training, showing that training without hydrogen is a viable approach. The primary motivation for the second stage of training, and the reason we reported those results in the manuscript, was for a direct comparison with previous methods, which all generate coordinates for all atoms, including hydrogens. After the second stage of training, metrics were similar to those found after the first stage. Unfortunately, we neglected to include the results after 100 epochs without hydrogens in the submitted version of the manuscript, but we will add those to the appendix and we report the values in the table below (no chirality correction):
|Model|Recall||||Precision||||
|-|-|-|-|-|-|-|-|-|
|| Coverage||AMR||Coverage||AMR||
||Mean|Median|Mean|Median|Mean|Median|Mean|Median|
|S23D-S-1/9 (C)|80.8|88.7|0.485|0.454|57.5|57.1|0.763|0.707|
|S23D-S-1/9 (S)|80.3|89.6|0.495|0.467|56.1|55.4|0.779|0.724|
|S23D-S-1/9 (M)|81.0|89.4|0.485|0.457|58.2|59.1|0.751|0.702|
|S23D-B-1/13 (S)|84.1|91.2|0.428|0.399|62.2|64.3|0.694|0.625|
|S23D-B-1/13 (M)|84.2|91.9|0.425|0.391|61.7|63.2|0.701|0.632|
To demonstrate efficiency of two-stage training, in an additional run we tested single-stage training with hydrogens at 50+35=85 epochs. Training time per epoch without hydrogens was approximately half of that with hydrogens, so the first 50 epochs of training with hydrogens is approximately equivalent to the first stage of training (100 epochs) without hydrogens. After 50 epochs of training with hydrogens, COV-R was 79.4, compared to 80.3 after 100 epochs without hydrogens.
The additional 35 epochs represent the second stage of training with hydrogens. The model results we obtained at 85 epochs were approximately the same as in two stage training, and are shown in the table (no chirality correction):
|Model|Recall||||Precision||||
|-|-|-|-|-|-|-|-|-|
|| Coverage||AMR||Coverage||AMR||
||Mean|Median|Mean|Median|Mean|Median|Mean|Median|
|S23D-S-1/9 (S)|81.1|88.9|0.481|0.459|58.8|58.3|0.744|0.699|
However, the two-stage training approach allows the researcher to do faster testing of different architectures, to use commodity GPUs for training, and facilitates hyperparameter tuning. The table below shows COV-R for single-stage training at intervals during training with hydrogens for 100 epochs (no chirality correction):
|Model|Number of Epochs||||
|-|-|-|-|-|
||25|50|75|100|
|S23D-S-1/9 (S)|75.4|79.4|80.0|81.4|
> *"... the methodology section could benefit from more details for readers who might not be familiar with the Variance Preserving SDE"*
We will add a paragraph to the Related Work section with an introduction to diffusion and flow matching models.
> *"For instance, why only considering chemical elements with frequent occurrences in the vocabulary? Why masking 10% of atoms only for 1% of the molecules?"*
The idea of using only frequent elements is taken from NLP, wherein rare tokens are not included in vocabularies and are replaced with the UNK token (here we used the MASK token). MASK is used to randomly replace other tokens, forcing the model to learn contextual representation. The probabilities 10% and 1% were chosen arbitrarily.
> *"The left-side of Figure 1 was a bit confusing on the first pass. ... I suggest you add additional arrows and explicitly write the input."*
We will update Figure 1 with a more detailed description of each transformer block.
> *"Can you add the RMSD for the examples provided in Figure 1?"*
RMSD values, from top to bottom, are 0.29, 0.85, 0.15. Note that we found an error in Figure 1. The images for the bottom molecule were extremely similar and we accidentally mixed the ground truth and generated images. We will correct this, and include RMSD values, in the revised manuscript.
---
Rebuttal Comment 1.1:
Comment: I appreciate the reviewers' clarifications and am pleased with the additional experiments. I have updated my score accordingly.
---
Reply to Comment 1.1.1:
Comment: We would like to thank the reviewers again for their comments, which helped us to shape the manuscript by providing additional clarifications and experiments that we think greatly improve the work. Thank you for taking the time to go through the paper in such detail. | Summary: In this paper the authors propose a new method for sampled-based molecular conformer generation that is built on top of a non-equivariant model. They show that unlike prior methods, with the right modifications to the architecture and the right training procedures, one can train non-equivariant architectures that do not require orders of magnitude more parameters than their equivariant counterparts.
Specifically, the paper proposes to use standard transformer blocks from LLaMA, first encoding the 2d graph, and then using 3d coordinates, themselves encoded with periodic encodings. The attention mechanism is biased so as to prefer attending to graph-nearby nodes. The model is then trained with a standard denoising diffusion objective to learn to approximate the score function. This training is split in two stage, where hydrogen atoms are only included in the structure in the second phase.
## update after rebuttal
Thanks for the rebuttal. I appreciate that the authors have run additional analyses to further provide evidence, specifically orthogonal evidence such as running PoseBusters. Answering the _why_ of a method is always more valuable that just its effects. As for scaling, I acknowledge that not all researchers possess the same amount of compute, but I'd remind us that scaling power laws go both ways: training smaller models is a valid way of obtaining scaling trends.
Claims And Evidence: The 3 core claims are that the proposed model achieves good performance, that chirality matters in evaluation, and that the proposed two-stage scheme is compute efficient. These claims are mostly supported by evidence. There is a fourth broader claim motivating the paper, which is that the specific methods used here scale better than prior work. I'd argue that the evidence here is quite limited, and is in fact, limited to 2 data points (8.6M and 25M parameters). While this is obviously enough to get a sense of the claim, there is not enough to get a deeper understanding of the scaling behaviors of the different choices made here (e.g. only 1/13 (S) is tested at 25M parameters).
There is another unfortunate lack of results, in that all results are about performance. There is a richness in molecular structures that is underexplored here. Why are sinusoidal encodings helpful? What do they model? Do they _actually_ scale better (it seems like their use in the 25M parameter was simply extrapolated from their being a good choice in the 8.6M regime)? Perhaps these encodings are better able to capture bond lengths, or angles, or whatnot. These are things that could be measured and bring much more clarity to the paper. As is, we've learned very little about the modifications introduced in this paper other than they work in this specific regime for this specific model.
Methods And Evaluation Criteria: The methods and evaluation are standard, and make sense here.
Theoretical Claims: Not applicable.
Experimental Designs Or Analyses: See Claims section.
Supplementary Material: I have looked through the appendix. It's quite interesting to see the error scaling linearly as a function of the number of atoms, but I do wonder if this paints the right picture. I suspect that if one measured the energy of those conformers, we would instead see an exponential increase in energy as a function of the number of atoms.
Relation To Broader Scientific Literature: I think the paper puts forward work that's important in the current context. The question very much remains open as to which class of model, equivariant or not, will end up being the most practical and end up scaling to the larger systems which one hopes these models will one day be applied. While this work by no means settles the debate, it is a welcome data point.
Essential References Not Discussed: Nothing I can think of.
Other Strengths And Weaknesses: The paper is quite straightforward, well explained.
Other Comments Or Suggestions: My main suggestion is really, as above, to do more to _teach_ the audience something about the work. Making numbers go higher is great, but what's even better is understanding more precisely what the mechanisms behind our methods are. This is the best way for science to progress.
Questions For Authors: Another major consideration in comparing to models like ETFlow is their very training method, i.e. flow matching. Do the authors have a sense of whether certain architectures are more amenable to flow matching than to diffusion training? Whether scaling behaviors would change at all?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for valuable comments. To address the comments, we made additional analyses and hope our response will alleviate the reviewer’s concerns.
> *"I'd argue that the evidence here is quite limited, and is in fact, limited to 2 data points (8.6M and 25M parameters). While this is obviously enough to get a sense of the claim, there is not enough to get a deeper understanding of the scaling behaviors of the different choices made here (e.g. only 1/13 (S) is tested at 25M parameters)."*
We selected two datapoints because models with greater than 25M parameters present practical difficulties in MCG, and the computational resources required to train and test larger models (e.g., 64M or 242M as in MCF) are not accessible to many researchers. To further demonstrate scaling behavior with a different model choice, we made an additional training run with 25M parameters with the MLP coordinate encoder, no chirality correction:
|Model|Recall||||Precision||||
|-|-|-|-|-|-|-|-|-|
|| Coverage||AMR||Coverage||AMR||
||Mean|Median|Mean|Median|Mean|Median|Mean|Median|
|S23D-B-1/13 (M)|84.6|91.9|0.412|0.381|62.4|64.1|0.684|0.613|
> *"Perhaps these encodings are better able to capture bond lengths, or angles, or whatnot."*
To study the effect of coordinate encodings on generated molecular structure, we ran additional tests from the PoseBusters package, including assessment of bond lengths, bond angles, aromatic ring flatness, planar double bonds, and internal steric clashes, on our generated molecules. Using the default criteria for "intramolecular validity" features in PoseBusters that a predicted feature (e.g. bond length) should be within +/- 25% of the reference for that feature, all 5 of the above features were perfectly captured, with >99.2% pass rate for all model variants that we reported in Table 1. Lowering the thresholds from 25% to 10% and 5% we saw metrics drop evenly across model variants. For instance, at 5% correct bond angles were found at a rate of 58.6% for S23D-S-1/9 (S), 56.4% for S23D-S-1/9 (M), and 54.8% for S23D-S-1/9 (C).
> *"It's quite interesting to see the error scaling linearly as a function of the number of atoms, but I do wonder if this paints the right picture. I suspect that if one measured the energy of those conformers, we would instead see an exponential increase in energy as a function of the number of atoms."*
We agree with the reviewer that error increases with molecule size. Considering energies and populations in the generated conformational ensemble is a topic we will investigate in future work.
> *" ... the authors have a sense of whether certain architectures are more amenable to flow matching than to diffusion training? Whether scaling behaviors would change at all?"*
We do not anticipate any particular transformer model having a unique advantage over others when applied to different architectures of diffusion or flow matching models. We used a diffusion model to compare our model with the MCF model, and the next step will be an update of the diffusion block to a more advanced architecture (e.g., Inductive Moment Matching). | null | null | null | null | null | null |
OpenworldAUC: Towards Unified Evaluation and Optimization for Open-world Prompt Tuning | Accept (poster) | Summary: The paper introduces OpenworldAUC, a unified evaluation metric for open-world prompt
tuning (OPT) that jointly assesses base-to-new detection (P1), domain-specific classification
(P2), and insensitivity to domain distribution (P3). To optimize OpenworldAUC, the authors
propose Gated Mixture-of-Prompts (GMoP), which employs domain-specific prompts and a
gating mechanism to balance detection and classification. Theoretical guarantees for
generalization are provided, and experiments on 15 benchmarks.
Claims And Evidence: The authors provide a detailed analysis of the limitations of existing metrics, such as HM,
OverallAcc, and AUROC, and demonstrate how OpenworldAUC overcomes these limitations.
The effectiveness of the proposed GMoP framework is supported by extensive experimental
results on multiple benchmarks.
Methods And Evaluation Criteria: Methods: OpenworldAUC: The pairwise formulation (jointly ranking detection and
classification correctness) is novel and aligns well with OPT requirements. The use of
domain-specific prompts and gating mechanisms is logical for balancing conflicting subobjectives. The pseudo partition strategy and zero-shot classifier for new domains address
practical constraints (unseen classes).
Evaluation Criteria: Benchmarks cover diverse scenarios (recognition, domain generalization,
imbalance), and metrics include OpenworldAUC, HM, AUROC, and OverallAcc. However,
computational efficiency is not discussed.
Theoretical Claims: The paper presents theoretical guarantees for the generalization of the GMoP framework.
While the proofs are not provided in the main text, the authors reference the appendix for
detailed proofs.
Experimental Designs Or Analyses: The experimental designs and analyses are well-structured and valid. The authors evaluate
the proposed method on a diverse set of benchmarks, including open-world recognition and
domain generalization tasks. The experiments are designed to test the method's
performance under various conditions, such as different domain distributions and imbalance
settings.
Supplementary Material: I reviewed the supplementary material. I focused on the following parts:
Proof for the Propositions: Detailed proofs for Propositions 3.1, 4.1, 4.2, 4.3, and 5.1 were
examined. These proofs provide a solid theoretical foundation for the claims made in the
paper.
Generalization Bound: The detailed proof for the generalization bound in Section C was
reviewed. This section includes key lemmas and the formal proof, which are crucial for
understanding the theoretical guarantees of the proposed method.
Additional Experimental Setup: I reviewed the task descriptions, datasets, competitors,
implementation details, and efficient calculation of OpenworldAUC. This information is
essential for evaluating the practical applicability and robustness of the proposed method.
Additional Experimental Results: I examined the additional results for the open-world
recognition task, open-world domain generalization task, sensitivity analysis, and ablation
studies. These results provide comprehensive evidence of the method's performance across
various scenarios.
Relation To Broader Scientific Literature: None
Essential References Not Discussed: None
Other Strengths And Weaknesses: Strengths:
The paper is well-structured and clearly presents the problem, the proposed solution, and
the experimental results.
Weaknesses:
While the paper discusses the potential impact on fairness-sensitive real scenarios, it lacks
specific examples or case studies demonstrating the application of the proposed method in
real-world settings.
Other Comments Or Suggestions: The implementation details section could benefit from more specific information about the
training strategy and hyperparameters used for each competitor.
Questions For Authors: How does the pseudo partition strategy generalize to truly unseen domains (e.g., crossdataset evaluation)?
While the paper discusses the potential impact on fairness-sensitive real-world scenarios, it lacks specific examples or case studies demonstrating the application of the proposed method in real-world settings. Could the authors provide concrete examples or case studies that illustrate how the proposed method can be applied in such scenarios?
The Gated Mixture-of-Prompts (GMoP) framework introduces multiple prompts and a gating mechanism. Is this approach practical in terms of computational cost and implementation complexity?
How does the performance gain from GMoP compare to the increased complexity?
Given that the framework may be too complex for real-world applications, especially in scenarios with limited computational resources, could the authors provide a detailed analysis of the trade-offs between performance and complexity?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We deeply appreciate your time and effort in providing us with such constructive comments. We would like to respond to them as follows:
> Q1: Generalization to truly unseen domains, e.g., cross-dataset evaluation.
Following your insightful suggestion, we extend our evaluation to investigate the generalization performance of our optimization framework in more challenging cross-dataset open-world scenarios. The experimental settings are listed as follows:
- We train our prompt exclusively on the ImageNet-500 base domain and subsequently evaluate model performance on a combined test set containing both the original ImageNet-500 classes and new categories from seven external datasets: `FGVC-Aircraft`, `Caltech-101`, `Stanford-Cars`, `DTD`, `EuroSAT`, `SUN397`, and `UCF101`.
- To ensure a fair evaluation of open-world generalization, we meticulously remove any categories from these external datasets that overlapped with the ImageNet-500 class space before evaluation.
The comprehensive results of this cross-domain evaluation, which rigorously tests the model's ability to handle both known and new categories across diverse visual domains, are presented in table below. The experimental results further speak to the efficiency of our method.
||AC|C101|Cars|DTD|ES|SUN|UCF|Avg|
|-|-|-|-|-|-|-|-|-|
|CLIP|16.21|60.76|45.59|31.13|30.33|44.31|45.83|39.17|
|CoOp|12.06|62.50|44.67|26.75|25.70|44.72|45.38|37.40|
|MaPLe|16.36|66.29|47.34|31.89|31.56|48.72|48.74|41.56|
|PromptSRC|17.75|65.89|49.08|33.94|34.14|49.53|**50.06**|42.91|
|KgCoOp|16.15|64.84|47.75|31.46|33.39|48.34|49.40|41.62|
|DePT-Kg|17.47|66.54|49.98|33.96|35.17|49.38|49.62|43.16|
|DeCoOp|16.95|66.35|50.04|33.91|36.12|49.42|49.65|43.21|
|TCP|16.77|65.86|47.60|32.69|33.50|48.81|49.31|42.08|
|Ours|**18.18**|**66.58**|**50.25**|**34.21**|**36.84**|**49.65**|49.87|**43.65**|
> Q2: The trade-offs between **performance** and **complexity** of mixture-of-prompts.
Thank you for your constructive suggestion! In fact, we have included the prompt complexity analysis in Fig.5 in the initial submission. To highlight the efficiency of our method, we present those numerical results in the table below, along with additional results on inference speed.
- The table compares average performance on three open-world tasks and learnable parameter counts (#param) across methods
|Method|Recognition task | Imbalance recognition|Domain adaption | #param |
|-|-|-|-|-|
|CLIP|47.83|54.59|47.31|0|
|CoOp|47.59|53.97|48.93|8.2k|
|Maple|57.35|63.05|51.89|3555.1K|
|PromptSRC|58.21|65.44|52.44|46.1k|
|DePT|59.00|66.06|51.35|292.9k|
|Gallop|56.90|63.67|49.01|606.2k|
|DeCoOp|58.77|66.87|51.98|30.7K|
|TCP|58.90|65.13|51.34|331.9k|
|Ours|**60.94**|**68.84**|**52.64**|26.6k|
- The table presents a comparison of per-sample inference time, averaged across ten datasets
|CoOp|DePT|Gallop|DeCoOp|Ours|
|-|-|-|-|-|
|0.00117S|0.00167S|0.00148S|0.00273S|0.00180S|
According to these empirical results, we answer the questions raised by the reviewer:
- Our method outperforms SOTA methods on the average performance of three open-world tasks with **a smaller parameter cost**. While recent SOTA methods design **deep** prompt structures, we optimize **multiple shallow** prompts in the detector and classifiers.
- While slightly slower than CoOp due to prompt mixing, our approach runs 34% faster than DeCoOp and matches DePT/Gallop in speed, which maintains practical inference speeds.
- **The performance gain outweighs complexity**. Our method outperforms SOTA methods by **1.94% –13.55%** across tasks while using **≤ 9% parameters** of methods like Maple or DePT. The moderate speed trade-off (slightly slower than CoOp) is justified by significant performance improvements.
> Q3: Fairness-sensitive real-world scenarios application of the proposed method
The **cross-dataset task** mentioned above involves a significant **imbalance between the base and new domains**, as shown in the table below. The base domain includes 500 ImageNet categories with 25,000 testing samples.
|New domain|#categories|class-imbalance|#samples|sample-imbalance|
|-|-|-|-|-|
|StanfordCars|196|2.6|8041|3.1|
|UCF101|101|5.0|3783|6.6|
|Caltech101|84|6.0|2135|11.7|
|DTD|47|10.6|1692|14.8|
|EuroSAT|10|50.0|8100|3.1|
|Sun397|392|1.3|19600|1.3|
|FGVC_aircraft|100|5.0|3333|7.5|
Accuracy-driven metrics often favor the majority domain of ImageNet, creating fairness risks in scenarios with extreme data imbalances (e.g., 15× sample or 50× class ratios). These imbalances mirror real-world cases where critical minority samples are suppressed by dominant base classes. OpenworldAUC overcomes this bias by fairly evaluating all classes through pairwise ranking. Unlike traditional metrics that amplify imbalance-related errors, OpenworldAUC improves reliability in critical applications.
---
Rebuttal Comment 1.1:
Comment: The authors have addressed my concern. I will raise my score.
---
Reply to Comment 1.1.1:
Comment: Thank you so much for your feedback and the improved score. Following your suggestions, we will enrich the content of our paper in the final version. | Summary: This paper explore a new evaluation metric openworldauc for the practical open-world prompt tuning (OPT) task, which jointly measure the inter-domain detection and intra-domain classification performance and remains insensitive towards the varying data distributions. Further, the mixture-of-prompt learning framework GMoP is proposed to optimize openworldauc during training. Besides, generalization analysis are conduced to support the method theoretically. Comprehensive experiments on various benchmark show the advantages of the proposed metric and the effectiveness of the proposed framework.
Claims And Evidence: The submission provides well-substantiated claims supported by both theoretical and empirical evidence.
- The paper provides sufficient arguments to analyze the limitations of existing metrics and demonstrates the superior properties of the proposed OpenworldAUC metric from both theoretical and empirical perspectives.
- The paper also demonstrates the effectiveness of the proposed GMoP method through a novel generalization bound and comprehensive empirical studies.
Methods And Evaluation Criteria: The proposed evaluation metric and the proposed learning framework make sense for the open-world learning at hand. To be specific,
- Theoretical results show that the proposed openworldauc is consistent with the goal of OPT while escape from all identified limitations. In light of this, this metric can evaluate models more robustly and guide model optimization more effectively, thereby providing valuable insights to the open-world learning community and advancing the field.
- To purist of this, the proposed mixture-of-prompt framework co-optimizes the prompts for the detector and classifier using a sparse gating mechanism. This innovative approach offers new insights and design principles for optimizing models in open-world scenarios.
Theoretical Claims: I generally review the correctness of the proofs for several theoretical claims within the manuscript, which include:
- The proofs related to metric analysis, corresponding to Proposition 4.1, Proposition 4.2, and Proposition 4.3.
- The proof for Proposition 5.1.
- The proof related to the generalization bound.
Overall, the theoretical analysis is both correct and intuitively sound. In particular, the proof of the generalization bound demonstrates ingenuity, as the authors decompose the complex generalization gap into several parts and analyze the source of each error term, resulting in an informative and illustrative generalization bound.
In the context of the generalization bound for detector optimization, the authors use covering number and ϵ*ϵ*-net arguments to derive the generalization bound. My minor concern is the challenge of this analysis. The authors should provide more explanation for why traditional Rademacher Complexity-based theoretical analysis cannot be directly applied here.
Experimental Designs Or Analyses: I review the experimental setup, results, and analysis in both the main text and appendix. In my view, the experimental setup in this paper is well-justified and the results are convincing. Notably, the authors first incorporate the additional experiments on distribution imbalance compared to previous research. The experiments further validate the comprehensiveness of OpenworldAUC and its insensitivity to imbalanced data distributions. The lightweight learning framework outperforms competitors in OpenworldAUC and achieves a superior balance between the detection metric AUC and the classification metric HM, confirming its effectiveness.
However, some details regarding the calculation of the detection score $r(\cdot)$ in the main text could be improved. The authors mention in the appendix that the calculation of $r(\cdot)$ can utilize new class names in this experimental setting for all competitors. I recommend that the authors emphasize this implementation detail in the main text.
Supplementary Material: I have generally reviewed the supplementary appendix, including the text, proof and additional experimental results.
Relation To Broader Scientific Literature: Towards the practical and challenging Openworld prompt tuning (OPT) task, prior methods adopt the HM metric , AUROC metric and the Overall-accuracy metric to evaluate the model, here, the authors argue that these metrics suffer from three types of limitations. To this end, this paper proposed the novel metric openworldauc and the corresponding empirical learning framework, which are the key contributions to the openworld learning and prompt tuning communities.
Essential References Not Discussed: The paper covers most of the essential related works in the field.
Other Strengths And Weaknesses: The strengths of this paper are as follows:
- This paper provides a systematic analysis of the limitations of existing metrics for the OPT task. The proposed OpenworldAUC metric, which features a concise formulation, effectively addresses these limitations and demonstrates desirable properties from both theoretical and empirical perspectives.
- The paper introduces a novel and unified learning framework designed to optimize the OpenworldAUC metric. This framework dynamically balances multiple prompts targeting specific goals through a sparse gating mechanism. To support this approach, the authors derive a novel and informative bound, a contribution rarely explored in prior literature.
- The empirical results are convincing. They highlight the limitations of the existing accuracy metric and underscore the advantages of OpenworldAUC. Additionally, the results validate the effectiveness of the proposed learning framework.
My minor concerns are as follows:
- The authors should provide a more detailed explanation of the challenges associated with generalization analysis in the main text.
- The authors should elaborate further on the calculation of the base-domain confidence score \( $r$ \).
- The authors should supplement the ablation studies by investigating the performance of the gating mechanism. Specifically, it would be valuable to examine how the model performs when the gating mechanism is removed and replaced with a simple binary 0-1 mask for selecting correctly classified samples.
Overall, I hold a clear positive view of the proposed novel metric and the corresponding empirical learning framework. I believe this paper makes significant contributions to the open-world learning and prompt-tuning community and meets the standards of ICML.
Other Comments Or Suggestions: Please see the weakness part above.
Questions For Authors: Please see the weakness part above.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We deeply appreciate your time and effort in providing us with such constructive comments. We would like to respond to them as follows:
> **Q1:** More detailed explanation of the challenges associated with generalization analysis.
- Our main theoretical findings focus on how well our optimization method generalizes to **unseen** test data distributions with **new classes**. According to standard learning theory, we measure generalization error by comparing: (a) the expected error across the joint distribution of the real new domain $\mathcal{Y}$ and test data $\mathcal{D}$, and (b) the empirical average across the pseudo new domain $\hat{\mathcal{Y}}$ and training data $\mathcal{S}$. To construct the pseudo new domain, we perform $K$ pseudo base-to-new partitions $\hat{\mathcal{Y}}^{(k)}, k \in [1, K]$, which incorporates a hierarchical sampling approach, class sampling, and data sampling. This leads to a **hierarchical** structure of stochastic errors, presenting a significant challenge in our theoretical analysis.
- The another major challenge arises from the AUC-type risk $\ell_{sq}(r(x_b;\theta_r) - r(x_n;\theta_r))$ in optimizing the detector $r$. The standard generalization analysis techniques, such as Rademacher complexity-based theoretical arguments[1,2], require the loss function to be expressed as the sum of independent terms. Unfortunately, the pairwise AUC-type risk can not satisfy this assumption. For instance, the optimization functions for the detector, $\ell_{sq}(r(x_b^i;\theta_r) - r(x_n^j;\theta_r))$ and $\ell_{sq}(r(\tilde{x}_b^{i};\theta_r) - r(\tilde{x}_n^j;\theta_r))$, are interdependent if any term is shared (e.g., $\tilde{x}_n^j = x_n^j$ or $\tilde{x}_b^i = x_b^i$). To this end, in this study, we use **covering numbers** and **$\epsilon$-net arguments** [3] in the subsequent proof to derive the generalization bound.
> **Q2:** The authors should elaborate further on the calculation of the base-domain confidence score $r$.
Following the setting in [4], the new domain class names are known during testing in the OPT task and the base-domain confidence score is derived from the maximum probability over the base domain. A high value of $r$ means the high probability of the sample belonging to the base domain. Given the `image_features` [1,512] and `text_features` [512,C] where 512 means the dimension of the latent feature and $C$ is the number of all classes, we calculate the `prob = Softmax(image_features·text_features,dim=1)` and then base domain confidence score can be obtained via $\max_{1\le j\le C_b}\textbf{prob}[:,j]$, where $C_b$ is the number of base classes.
> **Q3:** The ablation studies of the gating mechanism.
Following your suggestion, we further explore the effectiveness of the gating mechanism. Replacing the sigmoid-weighted gate with a fixed 0-1 mask ("Ours 0-1 Gate") slightly improves over removing the gate entirely ("Ours w/o Gate") but under-performs compared to the adaptive sigmoid gate. It validates the effectiveness of sparse sample selection mechanism and the gate approximation mechanism.
| Method | Avg. | IN | C101 | Pets | Cars | F102 | Food | AC | SUN | DTD | ES | UCF |
|-|-|-|-|-|-|-|-|-|-|-|-|-|
| Ours w/o Gate | 60.29 | 52.49 | 92.56 | 89.47 | 55.06 | 72.59 | 79.3 | 10.97 | 56.96 | 40.63 | 51.27 | 61.85 |
| Ours 0-1 Gate | 60.65 | 52.61 | 92.77 | 89.50 | 55.20 | 72.71 | 79.92 | 11.08 | 57.13 | **40.72** | 52.78 | 62.75 |
| Ours Sigmoid Gate | **60.94** | **52.64** | **92.81** | **89.77** | **55.31** | **72.79** | **81.25** | **11.42** | **58.54** | 40.37 | **53.09** | **62.39** |
> [1]Rademacher and Gaussian Complexities: Risk bounds and structural results. COLT, vol. 2111, pp. 224–240, 2001.
>
> [2]Foundations of Machine Learning. MIT Press, 2012.
>
> [3]Probability in Banach Spaces: Isoperimetry and processes. 1991.
>
> [4]DECOOP: Robust Prompt Tuning with Out-of-Distribution Detection. ICML 2024. | Summary: This paper addresses the open-world prompt tuning problem and uncovers the fundamental limitations of current evaluation metrics in this field. To tackle this challenge, the authors propose a novel, unified metric, OpenWorldAUC, which jointly evaluates the model’s detection and classification performance without requiring prior domain knowledge. On top of this, a Gated Mixture-of-Prompts (GMoPs) approach is introduced to optimize OpenWorldAUC directly. Both theoretical analyses and empirical studies consistently demonstrate the effectiveness of the proposed method.
Claims And Evidence: The claims made in the work are supported by clear and convincing evidence.
Methods And Evaluation Criteria: This study presents a well-motivated and rigorous framework to open-world prompt tuning.
Theoretical Claims: I have carefully checked the proofs of theoretical arguments including Prop.4.2, Prop.4.3, Prop.5.1 and Thm.5.2, and confirmed their correctness.
Experimental Designs Or Analyses: Yes. The experiment designs of this paper follow the standard evaluation setups in this area and the authors compare the proposed method/metric with 10 recent state-of-the-art methods. Overall, the empirical studies are convincing and comprehensive.
Supplementary Material: I review the supplementary material especially the proof of the theoretical parts.
Relation To Broader Scientific Literature: This paper borrows the ideas from open-world recognition, prompt tuning, and mixture-of-experts models, contributing a novel perspective on evaluation and optimization in open-world prompt learning. The proposed method has broad application potential in various AI-driven scenarios.
Essential References Not Discussed: No. The literature included in this paper is satisfactory.
Other Strengths And Weaknesses: Novel Open-world metric for reliable evaluations: Existing methods assess model performance separately on base and unseen domains, resulting in inconsistent evaluations. This work provides a systematic analysis to reveal the limitations of existing metrics and proposes a novel metric, OpenworldAUC, to address these shortcomings. Furthermore, an end-to-end algorithm is developed that enables the model to learn directly from OpenworldAUC. This approach establishes a more comprehensive and reliable evaluation framework, thereby facilitating more effective model optimization in open-world scenarios.
Sufficient theoretical guarantee: This paper conducts theoretical analysis of the proposed learning framework, which provides an interesting perspective to understand the work mechanism for OpenworldAUC. The results show that optimizing OpenworldAUC could lead to a satisfactory generalization performance.
Comprehensive experiments: The authors perform extensive experiments to showcase the advantages of the proposed method, including 11 benchmarks and 10 competitors. Additionally, ablation studies are presented to compare the effectiveness of OpenWorldAUC against previous counterpart metrics. Empirical results consistently show that maximizing OpenWorldAUC leads to superior performance in both open-world recognition and open-world domain generalization tasks.
However, I have the following minor concerns:
1. The necessity of introducing the Gated Mixture-of-Prompts remains unclear. Why are the inputs to each component distinct? A more detailed explanation is needed to justify this design choice.
2. How is \( OP_0 \) derived from Proposition 5.1? Additionally, why is it valid to approximate the 0-1 loss using the square loss? I believe the authors should provide a clearer explanation, along with relevant references, to aid readers who may be less familiar with this topic.
3. The motivation behind the zero-shot new domain classifier is not well articulated. Why is \theta_{h}^* necessary, and what is the underlying intuition for its inclusion? A more thorough discussion would help clarify its role in the overall framework.
Other Comments Or Suggestions: N/A
Questions For Authors: Please refer to the Weaknesses part.
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: Thanks for your constructive comments, and we would like to make the following response.
> **Q1:** The necessity of introducing the Gated Mixture-of-Prompts remains unclear. A more detailed explanation is needed to justify this design choice.
The three components $g$, $h$, and $r$ have **conflicting** objectives: $g$ classifies base samples (its objective function relies on base samples), $h$ classifies new samples (its objective function uses new samples), and $r$ ranks base-new pairs (its objective function uses base-new sample pairs). Using a single prompt leads to **mutual interference**. To address this, our design assigns distinct prompts to each component, separating their optimization processes. A gate mechanism, which uses sigmoid-weighted confidence scores, adaptively combines their outputs. This ensures that the $r$-prompt focuses on ranking correctly classified pairs. By doing so, we prevent conflicts and allow each prompt to specialize for its task. Empirical results from six datasets confirm the difficulty of optimizing OpenworldAUC with a single prompt and demonstrate the effectiveness of our mixture-of-prompts strategy.
||ImageNet|SUN397|DTD|Cars|UCF101|Flowers102|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|Single Prompt|50.75|48.47|23.59|47.03|49.88|50.79|
|GMoP|52.64|58.54|40.19|55.31|62.39|72.79|
> **Q2:** How is ( OP_0 ) derived from Proposition 5.1? Additionally, why is it valid to approximate the 0-1 loss using the square loss? I believe the authors should provide a clearer explanation, along with relevant references, to aid readers who may be less familiar with this topic.
($OP_0$) is derived by replacing the population risk in Proposition 5.1 with its **empirical approximation** (since $\mathcal{D}$ is unknown). Besides, following the framework of surrogate loss, we replace the non-differential 0-1 loss with a convex loss function $\ell$, such that $\ell(t)$ is an upper bound of $\ell_{0,1}(t)$. Note that if the scores live in $[0,1]$, standard loss functions such as $\ell_{sq}(t) = (1-t)^2$ often satisfy this constraint. This smooth approximation enables gradient-based optimization while preserving ranking semantics. The details can also be found in the recent survey [1,2,3].
> **Q3:** The motivation behind the zero-shot new domain classifier is not well articulated. A more thorough discussion would help clarify its role in the overall framework.
Thank you for your constructive suggestion! The motivation behind the zero-shot new domain classifier falls into the following two aspects.
**Balance efficiency and accuracy:** Recent prompt-tuning methods show that learnable prompts, optimized on the base domain with only CE loss, may hurt new-domain classification performance compared to zero-shot CLIP with fixed prompts. The results on the NewAcc of the Zeroshot CLIP and the PT baseline CoOp confirm this point, as shown in the table below.
| | ImageNet | SUN397 | DTD | Cars | UCF101 | Oxford_flowers |
| :-----------: | :------: | :----: | :---: | :----------: | :----: | :------------: |
| Zeroshot-CLIP | **68.10** | **75.62** | **60.51** | **75.02** | **78.64** | **77.19** |
| CoOp | 67.03 | 71.28 | 40.19 | 56.37 | 52.96 | 66.57 |
To improve generalization, many recent methods focus on **maintaining alignment** (loss design) with zero-shot CLIP while using **structured prompts** (structure design) to preserve zero-shot knowledge. These methods slightly outperform zero-shot CLIP in term of new domain accuracy but introduce additional computational and storage overhead. To purse a tradeoff between the, we just design a hand-crafted prompt.
**Decoupling Classification in different domains:** Since the base-to-new detector effectively distinguishes base and new samples in open-world scenarios, we can **decouple** the learnable prompt $\theta_g$ (for base-domain classification) and the fixed prompt $\theta_{h}^*$(for new-domain classification) and achieve promising performance. In other words, using different prompt parameters for classification task in different domains is feasible and effective in practical scenarios.
To further validate the effectiveness of zero-shot new domain classifier, we replace $\theta_{h}^*$ with $\theta_g$ and observe the overall OpenworldAUC performance drops, confirming the necessity of this decoupling and the effectiveness of the zero-shot new domain classifier.
||ImageNet|SUN397|DTD|StanfordCars|UCF101|Oxford_flowers|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|Ours w/o ZS h|51.78|55.27|34.77|48.54|53.55|62.76|
|Ours|**52.64**|**58.54**|**40.19**|**55.31**|**62.39**|**72.79**|
> [1] Auc maximization in the era of big data and ai: A survey. ACMComputing Surveys (CSUR), 2022.
>
> [2] On the consistency of AUC pairwise optimization. IJCAI, 2015.
>
> [3] Stochastic auc maximization with deep neural networks. ICLR,2019.
---
Rebuttal Comment 1.1:
Comment: The authors have addressed my concern on Gated Mixture-of-Prompts, 0-1 loss approximation, and new domain classifier. I'll raise my score!
---
Reply to Comment 1.1.1:
Comment: We are grateful for your comments and the improved rating. We will incorporate all your suggestions to strengthen our paper's content in the revision. | Summary: Since existing evaluation metrics cannot comprehensively assess performance in open-world prompt tuning, this paper proposes a unified evaluation metric called OpenworldAUC. This metric not only measures the detection capability of base/new samples (P1) and classification accuracy (P2), but also ensures robustness against changes in the proportion of base to new samples. Additionally, a multi-prompt combination method based on a gating mechanism is proposed to optimize the OpenworldAUC metric.
Claims And Evidence: Yes, the claims made in the submission are supported by clear and convincing evidence.
Methods And Evaluation Criteria: Yes, the proposed methods and/or evaluation criteria are sensible for the problem at hand.
Theoretical Claims: Yes, I checked the correctness of all proofs for the theoretical claims, and they are correct.
Experimental Designs Or Analyses: Yes, I checked the soundness and validity of all experimental designs and analyses, and they are appropriate and well-executed.
Supplementary Material: Yes, I reviewed the supplementary material in its entirety.
Relation To Broader Scientific Literature: The key contributions of this paper are closely related to the broader scientific literature on evaluation metrics for open-world recognition tasks. Existing metrics, such as HM, Overall Accuracy, and AUROC, have been widely used but suffer from specific limitations. These metrics fail to simultaneously address three critical requirements: (P1) distinguishing between base and new classes, (P2) ensuring correct classification, and (P3) adapting to varying class distributions. Building on these prior findings, the authors propose a novel evaluation metric, OPEB, which aims to address these limitations and provide a more comprehensive framework for evaluating open-world recognition systems.
Essential References Not Discussed: No, the paper cites all essential references necessary to understand its key contributions. The authors have adequately discussed prior related findings and provided a comprehensive context for their work.
Other Strengths And Weaknesses: Strengths:
Innovation: This paper proposes a new evaluation metric, OpenworldAUC, to overcome the limitations of existing evaluation methods.
Method effectiveness: The GMoP method introduced in this paper optimizes detection and classification through multi-prompt combinations, achieving outstanding performance on multiple state-of-the-art tasks.
Theoretical support: Through theoretical reasoning, the paper demonstrates that OpenworldAUC offers more stable evaluations and that the GMoP training objective exhibits strong generalization.
Experimental comprehensiveness: The results from testing on 15 datasets and various ablation experiments validate the robustness of this approach.
Weaknesses:
Data distribution sensitivity: Imbalanced sampling of base/new data may affect detection optimization.
Limited generalization ability: Unseen new class data may lead to overfitting on the base class, impacting detector performance.
High optimization difficulty: Manual adjustment of loss weights is required, and the approach is sensitive to hyperparameters.
High computational complexity: Multiple prompts are still needed during inference, making it unsuitable for low-compute devices.
Other Comments Or Suggestions: No.
Questions For Authors: Based on the "Weaknesses" section, I have the following concerns:
1. Establishing a fair and objective metric can reflect the true performance of a model and reveal its current limitations. However, can the proposed OpenworldAUC in this paper provide a fine-grained analysis of the model's various capabilities and effectively uncover its limitations?
2. The quality of the prompt determines the model's classification performance. If the prompt is poorly designed, classification performance may degrade. How does the author ensure the quality of the prompts?
3. The GMoP method still requires multiple prompts during the inference stage. Does this introduce additional computational and storage overhead?
4. GMoP requires manual tuning of multiple loss function weights. How does the author ensure the optimal weights across different datasets?
5. Since the "new" class is unknown during training, the paper simulates this through a pseudo base-to-new partitioning approach. How does the author ensure the accuracy of this partitioning method?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thanks for your valuable comments! Due to space constraints, we include full tables and figures https://anonymous.4open.science/r/R-4D06. References to Tab.X, Fig.X correspond to those provided in this link.
> Q1: OpenworldAUC for fine-grained model capability analysis
The OpenworldAUC comprehensively evaluates: 1) base-to-new detection 2) base classification 3) new classification. A high OpenworldAUC indicates the model performs well on three. To diagnose low OpenworldAUC, we can further check three sub-metrics: AUROC, BaseAcc, NewAcc. To validate this, we present fine-grained results on four datasets shown in the table below and Tab.1, Fig.1, which reveal:
- OOD-focused methods (Gallop) excel at BaseAcc-AUROC but struggle with new domain classification
- Base-to-new methods (DePT) show weaker detection performance
- Our method achieves better OpenworldAUC, indicating improved trade-offs across three, which further validates the comprehensiveness of OpenworldAUC.
|F102|Baseacc|Newacc|AUROC|OpenworldAUC|
|-|-|-|-|-|
|DePT|97.68|75.38|92.83|69.46|
|Gallop|**98.60**|70.94|94.50|65.69|
|DeCoOp|93.90|76.24|96.84|70.28|
|Ours|96.53|**77.16**|**96.95**|**72.79**|
> Q2: How to ensure the quality of prompts
The prompts of base classification and base-to-new detection can be **automatically optimized** via our carefully designed loss function, which is a paradigm widely validated in existing prompt tuning research. Additionally, the prompt of new classification is hand-crafted to alleviate overfitting on the base training set. Unlike the naive prompt template "a photo of {}", in our initial submission, we choose a more informative prompt template based on the base domain, further ensuring the new-domain classification performance, shown in the table below and Tab.2.
|Prompt for ImageNet|NewAcc|OpenworldAUC|
|-|-|-|
|a photo of {}|68.12|51.43|
|Prompt ensemble:a {} in a video game.art of the {}.a photo of the small {}...|70.46|52.64|
> Q3: Computational complexity of GMoP
In fact, we have included the prompt complexity analysis in Fig.5 in the initial version. To highlight the efficiency of our method, we present those numerical results in the table below and Tab.3, along with additional results on inference speed.
- Our method outperforms SOTA methods on the average performance of three open-world tasks with a smaller parameter cost. While recent SOTA methods design **deep** prompt structures, we optimize **multiple shallow** prompts in detector and classifiers.
||Recognition task|Domain adaption|#param(k)|
|-|-|-|-|
|CoOp|47.59|48.93|8.2|
|Gallop|56.90|49.01|606.2|
|Maple|57.35|51.89|3555.1|
|DeCoOp|58.77|51.98|30.7|
|DePT|59.00|51.35|292.9|
|Ours|**60.94**|**52.64**|26.6|
- We measure inference speed by comparing average processing times per sample across ten datasets, testing representative methods and ours. While slightly slower than CoOp due to prompt mixing, our approach runs 34% faster than DeCoOp and matches DePT/Gallop in speed(S)
|CoOp|DePT|Gallop|DeCoOp|Ours|
|-|-|-|-|-|
|0.00117|0.00167|0.00148|0.00273|0.00180|
>Q4: The choice of multiple loss function weights across different datasets
It's a pity that our method is not fully understood. In fact, our optimization framework involves only one loss weight λ of the CE regularization added to the AUC loss, which has been discussed in App.E.6 in the initial version. Sensitivity tests across four datasets show consistent performance when λ∈[1/2,1], with SOTA results in this range, shown in the table below and Tab.4
||C101|SUN|
|-|-|-|
|DePT|92.74|56.42|
|DeCoOp|92.72|57.00|
|λ=1/4|92.64|57.59|
|λ=1/2|92.96|58.92|
|λ=3/4|92.88|58.86|
|λ=1|92.81|58.54|
> Q5: The generalization of pseudo base-new partition
The foundation model itself has strong base-to-new generalization ability. Our task is to enhance such ability by leveraging base training data. To this end, we adopt the following partition strategy to simulate new class detection.
- We perform **multiple** (K) base-to-new partitions to ensure statistical stability of the new class simulation
- We ensure K pseudo base classes **fully cover** the base class
- Thm.5.2 suggests increasing K reduces such approximation error. Experiments on two datasets confirm this shown in the table below and Tab.5. We usually set K=3 to balance performance and efficiency
|SUN397|AUROC|OpenworldAUC|
|-|-|-|
|K=1|84.45|53.16|
|K=2|87.57|56.14|
|K=3|90.70|58.54|
|K=4|91.02|58.63|
|K=5|91.25|58.71|
> Q6: Imbalanced sampling of base/new data may affect detection optimization
Test Imbalance: Since true new-class data is unavailable during training, test imbalance doesn't impact optimization
Pseudo Imbalance: We sample pseudo base/new pairs from true base for AUC loss training. AUC's distribution insensitivity ensures robustness to pseudo base-to-new imbalance, validated by stable performance with varying ratios on DTD
|b/n ratio|AUROC|OpenworldAUC|
|-|-|-|
|2:1|78.58|40.37|
|3:1|78.30|40.32|
|5:1|78.45|40.39| | null | null | null | null | null | null |
Representative Language Generation | Accept (poster) | Summary: This paper introduces a theoretical framework to characterize generative models’ capacities/abilities to produce samples that reflect the diversity seen in the data whose distribution the model is trying to approximate. Such characterizations are of interest to the machine learning community as they let us quantitatively define whether different groups in the support of the data-generating distribution are properly represented by the model of interest. The paper outlines the criteria by which a generator fulfills different degrees of representative-ness (these build off of definitions given in prior work) and then consider the theoretical feasibility of different models to meet these criteria. They provide information-theoretic bounds and computational bounds (where possible). They present both positive and negative results for whether representative generation can be fulfilled for different generator–hypothesis class pairs (e.g, that for a certain hypothesis class + alpha, no generator can achieve representative generation in the limit using only a finite number of membership queries).
Claims And Evidence: Yes, main claims are stated as theorems, corollaries, and lemmas, which are supported by proofs (outlines of proof are given in body of paper and formal proofs in appendix)
Methods And Evaluation Criteria: There aren’t really any evaluations in this paper, as its providing a theoretical framework.
Theoretical Claims: There are numerous theoretical claims in the paper. I was unable to check all rigorously. The informal arguments given in the paper (formal proofs were in appendix) intuitively made sense
Experimental Designs Or Analyses: N/A
Supplementary Material: Yes, the review of related work was thorough and well-written.
Relation To Broader Scientific Literature: The paper expands on other works’ attempts to formalize diversity and representativeness in generative models. They discuss the relationship to concepts such as mode collapse ad notions from algorithmic fairness, such as multiaccuracy and multicalibration. I am unfamiliar with the prior related work but the claimed extension this work offers seems worthwhile
Essential References Not Discussed: There are several works on generative model evaluation that use notions of precision and recall (e.g, Sajjadi et. al. 2018, Kynkäänniemi et. al. 2018) in the attempt to assess whether models capture the diversity of the data-generating distribution. These should probably be discussed
Other Strengths And Weaknesses: The paper doesn’t offer any insights/recommendations that let the reader bridge the gap between its theoretical results and various practical implementations. To me, this is a major shortcoming. It’s unclear how much the specificity of the assumptions (e.g., UUS and finite support) behind the results limit their direct applicability to real-world systems.
Other Comments Or Suggestions: Consider using fewer scare quotes. These are typically used to indicate the inaccurate use of a term. Given that the merits of this paper are a theoretical framework, if you need a different term than the one you’re putting scare quotes around, I would recommend defining such a term.
Questions For Authors: Following from the last comment, why is “sample complexity” in quotes throughout the paper? Is the notion not actual sample complexity? This wasn’t evident to me
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their helpful comments. We address the reviewer's concerns below.
> There are several works on generative model evaluation that use notions of precision and recall (e.g, Sajjadi et. al. 2018, Kynkäänniemi et. al. 2018) in the attempt to assess whether models capture the diversity of the data-generating distribution. These should probably be discussed
We thank the reviewer for pointing out these relevant works. We will make sure to cite them in the camera-ready version.
> The paper doesn’t offer any insights/recommendations that let the reader bridge the gap...
We emphasize that our paper primarily offers theoretical contributions by extending and analyzing the model introduced by Kleinberg and Mullainathan. For insights regarding practical applications and recommendations, we direct you to our response to Reviewer U9Ez above (Re: Practical applicability of results) addressing these aspects.
Regarding the specific assumptions in our framework, the UUS assumption is standard across generation literature and serves a fundamental purpose: ensuring generators can indefinitely produce novel elements from the true language. This assumption aligns with practical contexts, where the set of "valid generations" is effectively infinite--for instance, the set of all valid English passages is unbounded. The finite support property is similarly incorporated to ensure basic feasibility. As demonstrated in Lemma 4.3, without this assumption, representative generation becomes impossible even when the generator has complete knowledge of the true language.
> Consider using fewer scare quotes
Thanks for pointing this out. We will make sure to dial back on the use of scare quotes in the final version.
> Following from the last comment, why is “sample complexity” in quotes throughout the paper?
Yes, the notion for uniform and non-uniform generation, is indeed the actual sample complexity. We will remove the scare quotes in these sections. | Summary: The paper defines a new property of generators called “representative generatability” to provide a theoretical framework for comparing a generator's representation (occurrence) of distinct groups present in the training distribution with the representation in the distribution of output sequences. The contribution is well-motivated with real-world bearing on issues observed in practice in generative systems (like LLMs), such as bias propagation and mode collapse. Results are presented in three settings derived from prior works –– uniform generation, non-uniform generation, and generation in the limit.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Not applicable since the work is a theoretical contribution.
Theoretical Claims: My reading of the proofs revealed no errors.
Experimental Designs Or Analyses: Not applicable since the work is a theoretical contribution.
Supplementary Material: No.
Relation To Broader Scientific Literature: This work directly follows from [1], [2], [3], [4], and [5], and provides a useful new lens that has real-world relevance to current generative systems in practice (albeit with no obvious takeaways for practitioners).
[1] Kleinberg, J. and Mullainathan, S. Language generation in the limit.
[2] Li, J., Raman, V., and Tewari, A. Generation through the lens of learning theory.
[3] Kalavasis, A., Mehrotra, A., and Velegkas, G. Characterizations of language generation with breadth.
[4] Charikar, M. and Pabbaraju, C. Exploring facets of language generation in the limit.
[5] Kalavasis, A., Mehrotra, A., and Velegkas, G. On the limits of language generation: Trade-offs between hallucination and mode collapse.
Essential References Not Discussed: I am not aware of any missing related works that have not been cited.
Other Strengths And Weaknesses: **Strengths:**
The paper focuses on the problem of diverse generation, a topic of high-relevance in practice, and presents a theoretical framework to analyze the problem. Given the abundance of iterative methodological contributions in the area, I think this is a useful contribution for the community to assess the theoretical limits of what we want from such systems.
**Weaknesses:**
While I understand that the work is a purely theoretical contribution, given that the motivation uses practical systems (such as LLMs) to establish impact, it would have been useful to present a discussion about what the authors think might be the practical (and immediate, if any) implications of their results.
Other Comments Or Suggestions: None.
Questions For Authors: L205-208: Could you discuss the choice of the supremum distance as the measure of "closeness" versus other distances? The choice currently seems fairly unjustified.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: > Re: Practical applicability of results
Thank you for your question. While we maintain that this is primarily a theoretical work, we've addressed how our research may or may not relate to practical applications in our global comment to all reviewers above. We will plan to incorporate more of this discussion in the final version of the paper.
To clarify our position, our work extends the theoretical model introduced by Kleinberg and Mullainathan, which—along with subsequent research discussed in our introduction—examines generation in a worst-case scenario without making assumptions about data distribution or learning methods. This approach deliberately highlights fundamental tensions and possibilities in generative tasks from a theoretical perspective rather than focusing on immediate practical implementations.
That being said, our work on representative generation can be viewed as an extension of Kleinberg and Mullainathan's model of generation, aiming to align it more closely with real-world generation approaches and values. Specifically, while real-world approaches typically aim to develop generative models that closely approximate training distributions (as reviewer 2 noted)—and it is indeed natural to expect our generations to resemble training data in certain aspects—Kleinberg and Mullainathan's notion of generation in the limit imposes no requirement that generated data must resemble previously observed data, only that it must belong to the true language. Our work maintains the generation-in-the-limit framework while introducing an additional constraint: generations must resemble training data with respect to simple statistical tests measuring the prevalence of certain subpopulations. Arguably, our notion of representation is useful to formalize even in a practical setting, as it addresses an important consideration: generative models can potentially under- or over-represent certain subpopulations, even when they demonstrate good overall alignment with the training data.
At a high-level, we view the main contributions/takeaways of our work as two-fold, in the context of existing works on generation:
1) Our additional constraint of representational generation highlights key tensions and possibilities between the positive results of Kleinberg and Mullainathan's model and real-world approaches to generation. In particular, we show that there are many settings where generation in their model is possible, but becomes impossible when the generative model is additionally required to satisfy representation. On the other side, some of our positive results signal that matching training data makeup is not always in conflict with the goal of generation. For instance, we show that requiring representation with respect to any finite set of groups is no more difficult than just generating in the limit for any countable class $\mathcal{H}$.
2) Our work is among a recent line of work attempting to understand the trade-offs between obtaining novelty and breadth in language generation. Like these works, some of our results are negative: if one cares about computability, then achieving novelty and representativeness is impossible. However, from a sample-efficiency perspective, in contrast to the many negative results on generating with breadth, our formulation of representation offers a tractable relaxation of breadth that maintains semantic relevance while remaining feasible across many natural classes of languages and groups.
> Re: Choice of supremum distance
Our selection of the supremum distance draws directly on the foundational principles established in algorithmic fairness literature, which emphasizes the importance of limiting the error experienced by the worst-off group. This perspective prioritizes ensuring good representation for every group rather than merely optimizing for average performance. The supremum distance naturally operationalizes this principle by measuring the maximum disparity across all groups, effectively placing an upper bound on the error that any group might experience.
It's worth noting that for finite group settings, different choices of distance measures (such as $L_1$, $L_2$, or the supremum distance (L$_\infty$)) are typically within constant factors of one another, making the specific choice less critical. However, as we move to settings with infinite groups, these equivalences break down—distance measures such as $L_1$ become less informative, potentially obscuring significant disparities among individual groups.
As we highlight in our concluding discussion, while we selected the supremum distance for its robust guarantees from an algorithmic fairness perspective, exploring representation guarantees under alternative distance metrics is certainly an interesting direction for future research. In our revised paper, we will provide a more thorough justification for our choice of the supremum distance as an appropriate measure of closeness within our framework.
---
Rebuttal Comment 1.1:
Comment: Thank you for the helpful discussion. I'd be happy to increase my score and hope the authors include aspects of their discussion in their final paper. | Summary: This work introduces the concept of "Representative Generation" and its variants, aiming to provide a theoretical framework that characterizes generators (i.e. generative models) such that, when the training data distribution consists of multiple groups of interest, the generator outputs closely approximate the proportions of data across the groups. In contrast, prior work proposes the "generation in the limit" characterization of generators, where generators can get away by generating from a restricted subset of the ground-truth. While prior work shows that generation in the limit, which is much weaker than representative generation, can be achieved in many scenarios, this paper shows that representative generation is impossible using only a finite number of membership queries, i.e. querying whether an element belongs to a certain group.
Claims And Evidence: Yes, to the best of my judgement.
Methods And Evaluation Criteria: N/A
Theoretical Claims: I do not have the required expertise to verify the correctness of the proofs.
Experimental Designs Or Analyses: N/A
Supplementary Material: Yes, but I cannot verify the correctness of the proofs.
Relation To Broader Scientific Literature: As suggested by the author, the theoretical analysis is relevant to the commonly observed/discussed phenomenon of generative models often exacerbating biases presented in training data.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: 1. One common paradigm for training generative models is to minimize KL(ground-truth | generative model), and in this case, it is impossible for a generative model to get away with generating only cat images when the training data consists of 1/3 cats, 1/3 dogs and 1/3 rabbits. Could you use this example to help clarify, for a non-expert like me, how is "representative generation" different from "generation in the limit" and how to interpret the negative result (Lemma 4.9) proved in this work?
2. Could you help clarify what "at each step" is referring to in `Rather than generating a single element at each step, a representative generator generates a distribution over multiple elements at each step.`
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments.
> Could you use this example to help clarify, for a non-expert like me, how is "representative generation" different from "generation in the limit" and how to interpret the negative result (Lemma 4.9) proved in this work?
In the original notion of "generation in the limit" proposed by Kleinberg \& Mullainaithan (2024) the goal of the generator was to only eventually produce new, valid examples. In our model of "representative generation", the goal of the generator is to not only produce new, valid examples, but to produce them in a way that is "representative" of the input training data. So now, the goal is not only to produce new, valid examples, but to produce new, valid examples that align with the properties/preferences of the training data. The negative result (Lemma 4.9) shows that if one cares about computability, then representative generation is strictly harder than generatability in the limit -- there are cases where once can produce new, valid examples, but not in a way that aligns with the properties/preferences of the training data.
You raise a good point that the objective of generation in the limit is somewhat different from the standard objective in practice of minimizing distance to training data. Please see our post to Reviewer U9Ez below (Re: Practical applicability of results) addressing this discrepancy between the theoretical model and practice.
> Could you help clarify what "at each step" is referring to in Rather than generating a single element at each step, a representative generator generates a distribution over multiple elements at each step.
In this paper (as well previous papers in this line of work), we consider a game between a generator $G$ and an adversary $A$. This game is played sequentially over rounds $t=1, 2, \cdots.$ In each round $t \in \mathbb{N}$, the adversary reveals to the generator a new example $x_t \in X$. Upon observing $x_t$, the Generator must output a new example $\hat{x}_t.$ In previous works, the generator $G$ was deterministic. In our work, $G$ is randomized and thus produces a distribution $\mu_t \in \Delta X$ after observing $x_t$. So, the "at each step" is referring to the rounds $t = 1, 2, ...$ . | Summary: This paper extends the study of the recent model of language generation introduced by Kleinberg and Mullainathan in a 2024 paper. In this model, this paper defines a notion of representative generation where the generator is roughly speaking required to assign similar probability masses to each group A (from a class $\\mathcal{A}$) as is assigned by the sequence of examples provided by the adversary. The paper provides several results exploring when different notions of generation (as defined by Li et al. (2024)) can be achieved with the additional representation constraint. Their first two sets of results consider the case where groups in $\\mathcal{A}$ are disjoint.
1. Their first result is a characterization of representative uniform generation under the disjointness assumption. This characterization builds on a characterization of Li et al. (2024). Next, we list some examples and results complementing this characterization:
1. This characterization, in particular, shows that representative uniform generation is always possible with a finite language collection and finitely many groups.
2. Complementing this, they also show that representative uniform generation is strictly harder than uniform generation (by providing a class that is uniform generatable but not representative uniform generatable with even two groups).
3. Further, they also give another example where $\\mathcal{A}$ is countably infinite and where a hypothesis class of size 1 cannot be generatable in the limit with representation (a weaker requirement than representative uniform generation).
2. Their second result characterizes representative non-uniform generation under the disjointness assumption. This characterization is identical to the characterization of non-uniform generation by Li et al. (2024) except that one needs to substitute the characterization of uniform generation with that for representative uniform generation.
1. This characterization, in particular, shows that all countably infinite classes with a finite partition $\\mathcal{A}$ are representatively non-uniformly generatable.
3. Their final set of results for generation in the limit drops the disjointness assumption, but makes a different “finite support” assumption in Definition 4.2.
1. Their main result for generation in the limit is that any countably infinite class with a countably infinite $\\mathcal{A}$ satisfying Definition 4.2 is generatable in the limit with representation.
2. Further, they also demonstrate computational barriers to generation with only membership queries by showing that no algorithm can generate in the limit with representation using only finitely many membership queries even when the hypothesis class has a single hypothesis and there are only two two groups.
**Post-rebuttal Update** Dear authors, thank you for your response and for explaining that the group closure dimension has the finite character property. The rebuttal addresses most of my concerns, I maintain my original rating of weak accept. I do not give a stronger accept as I am not 100% sure about the gap in technical novelty compare to prior work of Li et al.
Claims And Evidence: Yes, the claims in the paper are supported by the rigorous proofs.
Methods And Evaluation Criteria: Not applicable.
Theoretical Claims: I skimmed the proofs but did not check their correctness.
Experimental Designs Or Analyses: Not applicable.
Supplementary Material: I skimmed the proofs but did not check their correctness.
Relation To Broader Scientific Literature: The paper advances a line of work studying language generation in a model recently introduced by Kleinberg and Mullainathan in a 2024 NeurIPS paper. The present paper introduces the task of representative language generation which has not been considered in other works studying this model of language generation.
Essential References Not Discussed: To the best of my knowledge, the paper discusses relevant prior works.
Other Strengths And Weaknesses: The paper studies an interesting and fundamental learning theoretic model for generation. The problem they study seems like a natural extension of existing works.
Other Comments Or Suggestions: I found the Finite Support Size assumption (Definition 4.1) hard to parse. As a sanity check, it seems to make sense in the case where there are finitely many groups. I think it would be very useful to include several (simple) examples where the Finite Support Size assumption holds and when it is violated.
Typos:
1. Line 90 “Representative Uniform Generation,” \-\> “Representative Uniform Generation.”
Questions For Authors: I have some questions for the authors. The most important ones are Q2 and Q3.
First, for the closer dimension introduced by Li et al. (2024), it is relatively easy to certify that a hypothesis class satisfies the definition. In some sense, the definition was “interpretable.” The definition of the group closure dimension (which characterizes representative uniform generation under the disjointness assumption) seems less interpretable, at least as currently stated. (Q1) Is there an easy way to check if a hypothesis satisfies group closure dimension? How can one interpret it?
I understand that the group closure dimension characterizes the sample complexity of the task, but it was not easy to parse what it means. If there is a better way to state it, that would be very useful for the readers.
My second question is: (Q2) Could the authors shed some light on why it is challenging to characterize or provide sufficiency conditions for uniform and non-uniform generation without the disjointness assumption on $\\mathcal{A}$?
Finally, for the result on computationally barriers with only membership queries. (Q3) How are the techniques used in this result different from the techniques of Charikar and Pabbaraju? Are the techniques the same? If not, can I find a discussion about the differences somewhere in the paper?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for finding that our paper studies an interesting and fundamental learning theoretic model for generation. We address the reviewer's comments/questions below.
> Finite Support Size Assumption
We can certainly add some examples to clarify the definition. We'll highlight a few simple examples here that we can add to the final version to add intuition for the assumption:
- As you correctly observe, any collection of finite groups will always satisfy the finite support size assumption.
- When the collection of groups is infinite and disjoint, the assumption simplifies to the requirement that for any $h \in \\mathcal{H}$, only a finite number of groups have a finite size intersection with $\mathsf{supp}(h)$.
- A simple example of a collection of groups that does not satisfy the assumption is any infinite collection of groups where the size of every group is finite, such as the collection of all singletons $\mathcal{A} = \lbrace{\lbrace{x\rbrace} : x \in \mathcal{X}\rbrace}$, or the collection defined in the proof of Lemma 4.3.
> (Q1) Is there an easy way to check if a hypothesis satisfies group closure dimension? How can one interpret it?"
Like other combinatorial dimensions (e.g VC and Littlestone dimension), it is likely not possible to efficiently compute the group closure dimension at a fixed scale. That said, the group closure dimension does satisfy what is known as the Finite Character Property [1]: for every $d \in \mathbb{N}$ and class $\mathcal{H}$, the statement $\operatorname{GC}_{\alpha}(\mathcal{H}) \geq d$ can be demonstrated by a finite set of domain points in $\mathcal{X}$ and a finite collection of members of $\mathcal{H}.$ This property is also satisfied by most other combinatorial dimension in learning theory literature. In terms of interpretation, intuitively, one should think of the group closure dimension as measuring the maximum number of samples one needs to see until they are guaranteed a winning strategy. Here, a winning strategy is one where there exists a distribution over examples which is consistent and representative. It turns out that the group closure dimension quantifies exactly this in a way that does not explicitly require quantifiers over distributions of the example space by exploiting the disjoint nature of the groups.
[1] Ben-David, Shai, et al. "Learnability can be undecidable." Nature Machine Intelligence (2019)
> (Q2) Could the authors shed some light on why it is challenging to characterize or provide sufficiency conditions for uniform and non-uniform generation without the disjointness assumption on ?
There are several issues that arise when one tries to go beyond disjoint groups.
- For one, the distribution-free definition of the group closure dimension heavily relies on the fact that the groups are disjoint. Thus, while it is possible to define a version of the group closure dimension for arbitrary groups, it is likely that it will be abstract and not satisfy the Finite Character Property [1].
- The second issue is that when the groups are not a partition of the domain $\mathcal{X}$, the vector of induced probabilities (Definition 2.5) is no longer guaranteed to be a probability distribution. For example, consider a sequence of examples $x_1, x_2, ..., x_d$ contained in every group. Any dimension that gives a characterization will likely have to have quantifiers iterating over entire group vectors instead of individual groups. This makes the dimension hard to parse and less meaningful. In addition, with overlapping groups, the dimension will need to take into account arbitrary intersections of these groups, which again significantly increases the complexity.
> (Q3) How are the techniques used in this result different from the techniques of Charikar and Pabbaraju?
We briefly compare the two proofs in the beginning of Section 4.2, but will provide a more detailed discussion here that we will include in the final version of the paper.
Both proofs employ the same fundamental approach to construct adversarial counterexamples and prove impossibility: they develop strategies that, given a generator, methodically build an adversarial enumeration that forces the generator to violate its promised guarantee. Charikar and Pabbaraju focus on the impossibility of non-uniform generation guarantees, while our work examines representative generation in the limit. Importantly, without the representation requirement, generation in the limit is always achievable with membership queries, as demonstrated by Kleinberg and Mullainathan [2]. Our adversarial construction specifically exploits the additional representation constraint by carefully designing groups that force the generator to violate either representation or consistency. Furthermore, our setting differs in that we consider generators that output distributions rather than single elements.
[2] Kleinberg, Jon, and Sendhil Mullainathan. "Language generation in the limit." (2024) | null | null | null | null | null | null |
ParallelComp: Parallel Long-Context Compressor for Length Extrapolation | Accept (poster) | Summary: ParallelComp processes the input sequence chunk by chunk independently. Therefore, they can parallel process the attention matrix, except for the last chunk of attention.
Claims And Evidence: No, the supporting evidence is not clear and not well connected to their claims.
Methods And Evaluation Criteria: No, I think the evaluation of the calibration and compression variant is not well justified because the performance looks pretty similar between whole variants in Llama 3.
Theoretical Claims: Yes.
However, I am concerned that Parallel Attention Bias is necessary to sparsify attention mechanisms.
Experimental Designs Or Analyses: Using Llama 2 looks like a bad choice in 2025. At least, I think we should use Gemma 2 or EXAONE 3.5 for short context models.
I wonder how performance is good in long context language models such as llama 3.1. Since the KV cache memory is usually a more significant problem rather than latency in most scenarios (up to 1M token context), we can compare this method to long context LLMs that use the same amount of KV cache memory with ParallelComp.
I wonder how performance is good when we apply ParallelComp to long context LLM such as Llama 3.1 and Qwen 2.5. Llama 2 and 3 are pretty old and short at this point.
Supplementary Material: No, they did not provide any supplementary materials. I think at least basic implementation should be provided.
Relation To Broader Scientific Literature: I am not sure the key contribution of this work (chunkwisly parallel encoding) is scientifically impactful.
I am not sure if Parallel Attention Bias is effective in terms of downstream tasks.
Essential References Not Discussed: StarAtteniton (https://arxiv.org/pdf/2411.17116) and APE (https://arxiv.org/pdf/2502.05431) look extremely similar to this work. I strongly suggest that the authors compare ParallelComp with those baselines.
Other Strengths And Weaknesses: Please refer to other sections.
Other Comments Or Suggestions: In the abstract, I think we need to avoid the bold text if it is unnecessary.
Tables 1, 2, 3, 4, and 5 are too tiny text. Please revise the overall formatting significantly.
Promoting performance between GPT4 might be overclaiming. The promotion that ParallelComp can make 8B better than GPT4 may lead to misunderstanding that this method improves the long context performance. I feel afraid of such overclaims, so I gently ask you to think about this once more.
In Tables 1 and 2, I think we should show the average length of the dataset or 99% of the length of the dataset because the maximum length might show only the outliers.
In Figure 2, the terms `Key` and `Value` should be plural (-s).
Questions For Authors: Please refer to the upper sections.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank for your suggestions!
Q1:**Key differences**
|Method|NARR|QAS|MUlT|HOPT|2WKI|MUS|GOV|QMS|NEWS|TREC|TRIV|SSM|PCNNT|PREN|LCC|REP|AVG|
|------|----|---|----|----|----|---|---|---|----|----|----|---|-----|----|---|---|----|
|APE|23.63|39.11|50.06|49.47|43.70|25.99|27.78|22.79|11.22|43.50|90.17|9.79|0.50|59.00|23.93|24.28|34.06|
|Star|3.74|11.90|24.81|14.17|14.37|8.19|34.90|22.54|27.11|65.33|87.84|43.71|3.80|65.17|50.54|45.40|32.72|
|ParallelComp|29.45|45.98|50.67|48.36|46.56|23.32|32.60|24.29|27.34|38.50|86.72|25.93|0.05|95.00|14.15|21.42|38.15|
**Table 1:** Comparison with other methods.
- We focus on the **memory-bound** encountered during the implementation of length extrapolation.
- ParallelComp introduces a **two-stage compression strategy**: chunk-level kv cache eviction followed by intra-chunk compression. **Attention calibration** is applied to reduce the performance loss caused by compression. This enables both context window extension and improved inference throughput.
- **APE** consists of a shared prefix to reduce distributional differences, a low-temperature mechanism to sharpen attention, and a scaling factor to adjust for temperature-induced changes. It aims to align the attention patterns of parallel and sequential encoding.
- **StarAttention** targets models with native long-context support. It recalculates attention for each chunk at every step, while our method computes attention only once at the prefill stage and reuses the **compressed** KV cache for generation—greatly improving efficiency.
- We conducted a comparative experiment, with the code implementation based on [1,2]. However, APE lacks prompt design, Star is missing the Longbench experiment, and hyperparameter configurations are absent in both. For APE, we set the temperature to 0.5 and the scaling factor to 0.8. For Star, the chunk size was set to 2K. In our approach, we reuse these 6K position encodings for length extrapolation. Base model is Llama-3.1-8B-Instruct.
Q2: **Supplementary baseline**
|Method|NARR|QAS|MUlT|HOPT|2WKI|MUS|GOV|QMS|NEWS|TREC|TRIV|SSM|PCNNT|PREN|LCC|REP|AVG|
|------|----|---|----|----|----|---|---|---|----|----|----|---|-----|----|---|---|----|
|Llama3.1|27.80|44.25|49.46|47.86|40.54|23.64|32.64|22.90|26.90|38.00|88.44|25.64|2.02|92.00|10.35|18.64|36.94|
|ParallelComp-Llama3.1|29.45|45.98|50.67|48.36|46.56|23.32|32.60|24.29|27.34|38.50|86.72|25.93|0.05|95.00|14.15|21.42|38.15|
|Qwen2.5|27.83|41.31|50.41|53.52|44.68|30.00|33.38|24.01|25.40|71.00|86.10|39.91|7.25|100.00|6.86|7.88|40.60|
|ParallelComp-Qwen2.5|28.42|42.24|50.54|56.26|42.02|28.25|33.43|23.20|25.20|71.50|89.21|41.84|5.00|93.50|20.73|13.34|41.54|
**Table 2:** Longbench.
|Method|PS|NUM|KV|EN.MC|MATH|CODE|AVG|
|------|---|---|--|-----|-----|-----|----|
|Llama3.1|5.59|26.25|18.60|32.86|31.52|22.56|26.36|
|ParallelComp-Llama3.1|100.00|83.56|88.60|66.38|37.14|22.08|59.55|
|Qwen2.5|59.32|58.31|33.80|61.39|85.71|23.76|53.72|
|ParallelComp-Qwen2.5|100.00|76.27|63.40|66.86|92.57|24.75|70.64|
**Table 3:** Infinitebench.
- The KV cache management mechanism of Gemma2 is significantly different from that of LLaMA and Qwen models, so we have not yet fully implemented it. If we have more time, we promise to submit the test results of EXAONE 3.5 and Gemma2. We promise to provide it in a future version of the paper.
- We conduct experiments on Llama3.1 and Qwen2.5 using Longbench and InfiniteBench under the same 24K KV cache budget. While other models use 24K position encodings directly, our approach leverages extrapolation techniques to reuse 6K position encoding. Others use the original length.In the experiment, it is found that whether the prompt word(such as “<|begin_of_text|>” for Llama3.1) is used has a great influence on the performance, and we make a comparison in the case of the prompt word.
It can be seen that, under the same 24K KV cache size budget, our approach still outperforms other models that **do not implement extrapolation**, especially on the InfiniteBench dataset.
Q3: **Necessity to sparsify attention mechanisms**
Based on our observations, certain tokens at specific layers do indeed contribute to improving the model's performance:
- In early layers, attention biases are crucial for context understanding—removing them harms performance.
- For code tasks, biases in middle layers matter most—removal also leads to performance drops.
- The attention sink boosts retrieval in early layers but hurts performance in later layers.
However, in other layers, **these attention biases significantly hinder the model's ability to process long contexts**. Therefore, removing these anomalous tokens can improve the model's performance.
Q4: **Others**
- We will consider your suggestions and incorporate additional supplementary materials into future versions of the paper, especially the release of the open-source code.
**Reference**:
[1]https://github.com/Infini-AI-Lab/APE
[2]https://github.com/NVIDIA/Star-Attention
---
Rebuttal Comment 1.1:
Comment: Thank you for your rebuttal.
I understand more about this method, compared to APE and Start Attention, by the authors' explanations.
However, I still have concerns about performance evaluation compared to existing parallel encoding works. What is the model used in Table 1? It seems the reported numbers are pretty different from the APE paper (Table 7). What is the chunk size of APE? Why use different chunk sizes between StartAttention and yours? Is it because of latency? And in comparison, I think we are missing a report on the latency of methods. How fast or slow your method compared to FlashAttention, InfLLM, APE, and StarAttention?
I am looking forward to the authors' response. Thank you again for your great work.
---
Reply to Comment 1.1.1:
Comment: Thank you for your feedback and comments! We commit to citing the APE and StartAttention in future version!
Q1: **Hyperparameter Settings**
|Method|NARR|QAS|MULT|HOPT|2WKI|MUS|GOV|QMS|NEWS|TREC|TRIV|SSM|PCNNT|PREN|LCC|REP|AVG|
|------|----|----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|------|-----|-----|-----|-----|
|APE0.8+0.8|25.92|41.99|**53.79**|**53.64**|**50.54**|**26.46**|30.15|**25.42**|20.68|50.50|88.70|9.72|6.50|89.00|16.71|25.78|**38.47**|
|APE0.5+0.8|23.63|39.11|50.06|49.47|43.70|25.99|27.78|22.79|11.22|43.50|**90.17**|9.79|0.50|59.00|23.93|24.28|34.06|
|APE0.2+0.8|18.83|26.53|41.70|44.63|35.91|17.71|24.31|20.14|7.96|35.75|88.54|9.72|1.50|34.50|23.86|23.22|28.43|
|APE0.8+0.4|9.04|11.48|19.59|31.41|24.68|10.16|5.32|13.80|8.20|0.50|87.06|9.71|0.00|7.00|13.72|16.51|16.76|
|APE0.5+0.4|7.59|9.74|16.13|31.89|25.72|9.62|5.20|9.56|8.19|0.00|87.60|9.69|0.00|5.00|14.26|15.58|15.99|
|APE0.2+0.4|4.90|8.97|13.99|29.71|26.66|8.74|5.16|9.45|8.26|0.50|87.57|9.71|0.00|4.00|14.20|16.24|15.50|
|StarAttention4K|3.74|11.90|24.81|14.17|14.37|8.19|**34.90**|22.54|27.11|65.33|87.84|43.71|3.80|65.17|50.54|45.40|32.72|
|StarAttention6K|4.65|13.63|21.05|14.47|15.57|6.38|34.80|22.67|26.27|**66.00**|65.54|**47.91**|**8.00**|70.00|**56.48**|**45.42**|32.43|
|Ours|**29.45**|**45.98**|50.67|48.36|46.56|23.32|32.60|24.29|**27.34**|38.50|86.72|25.93|0.05|**95.00**|14.15|21.42|38.15|
**Table 1.** `APE X+Y` indicates temperature = X and scaling factor = Y.
**APE**
- We followed the original papers as closely as possible and ran multiple experiments with different settings.
- We set the chunk size as **6000**.
- **Temperature and Scaling**: A temperature of `0.8` and a scaling factor of `0.8` yielded results closest to the original paper and slightly better than our method under the same unified settings.
- **KV Cache**: It’s worth noting that **APE does not implement any KV cache eviction strategy**, so it uses a full size KV cache, which gives it a performance edge.
**StarAttention**
- We use a chunk size of `4K` tokens as reported in the original paper.
- To make a fair comparison, we also tried a `6K` chunk size, which reduced its performance.
**Summary**
- APE achieved the best performance among all baselines with the APE0.8+0.8 hyperparameter setting. Although our method performs slightly worse, considering that we only used the compressed KV cache for inference, these performance losses are acceptable.
- The reason why StarAttention is particularly sensitive to chunk size is unknown.
- The APE method appears sensitive to the scaling factor hyperparameter, and the reason is unclear. **I look forward to further discussing this issue with you.**
Q2: **Parallel Design Implementation Details**
- All windows are padded to the same length for parallel processing, followed by a forward pass to compute the KV cache, with optional in-chunk compression. The compressed KV caches are then added to a priority queue for eviction based on their self-information.
- In contrast to APE and StarAttention, our chunk and token KV cache eviction strategy effectively mitigates out-of-memory (OOM) issues, making our approach more scalable and memory-efficient, especially for long-context scenarios.
Q3: **Evaluation on NarrativeQA**
|Method|Effective Number Of Samples|OOM Samples|Inference Time Per Sample (s)|Accuracy|
|------|------------------------|----------|------------------------|----------|
|APE-truncation|85|115|6.06|25.92|
|StarAttention|200|0|10.28|4.65|
|InfLLM|200|0|12.28|26.58|
|Ours|200|0|1.84|29.45|
|Ours-truncation|200|0|1.76|27.60|
|Ours-compression|200|0|1.03|28.36|
**Table 2.** Latency and performance comparison. For the OOM samples, we performed a truncation operation on APE. Truncation involves retaining the first and last halves of a sample after reducing it to the target length(36k).
| Method| Prefill(s) | Tokens/s | Generate(s) | Tokens/s |
|--------------|------|------|-------|----------|
| FullKV|1206.55| 4937.86 |260.68 | 140.69 |
| Ours|123.65 | 48182.47 |245.20 | 152.57 |
| Ours-Comp| 62.32| 95599.52 | 143.59 | 265.50 |
**Table 3.** Inference latency and Throughput.
All experiments are conducted on an AMD Instinct MI210 64GB GPU. FullKV means using full-attention reasoning experiments. Max length is set to 36k. “Ours” only uses chunk eviction and “Ours-compression” uses chunk eviction and parallel KV cache eviction. We evaluate performance using the following metrics:
- **Effective Number of Samples**: Successfully processed samples.
- **OOM Samples**: Failed samples due to memory issues.
- **Inference Time per Sample**: Average processing time per instance.
- KV cache compression **accelerated prefill time** and **improve throughput**.
Q4: **Note on FlashAttention**
- **FlashAttention** is orthogonal to our method, which focuses on **infra improvements** such as memory management. | Summary: This paper introduces ParallelComp, which the authors claim to be able to extend the context window of off-the-shelf LLMs from 4K to 128K tokens while maintaining computational efficiency. The authors address the critical challenge of attention sink phenomena in parallel attention mechanisms, where models disproportionately focus on initial/recent tokens, impairing long-context understanding.
## Update After Rebuttal
I have read the authors' rebuttal and decided to keep the current positive score.
Claims And Evidence: 1. Parallel Compression Framework. The authors propose a chunk-based architecture with parallel local attention computations and strategic KV cache eviction. This enables efficient processing of ultra-long contexts on a single A100 GPU.
2. Attention Bias Analysis & Calibration. The authors have conduct a systematic identification of three attention bias patterns in parallel mechanisms.
Methods And Evaluation Criteria: Regarding the methods, the proposed methods align well with the core challenge of long-context extrapolation. The parallel attention mechanism and chunk eviction strategies are designed to address the memory/computation bottlenecks of long sequences. In the meanwhile, compatibility with flash attention is well attended to.
The evalution seems comprehensive and appropriate for me. I am just wondering whether the authors have considered testing their method on more cross-lingual datasets?
Theoretical Claims: The overall theoretical framework provides useful explanation about parallel attention collapse.
Experimental Designs Or Analyses: 1. (Apologies if this question is somewhat naïve, as I am extremely unfamiliar with this field.) The authors conduct a thorough comparison against existing length extrapolation methods for tasks such as single/multi-document QA. However, these tasks could potentially benefit from RAG without requiring long-context extrapolation in LLMs. Would it be possible to compare the proposed approach with leading RAG methods such as [1,2]?
2. Does the paper provide a detailed description of the calibration data used? Have they analyzed stability across different context sources (e.g., code vs. narrative text)?
---
[1] RAPTOR: recursive abstractive processing for tree-organized retrieval.
[2] ChatQA 2: Bridging the Gap to Proprietary LLMs in Long Context and RAG Capabilities
Supplementary Material: I reviewed the additional experiments presented in the appendix.
Relation To Broader Scientific Literature: This work focuses on LLM length extrapolation, an important research direction with numerous influential studies. Given that the proposed approach is training-free, its contribution appears particularly meaningful.
Essential References Not Discussed: As I am not well-versed in this area, I have not identified any critical missing references.
Other Strengths And Weaknesses: See above
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We would like to express our sincere gratitude for your fruitful suggestions, and for all your questions below!
Q1: **The authors conduct a thorough comparison against existing length extrapolation methods for tasks such as single/multi-document QA. However, these tasks could potentially benefit from RAG without requiring long-context extrapolation in LLMs. Would it be possible to compare the proposed approach with leading RAG methods such as [1,2]?**
| Model|QM|QASP|MSQ|HQA|MFQA|Average|
|-----|------|------|------|------|------|---------|
| Llama3-ChatQA-2-8B | 11.64| 28.85| 27.81| 53.81| 51.02| 34.63 |
| w/ RAG|13.20|28.85|29.77|57.81|51.15|36.16|
| ours-llama3|24.18| 39.05|33.25|49.58|42.66|37.74|
**Table 1: Experiments on Longbench**
| Model| kv_retrieval | numbe_string | passkey | En.MC | Average |
|-----|--------------|---------------|---------|-------|---------|
| ours-llama3|92.80| 99.83|100.00|54.59|86.81|
| Llama3-ChatQA-2-8B|72.00|100.00|100.00| 64.19|84.05|
**Table 2 Experiments on InfiniteBench.**
- Since Llama3-ChatQA-2-8B produces "Not Available" results on InfiniteBench when using RAG (w/ RAG), we only compared the results of Llama3-ChatQA-2-8B in Table 2.
- The above shows the comparison results between our method and the representative RAG method, ChatQA 2 [2]. It is clear that our method performs well compared to the strong RAG baseline, especially on the Longbench dataset.
- The performance collapse of the RAG method on InfiniteBench is an unknown phenomenon, which also demonstrates the robustness of our method.
Q2: **Does the paper provide a detailed description of the calibration data used? Have they analyzed stability across different context sources (e.g., code vs. narrative text)?**
- We calibrated the attention scores based on Equation 7, using an online method to evict anomalous tokens, so there is no need to sample data in advance for calibration.
- We found that in code tasks, the attention distribution of the intermediate layers are more sensitive to calibration, while in text tasks, such as in-context learning, the attention distribution of the earlier layers are more sensitive to calibration.
**Reference**:
[1] RAPTOR: recursive abstractive processing for tree-organized retrieval.
[2] ChatQA 2: Bridging the Gap to Proprietary LLMs in Long Context and RAG Capabilities. | Summary: This paper proposes a method for training-free length extrapolation of LLM i.e. extending an LLM to process sequence longer than the sequence length it is pretrained on. The key idea is to split the input into chunks that fit in the LLM's context window and perform global attention over the chunks. While previous work (e.g. InfLLM) has also explored similar idea, this paper further proposes to mitigate "attention bias" by removing tokens within the chunk that receive (1) high attention scores (sink tokens) and (2) low self-information scores.
Claims And Evidence: Experiments are conducted on multiple long-context datasets (LongBench, Infinite-bench and perplexity of NarrativeQA, showing that the proposed method outperform previously proposed methods (including methods that manipulate position encodings and similar chunk encoding methods like InfLLM).
However, I find the evidence that the proposed calibration and compression methods is helpful to be relatively weak. For all the experiments (table 1, table 2). If I understand correctly, "ours" is the setting where all tokens are encoded in the parallel fashion, and "ours-compression" and "ours-calibration" correspond to setting where tokens are evicted. And in both tables the difference between "ours" and the other two methods are minimal (except for Llama-2-7B-chat-hf(4k) on Infinite-bench).
Further, the difference between "ours" and InfLLM since to be whether all the chunks are included in the global attention, or only some of the chunks are considered (InfLLM only uses subset of chunks to do global attention), which is not surprising as leveraging more information is more accurate.
Methods And Evaluation Criteria: The proposed benchmark datasets and baselines are suitable for the method.
Theoretical Claims: I checked the theoretical claims on attention bias for parallel attention computation.
Experimental Designs Or Analyses: The experiment settings make sense.
Supplementary Material: N/A
Relation To Broader Scientific Literature: This work contributes to the line of work which aims to enable language models to process text longer than their pre-training context window. While there are more language models nowadays that is trained to process very long context already, I think the direction is still an important.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: * I think it will be helpful to analyze the setting where the proposed calibration and compression method helps compared to "Ours". For instance, it seems like for `En.MC` and `R.KV`, "ours-calibration" improve upon "ours" more compared to the other tasks.
Typo:
* line 71 in Related work section "reecent" ==> "recent"
* equation 6 -- if $R_{l}$ represents the indices corresponding to tokens to evict, doesn't $K_{x}[R_{l}]$ represent the key states to be evicted?
* The text in Table 5 is too small and uneasy to read.
Questions For Authors: * While the attention analysis in Section 4 is nice, it is unclear to me what's the connection of the patterns observed in figure 3 to the proposed methods aside from the idea that tokens with high attention scores are removed?
* For Llama-3-8B-Instruct (8k) performance on LongBench, it seems like some of the numbers for InfLLM and FullKV are different for the numbers reported in the [InfLLM paper](https://arxiv.org/pdf/2402.04617) Table 5 (e.g. NarrativeQA and MF-en), is the setting different causing the discrepancy? The results for InfiniteBench seems to align with that in InfLLM's Table 1.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We would like to express our sincere gratitude for your fruitful suggestions, and for all your questions below!
Q1: **Minimal Performance Difference Between "Ours" and Other Methods in Token Eviction Settings.**
- "Ours" implements the eviction scheme for the entire chunk's KV cache.
- "Ours-compression" refers to further compressing the KV cache within the chunk.
- "Ours-calibration-compression" attempts to recover performance further through calibration.
"Ours" and "Ours-compression" refer to the methods of adding chunk eviction and parallel KV cache eviction, respectively. Generally, parallel KV cache eviction tends to degrade the performance of many models, so we hope to recover the performance lost during compression by using an attention distribution calibration mechanism, namely "Ours-calibration-compression."
Q2: **Difference Between "Ours" and InfLLM: Global Attention with Full vs. Subset of Chunks.**
We need to clarify your misunderstanding of our method.
- "Ours" only **applies attention to chunks in the priority queue**, unlike InfLLM, which retains all chunks in the CPU and causes high I/O overhead and slow prefill speeds by swapping the KV cache between the CPU and GPU. In fact, it can attend to all global chunks simultaneously.
- InfLLM addresses the memory-bound issue during inference by retaining only the chunk representations, while "Ours" uses a chunk eviction strategy and parallel compresses the KV cache to increase chunk process throughput. **We mainly focus on the KV cache compression strategy in parallel chunk processing**
Q3: **Analyzing the Benefits of Calibration and Compression vs. "Ours".**
We will start our analysis from a core motivation, which is the memory-bound issue in the parallel encoding and the attention bias issue aggravated by compression..
- “Ours“
Due to GPU memory limits, it's not feasible to store KV caches for many chunks at once. Offloading chunks to the CPU during parallel processing also introduces high I/O latency, making it impractical. We propose a more general approach based on two principles:
**Simplicity:** We use negative log-likelihood (NLL) to model the relevance between queries and chunks without designing extra representations. This reduces communication overhead—especially important in multi-GPU setups where I/O is the main bottleneck.
**Efficiency:** Sorting in the priority queue only involves scalar comparisons with O(log n) time complexity, unlike heavier methods like InfLLM that use O(n²) retrieval operations.
- “Ours-compression“
Parallel KV cache eviction helps address memory limitations in parallel computation, but our analysis shows it worsens attention bias. To mitigate this, we introduced a simple attention calibration mechanism “**ours-compression-calibration**” for near-lossless compression, making our design effective.
Q4: **Typo of equation 6.**
- In Eq. 6, [·] represents the eviction operation, which means removing the indices in [·] from K_x.
Q5: **Clarifying the Link Between Figure 3 Patterns and the proposed method.**
- Observation1: Based on our observations, due to using almost the entire window length for parallel encoding, the model always exhibits a strong recency bias in the query chunk region.
- Design1 : It indicates that the model relies on the tokens in the query chunk to make next token predictions, which inspired us to use the negative log-likelihood of the tokens in this region to measure the importance of the chunk.
- Observation2: Another phenomenon is that the KV cache compression operation **exacerbates attention bias as shown in Figure 3**, as shown in the first image of Figure 5.
- Design2 : Therefore, designing a calibration mechanism is necessary to mitigate the performance loss caused by exacerbated attention bias.
Q6: **The performance discrepancy for Llama-3-8B-Instruct (8k) on LongBench may be due to different settings, while results for InfiniteBench align with InfLLM's Table 1.**
- LongBench: For LongBench, we do not use the original hyperparameters reported in the InfLLM paper. Instead, we re-evaluated all baselines under a unified testing framework to ensure fair comparison. Specifically, we adopted the evaluation setup from the open-source repo[1], and ported that framework to the InfLLM codebase for consistency. This was done to eliminate inconsistencies in evaluation scripts, precision settings, and other hyperparameters.
The inference details
temperature=0, and num_beams=1, float16 precision.
- For InfLLM, we use the default hyperparameters as provided in their official code repository to evaluate on longbench.
**Reference**:
[1] https://github.com/Zefan-Cai/KVCache-Factory. | Summary: Motivation is also for better generalization on long seqs. And to use existing LLMs and extend their attention context size. And for more efficient attention on long seqs.
The proposed method does not need any finetuning. Any existing LLM can be used, with the attention mechanism and caching adapted.
The proposed method is called ParallelComp. It uses chunk-based attention for fast parallel calculation to be able to filter out (evict) non-relevant parts of the cache/history in an efficient parallel way, and only then computes global attention on the remaining cache.
Many experiments are performed to show how well it performs on length extrapolation, and results look good for ParallelComp.
## Update after rebuttal
While it is clarified that the code is being released, which is am important point, overall I keep my current score, as I'm not sure about the importance/impactfulness of the presented approach, as also pointed out by other reviewers, and the experiments could also be extended to some more recent LLMs.
Claims And Evidence: Better length extrapolation is claimed by the new proposed method. This is tested and verified in the experiments.
Methods And Evaluation Criteria: Tested on benchmarks which test for length extrapolation, so they make sense.
Theoretical Claims: I did not check in detail.
Experimental Designs Or Analyses: Tested on benchmarks which test for length extrapolation, so they make sense.
Supplementary Material: I only briefly looked at it.
Relation To Broader Scientific Literature: The presented method is relevant for length extrapolation.
Essential References Not Discussed: -
Other Strengths And Weaknesses: Strengths:
- Presented approach should be simple to do. Does not need any finetuning.
- Presented approach gives good results.
- Many experiments are performed.
Weaknesses:
- No code released to reproduce the experiments?
- Some parts are a bit unclear to me. See questions below.
Other Comments Or Suggestions: Sec 2 "reecent" typo.
You have a "bloc97" reference. That is the GitHub username of Bowen Peng (see for example: https://arxiv.org/pdf/2411.19870).
The appendix has the title "Submission and Formatting Instructions for ICML 2025".
Questions For Authors: Where is the code of the experiments? Will this be published?
Fig 1, is that one selected attention head in one layer, or the average over attention heads and all layers, or what else is it exactly? And for what model?
Eq 3, how is this self-information log prob defined? Is this the attention score A? Why do you write P? Shouldn't this be A?
Sec 3.1 "concatenate them with the input query for parallel encoding" - what exactly does this mean? What is the input query for parallel encoding? Where do I see the input query in the formula?
Sec 3.1 Chunk Eviction: What is this about? "retaining only the most relevant chunks" - what does this mean? What exactly is kept? The KV? But then, how is this different to the KV Cache Eviction?
Sec 3.1 KV Cache Eviction: How is this different now to the previously described chunk eviction?
Sec 3.1 KV Cache Eviction: What exactly is parallel about it? The previous chunk eviction is not parallel or also parallel?
Sec 3.1 KV Cache Eviction: Why is Flash Attention relevant here? Flash attention is just a specific implementation.
Sec 3.1 KV Cache Eviction: "Since it is hard to obtain the complete attention distribution from Flash Attention" - what do you mean? You don't get the attention weights out of flash attention? Why is this relevant? Then just don't use flash attention but something else? Or implement this yourself? Or modify flash attention?
Sec 3.1 Attention Calibration: "we evict tokens with excessively high attention scores": Evict means, you only use those values with high att scores, or you exclude those and only use the others? From eq 7, it looks like you only use those values with high att scores. But why is this helpful? How does this calibrare the attention distribution?
Table 1, 2: What exactly is "Ours" (without the calibration R_h and compression R_l)?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank for your suggestions!
Q1:**For general inquiries such as typos and others.**
- We promise to release our github and will correct typo error in future paper version.
- Regarding citation issues, the NTK-by-parts position interpolation method was not proposed in a formal paper. It was also cited from a GitHub repository in other papers, with reference [7] in the reference paper [1].
Q2:**Fig 1, is that one selected attention head in one layer, or the average over attention heads and all layers, or what else is it exactly? And for what model? Eq 3, how is this self-information log prob defined? Is this the attention score A? Why do you write P? Sec 3.1 "concatenate them with the input query for parallel encoding" - what exactly does this mean? What is the input query for parallel encoding? Where do I see the input query in the formula?**
- The attention head we show here is the distribution change of the 21st head of layer 1, and the model is meta-llama/Llama-2-7b-chat-hf.
- Self-information is not attention score; essentially, it is the negative log-likelihood of the question predicted based on the chunk input to the LLM. It reflects the model's confidence in predicting the query based on the context.
- We concatenate the input question(X^q in the formula) with each chunk and encode them in parallel. The query serves two purposes: first, to assess the importance of each chunk and decide if its KV should be stored in the GPU’s priority queue to manage memory; second, to evict tokens from the chunk’s KV cache, optimizing memory usage and throughput. In short, the query is primarily used for compression.
Q3:**Sec 3.1 Chunk Eviction: What is this about? "retaining only the most relevant chunks" - what does this mean? how is this different to the KV Cache Eviction? What exactly is parallel about it? The previous chunk eviction is not parallel**
- Chunk eviction means removing the entire KV cache of a chunk from a priority queue based on self-information scores, while KV Cache Eviction refers to removing tokens from a specific layer and head within a chunk.
- The token-level KV cache eviction process is parallel across chunks, as it depends on the cumulative attention distribution of the query across layers and heads without dependencies between chunks. In contrast, traditional KV cache eviction methods such as H2O [2] have no extrapolation capability and can only handle KV caches with a fixed context length.
- The eviction of chunks depends on whether the priority queue has reached its maximum size, usually determined by the GPU memory. When the size is reached, chunks completing self-information calculation and KV cache eviction compare their self-information size with those in the queue to decide retention or eviction. This comparison is typically done sequentially.
Q4:**Why is Flash Attention relevant here? Flash attention is just a specific implementation.**
- We want to emphasize that flash attention provides faster computation and memory utilization. FlashAttention does not return the complete attention distribution matrix for each layer of the model[3], thus making it unable to perform KV cache eviction. To address this issue, we calculate the inter-chunk attention matrix for each chunk and its corresponding query block at each layer separately, based on formula (5).
Q5:**Since it is hard to obtain the complete attention distribution from Flash Attention" - what do you mean? Sec 3.1 Attention Calibration: "we evict tokens with excessively high attention scores": Evict means, you only use those values with high att scores, or you exclude those and only use the others? What exactly is "Ours" (without the calibration R_h and compression R_l)?**
- According to [3], FlashAttention doesn't return the attention distribution, preventing us from evicting the KV cache based on attention scores. To solve this, we use the last 8 tokens from the query chunk and multiply it with the chunk's value matrix to quickly estimate a cumulative attention distribution for evicting tokens using Pytorch.
- Equation 7 represents that we evict tokens with abnormally high scores. At the same time, we also evict tokens with low attention scores according to Equation 6. In summary, the eviction strategy removes tokens with either abnormally high or the lowest attention scores.
- We calculate the full attention distribution for each head using flash-attention. To prevent abnormal patterns, we first approximate the attention distribution with Equation 5, allowing us to evict tokens that would gather excessive attention.
- 'Ours' represents the case without any KV cache compression or attention calibration, only with the chunk eviction mechanism.
**Reference**:
[1] YaRN: Efficient Context Window Extension of Large Language Models.
[2] H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models.
[3] https://github.com/Dao-AILab/flash-attention/issues/1357
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal.
I still think that you should not convolute technical limitations because of certain implementations you use (e.g. FlashAttention) with the actual conceptual methods. But this is mostly a matter of formulation.
I also still see it as a very big problem that you do not plan to release your source code.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response and feedback.
Q1: **I still think that you should not convolute technical limitations because of certain implementations you use (e.g. FlashAttention) with the actual conceptual methods. But this is mostly a matter of formulation.**
We agree with your point. Since ICML cannot update the PDF, we will make the required changes in future versions.
Q2: **I also still see it as a very big problem that you do not plan to release your source code.**
Do you want us to provide the anonymous repository link directly here? We assure you that if the paper is accepted, we will definitely release our source code.
If you think that we have addressed all of your concerns, may we kindly ask you to consider increasing the score? Thanks | null | null | null | null | null | null |
RBench: Graduate-level Multi-disciplinary Benchmarks for LLM & MLLM Complex Reasoning Evaluation | Accept (poster) | Summary: This paper proposes a new benchmark called R-Bench, with features of Comprehensiveness, Difficulty, Multimodality and Multilingualism.
The paper also conducted various experiments on current mainstream LLMs and MLLMs using R-Bench.
Claims And Evidence: Yes. All claims are supported by clear and convincing evidence.
Methods And Evaluation Criteria: Yes. The evaluation criterias make sense to evaluate the authentic abilities of LLMs/MLLMs.
Theoretical Claims: There are no theoretical claims in the paper.
Experimental Designs Or Analyses: Yes. I've checked the soundness/Validity of experimental designs and analyses.
Supplementary Material: I've reviewed all parts of the Appendix.
Relation To Broader Scientific Literature: The paper proposed a new benchmark to evaluate LLMs and MLLMs, which is based on prior benchmarks like MMLU and MMMU. R-Bench made improvements on Comprehensiveness and Multilingualism.
Essential References Not Discussed: Essential paper are cited and discussed in the paper.
Other Strengths And Weaknesses: Strengths:
1. This paper is well-structured and easy to follow for readers.
2. This paper is comprehensive and explain the methodology with great details, e.g. Section 2 shows a lot details in describing the process of data collection.
3. The benchmark proposed in this paper is both multi-disciplinary and multi-lingual, incorporating the strengths of existing benchmarks.
Weakness:
1. The paper doesn't fully explore whether R-Bench needs more reasoning abilities than current benchmark. In this paper, authors use thinking tokens/ thinking time to judge, but they haven't taken dataset biases etc into account.
2. This paper focus heavily on the Accuracy in the benchmark. More detailed breakdown of failure modes would be helpful when evaluating LLMs/MLLMs.
Other Comments Or Suggestions: For Tab.7 and Section 3.3, which are discussing The effect of CoT,as mentioned in your paper above, do more test on Reasoning Models may be better. (In Tab.7, the only Reasoning Model is o1-mini.)
Questions For Authors: 1.For Section 3.1, Paragraph 2: Is the pairwise comparison approach sufficiently accurate to determine which benchmark (considered in its entirety) demands higher reasoning capabilities?
2.For Section 3.1, Paragraph 3: You mentioned two aspects, on the one hand based on the reasoning tokens and on the other hand using o1 to judge correctly. My question is how these two aspects impact o1 voting in Table 3?(eg. using a weighted sum?)
3.For Section 3.1, Paragraph 3: Reasoning tokens and reasoning time cannot be simply considered to have a linear relationship. (When calling the API service, the GPU memory bandwidth and computing power of the corresponding service will affect the generation speed.) Furthermore, If the length of the R-Bench problem is longer than the baseline problem, the generated reasoning tokens will most likely increase, so the a comparison might not be fair.
4.For Section 3.3: How to compute Consistency and Accuracy in Figure 5&6?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for the selfless efforts and constructive comments for improving the quality of this work. The followings are detailed response to your concerns.
### Q1:
The paper doesn't fully explore whether R-Bench needs more reasoning abilities than current benchmark.
### A1:
Evaluating the reasoning demands of a dataset on models is inherently challenging, and there is no standard quantitative metric for doing so. In this work, we employed both expert scoring and model-based evaluation to assess the reasoning requirements of our dataset.
Regarding the dataset biases, we understand this as referring to biases across different subjects. To address it, we conducted experiments focusing only on 50 mathematical questions from both R-Bench and MMLU to control for subject-specific variation in reasoning demands. The results showed that the average number of reasoning tokens for R-Bench was 6,867.1, while for MMLU it was 1,011.3, indicating a difference of approximately 6.8 times.
In addition, we asked the OpenAI o1 model to vote on which question required more reasoning. The voting results were 43:5:2 (win:loss:tie) in favor of R-Bench, showing a significant preference for R-Bench questions in terms of reasoning demand.
### Q2:
More detailed breakdown of failure modes would be helpful when evaluating LLMs/MLLMs.
### A2:
We have conducted analysis of some error examples. Due to format and length limitations during the rebuttal stage, we are unable to include them here. In our analysis of GPT-4o and o1's errors, we observed that most failures occurred during the reasoning process. These errors stem from various sources, such as calculation, flawed reasoning strategies, and perception errors. Notably, the models rarely failed due to a lack of knowledge, indicating that they have generally mastered knowledge at a graduate level. In the revised version, we will include an error analysis section to present our findings.
### Q3:
For Table 7, which analyze the effect of CoT, it would be more convincing to include additional reasoning models beyond o1-mini.
### A3:
Thank you for your constructive feedback. Due to experimental cost, we only added DeepSeek R1 without CoT. It scored 60.7% on R-Bench, just 0.5% lower than the CoT-enabled version (61.2%). We observed that this performance drop is smaller than that of chat models, where the decrease often exceeds 1% or even 2%.
This is an interesting observation, and we plan to deeply explore CoT's impact across more reasoning and chat models in future work.
### Q4:
In Section 3.1, Paragraph 2: Is pairwise comparison sufficient to assess overall benchmark reasoning demand?
### A4:
As far as we know, there is no clearly defined metric for assessing reasoning abilities, whether for humans or foundation models. A single metric, such as the expert pairwise comparison, is one dimension, but it is not sufficiently accurate to determine which benchmark demands higher reasoning capabilities. Our goal is to provide a comprehensive assessment, presented in Tables 2–4, to reflect, across multiple dimensions, that R-Bench imposes higher reasoning demands compared to other benchmarks, both in terms of expert scores and model performance.
### Q5:
For Section 3.1, Paragraph 3, how these two aspects impact o1 voting in Table 3? (eg. using a weighted sum?)
### A5:
We apologize for the confusion caused by Section 3.1, Paragraph 3 in our manuscript. The o1 voting and reasoning tokens are separate and presented in Table 3 and Table 4, respectively. For Table 3, we randomly paired one R-Bench and one MMMU question, then asked o1 which required more reasoning. R-Bench scored 1 point if selected, otherwise MMMU. This was repeated for 30 pairs to compute win rates.
### Q6:
Section 3.1, Paragraph 3: Reasoning tokens and reasoning time are not linearly correlated. Additionally, longer R-Bench problems may naturally yield more tokens, making the comparison potentially unfair.
### A6:
To strengthen our results, we conducted a reasoning tokens experiment using 50 math questions from both R-Bench and MMLU. OpenAI o1 generated an average of 6,867.1 tokens for R-Bench and 1,011.3 for MMLU—a 6.8× difference. While problem length may play a role, we believe the gap mainly reflects R-Bench’s higher reasoning demands.
### Q7:
For Section 3.3: How to compute Consistency and Accuracy in Figure 5&6?
### A7:
Figure 5 shows models' consistency on English and Chinese versions of identical difficult questions. For a question, if both the English and Chinese versions of the question are answered correctly or incorrectly, we increment the consistency variable c = c + 1, Finally, consistency = c / total questions.
For the accuracy in Figure 6, to obtain this accuracy, we grouped the questions by different departments. We then separately calculated GPT-4o's accuracy on the questions from each department to generate Figure 6.
---
Rebuttal Comment 1.1:
Comment: I have read the authors' rebuttal. They have addressed most of my concerns. However, I am not satisfied with the response to Q6. You acknowledge that problem length may play a role, yet you provide no quantitative comparison of the average lengths of R-Bench and MMLU questions. Without this information, it is unclear whether the observed 6.8× difference in generated tokens is primarily due to reasoning demands or simply a reflection of longer input lengths in R-Bench.
$\textbf{Further response to authors' rebuttal}$:
Thanks for your detailed response. Since I could not find the button to reply directly to the authors, I am providing my further response here.
I would prefer to see statistics based on the same subject. Currently, the two examples provided are not very representative, and their difference in subject makes comparison quite challenging. I suggest that the authors first categorize the samples by subject, and then randomly select a sufficient number of pairs from R-bench and MMLU with similar lengths in each category. Subsequently, a statistical comparison of their lengths should be performed. This would help demonstrate the results' generalizability and representativeness.
$\textbf{Final response to authors' rebuttal}$:
Thanks for your prompt and detailed illustrations. They have addressed my concerns. I have raised my score.
---
Reply to Comment 1.1.1:
Comment: We are glad that the above responses have addressed most of your concerns. Here we hope to address your concern about Q6, which is the impact of question length on reasoning tokens.
We re-organize the experiment and controlled the length of the questions. We selected 30 questions from R-Bench and MMLU, and their average question lengths were 218.6 characters and 219.7 characters for R-Bench and MMLU respectively.
Below are examples from R-Bench and MMLU. The first example is from R-Bench and the second one is from MMLU.
```
A semicylindrical glass with a refractive index $n=\sqrt{2}$ is placed in the air. In a plane perpendicular to the axis of the semicylinder, a light ray is incident at a $45^{\circ}$ angle on the flat surface of the semicylinder. What is the range of angles at which the light ray emerges from the semicylinder?
```
```
A solid sphere (I = 0.06 kg·m^2) spins freely around an axis through its center at an angular speed of 20 rad/s. It is desired to bring the sphere to rest by applying a friction force of magnitude 2.0 N to the sphere’s outer surface, a distance of 0.30 m from the sphere’s center. How much time will it take the sphere to come to rest?
```
The experimental results show that the average number of reasoning tokens on R-Bench is 6623.2, while the average number of reasoning tokens on MMLU is 933.2, a difference of about 7.1 times. This shows that even if the lengths are similar, the problems in R-Bench still require more reasoning tokens to solve.
------------------------
------------------------
## Further response to reviewers’ comments
Thank you for your insightful reply. The difference in subjects may introduce bias to a certain extent.
To avoid this problem, we compared reasoning tokens for R-Bench and MMLU questions of similar question length (average about 240 characters for both benchmarks) in the physics subject. Similarly, we selected 30 questions from each of the physics subjects of R-Bench and MMLU.
Below are examples from R-Bench and MMLU. The top two belong to R-Bench, and the bottom two are from MMLU.
R-Bench:
```
The line element of the dynamic spherically symmetric Vaidya spacetime is $d s^{2}=-\left[1-\frac{2 M(v)}{r}\right] d v^{2}+2 d v d r+r^{2} d \theta^{2}+r^{2} \sin ^{2} \theta d \varphi^{2}$. $\nu$ is the advanced Eddington coordinate corresponding to time. Find the event horizon expressed in terms of $M, \dot{M}$.
```
```
In a long and straight wire of length $L$, electrons oscillate in phase with angular frequency $\omega$ and small amplitude $a$. Try to calculate the electric field intensity at a far distance $R$ ($R \gg L$) at an angle $\theta$ to the wire.
```
MMLU:
```
Consider three identical, ideal capacitors. The first capacitor is charged to a voltage and then disconnected from the battery. The other two capacitors, initially uncharged and connected in series, are then connected across the first capacitor. What is the final voltage on the first capacitor?
```
```
In a certain region, the electric field varies with the radius away from origin by the equation Er = –6r^2 + 4r + 3, where r is given in meters and E in N/C. The potential difference between the origin and the point (3, 4) is ?
```
The experimental results show that the average number of reasoning tokens on R-Bench is 6157.5, while the average number of reasoning tokens on MMLU is 1021.2, a difference of about 6 times. This suggests that even though the questions with similar length and same subject, the questions in R-Bench still require more reasoning marks to solve.
------------------------
------------------------
## Thank you for your professional and insightful comments.
We will incorporate the discussion from the rebuttal stage into the revised version. | Summary: This paper proposes R‑Bench, a benchmark designed to evaluate complex reasoning in language and multimodal models. The dataset spans a wide range of subjects and includes more than 1,000 text-based and 665 multimodal questions. The questions are carefully selected and filtered to ensure that they require deep reasoning rather than simple recall, and are provided in both English and Chinese to test cross-linguistic capabilities. Experiments on various LLMs and MLLMs show that even state‑of‑the‑art models achieve only moderate accuracy, with multimodal reasoning posing a greater challenge than text-only tasks. This benchmark not only highlights current limitations across disciplines and modalities but also provides valuable insights and guidance for future improvements in foundation models’ reasoning skills.
Claims And Evidence: In the conclusion, the authors claim that the benchmark achieves competition-level difficulty comparable to contests such as AIME@2024. However, this assertion is not fully substantiated by the evidence presented in the paper. Although the dataset is derived from graduate-level materials, there remains a gap between the difficulty of graduate-level content and that of olympiad-level challenges.The question samples provided in Fig. 3 indicates that the problems do not approach the complexity expected of olympiad-level tasks. Consequently, the claim appears overstated and lacks clear justification.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: N/A. No theories involved in this paper.
Experimental Designs Or Analyses: The experiment design is solid.
Supplementary Material: N/A. No supplementary material provided.
Relation To Broader Scientific Literature: The development of such a broad and difficult benchmark thus fills a gap in the literature, providing a tool that can more comprehensively evaluate both the retrieval of knowledge and the capacity for complex reasoning.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths:
1. The paper is written in a clear and straightforward manner, with all key steps of data collection and benchmark construction thoroughly documented.
2. The experimental evaluation is comprehensive, testing a wide range of models. The observed shortcomings in multimodal reasoning capabilities align with findings from previous studies.
3. The benchmark’s design—spanning a broad knowledge base and requiring deep reasoning—provides a robust testbed for models. Its multilingual and multimodal versions strengthens its utility and relevance.
Weaknesses:
1. The paper does not provide a thorough discussion or comparison of MMMU-pro, a more robust evolution of MMMU. Including MMMU-pro in experiments (e.g., in Tables 2–4) would strengthen the analysis.
2. The benchmark does not report the performance of human experts, leaving an important baseline unexplored.
3. In Section 3.1, only 30 questions are sampled for the win rate comparison, which is a small sample size that limits the reliability of the experimental results.
4. The failure patterns of LLMs and MLLMs are underexplored. More detailed categorization of errors, such as lacking knowledge, wrong reasoning steps can provide valuable insights into developing more capable models.
Other Comments Or Suggestions: 1. In Appendix A.3, there are two "table 8" in line 654.
2. The caption of figure 6 is confusing. I suggest the authors rewrite it.
Questions For Authors: 1. In the model screening process described in Section 2.3, how was the 2000 reasoning token threshold determined? Can you provide details on the distribution of reasoning tokens in the dataset before filtering in Step 4? Can you also estimate the computational cost of prompting o1 with the entire dataset to obtain the reasoing token number?
2. Regarding Section 2.4, can you elaborate on how the answer options were constructed? Specifically, on line 257 you mention that "we manually adjust the options to ensure a sufficient numerical gap between them." What criteria or standards guide these adjustments? Additionally, when including the "all answers are incorrect" option, were there cases where this was intended to be the ground truth answer?
3. How do human experts perform in the benchmark?
4. In the conclusion, you claim that R‑Bench reaches the difficulty level of olympiad-level competitions. However, given that the dataset is derived from undergrad and graduate courses, which are generally more focused on in-depth knowledge rather than competition-level challenge, can you provide further justification or evidence to support this claim?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your insightful comments, which are crucial to improving the quality of this work. The followings are detailed response to your concerns.
### Q1:
In the conclusion, authors claim R-Bench achieves competition-level difficulty, but this is not fully supported by the evidence provided.
### A1:
Indeed, it is difficult to claim that the difficulty of multidisciplinary questions is directly comparable to that of competition problems.
In our paper, we aimed to illustrate that for current SOTA models such as o1 and DeepSeek R1, their performance on R-Bench is lower than on AIME@2024. For example, o1 and DeepSeek R1 achieved 74.4% and 79.8% accuracy on AIME@2024, while their scores on R-Bench were 69.0% and 61.2%. This observation served as part of the motivation for our claim.
Besides, in revised version, we will refine our claim and introduce a clearer qualification: specifically, for current advanced reasoning models such as o1, R-Bench achieves competition-level accuracy comparable to contests such as AIME@2024.
### Q2:
The paper does not provide a thorough discussion or comparison of MMMU-pro.
### A2:
In revised version, we will include comparisons with MMMU-Pro in Tables 3 and 4. In rebuttal stage, we conduct o1 voting and o1 thinking time comparison experiments.
We randomly sample 30 questions in MMMU-Pro and use o1 model to compare them with questions in R-Bench-M.The voting results were 22:7:1 (win:loss:tie) in favor of R-Bench, showing a significant preference for R-Bench questions in terms of reasoning demand. In addition, for questions in R-Bench-M, the OpenAI o1 model required an average of 91.7s to generate a response, whereas for questions in MMMU-Pro, it required only 28.1s on average. This further suggests that R-Bench-M demands more reasoning capabilities.
Besides, we will expand the discussion of MMMU-Pro in related work.
### Q3:
The benchmark does not report the performance of human experts, leaving an important baseline unexplored.
### A3:
We agree that human expert performance is an important baseline. However, establishing such a baseline is extremely challenging. The main difficulty lies in recruiting domain experts from different fields to solve thousands of high-difficulty questions, which is practically demanding. Moreover, a single test run is not statistically meaningful; reliable baselines require averaging over 5 runs on thousands of questions.
Therefore, we plan to conduct this baseline on 3–5 selected subjects, such as computer science, to better reflect the gap between models and human experts.
### Q4:
In Section 3.1, 30 questions is a small sample size that limits the reliability of the experimental results.
### A4:
For the reasoning model scoring, we extended the number of evaluated questions from 30 to 300. The experimental results were consistent with those reported in the paper. In terms of textual reasoning, R-Bench takes approximately 7 times longer than MMLU. For multimodal reasoning, R-Bench requires roughly 4 times the reasoning time compared to MMMU. We will incorporate this result in the revised version to make our experimental results more convincing.
### Q5:
The failure patterns of LLMs and MLLMs are underexplored.
### A5:
We have conducted an analysis of some error examples; however, due to format and length limitations during the rebuttal stage, we are unable to include them here. In our analysis of GPT-4o and o1's errors, we observed that most failures occurred during the reasoning process. These errors stem from various sources, such as calculation, flawed reasoning strategies, and perception errors (for multimodal models). Notably, the models rarely failed due to a lack of knowledge, indicating that they have generally mastered knowledge at a graduate level. In the revised version, we will include an error analysis in Section 3.3 and Appendix to present our findings.
### Q6:
How was the 2,000 reasoning token threshold set? What's the pre-filter distribution and o1 cost?
### A6:
The 2,000-token threshold was a heuristic decision. We found that 2,000 tokens served as a reasonable threshold to help distinguish "reasoning-oriented" questions from "knowledge-based" ones. In the revised version, we will include a distribution histogram to illustrate the original reasoning token distribution.. The cost of API call for Openai o1 is about 6,000 USD.
### Q7:
What criteria guided the manual adjustments to ensure sufficient numerical gaps? Additionally, was “all answers are incorrect” ever used as the correct answer?
### A7:
We believe that the other answer options should differ from the correct answer by at least 10% or 0.5. This is a heuristic guideline to prevent models from being penalized due to minor approximation errors during the reasoning process.
Regarding the second question, some "all answers are incorrect" options were used as the correct answer.
---
Rebuttal Comment 1.1:
Comment: Good work. Remember to include all the changes into the next version of the paper. I have revised my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your valuable comments! I appreciate your suggestion and will make sure to include all the changes in the next version of the paper. | Summary: This work proposes R-Bench, a graduate-level multidisciplinary, multilingual benchmark for both LLM and MLLM reasoning evaluation, which has coverage similar to MMLU and MMMU while reaching the difficulty of mathematical competitions such as AIME@2024. The authors evaluate multiple closed-source and open-source models on R-Bench and then observe both the progress and limitations of current models in reasoning.
## update after rebuttal
The authors' response address all my concerns. I decide to maintain my score (which is already high) and acknowledge that the contributions of this paper merit publication.
Claims And Evidence: Yes, the claims made in the submission are supported by clear and convincing evidence.
Methods And Evaluation Criteria: Yes, the proposed benchmark in this work makes sense for the reasoning problems in the literature.
Theoretical Claims: This paper mainly focuses on benchmark and its construction process, and therefore does not provide theoretical claims.
Experimental Designs Or Analyses: Yes. This work conducts extensive experiments to validate the value of their benchmark.
Supplementary Material: Yes, I have reviewed.
Relation To Broader Scientific Literature: The benchmark proposed in this paper can provide better support for the research of LLM and MLLM.
Essential References Not Discussed: No, I believe the related work is thoroughly discussed.
Other Strengths And Weaknesses: Strengths: This paper is well-written and the motivation is clearly defined. The proposed R-benchmark is highly important and critical for reasoning tasks.
Weaknesses: The related work in Section 4 should be placed earlier in the paper to assist readers in understanding the context. The authors have not thoroughly discussed the superiority of R-benchmark compared to other benchmarks.
Other Comments Or Suggestions: 1. Why have the authors placed the related work in Section 4? I believe it would be more appropriate to present the related work in Section 2.
2. In the Introduction section, the authors should express "**Comprehensiveness**" more accurately. For example, economics and chemistry are also subjects that reflect human intelligence, and therefore, we should also evaluate models' performance in these areas.
Questions For Authors: 1. Why should an ideal assessment for complex reasoning emphasize "**difficulty**"? In most cases, difficulty and ease are relative concepts, and it is unclear how to determine whether a problem is difficult. If a benchmark overly emphasizes difficulty, it may overlook the model's performance on simpler problems, whereas in real-world scenarios, simple problems may outnumber difficult ones. I hope the authors can provide a deeper discussion on this matter.
2. In Section 3.1, the authors conduct experiments comparing reasoning abilities with other benchmarks. Although this experiment validates the stronger reasoning ability of R-benchmark, the authors do not provide an explanation for the result. More high-level discussion on the underlying reasons for this outcome is necessary.
3. Section 3.3 shows significant performance variation across disciplines. However, I am unsure whether this experiment is fair. How do the authors ensure that the difficulty of questions in economics and physics is consistent? If the difficulty across disciplines is not uniform, for instance, if economics questions are particularly difficult while physics questions are relatively easier, then this finding would be meaningless.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your insightful comments, which are crucial to improving the quality of this work. In addition, thank you for your positive evaluation of our work, which has been a great source of encouragement for us. The followings are detailed response to your concerns.
### Q1:
The related work in Section 4 should be placed earlier in the paper to assist readers in understanding the context.
### A1:
In the revised version, we will move the Related Work section to follow the Introduction and add more analysis and discussion to highlight the advantages of R-Bench compared to existing benchmarks.
### Q2:
In the Introduction section, authors should express "Comprehensiveness" more accurately.
### A2:
We want to build R-Bench as a comprehensive benchmark, aiming to cover as many subjects as possible that reflect various aspects of human intelligence such as economics and chemistry. We also hope to provide a more precise or definitive characterization of comprehensiveness. However, it is challenging to offer a clear-cut definition of this concept.
In the revised version, we will attempt to describe this property using more concrete language — for example, by stating that R-Bench includes over 100 subjects or by proposing a quantifiable definition of comprehensiveness Thank you again for your suggestion which will help us make the paper more rigorous overall.
### Q3:
Why should an ideal assessment for complex reasoning emphasize "difficulty"? In most cases, difficulty and ease are relative concepts, and it is unclear how to determine whether a problem is difficult.
### A3:
Indeed, in real-world scenarios, simple problems may outnumber difficult ones. However, in our context, difficulty refers to the level of challenge a model faces when solving a problem.
Just as humans encounter exams of varying difficulty at different educational stages—elementary school, middle school, university—these assessments guide individual development by signaling what knowledge to acquire and which direction to pursue. Similarly, for foundation models, evaluation benchmarks serve as developmental milestones.
For instance, prior to December 2022 (before the release of ChatGPT), the focus was on whether large models could memorize and reproduce a broad range of knowledge. At that time, MMLU provided a meaningful developmental direction and was considered challenging for models. As model capabilities improved with the release of GPT-4o, Claude 3.5, and others, MMLU became increasingly saturated, i.e., it started to be perceived as easy, prompting the development of more difficult benchmarks like MMLU-Pro.
Now, with the advent of advanced reasoning models such as o1, even MMLU-Pro is showing signs of saturation. This creates a demand for more difficult benchmarks to continue guiding model development.
Based on the above, we define difficulty in terms of the saturation level of current state-of-the-art models. This is exactly what our "o1 saturation" metric in Table 1 aims to capture — a quantifiable measure of how much room there is for improvement for top-performing models.
In this sense, R-Bench not only points to the future direction of model development — emphasizing complex reasoning — but also presents immediate challenges for today’s best models, none of which have yet saturated R-Bench.
### Q4:
In Section 3.1, the authors do not provide an explanation for the result. More high-level discussion on the underlying reasons for this outcome is necessary.
### A4:
Thank you for pointing out this issue. In the revised version, we will provide more details about Section 3.1 experiments, including the specific implementation, the voting template used during the user study, and additional explanation and analysis of the results. We believe these additions will help improve the quality of our work and make the methodology clearer and more accessible to readers.
### Q5:
Section 3.3 shows significant performance variation across disciplines. However, I am unsure whether this experiment is fair. How do the authors ensure that the difficulty of questions in economics and physics is consistent?
### A5:
Thank you for your insightful comment. Indeed, it is challenging to ensure that the difficulty level is consistent across different subjects. In the next version, we will make the description in Section 3.3 more rigorous—for example, by revising the subsection title to emphasize that all subjects still require improvement and none have reached a perfect state. Additionally, we plan to include an analysis of error cases across different subjects under this section, to help readers better understand the types of mistakes models make in each subject. | Summary: This paper introduces R-Bench, a new benchmark designed to evaluate complex reasoning capabilities in both LLMs and MLLMs. The benchmark contains questions in two languages: English and Chinese. There are 1,094 questions in 108 subjects
for textual evaluation and 665 questions in 83 subjects for multimodal evaluation. The authors conduct comprehensive experiments of current LLMs and MLLMs. Several key observations are presented for further development.
Claims And Evidence: Yes. The claims made in the submission are supported by clear and convincing evidence.
Methods And Evaluation Criteria: Strengths of the benchmark:
1. The benchmark dataset is of high quality. The authors make great efforts to ensure the difficulty of the dataset.
2. The benchmark contains data newly collected from college courses.
Weaknesses of the benchmark and its evaluation method:
1. The scope of the benchmark is very comprehensive, with 108 subjects in the dataset. However, compared with the coverage of the subjects, the total number of the collected questions seems rather limited, with only 1094 and 665 questions for textual and multimodal questions. For example, in Table 8, there are many sub-subjects with questions less than 5. This could lead to randomness when evaluating.
2. The evaluation strategy seems to be too straightforward. As the dataset targets difficult reasoning questions, the authors are expected to provide an evaluation strategy for the reasoning rather than only the answer. Since the problems are all sourced from college courses, the knowledge required when reasoning could also be difficult. Therefore, the model could obtain a wrong answer due to a lack of specific knowledge required in reasoning instead of a poor reasoning capability. Simply assessing the final answer omits this situation totally.
Theoretical Claims: No theoretical claims are introduced.
Experimental Designs Or Analyses: Weaknesses:
1. The error analysis part provides very few valuable insights. The observations in Section 3.3 could deliver more fine-grained observations into the reasoning process of GPT-4o or o1. How these models make mistakes is a good topic to discuss in the error analysis section. For example, does the model obtain a wrong answer due to a deduction error, knowledge error, or other error types?
Supplementary Material: No Supplementary Material is provided.
Relation To Broader Scientific Literature: The related models and benchmarks have been fully discussed in the related work.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Weakness:
1. The dataset size seems limited compared with MMLU and MMMU. For example, MMMU contains 11.5K samples for multimodal questions while this dataset contains only over 600 samples.
Other Comments Or Suggestions: 1. More data samples of different subjects could be placed in the appendix to give the readers a better view of the dataset.
2. Table 1 could add another column demonstrating the dataset size.
Questions For Authors: 1. I think the authors should clarify how to better evaluate the model prediction. For example, how can errors caused by knowledge be considered and how can the evaluation contain more aspects?
2. Do authors have any plans to expand the dataset size?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your insightful comments, which are crucial to improving the quality of this work. The followings are detailed response to your concerns.
### Q1:
Despite the broad subject coverage, the total number of collected questions, 1,094 for textual and 665 for multimodal, is relatively limited.
### A1:
We were indeed aware of the issue regarding the number of test questions when designing R-Bench. We would like to address this concern from following three aspects:
1. In fact, we originally collected over 15,000 candidate questions—comparable to benchmarks like MMLU and MMMU. However, our rigorous and multi-stage filtering process eliminated a large portion of these samples to ensure high quality, fairness and difficulty. This strict curation results in a smaller final dataset, but we believe it contributes to a more reliable evaluation. In future versions of R-Bench, we plan to include more questions that require advanced reasoning and offer broader topic coverage to further improve the benchmark.
2. If we consider a typical human examination paper to contain around 20 questions, then R-Bench's 1,094 questions would be equivalent to approximately 55 such exams, while the 665-question subset corresponds to about 33 exams. Compared to human-oriented standardized tests such as the ACT (American College Test), SAT (Scholastic Assessment Test), GRE (Graduate Record Examination), and China's GAOKAO, R-Bench significantly increases the number of test questions and the overall coverage.
3. Based on the evaluation results, although R-Bench contains only 1,094 and 665 samples for language and multimodal model testing respectively, the outcomes align well with our understanding of the reasoning capabilities of different models. For example, the OpenAI o1, o1-preview and o1-mini models demonstrate stronger performance than GPT-4o and DeepSeek R1 exhibits stronger reasoning ability than DeepSeek V3.
In addition, the results also indicate that most models still require significant improvements in handling complex reasoning problems. This provides new insights and guidance for the future development of reasoning-capable models.
### Q2:
The evaluation strategy seems to be too straightforward. As the dataset targets difficult reasoning questions, the authors are expected to provide an evaluation strategy for the reasoning rather than only the answer.
### A2:
During the development of R-Bench, we also considered incorporating evaluation methods beyond final outcome, such as process-based evaluation. However, due to the inherent uncertainty in reasoning processes, it was difficult to identify a reliable way to assess intermediate steps consistently. As a result, we chose to reflect the focus on reasoning primarily through our question selection process. We applied rigorous filtering involving both domain experts and intelligent models to ensure that the majority of questions emphasize reasoning over knowledge, rather than being heavily knowledge-dependent. In future work, we plan to explore alternative evaluation methodologies as well. We believe your suggestion provides a valuable direction for improving our benchmark.
### Q3:
The error analysis part provides very few valuable insights. The observations in Section 3.3 could deliver more fine-grained observations into the reasoning process of GPT-4o or o1.
### A3:
We agree that it can help improve the quality of our work. We have conducted an analysis of some error examples; however, due to format and length limitations during the rebuttal stage, we are unable to include them here. In our analysis of GPT-4o and o1's errors, we observed that most failures occurred during the reasoning process. These errors stem from various sources, such as calculation mistakes, flawed reasoning strategies, and perception errors (in the case of multimodal models). Notably, the models rarely failed due to a lack of factual knowledge, indicating that they have generally mastered knowledge at a graduate level. In the revision, we will include an error analysis section in Section 3.3 and the supplementary materials to present our findings and insights. We believe this will strengthen our work, and we sincerely appreciate your professional and helpful feedback.
### Q4:
Do authors have any plans to expand the dataset size?
### A4:
Yes, we do have plans to expand the size of the dataset. Our expansion will not be limited to multi-disciplinary reasoning problems; we also plan to extend towards more general-purpose reasoning tasks — for example, complex reasoning scenarios in daily life such as path planning.In future work, we hope to scale the dataset to around 5,000 high-quality questions in future work.
### Q5:
For other comments or suggestions
### A5:
Due to the length limit of the rebuttal, we apologize for not being able to give a detailed response. We will follow your two insightful suggestions in the revision to improve the quality of our work.
---
Rebuttal Comment 1.1:
Comment: I have read all the authors' rebuttal. However, my concern about evaluation still exists: More fine-grained evaluation is important, especially for a benchmark targeting complex reasoning evaluation, as the title says. Only evaluating the correctness of the final answer provides very little insight into the models' reasoning behavior.
Overall, I think the introduction of the benchmark is meaningful, but the evaluation method and provided insights are somewhat limited. I will maintain my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your valuable suggestions and positive score for our work. We highly appreciate the improvement directions you have suggested. We will continuously improve this work according to your comments in the future. | null | null | null | null | null | null |
Improved Last-Iterate Convergence of Shuffling Gradient Methods for Nonsmooth Convex Optimization | Accept (poster) | Summary: This work studies last-iterate guarantees under different shuffling models. In the RR model, the authors establish last-iterate (and suffix-averaging) guarantees, similar to the iterate-averaging results of (Koren et al., 2022), with slightly improved rates in the SS model. The obtained guarantees generally match or improve upon prior work on last-iterate guarantees in shuffling models (Liu & Zhou, 2024b). The results apply to both convex and strongly convex functions and support a broader class of $\psi$ functions compared to $I_c$ in (Koren et al., 2022).
## update after rebuttal
I thank the authors for their response and continue to support the acceptance of this work.
Claims And Evidence: The claims are supported by a thorough theoretical analysis across multiple shuffling models.
Methods And Evaluation Criteria: N/A
Theoretical Claims: Since the paper presents numerous technical results, most theoretical claims are established in the appendix, which is extensive. The reviewer primarily focused on Lemma 5.1 (Lemmas B.2 and B.3 in the appendix), which appear correct, and partially examined Lemma B.4 (RR scheme).
Experimental Designs Or Analyses: N/A
Supplementary Material: See the Theoretical Claims section.
Relation To Broader Scientific Literature: The results greatly improve upon (Liu & Zhou, 2024b).
The technique builds heavily on (Liu & Zhou, 2024a) and, to a lesser extent, (Koren et al., 2022). However, the considered setting is more general, and Algorithm 1 aims to accommodate a broader range of shuffling schemes. This generality is reflected in Lemma 5.1, the use of $\Phi$ on page 8, and the corresponding analysis.
Essential References Not Discussed: Essential references are discussed.
Other Strengths And Weaknesses: **Strengths:**
- The analysis of shuffling schemes and last-iterate guarantees represents a valuable contribution to stochastic optimization, better reflecting real-world practice compared to sampling with replacement and standard iterate averaging.
- The paper considers a broad range of settings, including convex, strongly convex, and general $\psi$ functions.
**Weaknesses:**
- Some gaps remain compared to the best-known lower bounds from (Koren et al., 2022). Specifically, it is unclear whether the last-iterate guarantee is inherently worse than iterate averaging, given that the best-known lower bound when averaging across all epochs is $n^{-1/4} K^{-3/4}$. That said, the theoretical contributions of the paper remain solid even without resolving this issue.
Other Comments Or Suggestions: Overall, the paper presents a strong contribution to an important topic, and the reviewer recommends its acceptance.
Questions For Authors: - In your view, what is the main challenge in closing the gap to the $n^{-1/4} K^{-3/4}$ lower bound established in (Koren et al., 2022)? Is the last iterate inherently worse, or should the lower bound for averaging across multiple epochs be improved?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We appreciate the reviewer's positive feedback. We will answer the reviewer's question below.
**Question.** Thanks for the deep question. We will discuss it from the following two perspectives.
- As mentioned in our Subsection 1.2 (see Lines 122-142, right column), the lower bound in [1] can be read as $\Omega\left(\frac{1}{J^{1/4}n^{1/4}\sqrt{K}}+\frac{1}{\sqrt{nK}}\right)$ for the suffix average of the last $J$ epohs, i.e., $\frac{1}{Jn}\sum_{j=K-J}^{K-1}\sum_{i=1}^{n}\boldsymbol{x}_{jn+i+1}$. Therefore, for the average over all $K$ epochs, the lower bound is $\Omega\left(\frac{1}{n^{1/4}K^{3/4}}+\frac{1}{\sqrt{nK}}\right)$. In other words, the rate $\Omega\left(\frac{1}{n^{1/4}K^{3/4}}\right)$ for the average iterate could only be possible in the small epoch regime, i.e., $K\leq n$.
- To be honest, we have no idea whether this rate is tight. If it is indeed achievable, then it means that there exists at least one problem instance such that the average iterate can improve over the last iterate by a factor of $K^{-1/4}$, which is highly surprising and even seems impossible in our opinion since we have never seen an optimization method exibithing such a property if the stepsize is fine picked. As such, we suspect the lower bound for the average iterate should be improved.
**References**
[1] Koren, Tomer, et al. "Benign underfitting of stochastic gradient descent." Advances in Neural Information Processing Systems 35 (2022): 19605-19617. | Summary: This paper studies the Last iterate convergence of proximal gradient methods for non-smooth (strongly) convex optimisation problem, with Random Reshuffle (RR) and Single Shuffle (SS) strategies. The paper considers the General Proximal Gradient Method, where proximal step is implemented at every step which is more natural than most recent works' variant (e.g. Liu & Zhou 2024) where the proximal step is implemented at every epoch. Further, the paper establishes an array of convergence results for RR and SS in general convex and strongly convex setting. For RR, the rates in the paper are always better than the SOTA by some polynomial factors in $n$, the number of individual functions. For SS, the algorithm might not converge when $n$ is small, but when the composite part $\psi$ becomes the characteristic function over some feasible sets, the paper established some more refined convergence rate that improves over the SOTA.
## Update after rebuttal
I maintain that this is a very good paper and should be accepted. I will keep the score 4.
Claims And Evidence: The paper is theoretical in nature and I discuss the claims and evidence in the Theoretical Claims section.
Methods And Evaluation Criteria: The methods and evaluation criteria is valid.
Theoretical Claims: The paper gives a number of convergence theorems, in particular, Theorem 4.2, Corollary 4.3, Theorem 4.4, Theorem 4.5, Theorem 4.6, and Theorem 4.7, all of which are backed by proofs in the appendices.
At the core of the these convergence rates, the authors gave a general last-iterate descent analysis in Lemma 5.1. I briefly went through Appendix B.1 which gives the proof for Lemma 5.1. While I didn't check the analysis therein line-by-line, the techniques seem solid to me.
Experimental Designs Or Analyses: not applicable.
Supplementary Material: I briefly went through Appendix B.1 where the central descent lemma is analysed.
Relation To Broader Scientific Literature: The earlier Liu & Zho (2024) work gave a series of last-iterate convergence results of shuffling proximal gradient methods in various settings, some matching known lower bounds. However, their rates in the nonsmooth Lipschitz continuous setting did not demonstrate any improvements over the simple proximal (full) gradient method. This work follows up on the work of Liu & Zhou (2024) in the nonsmooth Lipschitz continuous setting in a spectacular way, proving improved convergence rates for RR in both the convex and strongly convex setting. The results for SS are however worse than that of Liu & Zhou (2024), failing to show convergence in some settings. But with some additional assumptions, the authors managed to improve the rates of Liu & Zhou even for SS.
On the algorithm side, the method studied in the paper differs from most of the recent works, where the proximal operator is applied at every step, instead of at every epoch. The epoch-wise proximal operator implicitly treats the shuffling method as a way to accumulate some approximate of the full gradients. The method considered in this paper is therefore in my opinion much more natural, and principled.
I believe that this paper makes an important contribution towards of the field of shuffling gradient method.
Essential References Not Discussed: There are no essential references that the paper omits.
Other Strengths And Weaknesses: see the next section.
Other Comments Or Suggestions: - Table 1 looks confusing and I suggest that the authors clearly mark the upper bounds and lower bounds.
Questions For Authors: There are a few questions that I would like to ask the authors about:
- Regarding last iterate convergence: as the authors pointed out, the last-iterate convergence in Liu & Zhou (2024) is obtained following the technique of Zamani & Clineur (2023). While the proofs in this paper seems to be substantially different from that of L&Z, I wonder if the authors are still following the ideas of Z&C to obtain the last-iterate convergence guarantees? As noted in [1], the techniques in Z&C gives rise to stepsize schedulers that are, in some sense, separate from some backbone algorithms with only avg convergence guarantees. If the last-iterate convergence of this work also somewhat follows the ideas of Z&C, I wonder if it's possible that the observations in [1] can be applied to the results here as well? Is it possible that some (hopefully) simpler descent analysis for the avg iterate can be presented, and the scheduler part can then be added on top separately to obtain the last-iterate convergence?
- Some further discussions on the source of improvement (or indeed, deterioration, in some cases) of the convergence rates: On one hand, the algorithm considered in this paper is somewhat more natural than the one in L&Z. On the other hand, the paper also presented different analysis techniques than the previous works, where the authors included in their considerations the randomness of RR and SS in each epoch. I wonder if the authors could comment on whether the improvement in the convergence rate for RR comes purely from this new insight in the analysis, or is the the fact that now proximal operator is applied at each step is also important for obtaining the improvement for RR? Similarly, can the authors comment on why, in the general case, is the convergence results for SS worse than that of L&Z? Is it purely a deficiency in the analysis (and it's absolutely fine if the authors have no idea on how to resolve it), or is the difference in the algorithm causing the difficulties in the analysis? Despite the fact that the algorithms in this paper and L&Z become the same when no proximal step is taken, can the authors also discuss why the results in Theorem 4.5 can be improved in Theorem 4.6 under the additional assumption? Is it because some difficulties with the every-step proximal operator can be overcame when the proximal operator is simple enough (suggesting that perhaps the difficulties might come from the proximal operator in Theorem 4.5), or there are some further insights in the analysis?
[1] Defazio, Aaron, et al. "Optimal linear decay learning rate schedules and further refinements." arXiv preprint arXiv:2310.07831 (2023).
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive comments. We will answer the reviewer's questions below.
**Q1.** Our analysis is still inspired by and related to [1] but with some necessary changes to fit the shuffling method. However, whether the framework of [2] can be used to simplify the proof is unclear to us. We briefly explain why in the following.
- After a quick read, we think the core result of [2] is their Theorem 1, which relates the last-iterate convergence to the regret guarantee in online learning. As far as we can check, their Lemma 7 (the key to prove Theorem 1) relies on the fact that the gradient oracle $g_t$ is an unbiased estimator of $\nabla f(\boldsymbol{x}_t)$ conditioning on the history.
- In contrast, the gradient oracle in our setting is $g_t=\nabla f_{\textsf{i}(t)}(\boldsymbol{x}_t)$, which is unfortunately biased due to the shuffling-based index $\textsf{i}(t)$.
As such, their analysis immediately failed in our setting. Hence, how to make the idea in [2] work under the shuffling scheme remains unknown currently.
**Q2.** Indeed, the improvement (or deterioration) comes from both the analysis and the algorithmic change. Simply speaking, the current analysis that considers randomness naturally leads us to make the proximal update happen in every step. More precisely, if we want to utilize randomness in the analysis, it is natural to recognize every single step as an update (one can think about SGD as an example). Therefore, the difference between our algorithm and [3] arises since the latter's analysis is epoch-wise, which therefore requires the proximal update to happen at the end of every epoch.
- For RR, such a different view is enough to improve the convergence, as commented by the reviewer (also pointed out in our Section 5).
- But for SS, things become tricky. As shown by our results, the new view (and hence the variation in the algorithm) is enough to guarantee better rates in the small epoch regime, but is inadequate in the large epoch regime. It turns out that one critical point missed in the proof is the deterministic property of the shuffling method, i.e., the index in every epoch goes over the entire set $[n]$ (the key property used in [3]). Hence, we could expect a better result for SS if this fact is used in the analysis (as stated in the last paragraph in Section 5). However, this is not an easy task due to the algorithmic change, especially for a general $\psi$. But for some $\psi$ (including but not limited to $\psi=I_\mathcal{C}$ in Theorem 4.6), we still can make it work. For the most general case and more details, we kindly refer the reviewer to our Lemma B.3 and the discussion in Lines 1427-1445.
**References**
[1] Zamani, Moslem, and François Glineur. "Exact convergence rate of the last iterate in subgradient methods." arXiv preprint arXiv:2307.11134 (2023).
[2] Defazio, Aaron, et al. "Optimal linear decay learning rate schedules and further refinements." arXiv preprint arXiv:2310.07831 (2023).
[3] Liu, Zijian, and Zhengyuan Zhou. "On the Last-Iterate Convergence of Shuffling Gradient Methods." International Conference on Machine Learning. PMLR, 2024.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for the answers. I think a score of 4 is appropriate for the work and wish you good luck. | Summary: - The paper investigates the convergence rates of shuffling SGD for nonsmooth (strongly) convex function. While the convergence behavior of shuffling SGD under smoothness assumption has been widely studied in recent literature, its investigation under Lipschitz continuity remains relatively less explored. The paper focuses on two shuffling strategies (RR and SS) in the context of the subgradient-proximal method.
- The paper makes a fine-grained analysis on $G_{f,1}$ and $G_{f.2}$ to establish better convergence rates compared to prior works. For RR, when the objective is convex, Theorem 4.2 achieves $O(\frac{\sqrt{G_{f,1}G_{f,2}}}{n^{1/4}K^{1/2}})$, and when the objective is strongly convex, Theorem 4.4 achieves $O(\frac{G_{f,1}G_{f,2}}{n^{1/2}K})$. Both rates improve upon previously known results, particularly when the Lipschitz cnostants of individual components are similar. Similarly, for SS, Theorem 4.5 and Theorem 4.7 demonstrates improved convergence rates over prior studies in both the convex and strongly convex setting, provided that the number of total epoch is small.
Claims And Evidence: Most claims are supported by theorems and propositions.
Methods And Evaluation Criteria: This paper is purely theoretical and does not involve empirical evaluation or benchmark datasets.
Theoretical Claims: I briefly checked the proofs of Lemma B.1, B.2, B.3, B.4, and B.6 (which I think serve as the backbone for proving the main theorems), and did not identify any critical issues. However, since the proof is highly technical, I was not able to fully verify the entire proof framework in detail.
Experimental Designs Or Analyses: The paper does not include any experiments.
Supplementary Material: The paper does not include any supplementary material.
Relation To Broader Scientific Literature: The paper improves the convergence rate of RR and SS compared to previously known rates under the Lipschitz continuity assumption. This assumption is more realistic in modern machine learning frameworks than traditional smoothness assumption.
Essential References Not Discussed: Essential references are appropriately cited and discussed in the paper.
Other Strengths And Weaknesses: Strengths
- The paper is well-written with clear and detailed explanations.
- The derived convergence rates for RR are strong. The results hold for the last iterate and general (strongly) convex $\psi$. Also, this paper is the first to prove that RR converges faster than Proximal GD in this setting.
Weaknesses
- The convergence rates for SS seem weak, compared to those for RR. As the authors pointed out, both Theorem 4.5 and Theorem 4.7 do not guarantee convergence to $0$ as $T \rightarrow \infty$. While Theorem 4.6 provides a vanishing bound, it requires an additional condition on $\psi$. In particular, it is somewhat unusual that the convergence rate does not go to $0$ even in the strongly convex setting.
Other Comments Or Suggestions: Minor typos:
- Line 1299, 1301: $y\rightarrow z_{t+1}$
- Line 1311: $T+2 \rightarrow T+1$
Minor Suggestion:
- In line 99L, the paper states that “For RR, our new rates are always better than the best-known bounds in (Liu & Zhou, 2024b) by up to a factor of $\Theta(n^{-1/4})$ in the general convex case.” I believe this sentence slightly overclaims the contribution, as there may be no gain when $G_{f,2}\approx \sqrt{n}G_{f,1}$. Thus, I suggest removing the term “always” from the sentence.
Questions For Authors: Q1. In lines 831–849, the authors state that RR is slower than proximal SGD, at least in the convex setting. In contrast, under the smoothness assumption, RR has been extensively studied and shown to converge faster than SGD. Do the authors have any insights or intuition on why RR exhibits slower convergence under the Lipschitz continuity assumption?
Q2. Below Corollary 4.3, the paper claims that when $G_i \equiv G$, the result matches the lower bound $\Omega(\frac{1}{n^{1/4}K^{1/2}})$ shown by [Koren et al., 2022] proved for $\psi=I_C$. Does the lower bound construction in [Koren et al., 2022] also satisfy $G_i \equiv G$?
Q3. The authors clearly state the significance of their work when $G_{f,2} \approx G_{f,1}$. However, when $G_{f,2}\approx \sqrt{n}G_{f,1}$ (which is not well discussed in the paper), the results in the paper do not offer any improvement over prior works; for RR, both the rates in Theorem 4.2 and 4.5 match those in (Liu & Zhou, 2024b), and for SS, the critical epoch $K_*$ becomes 1. Do the authors believe that the current rate for this case is already optimal, or is there potential for achieving a better rate?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable feedback. We would like to answer the reviewer's questions below.
**Typos&Suggestions.** Thanks for carefully reading and the helpful comments. We have corrected/modified them accordingly.
**Q1.** Thanks for the interesting question. Honestly, we do not have too many insights on this phenomenon and don't want to provide a misleading explanation. We hope it can be fully understood in future research.
**Q2.** Yes, as stated in Theorem 5 of [1], the lower bound construction is for $G_i\equiv 4$. We also mentioned this point in Remark d under Table 1.
**Q3.** We note that $G_{f,2}\approx \sqrt{n}G_{f,1}$ is a very special case, but doesn't need to be worried about. Suppose we have $B$ gradient evaluation budgets, then the rate of every related algorithm is as follows (for simplicity, we consider the non-strongly convex case):
- GD: $O(\frac{G_{f,1}D}{\sqrt{T}})=O(\frac{\sqrt{n}G_{f,1}D}{\sqrt{nT}})=O(\frac{\sqrt{n}G_{f,1}D}{\sqrt{B}})$.
- SGD: $O(\frac{G_{f,2}D}{\sqrt{T}})=O(\frac{G_{f,2}D}{\sqrt{B}})\approx O(\frac{\sqrt{n}G_{f,1}D}{\sqrt{B}})$.
- Theorem 4.7 in [2]: $O(\frac{G_{f,1}D}{\sqrt{K}})=O(\frac{\sqrt{n}G_{f,1}D}{\sqrt{nK}})=O(\frac{\sqrt{n}G_{f,1}D}{\sqrt{B}})$.
- Our Theorem 4.2 for RR: $O(\frac{n^{1/4}\sqrt{G_{f,1}G_{f,2}D}}{\sqrt{T}})=O(\frac{n^{1/4}\sqrt{G_{f,1}G_{f,2}D}}{\sqrt{B}})\approx O(\frac{\sqrt{n}G_{f,1}D}{\sqrt{B}})$.
As such, all algorithms have the same rate. Therefore, we believe our analysis for RR is optimal in this case.
Lastly, for SS, our Theorems 4.5 and 4.7 are never optimal for whatever case. Hence, we only need to discuss Theorem 4.6. In this special case, as one can check, it degenerates to the same rate $O(\frac{\sqrt{n}G_{f,1}D}{\sqrt{B}})$ given above. Thus, we believe Theorem 4.6 in this special case is also unimprovable.
**References**
[1] Koren, Tomer, et al. "Benign underfitting of stochastic gradient descent." Advances in Neural Information Processing Systems 35 (2022): 19605-19617.
[2] Liu, Zijian, and Zhengyuan Zhou. "On the Last-Iterate Convergence of Shuffling Gradient Methods." International Conference on Machine Learning. PMLR, 2024.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. I raise my score to 4. | Summary: This paper studies shuffling-based variants of proximal SGD on nonsmooth convex optimization, where proximal SGD steps are taken using indices following randomly sampled or arbitrary permutations. Prior works on shuffling-based SGD mostly focus on smooth cases, and this paper tackles the more difficult nonsmooth case where Lipschitz gradients assumption is not available. The paper shows random reshuffling (RR) enjoys convergence faster than proximal GD for the first time in the literature, and proves that single shuffling (SS) converges faster than proximal GD at least in the "low epoch" regime, in both convex and strongly convex cases.
Unfortunately, even with the improvements, the rates are slower than proximal SGD (where samples are chosen with replacement); the presence of a lower bound on RR/SS (due to Koren et al 2022) suggests that it may be the case that proximal RR/SS is fundamentally slower than proximal SGD (with-replacement).
## Update after rebuttal
Reviewers were asked to update the reviews, but for this paper I have not much to add. I keep my positive evaluation.
Claims And Evidence: I defer the discussion on the strengths and weaknesses of the developed theory to the "Strengths and Weaknesses" section. The paper does not contain any claims based on experimental results.
Methods And Evaluation Criteria: This paper does not propose a new method, and it analyzes existing methods theoretically. No empirical evaluation is considered necessary.
Theoretical Claims: I unfortunately did not have the time to check the details of the proofs in the supplementary material. I find the proof sketch convincing, but I am not entirely sure if the new technique developed by the authors is technically sound.
Experimental Designs Or Analyses: N/A. No experiments included, which I think is not a shortcoming given that it's a theory paper.
Supplementary Material: The supplementary material is mainly about omitted proofs. I unfortunately did not have the time to check the details of the proofs in the supplementary material.
Relation To Broader Scientific Literature: This paper studies popular variants of proximal SGD, so it may have some broader impact on other scientific areas that involve nonsmooth optimization.
Essential References Not Discussed: I don't know of other essential references that were not cited in the paper.
Other Strengths And Weaknesses: Strengths
1. The paper is well-written and reads well.
2. The paper analyzes the shuffling-based proximal gradient method while taking the proximal operator at every iteration, not every epoch (which is the version analyzed in most existing results). I like it because it is closer to (with-replacement) Proximal SGD.
3. The new technique (Lemma 5.1 and the analysis on $\Phi$) sounds intriguing, because handling the index dependency within epochs has always been a huge bottleneck in the analysis of shuffling based algorithms, especially in the "low epoch" regime. I think the technique developed in the paper can have a broader impact beyond proximal SGD.
4. The paper discusses several interesting future research directions.
Weaknesses
1. The biggest shortcoming I can point out is that the SS convergence results (except for Theorem 4.6) are slightly disappointing in the sense that the rates do not converge all the way to zero as the number of epochs grows to infinity.
2. Minor clarity issue 1: upon looking at Table 1, I was confused for a long time why one cannot combine the "ANY" shuffling results by Liu & Zhou 2024b and the SS bounds shown in the paper to get the best of both worlds. I did not realize that the papers consider *different* algorithms until Section 4.2. Indeed, Section 3 points out the difference, but I failed to make the connection to the rates in Table 1. It would be helpful if the authors more explicitly emphasize the differences of the considered algorithms in different rows of Table 1.
3. Minor clarity issue 2: The fact that RR rates are slower (and perhaps fundamentally so) than proximal (with-replacement) SGD is not revealed to the readers until the end of Section 4.1. Although I appreciate the authors' honesty, I believe this should be highlighted earlier, because it is easy for readers to "extrapolate" their prior knowledge from the smooth case to the nonsmooth case and falsely assume that the "slower baseline" that the paper is talking about is SGD, not GD.
Other Comments Or Suggestions: Some suggestions are made in the Weaknesses part above.
Questions For Authors: Some minor questions:
1. One question about Koren et al (2022): In their Theorem 6(ii) on SS, the rate $O (\frac{GD}{ n^{1/4} K^{1/4}})$ is shown only for $K \geq n$. Can you elaborate on how you can derive the other term $\frac{GD}{\sqrt{n}}$ in Table 1?
2. The fact that SS bounds decrease all the way to zero as $K \to \infty$ (except for Theorem 4.6) is slightly disappointing. Is there any hope for improvements, or some other techniques that allow the bounds to converge to zero at least in the special case of $\psi \equiv 0$?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for the endorsement of our work. We would like to address the reviewer's concerns below.
**W1&Q2.** To make the bounds for SS decrease to $0$, it indeed needs some other technique in addition to the analysis sketched in Section 5 (as mentioned at the end of the same section). In a high-level way, we need to utilize both the randomness and the deterministic property of SS, in which the former is described in Section 5 and the latter refers to that the index goes over the whole set $[n]$ in every epoch. These two key points are formalized in Lemmas B.2 and B.3, respectively. As such, proving SS converging to $0$ when $K\to \infty$ is possible once the conditions in Lemmas B.2 and B.3 are fulfilled. A special case is $\psi=I_\mathcal{C}$ used in Theorem 4.6, which further includes the situation $\psi\equiv 0$ (i.e., take $\mathcal{C}=\mathbb{R}^d$). For a more detailed discussion and some inadequate points of Lemma B.3, we kindly refer the reviewer to Lines 1427-1445.
**W2&W3.** Thanks for the suggestion. We will try to incorporate your comments in the revision when more space is available.
**Q1.** On Page 39 of the arXiv version of [1] (or Page 20 of the supplementary of the NeurIPS version), they obtained the rate in the order of $O\left(\eta G^2 (\sqrt{nK}+K)+\frac{D^2}{\eta nK}\right)$ where $\eta$ is the stepsize. Thus, the best $\eta =\Theta\left(\frac{D}{G\sqrt{(\sqrt{nK}+ K)nK}}\right)$ gives us the rate $O\left(GD\sqrt{\frac{\sqrt{nK}+K}{nK}}\right)=O\left(\frac{GD}{n^{1/4}K^{1/4}} \lor \frac{GD}{\sqrt{n}}\right)$, where we use $O(a+b)=O(a \lor b)$ for $a,b\geq 0$. Moreover, we believe the statement $K\geq n$ in their Theorem 6 in the main text is a typo and should be corrected to $K\leq n$ as used in their supplementary (see the page we mentioned at the beginning).
**References**
[1] Koren, Tomer, et al. "Benign underfitting of stochastic gradient descent." Advances in Neural Information Processing Systems 35 (2022): 19605-19617.
---
Rebuttal Comment 1.1:
Comment: Thank you very much for the response. I have decided to keep my positive score unchanged. | null | null | null | null | null | null |
DCBM: Data-Efficient Visual Concept Bottleneck Models | Accept (poster) | Summary: This paper proposed a data-efficient Concept Bottleneck Model (DCBM) that enables concept generation while maintaining interpretability with minimal training samples. DCBM defines concepts as image regions using segmentation and object detection models, eliminating the reliance on textual descriptions or large-scale pretraining datasets. It offers high flexibility and interpretability for fine-grained classification and domain adaptation tasks. The paper evaluates DCBM on various benchmark datasets, demonstrating competitive performance.
Claims And Evidence: - DCBM is claimed that CBM can be trained using no more than 50 samples per class, making it more data-efficient than existing CBMs.
- Related research, BotCL [1] also conducted tests using 50 concepts. Therefore, it is necessary to experimentally prove that the proposed DCBM outperforms BotCL and other models in terms of both performance and efficiency.
- Additionally, an ablation study on the number of concept samples is required.
[1] Wang, Bowen, et al. "Learning bottleneck concepts in image classification." Proceedings of the ieee/cvf conference on computer vision and pattern recognition. 2023.
Methods And Evaluation Criteria: - DCBM is more data-efficient than conventional CBMs and automatically generates concepts using segmentation and detection models. However, it lacks experiments verifying the semantic validity of the generated concepts even though authors presented various experiments in body and suppl.
- While interpretability analysis using Grad-CAM has been conducted, there is a lack of comparative experiments evaluating the interpretability of concepts against existing CBMs.
Theoretical Claims: - There is little mathematical proof available for evaluation.
- A theoretical explanation is needed to better understand how segmentation is utilized in DCBM.
Experimental Designs Or Analyses: - The experiments did not utilize commonly used datasets in CBM models. Therefore, additional experiments using widely adopted datasets such as AwA2 and CelebA are necessary.
- In the ablation study, an analysis of performance differences based on the number of concepts is required, along with experiments evaluating the impact of hyperparameter tuning on performance.
- The analysis of the key characteristics highlighted in Figure 4 is ambiguous, requiring comparative analysis and additional explanation.
- More experimental results on the visualization of segmented concept parts should be provided.
Supplementary Material: - This paper includes additional experimental results, pseudo-code, and implementation details that are not included in the main text of the paper.
Relation To Broader Scientific Literature: - A key distinction from previous studies is that this paper proposes a CBM that automatically extracts concepts using Segmentation and Detection models, enabling interpretability without relying on text.
- The paper compares the proposed model with label-free CBMs such as LaBo [2], but it does not include other relevant studies despite their significance.
- Therefore, additional comparative experiments and discussions on existing research should be incorporated.
Essential References Not Discussed: Wang, Bowen, et al. "Learning bottleneck concepts in image classification." Proceedings of the ieee/cvf conference on computer vision and pattern recognition. 2023.
Shang, Chenming, et al. "Incremental residual concept bottleneck models." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.
Srivastava, Divyansh, Ge Yan, and Lily Weng. "Vlg-cbm: Training concept bottleneck models with vision-language guidance." Advances in Neural Information Processing Systems 37 (2024): 79057-79094.
Other Strengths And Weaknesses: <Strengths>
- Data Efficiency: Maintains performance comparable to existing CBMs with only 50 samples per class.
- Automated Concept Generation: Utilizes segmentation and detection models to automatically extract concepts, improving domain adaptability.
- Generalization Capability: Demonstrates robust performance compared to existing CBMs in OOD evaluation using ImageNet-R.
- Enhanced Interpretability: Uses Grad-CAM for visual concept activation analysis, providing insight into the model’s decision-making process.
<Weaknesses>
- Lack of Semantic Validity Verification: No experiments verifying whether automatically generated concepts are truly meaningful.
- Insufficient Comparison with Recent CBM Models in Terms of Data Efficiency and Performance: Lacks quantitative comparisons to demonstrate how much more efficient DCBM is compared to existing CBMs.
- Limited OOD Experiments: Needs generalization evaluation across diverse domains (e.g., medical, industrial) beyond ImageNet-R.
- Lack of Hyperparameter Tuning Analysis: Requires ablation studies on the effects of concept quantity, clustering methods, and model choices.
Other Comments Or Suggestions: - It is recommended to unify functions and symbols so that Figure 2 aligns with the main text explanation.
- A detailed explanation is needed on the segmentation methods mentioned in the introduction and how segmentation was used for concept generation.
- Using the same abbreviation for different terms can cause confusion. For example, out-of-distribution (OOD) and out-of-domain (OOD).
- The term "qblation" on line 1265 should likely be corrected to "ablation."
Questions For Authors: - Please refer to the weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your detailed feedback and for recognizing the flexible and interpretable design of DCBM. We highly appreciate your helpful comments and hope to provide the missing details in our answers below.
## Efficiency
We derive concept proposals based on 50 samples per class and train the CBM on all training samples. However, as we show in response to Review 3 (AELC), DCBM's performance does not deteriorate when we train with only 50 images per class. In contrast to BotCL (Wang et al., 2023), we do not fix the number of concepts, but the number of images that we use for concept extraction. This design choice avoids BotCL's weakness (Wang et al., 2023) of having to tune the number of concepts for each dataset.
When comparing DCBM to BotCL, we quantitatively outperform them on CUB (8.4\%). Our results for ImageNet are not comparable, as they evaluate only 200 out of 1000 classes and report that their method fails for a large number of classes. Given that BotCL's authors do not share their training recipe, we cannot evaluate overall efficiency.
**Question**: Would you like us to report an evaluation of DCBM on the first 200 classes of ImageNet?
## Ablation on the number of concept samples
Table 1: Ablation for DCBM w/ GDINO Partimagenet on CUB (CLIP ViT-L/14)
|Number of Clusters | Accuracy |
|-|-|
|128| 75.20|
|256| 79.03|
|512| 80.43|
|1024| 81.51|
|2048|81.91|
|||
In section D.5 we ablate the number of concept samples, which have - in parts - copied here for your convenience. We will add a reference to Appendix D.5 in the paper's section 3.2, where we describe the concept generation process.
## Concept validity
We validate our concepts empirically, by calculating the energy pointing game (GridPG) (Bohle et al.,
2021). Our motivation for this choice is in line with BotCL (Wang et al., 2023), we want to verify whether detected concepts can be traced back to the image region.
In DCBM, the concept-image alignment is verified as part of the evaluation. We chose an automated evaluation over a human analysis.
## Theoretical explanation of segmentation
Our understanding is the following: An object $O$ can be represented as a combination of concepts from a global concept pool $\mathcal{C}$. Let $C_i \in \mathcal{C}$ denote individual concepts, and let each object be characterized by a subset of these concepts with corresponding weights.
$$
O = \sum_{i=1}^{|\mathcal{C}|} w_i C_i, \quad w_i \geq 0
$$
where:
- $C_i \in \mathcal{C}$ represents a concept from the global pool,
- $w_i$ denotes the contribution of concept $C_i$ to the image,
- $C_i$ is located in the image $I$, $R_i \subseteq I$ is the region of the image where concept $C_i$ is present,
Further, we assume visual concepts to be spatially localized in images and therefore conjecture that data efficient concept extraction should build upon image segments or regions rather than entire images. Therefore, we generate the global concept pool $\mathcal{C}$ by segmentation or detection foundation models, which are then cropped out and used as concept proposals $s_i$. All $s_i \in \mathcal{S}$ are then clustered into the global concept set $\mathcal{C}$.
In A. Algorithms (appendix) we provide a further theoretical explanation of this process along with pseudocode. We will update this section following your feedback. *Would you like us to include any additional details?* Thank you for carefully reviewing our methods section - we have revised the paper and unified functions and symbols.
## Additional datasets
DCBMs stand out by applying to any domain in a data-efficient manner. We show on 7 diverse datasets, that DCBMs can be applied to animals (CUB, ImageNet), scenes (Places365), social media (ClimateTV), low-resolution images (cifar10 & cifar100) along with ood generalization (ImageNet-R) and state changes (MiT-States). Additionally, we show for 2 novel datasets in the rebuttal, that DCBMs performance is independent of the domain it is trained on.
This exceeds the number of datasets BotCL evaluates, i.e. 4. Our main evaluation contains the same datasets as employed for Vlg-CBM (Srivastava et al., 2024) and the same number of datasets as Res-CBM, i.e. 7 datasets. We include these models in our related work and compare against them where possible.
We thank you for suggesting additional experiments on AwA2 and CelebA.
We have run DCBM on AwA using the standard 50:50 train and test split. We created the val set by randomly selecting 10% of train samples.
Table 1: DCBM performance on AwA2 using GDINO (w/ partimagenet labels) as concept proposal method.
| | ResNet-50 | ViT-B/16 |ViT-L/14 |
|-|-|-|-|
| Zero shot | 88.94| 94.00 | 95.94|
| Linear probe| 93.72| 96.51| 97.68|
| DCBM | 93.13| 96.43| 97.71|
|||||
*For CelebA, the experiments are currently running.*
## Misc
Thank you for your interest in seeing more concept visualizations. We have included more examples in the supplemental material.
---
Rebuttal Comment 1.1:
Comment: The author's response overall demonstrates sound reasoning, specificity, and a strong willingness to improve the paper based on reviewer feedback. In particular, the provision of additional experimental results, new experiments, and enhanced mathematical explanations are substantial contributions to improving the paper’s completeness. If a few remaining clarifications are addressed, the response would be strong enough to merit consideration for acceptance. However, some concerns—such as the need for enhanced concept visualization and comparative analysis with recent models in terms of interpretability—were either insufficiently addressed or only briefly mentioned. Incorporating more intuitive visualizations beyond GridPG or user-based evaluations would have made the claims more convincing. It is hoped that these limitations will be fully addressed in the revised manuscript.
---
Reply to Comment 1.1.1:
Comment: Dear reviewer,
thank you for considering our response and recognizing our effort. We are happy that the additional experimental results, new experiments, and enhance mathematical explanations complement our submitted paper.
⸻
## 1. Visualizations
Based on your comments, we have created visualizations based on Figure 8 (Rao et al., 2024) in which we compare our concepts to DN-CBM (Rao et al., 2024), LF-CBM (Oikarinen et al., 2024), and CDM (Panousis et al., 2023).
Given that we are unable to include images in our response, we will give a brief description of our Figure. The main idea is to compare the top activating concepts between models. The DCBM results are created using the CLIP ViT-L/14 backbone and SAM2 concept proposals.
**image 1:** swimming hole (Places365_val_00000189):
Image description for your convenience: The image depicts a natural landscape featuring a river flowing through a forested area. The river is calm and reflective, mirroring the blue sky and scattered clouds above. Surrounding the river are large rocks and boulders, some partially submerged in the water. The banks are lined with tall evergreen trees. (ChatGPT)
**DCBM:** water stream; forest; rock; tree trunk; lakeside
**DN-CBM:** rapids; canoeing; rocks; pond; wetland
**LF-CBM:** a stream; the warte is hot to; a paddle; a lake or river; swimsuits
**CDM:** a diving board; can be very long; clear blue water; may be rocky or forested; a large, open area of water
**image 2:** raft (Places365_val_00000295):
Image description for your convenience: This image captures a scene of people engaging in whitewater rafting or kayaking on a fast-moving river. The water appears turbulent with visible rapids, and there are at least three individuals. One person in the foreground, seen from behind, is in a blue raft, paddling with an orange oar. Two other individuals are further ahead in the rapids—one seems to be in a kayak, while another is standing near the riverbank. The surrounding area contains green bushes or trees. (ChatGPT)
**DCBM:** waterfall; canoeing; river; rapids; wastewater
**DN-CBM:** tubing; rapids; canoeing; waves; kayaking
**LF-CBM:** a life jacket; jetted or bubbling water; a kayak; floating devices; fun
**CDM:** young people; chlorinated water; a boat; the water is hot to the touch; a mooring
Please note that we had to refer to another studies cherrypicked examples in order to compare interpretability between models. Since we are not able to include images in this response, we described them textually. We will include these examples and additional ones in the paper while making sure to give examples for both good and weak examples.
⸻
## 2. Interventions
We have updated our codebase to support the removal of specific, undesired concepts prior to training the DCBM. This is achieved by leveraging CLIP’s multimodal capabilities: given a textual prompt, we identify and exclude visual concepts that are highly similar to the specified concept in the embedding space. For instance, to remove the concept *stone*, we compute the embedding of the word and discard all visual concepts with high cosine similarity. In this case, four concepts closely associated with *stone* were excluded.
The CBM trained with the *stone* concept included achieves $81.8\%$ classification accuracy, as shown in Figure 3 of the main paper. After retraining the model without the stone-related concepts, the accuracy remains unchanged. However, the explanations for the class gull no longer reference *stone*, demonstrating that we successfully intervened in the model’s concept space.
We will include this analysis, along with additional examples, in the final version of the paper.
⸻
## 3. Comparison to ImageNet-200
As promised, we conducted experiments on the first 200 classes of ImageNet, analogue to BotCL. DCBM achieved a test accuracy of 84.7% using CLIP ViT-L/14 and Grounding DINO (partimagenet). When comparing DCBM to BotCL, one has to bear in mind that the models have different backbones, limiting comparability.
⸻
## 4. CelebA
We prepared the data as described by Zhang et al. (2025) - using 70 : 10 : 20 train:val:test split. Our experiments use the DCBM with ViT-L/14.
*Table 1: CelebA accuarcy. Other models as reported by Xu et al. (2024).*
| CBM Model | Acc |
|-|-|
| *Zero-shot* | -- |
| *Linear* | 0.315 |
| CBM | 0.246 |
| ProbCBM | 0.299|
| PCBM | 0.150|
| CEM | 0.330|
| ECBM | 0.343|
|**DCBM w/ GDINO (ours)** | **0.354** |
| **DCBM w/ MaskRCNN (ours)** | **0.363** |
| **DCBM w/ SAM2 (ours)** | **0.356** |
|||
Zhang, R., Du, X., Yan, J., & Zhang, S. "The Decoupling Concept Bottleneck Model. IEEE Transactions on Pattern Analysis and Machine Intelligence." 2025.
Xu, X, Qin, Y., Mi, L., Wang, H., & Li, X. "Energy-Based Concept Bottleneck Models: Unifying Prediction, Concept Intervention, and Probabilistic Interpretations." 2024.
⸻
Thank you again for your reviewing our work.
Best regards,
The authors | Summary: The paper proposes a Data-efficient CBM (DCBM) that enhances interpretability while reducing the reliance on large datasets. Specifically, DCBM defines concepts as image regions detected through segmentation and object detection foundation models, rather than relying on textual descriptions. This allows DCBM to generate multiple concepts at various levels of granularity depending on different foundation models. The authors validate their approach using attribution analysis with Grad-CAM, demonstrating that DCBM produces interpretable, localized visual concepts.
Claims And Evidence: The primary claims are following: (1) DCBM can handle data-scarce environment and be easily adapted to new datasets, and (2) DCBM bridges the gap between vision and text modalities in concept extraction by generating visual concepts. To validate the claim, the authors demonstrate that they use the subset of training dataset to extract concepts. However, they still utilize the entire training dataset during training CBM. This raises concerns about whether it can truly be considered data-efficient in real-world scenarios. Also, this paper only reduces 7 seconds per epoch compared to DN-CBM in cost of performance. Furthermore, the claim that DCBM achieves better OOD generalization is also difficult to accept. While the performance gap between in-distribution (IN-200) and out-of-distribution datasets (IN-R) is smaller compared to DN-CBM, DCBM performs worse in both settings, making the smaller gap a misleading indicator of generalization.
Methods And Evaluation Criteria: The proposed method to use visual concepts to avoid the modality gap in concept construction is novel, however, the improvement is marginal (or even degraded) compared to the baselines. Also, as mentioned in "Claims And Evidence," I belive the evaluation criteria of "generalization" is misleading.
Theoretical Claims: N/A
Experimental Designs Or Analyses: (1) Does increasing the number of training images used for concept extraction lead to performance improvements? If all training images were used for concept extraction, would DCBM outperform DN-CBM?
(2) Since the training images for concept extraction are selected randomly, is there any variance in performance due to this randomness?
Supplementary Material: The supplementary material contains many ablations to support their claims especially in Appendix D and the remaining questions are in "Experimental Designs Or Analyses."
Relation To Broader Scientific Literature: The key contribution of this paper is an advanced CBM especially concentrated on concept extraction. Specifically, this paper makes a contribution to generate the visual concepts for CBM.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: A major strength of the paper is that the author explores how concepts can be shaped differently depending on the choice of the foundation model (SAM, Grounding DINO, Mask-RCNN), leading to generate concepts at different levels of granularity.
Other Comments Or Suggestions: There are some typos or ambiguous sentences throughout the paper, so a thorough proofreading and revision would be beneficial.
For instance,
- Citation format (line 100): Language-guided CBM (LaBo) employs GPT-3 for concept creation, but it stands out by using a submodular function to select concepts from candidate sets, building the bottleneck, and training a linear model based on CLIP embeddings **Yang et al. (2023).**
- Ambiguous sentence (line 243): To this end, we evaluate both visual CBM models, ours and DN-CBM, trained **on ImageNet on ImageNet-R* *(Hendrycks et al., 2021) which contains 200 ImageNet classes in various renditions (e.g.embroidery, painting, comic).
Questions For Authors: Please refer to the "Experimental Designs Or Analyses" part.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for reviewing our work. We appreciate your positive feedback on our concept extraction pipeline that leverages foundation models, and we’re pleased that you recognize the interpretability and localization capabilities of our DCBM.
## Data efficiency in real-world setting
DCBM is designed for real-world data scarcity, relying on only 50 samples per class during the concept proposal generation phase. Notably, our experiments also show robust performance with even fewer samples in Appendix D.7.
This can be taken one step further by reducing the number of training samples to the same subset. For ImageNet, we reduce the number of training samples from originally 1,281,167 images to 50,000 images **(-96\%), and the performance degrades from 77.4% to 75.0%.**
For Cifar10, the reduction in training images from
50000 to 500 **(-99\%) reduces the performance from 97.5 to 93.1.** The experiments were run using GDINO (partimagenet) and CLIP ViT-L/14. We will include this experiment for all datasets in the paper, thank you for the suggestion.
The focus of our work is data efficiency with comparable performance levels. We provide the results of another, non-overlapping 50 images subset (RQ2) and report the accuracy when training the CBM with the segments of 100 images per class, combining the two subsets (RQ1).
For CUB, the training set consists of less than 50 images per class, thus we already include all images in the concept proposal generation.
Table 1: Performance evaluation of subset selection
||IMN|Places|Cif10|Cif100|
|-|-|-|-|-|
| s1 (main paper) | 77.4 | 52.2 |97.5|85.3 |
| s2 (new)| 77.5| 52.2|97.6|85.4 |
| s1+s2 | 77.1| 52.1 |97.7 |85.5|
||||||
In Table 1, s1 corresponds to the original set of 50 images per class, s2 represents an additional set of 50 randomly selected images per class, and s1+s2 denotes the combined dataset of 100 images per class.
The performance is stable between all subsets. We believe that the performance difference to DN-CBM is domain-dependent. Given their vast number of pre-training images (3.3M), general domains are well covered (ImageNet/Places365) whereas specialized domains benefit from the dataset-specific DCBM (CUB).
## Comparison to DN-CBM
We agree that the CBM training for DN-CBM and DCBM are quite similar.
However, the required steps prior to the CBM training differ significantly. While DN-CBM trains a SAE to retrieve concepts using 3.3M images in CC3m, we create the segments by applying foundation models to a subset of the training images. We would like to highlight, that for DN-CBM, the download of an additional 3.3M images is needed, whereas, for DCBM, we only require the dataset to be analyzed. This reduces the storage requirements significantly and is highly time efficient.
Table 2: CBM training preparation: DN-CBM vs DCBM
| | DN-CBM | DCBM - ImageNet |
|-|-|-|
|Dataset size| 3,300k image-caption pairs (CC3M) | 50k images (50/class)|
| Add. memory capcity| 850 GB (assuming 256x256px) | 6 GB |
| No extra data required | x |✓ |
||||
## Generalization capabilities
We agree with your criticism of our reporting of DCBM's generalization capabilities.
We would like to point you to Table 19 in the supplementary material where we can report a 22-27\% error rate difference between ImageNet and ImageNet-R for CLIP-ViT/L14. For this embedding model, we achieve error rates of below 50\% when evaluating on ImageNet-R.
Due to resource sparsity, only the results for CLIP-RN50 were ready at the time of submission. *We are currently generating the accuracy for DN-CBM and will provide them asap.*
## Misc
Thank you as well for pointing out that some typos and ambiguities exist in the paper - we have fixed them. We believe that your feedback has helped us to improve our paper and would like to thank you for taking the time and sharing your expertise.
---
Rebuttal Comment 1.1:
Comment: I appreciate your additional experiments to support your claims. However, I am still not convinced with the OOD generalization capabilities of DCBM until the experimental result of DN-CBM in CLIP-ViT/L14 is presented
---
Reply to Comment 1.1.1:
Comment: Dear reviewer,
thank you for the feedback and your patience. Table 1 shows that all DCBM variants have better OOD generalization capabilities than DN-CBM with ViT-L/14. We have consistently lower error rates on IN-R and a lower gap between IID and OOD.
*Table 1: Error Rates in OOD setting, i.e. training on ImageNet and evaluating on ImageNet-R (lower is better).*
| Model | IN error rate | IN-R error rate | Gap |
|-|-|-|-|
| ViT-L/14: DN-CBM (Rao et al., 2024)| 16.4 |55.2 |38.8 |
| ViT-L/14: DCBM-SAM2 (Ours)| 21.1 |**48.5** |**27.4** |
| ViT-L/14: DCBM-GDINO (Ours) | 22.6| **47.2** |**24.6**|
| ViT-L/14: DCBM-MaskRCNN (Ours) | 22.2| **44.6** |**22.4**|
|||||
Thank you for reviewing our work.
Warm regards,
The authors | Summary: The paper proposes a novel framework to enhance the practicality of concept bottleneck models (CBMs) by reducing their reliance on extensive labeled concept data. DCBM decouples concept learning from task adaptation through self-supervised pretraining (e.g., using vision-language models like CLIP) to autonomously extract semantic concepts and sparse dynamic masking to selectively activate task-relevant concepts during fine-tuning. This approach achieves competitive accuracy on benchmarks (CUB, ImageNet) with 10× fewer concept labels compared to traditional CBMs while retaining interpretability, enabling human-in-the-loop concept refinement and efficient deployment in low-resource settings.
Claims And Evidence: While DCBM allows concept editing, the paper lacks user studies or quantitative metrics (e.g., concept intervention success rates) to demonstrate practical utility for domain experts. Claims about interpretability remain anecdotal without empirical validation of human-AI collaboration.
The sparsity mechanism’s effectiveness is asserted via accuracy metrics but lacks analysis of concept coverage (e.g., whether critical concepts are retained or pruned). Without grounding in domain knowledge (e.g., alignment with known semantic attributes), the claim risks conflating sparsity with arbitrary feature selection.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria in DCBM largely align with the goals of data-efficient and interpretable concept learning
Theoretical Claims: The paper does not present formal theoretical claims or proofs. Its claims are empirically validated through experiments, with no explicit theoretical analysis.
Experimental Designs Or Analyses: While DCBM’s experiments demonstrate label efficiency and task accuracy, the lack of concept-level validation and incomplete baseline comparisons weaken its claims about interpretability and generalizability. The design is sound for initial proof-of-concept but insufficient for asserting real-world applicability.
Supplementary Material: yes, all parts
Relation To Broader Scientific Literature: DCBM innovatively synthesizes self-supervised learning, sparsity, and distillation to modernize CBMs, positioning itself as a critical response to the dual challenges of interpretability and data scarcity. Its contributions resonate with broader ML trends but highlight the need for deeper integration with non-CBM efficiency paradigms.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths:
DCBM demonstrates originality by creatively integrating self-supervised vision-language models (e.g., CLIP) with concept bottleneck architectures, effectively decoupling concept discovery from task-specific tuning. This reduces reliance on manual concept annotations—a major bottleneck in traditional CBMs—while preserving interpretability. The framework’s significance lies in bridging data efficiency and explainability, making CBMs viable for real-world applications like medical imaging or ecological monitoring where labeled data is scarce. The design is clear, with modular components (pretraining, masking, distillation) that are empirically validated on standard benchmarks.
Weaknesses:
While innovative, DCBM’s concept grounding remains weakly validated; concepts derived from CLIP lack rigorous alignment with domain-specific semantics (e.g., bird parts in CUB), risking "explanation illusions." Additionally, the paper’s focus on classification tasks limits its demonstrated utility for regression or causal reasoning, which are critical for high-stakes domains. Comparisons to non-CBM data-efficient methods (e.g., prompt-tuned CLIP) are missing, leaving open whether the gains stem from architectural novelty or pretraining advantages.
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your thoughtful and constructive feedback. We appreciate your recognition of our framework’s originality and data efficiency—requiring 10x fewer concept labels than comparable CBM approaches while achieving similar performance. Below, we provide detailed responses organized by the key points raised:
---
## User Studies and Quantitative Metrics for Domain Utility
We agree that demonstrating practical utility for domain experts is crucial. While our primary focus has been on establishing the theoretical feasibility and data efficiency of our approach, we recognize the value of user studies and quantitative metrics for concept intervention. In the final version, we will include an additional investigation that actively manipulates relevant concepts to observe changes in the model’s predictions and confidence. For instance, as shown in the bird example (concept “rock”), our preliminary analysis indicates that concept removal has a predictable effect on predictions. We plan to extend this analysis by adding concept interventions to further validate interpretability.
In addition, we recognize that the risk of "explanation illusions" is an important concern in CBM approaches, particularly when using CLIP as a backbone. To mitigate this, we plan to incorporate tests of concept removal and alteration to explicitly address concept grounding.
## Analysis of the Sparsity Mechanism and Concept Coverage
Analog to concurrent work, we ablate the sparsity parameter $\lambda$ based on model accuracy (Rao et al., 2024 & Oikarinen et al., 2023)
## Concept-Level Validation and Baseline Comparisons
We acknowledge that thorough concept-level validation is essential to reinforce the interpretability of our method. Our study has systematically analyzed the learned concepts across multiple large-scale datasets (ImageNet, Places, and CUB).
We deliberately compare only methods that offer interpretability, which is a core property of CBMs. By focusing on interpretable approaches, we ensure that our evaluation remains consistent and that our performance improvements are attributable to our architectural innovations rather than differences in method transparency. This is why we do not evaluate against prompt-tuned CLIP. We will include this distinction in the literature review.
However, as demonstrated in `[fCuP]`, we incorporate additional datasets and utilize new, randomly selected images for the concept proposal generation phase. Our results consistently show that our technique delivers comparable performance across diverse experimental settings.
## Broader Applicability Beyond Classification
Regarding the current focus on classification tasks, we appreciate the suggestion to explore applications in regression and other settings. However, the investigation of the suitability of CBMs in general is an additional task, which takes more consideration than would be adequate for the purpose of this paper. Therefore, we will include this discussion at the end of our paper to open up new research fields of CBMS for regression and other settings.
---
Thank you again for your valuable feedback. We believe these planned additions and clarifications will further strengthen the work, and we look forward to incorporating your suggestions to improve both the interpretability and practical utility of our approach.
---
Rebuttal Comment 1.1:
Comment: I appreciate the answers and clarification. I have no concerns about the work and hence keep the rating. | Summary: The paper introduces Data-Efficient Visual Concept Bottleneck Models (DCBMs), which generate interpretable visual concepts using segmentation and detection foundation models, enabling Concept Bottleneck Models (CBMs) to work effectively with limited data. By clustering image regions into concepts without relying on text descriptions, DCBMs achieve strong performance on fine-grained and out-of-distribution tasks while maintaining interpretability. The approach is simple, adaptable, and avoids extensive pre-training, offering a practical method for interpretable image classification.
The paper is very well written.
Claims And Evidence: The paper demonstrates that DCBMs maintain classification accuracy within a small margin (roughly 5–6%) of a CLIP linear probe on CIFAR-10, CIFAR-100, ImageNet, Places365, and CUB. The performance of DCBM is subpar with the other methods for large scale datasets like imagenet and places 365. The authors did not discuss the reason for that? why it is so? is it because of the projection of the centroids to the clip space?
Methods And Evaluation Criteria: 1. Segmenting or detecting specific image regions can be problematic for medical images. For example, for chest x-rays, often the segmentation models segment the right and left lung and heart. So they ignore the anatomical concepts like the lower left lobe or devices like the chesttube. I think DCBM will also have same problem. Is there any way to solve it? Can the best of both worlds (concepts from LLMs or reports or captions and segmentation models together) solve it?
2. This method is for CBM but not for PCBM. For ex, if I want to extract a CBM from any arbitary blackbox (resnet), this method won't work, because of the reliance on aligned text and vision encoders. Can this method be extended to PCBMs as well? I believe they can project the embedding from the blackbox to the VLM embedding space and still use their method to extract CBM from a blackbox. See this paper for projection:
Text-To-Concept (and Back) via Cross-Model Alignment. Moayeri et al. ICML 2023.
Theoretical Claims: NA
Experimental Designs Or Analyses: 1. The authors should do a human evaluation to show these discovered concepts truly meaningful to humans. However, the localization results do a decent job as an automated check.
Supplementary Material: The supplementary ablations detail:
- K-means vs. agglomerative clustering
- The effect of removing smaller or larger bounding boxes
- Variation in the number of training images per class
- Additional performance tables for different backbones.
I reviewed these sections to check the consistency of their method. The supplementary material aligns well with the main claims and clarifies hyperparameter sensitivity.
Relation To Broader Scientific Literature: 1. The paper cites recent developments in segment-anything approaches (SAM, GroundingDINO) and how these can serve as universal “concept proposal” engines.
2. This strategy extends the prior “visual concept” line (e.g., DN-CBM) but is more data-efficient and requires no large pre-training corpora.
Essential References Not Discussed: [1] Concept Embedding Models: Beyond the Accuracy-Explainability Trade-Off. Barberio et el. Neurips 2022 for non linear relationship among concepts
[2] Dividing and Conquering a BlackBox to a Mixture of Interpretable Models: Route, Interpret, Repeat. Ghosh et al. ICML 2023 for expert based PCBM and First order logics for concept interactions
[3] Distilling BlackBox to Interpretable models for Efficient Transfer Learning. Ghosh et al. MICCAI 2023. applying CBM to chest-x-rays.
Other Strengths And Weaknesses: 1. Foundation models might fail or produce random proposals in certain specialized domains (Eg, breast cancer detetction) if no relevant segmentation or detection model is available.
2. The authors use the pretrained frozen VLMs like CLIP. This can be problematic because CLIP inherits many biases/shortcuts/spurious correlations that can influence the decision. For example, Figure 3 shows rock is an important concept for predicting the bird class. Now rock is not a causal feature for bird prediction. I believe this is due to the internal biases of CLIP. Also, this shows the problem of using segmentation regions from models like SAM. As in Figure 3 (top), the image of CUB, look at the concepts identified. One of the concepts is entire bird - Gull. Is this useful in practice? This is what I pointed out in #1 in "Methods And Evaluation Criteria". if the explainer says the entire lung is an important concept, this method wont be useful in the real world. Also, the same image in Fig 3 detects "Clicking" and this concept is not visual. Ideally, the concepts should be some features of the birds that will be useful for classification.
Other Comments Or Suggestions: If the authors want further clarity about spurious correlations, they might incorporate a dedicated “concept removal” experiment (filtering out suspicious or intangible concepts) to see if accuracy or interpretability improves.
Questions For Authors: 1. Sometimes the top concept is semantically related to the class but not visibly present in the image (e.g., “police kit” for an ambulance). Could we systematically identify these spurious concepts?
2. You rely on Grad-CAM to show that concepts align to image regions. Have you considered a direct concept-intervention test (removing or altering concept crops) to see how predictions change? That might further confirm concept “faithfulness.” This is due to the fact GRAD-CAM has its own problems discussed in this paper: Sanity Checks for Saliency Maps. Neurips 2018.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your detailed and constructive feedback. We appreciate your acknowledgment of DCBM's strengths being simple, adaptable, and avoiding large-scale pre-training. Below, we outline our responses structured by the key points you raised:
---
## Performance on Large-Scale Datasets
In our opinion, while CLIP is powerful and designed to capture relationships between images and text, projecting concept centroids into this space might not perfectly align with the underlying semantics of the image data. Additionally, large-scale datasets like ImageNet and Places365 contain a high degree of intra-class variability, making it difficult for a concept-based model to generalize effectively. One could improve the discrepancy within large-scale datasets by adapting the centroids gradient-wise during the training phase of the linear layer. Here, the centroids would dynamically adapt to the underlying structure of the dataset.
## Segmentation Challenges in Specialized Domains
It is one strength of DCBM that the segmentation model can be exchanged by a domain-specific one, e.g. MedSAM (Ma et al., 2024) for the medical field—offer. We believe that incorporating such domain-adapted segmentation models can help address challenges related to identifying relevant regions in specialized tasks.
We agree that combining textual domain-specific concepts with segmentation-based visual concepts may be especially beneficial in the medical domain. Some concepts are more effectively expressed visually, while others are better captured through language.
In DCBM, it was our primary objective was to develop a CBM that operates independently of textual inputs or LLMs for extracting concepts. We believe, that the combination of DCBM with existing methods like LaBo (Yang et al. 2023) and LF-CBM (Oikarinen et al. 2023), would achieve such a combination.
## Extension to PCBMs and Backbone Flexibility
DCBM cannot be extended as a post-hoc CBM - as neither of the other ante-hoc CBMs. We have chosen this approach as it allows to better understand the model embedding space. This said we believe that DCBM is a valuable framework to better understand any vision embedding space. We utilize text-image aligned backbones (CLIP) for benchmarking against other CBM approaches. This is not inherent to our framework and DCBM can be run using any vision embedding, with a slightly more intricate mapping to the text labels.
We appreciate the recommendation of Moayeri et al. (ICML 2023) and agree that projecting embeddings from a blackbox into the VLM embedding space is a compelling strategy. We find this approach both intriguing and valuable and are investigating the opportunities of combining post-hoc and ante-hoc CBMs.
## VLM Biases and Spurious Correlations
The CLIP space is known to contain spurious correlations (Rao et al., 2024 & Oikarinen et al., 2023 & Panousis et al., 2023). By including all segments as concept proposals, DCBMs visualize spurious correlations. As shown in Figure 3, DCBM learns that *rock* is an important concept for predicting the bird class. This further becomes apparent, when we set the weights of *rock* to zero, the model achieves a confidence of 62.49% (-12%). We are currently training a model with interventions.
*Question: We exclude concepts from training, which have been identified as spurious. Alternatively, we were thinking of masking out concept regions in the image and the measure the model's confidence. Is this what you had in mind? We would love to hear more feedback on this from you.*
In the final version of this paper, we further investigate the behavior of spurious correlations and actively remove such concepts to observe changes in accuracy.
## Systematic Identification of Spurious Concepts
This is possible by using either slot attention as in BotCL (Wang et al., 2023) or an image tagging model such as RAM (Zhang et al., 2024) with a threshold. We will include this extension in the discussion section.
---
We appreciate your insightful feedback and hope our responses have effectively addressed your questions. During the remainder of the rebuttal phase, we welcome further discussion on any open issues and invite additional feedback on our proposed experiments.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed rebuttal.
Regarding VLM Biases, i agree that finding spurious correlation concepts and setting them to zero or masking them out can be an option. However, if you want to pursue that approach, i would request to think causally because many times a concept can be good and spurious as well. For example, there is a disease in chest called cardiomegaly, which is enlargement of heart. So, heart can be a spurious and a causal feature both. And while designing the model, we want to have the feature however pacemaker (or any devices) can be non causal and spurious. We want to remove their effect on the model. Please think in that direction.
Also, i would recommend to you to pursue research to integrate PCBMs and blackboxes like MoIE paper (ICML 2023). DCBM can be an exciting avenue for posthoc based models. Also, i would recommend mixing the textual concepts with segmentation.
Finally, please include the reasons of failure in large datasets and textual concepts and integration of PCBMs (which is there in the reubuttal) in discussion. Also, include all the relevant citations and mention clearly that DCBM is not currently integrated in PCBM setup. However, in future it can be intergated to several PCBMs and medical domain ([2, 3]).
I upgrade my score to weak accept.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer,
Thank you very much for your thoughtful feedback and for increasing your score to weak accept. We truly appreciate your recognition of our efforts and your constructive suggestions, which we found highly valuable.
Following your comments, we have extended our codebase to allow the targeted removal of specific, undesired concepts prior to training the DCBM. Leveraging CLIP’s multimodal capabilities, we specify the concept to be removed using a textual prompt. For example, to exclude the concept *stone*, we compute its text embedding and remove all visual concepts with high similarity in embedding space. In this case, four closely related concepts were removed. After training the CBM without them, the model preserved its classification accuracy, but the explanations for the class gull no longer referenced the *stone* concept. This confirms that our method can successfully intervene in the model’s concept space without affecting predictive performance.
This capability allows for fine-grained control to exclude concepts, giving users the ability to explicitly suppress spurious correlations or highlight desired causal factors. We fully agree with your point on thinking causally—for instance, heart might be both causal and spuriously correlated, while a pacemaker is more clearly non-causal. We plan to explore these distinctions further, particularly in medical settings where such nuances are critical.
We also appreciate your suggestions on future directions. The integration of PCBMs with black-box models is indeed on our roadmap, and we agree that DCBMs offer an exciting avenue for post-hoc explainability. Moreover, we find the idea of mixing textual concepts with segmentation particularly promising and are eager to investigate it further.
Thank you again for your insightful comments and for helping us improve our work.
Best regards,
The authors | null | null | null | null | null | null |
EAGLES: Towards Effective, Efficient, and Economical Federated Graph Learning via Unified Sparsification | Accept (poster) | Summary: This paper introduces a unified framework that jointly considers graph-level and parameter-level sparsification. It incorporates dual experts and consensus-based sparsification to ensure a stable sparsification process. Extensive experiments demonstrate that the proposed method is effective, efficient, and economical.
Claims And Evidence: The claims are supported by extensive experiments across datasets (Cora, Ogbn-Proteins) and metrics (FLOPS, ROC-AUC). Reductions in computational costs (82%↓ FLOPS) and communication (80%↓ bytes) are validated against baselines like FedAvg and ACE-GLT. However, claims about mitigating structural heterogeneity rely on qualitative arguments (e.g., "similar clients share knowledge via OT distance") without quantitative analysis of heterogeneity reduction.
Methods And Evaluation Criteria: The methods are well-suited for FGL challenges. Parameter sparsification avoids iterative pruning via dynamic masking, and graph sparsification addresses structural overfitting through multi-criteria experts. Evaluation on diverse datasets (small to large-scale) and metrics (FLOPS, ROC-AUC) is comprehensive.
Theoretical Claims: The manuscript’s mathematical formulation is generally free from notable errors; however, it lacks an analysis of computational complexity, which would provide a clearer understanding of the scalability and practical applicability of the proposed methods.
Experimental Designs Or Analyses: Experiments are thorough, covering multiple datasets, sparsity levels, and baselines. Ablation studies (Table 2) validate parameter-graph sparsity interplay. However, the impact of expert count (Figure 7b) is under-discussed.
Supplementary Material: No supplementary material.
Relation To Broader Scientific Literature: EAGLES makes a significant contribution by introducing a unified sparsification framework. The dual-expert approach, which builds upon MoE methods, adapts them for federated graph learning, an area that has seen limited exploration.
Essential References Not Discussed: The authors discuss and compare a wide range of related methods.
Other Strengths And Weaknesses: Strengths:
(1) The paper effectively identifies a critical challenge in federated graph learning (FGL): the high computational cost and communication overhead when training GNNs on large-scale federated datasets. By introducing EAGLES, a unified sparsification approach, the authors provide a clear solution that addresses both graph and parameter sparsification, ensuring efficiency without sacrificing model performance.
(2) The extensive set of experiments conducted across various benchmark datasets, including ogbn-proteins and Pubmed, demonstrates the practical effectiveness of the proposed method. The substantial reductions in training FLOPS and communication costs, achieved while maintaining or even improving model accuracy, provide strong empirical evidence of the method’s efficiency and scalability.
Weaknesses:
(1)While the method demonstrates significant improvements in computational efficiency, a clear computational complexity analysis would help contextualize the performance gains.
Other Comments Or Suggestions: The computational complexity of EAGLES could be better articulated, particularly regarding how its sparsification techniques scale with increasing data size or client count. This would provide a clearer view of the system’s scalability in large federated environments.
Questions For Authors: (1)Could the authors clarify how the Optimal Transport (OT) method adapts to the federated setting when client graphs have significant structural variations? Would this method still function efficiently if the number of clients increased substantially?
(2)How does $W_{gate}$ impact parameter sparsification when the number of GSEs increases? Would it lead to an excessive amount of additional parameters?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: # Dear Reviewer h7Za
We sincerely thank you for your insightful feedback and have provided detailed responses to your questions.
> ` W1: Without quantitative analysis of heterogeneity reduction.`
We provide a quantitative analysis of heterogeneity reduction at [this link](https://anonymous.4open.science/r/Appendix-67CF/README.md).
> `W2: Lack an analysis of computational complexity & S1: How its sparsification techniques scale with increasing data size or client count.`
We analyzed the computational complexity from three aspects:
1. **Parameter Sparsification Module**:
- Forward/Backward FLOPs are $O(s_p \cdot d)$, where $s_p$ is the parameter sparsity rate and $d$ is the parameter dimension.
- Communication costs are reduced to $O(s_p \cdot d)$ via bit-wise mask compression (Section 4.2).
- Mask alignment requires $O(K \cdot L)$ operations per round, but with small constants for $K$ (clients) and $L$ (layers), its impact is negligible.
2. **Graph Sparsification Module**:
- With $T$ GSEs, the local computation is $O(T \cdot |E|)$, where $|E|$ is the number of edges.
- The message passing process has a complexity of $O(s_g \cdot |E| \cdot d)$, with $s_g$ as the graph sparsification rate.
- The gating mechanism adds $O(N \cdot D)$ operations, but since $D$ is typically small, its overhead is minimal.
3. **OT-based Similarity Computation**:
- Standard OT complexity is $O(n^3)$ for $n$-node graphs, but we reduce this to $O(n \log n)$ using the sliced Wasserstein distance.
Overall, the computational complexity of EAGLES is given by:
$$
O\Big(s_p \cdot d + T \cdot |E| + s_g \cdot |E| \cdot d + n \log n\Big)
$$
Ignoring smaller constants, this simplifies to:
$$
O(d + |E| + n \log n)
$$
In summary, EAGLES scales linearly with the data size, and the number of clients has minimal impact on the computational complexity.
> `W3: However, the impact of expert count (Figure 7b) is under-discussed.`
As shown in Figure 7b (Appendix D.3), performance improves consistently as the number of experts increases, reaching its peak around 4 experts due to the benefit of richer structural perspectives. Beyond this point, the marginal gains diminish. While Section 5.4 briefly touches on this point, we agree that a more in-depth analysis would further strengthen the discussion.
> `Q 1.1: Could the authors clarify how the OT method adapts to the federated setting when client graphs have significant structural variations? `
In FL settings with significant structural variations, our OT adaptation relies on two key mechanisms. First, the graph synergy expert encodes node contextual features in the $W_{\text{gate}}$ matrix, forming a structure-aware semantic space via hard concrete distribution sampling. This enables OT to assess similarity based on learned semantics instead of raw topology. Second, by treating each client’s structural distribution as a probability measure over this space, we derive client-specific transport plans and similarity weights (Eqs. 20 and 22), which automatically assign lower weights to structurally dissimilar clients. Importantly, only the compact $W_{\text{gate}}$ parameters are transmitted, preserving privacy while allowing the server to compute OT plans with $O(n \log n)$ complexity through entropic regularization.
> `Q 1.2: Would this method still function efficiently if the number of clients increased substantially?`
EAGLES remains efficient even as the number of clients grows significantly. Our framework reduces communication and computation through parameter sparsification (dynamic mask consensus) and OT-based similarity aggregation, minimizing redundant interactions. Experiments (Appendix Fig. 6a & 6b) show that scaling to 100 clients on Ogbn-Proteins and Cora results in only a minor performance drop, while achieving an 18% reduction in Training FLOPS and a 20% reduction in Communication Bytes compared to baselines.
> `Q 2.1: How does $W_{gate}$ impact parameter sparsification when the number of GSEs increases?`
$W_{\text{gate}}$ is a learnable gating parameter. Additional GSEs introduce richer structural diversity, and by integrating sparsified subgraphs obtained from multiple criteria through $W_{\text{gate}}$, the robustness of graph sparsification is enhanced. This reduction in structural redundancy allows for the allocation of different gradient update weights to model parameters during backpropagation, thereby influencing parameter sparsification.
> `Q 2.2: Would $W_{gate}$ lead to an excessive amount of additional parameters?`
The gating parameter matrix $W_{\text{gate}}$ (Eq. (12)) is designed as a lightweight mapping layer with low dimensionality. Specifically, its parameter size is $D \times T$, where $D$ is the input feature dimension and $T$ denotes the number of experts. Since the number of experts is typically a small constant, $W_{\text{gate}}$ scales linearly with $D$, thereby not introducing an excessive number of parameters.
---
Rebuttal Comment 1.1:
Comment: I have carefully reviewed the rebuttal and also checked the feedback from other reviewers. The authors' further explanation of computational complexity is convincing, and my questions have been well addressed. The work may have a potential impact and will accordingly increase my score.
---
Reply to Comment 1.1.1:
Comment: ### Dear reviewer h7Za
Thank you for your thoughtful feedback and for reconsidering our work. Your comments helped us refine the presentation and strengthen the manuscript. We truly appreciate the opportunity to clarify our approach and the time you spent reviewing our submission.
Best regards,
Authors | Summary: The paper introduces EAGLES, a unified sparsification framework designed to enhance FGL by addressing computational and communication challenges. EAGLES optimizes both graph structures and model parameters through client-consensus parameter sparsification, which generates multiple unbiased subnetworks at various sparsity levels. The method also employs a dual-expert approach with graph sparsification and synergy experts, which improve the efficiency of message passing and reduce data overfitting. The comprehensive experimental results validate the effectiveness of the proposed method.
Claims And Evidence: The paper provides a relatively clear explanation of its claims. FGL faces significant computational challenges when handling large-scale graph data. Figure 1 effectively illustrates this phenomenon. However, additional empirical studies could further corroborate this analysis and strengthen the claims presented in the paper.
Methods And Evaluation Criteria: The proposed methodology and evaluation criteria align well with the problem of optimizing federated graph learning. The dual-expert sparsification approach appears to be a reasonable solution, and the chosen evaluation metrics (FLOPS and communication costs) are directly applicable to the problem at hand.
Theoretical Claims: The theoretical section of the manuscript is relatively detailed. In particular, the Harmony Sparsification Principle and its impact on federated graph learning are interesting and well-reasoned, providing concrete theoretical guidance and practical reference for the design of sparsification frameworks.
Experimental Designs Or Analyses: Extensive experiments across six datasets and multiple backbones (GCN, GraphSAGE, DeeperGCN) strengthen validity. Ablation studies on sparsity rates and client numbers (Figures 4–7) convincingly demonstrate resilience.
Supplementary Material: No supplementary material.
Relation To Broader Scientific Literature: EAGLES builds on federated learning (FedAvg, FedProx) and graph sparsification (DSpar [1]). The integration of MoE for graph pruning is novel, advancing prior work on MoE [2].
[1] Liu Z, Zhou K, Jiang Z, et al. DSpar: An Embarrassingly Simple Strategy for Efficient GNN Training and Inference via Degree-based Sparsification. arXiv preprint arXiv:2307.02947, 2023.
[2] Shazeer N, Mnih A, Ranzato M, et al. Outrageously large neural networks: The sparsely gated mixture-of-experts layer. arXiv preprint arXiv:1701.06538, 2017.
Essential References Not Discussed: The key contribution of the paper is the unified sparsification approach for FGL, but it only references a graph sparsification technique, DSpar, that sparsifies graph structures based on node degree. However, there is also a relevant method, DropEdge, introduced by [3], which applies random edge dropout to improve deep graph convolutional networks for node classification. This technique is particularly important for reducing computational costs while preserving graph structure and can be considered an essential reference for addressing graph sparsification challenges in the context of FGL, especially in comparison to the single-criterion sparsification discussed in the paper.
[3] Rong Y, Huang W, Zhang Y, et al. DropEdge: Towards Deep Graph Convolutional Networks on Graphs with Sparse Edge Features. arXiv preprint arXiv:2006.10616, 2020.
Other Strengths And Weaknesses: **Strengths:**
- This paper introduces the first unified framework for both graph and parameter sparsification in FGL.
- The motivation behind this paper is explained with great clarity.
- This paper presents a novel use of Optimal Transport (OT) to measure client similarity, which is an interesting approach.
**Weaknesses:**
- There is a typo on page seven in Section 5 where "comprehensively" is misspelled as "omprehensively."
- Experiments focus primarily on academic citation and biological networks. There is no validation on social network graphs. Including relevant experiments would strengthen the generalizability and applicability of the proposed method.
Other Comments Or Suggestions: The manuscript specifies the split ratios for each dataset but does not describe the splitting strategy. The authors should include details on the splitting approach in the manuscript.
Questions For Authors: 1.How does the proposed method perform in scenarios where clients have vastly different computational capabilities (e.g., edge devices versus more powerful systems)?
2.In the code, the authors only perform data partitioning using the Louvain method. Can the proposed approach still be effective under other non-iid partitioning methods, such as Metis?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: # Dear Reviewer Fguv
We sincerely thank you for taking the time to evaluate our work and have adressed your concerns as follows:
> ` W1: Additional empirical studies addressing the significant computational challenges faced by FGL will further strengthen this analysis.`
In the theoretical model, the message passing mechanism in GNNs causes the neighborhood size to expand exponentially with the number of layers. For a graph with an average degree of $d$, 1-hop neighborhood covers $d$ neighbors, a 2-hop neighborhood covers $d^2$ neighbors, and an $L$-hop neighborhood covers $d^L$ neighbors [1].
We measured the k-hop receptive fields (k=1,2,3,4) for the amz-photo and Ogbn-arxiv datasets. The results are as follows:
| datasets | 1-hop | 2-hop | 3-hop | 4-hop |
| :-----------: | :---: | :----: | :-----: | :-----: |
| **amz-photo** | 32.13 | 802.86 | 2519.35 | 4681.62 |
The results show that the receptive fields exhibit a clearly super-linear growth trend, confirming that GNNs indeed face significant computational challenges.
[1]: Xu, K.; Hu, W.; Leskovec, J.; and Jegelka, S. (2019). How Powerful are Graph Neural Networks? arXiv preprint arXiv:1810.00826.
> `W2: Supplementary experiments addressing the omitted DropEdge.`
We conducted experiments on DropEdge on the PubMed, with some of the results presented below (we adopted the original 0.8 retention rate for GCN).
**Pubmed**
| Methods | Top-1 Accuracy | Max Training FLOPS | Communication BYTES |
| :----------: | :------------: | :----------------: | :-----------------: |
| **FedAvg** | 85.65 | 1x (2.49E9) | 1x (6.19E9) |
| **DropEdge** | 85.78 | 0.90x (↓0.10x) | 1.00x (↓0.00x) |
| **EAGLES** | **86.97** | **0.48x (↓0.52x)** | **0.37x (↓0.63x)** |
> `W3: A typo on page seven in Section 5 where "comprehensively" is misspelled as "omprehensively."`
Thank you for your careful reading. We have corrected the typo and carefully proofread the manuscript to fix similar issues.
> `W4: Validate the proposed method on social network graph datasets. `
We conducted experiments on the Flickr dataset to validate the effectiveness of the proposed method on social network graph datasets:
**Flickr**
| Methods | Top-1 Accuracy | Max Training FLOPS | Communication BYTES |
| :---------: | :---------------: | :----------------: | :-----------------: |
| **FedAvg** | 50.15 | 1x (8.49E9) | 1x (4.67E9) |
| **FGGP** | 49.78(↓0.37) | 1.23x (↑0.23x) | 1.33x (↑0.33x) |
| **PruneFL** | 47.45(↓2.70) | 0.77x (↓0.23x) | 1.00x (↓0.00x) |
| **FedDIP** | 50.33(↑0.17) | 0.67x (↓0.33x) | 0.83x (↓0.17x) |
| **EAGLES** | **50.89** (↑0.74) | **0.48x (↓0.52x)** | **0.37x (↓0.63x)** |
> `S1: The manuscript specifies the split ratios for each dataset but does not describe the splitting strategy. `
Thank you for pointing that out. We will include additional details on the splitting strategy in the revised manuscript.
> `Q1: How does the proposed method perform in scenarios where clients have vastly different computational capabilities (e.g., edge devices versus more powerful systems)?`
Clients with limited resources can opt for higher sparsity to reduce memory and computation, while more capable machines may choose lower sparsity for better accuracy. Additionally, consensus-based parameter masks and a multi-expert graph sparsification framework ensure that all clients benefit from an efficient, robust model.
> `Q2: Can the proposed approach still be effective under other non-iid partitioning methods, such as Metis?`
We conducted experiments on Cora and ogbn-arxiv, and the results are shown below:
**Cora**
| Methods | Top-1 Accuracy | Max Training FLOPS | Communication BYTES |
| :---------: | :---------------: | :----------------: | :-----------------: |
| **FedAvg** | 70.62 | 1x (6.72E8) | 1x (6.02E9) |
| **FGGP** | 69.58 (↓1.04) | 1.42x (↑0.42x) | 1.18x (↑0.18x) |
| **PruneFL** | 67.94 (↓2.68) | 0.57x (↓0.43x) | 1.00x (↓0.00x) |
| **FedDIP** | 70.38 (↓0.24) | 0.61x (↓0.39x) | 0.59x (↓0.41x) |
| **EAGLES** | **71.27** (↑0.65) | **0.48x (↓0.52x)** | **0.37x (↓0.63x)** |
**Ogbn-arxiv**
| Methods | Top-1 Accuracy | Max Training FLOPS | Communication BYTES |
| :---------: | :-------------------: | :----------------: | :-----------------: |
| **FedAvg** | 55.30 | 1x (1.58E10) | 1x (6.14E9) |
| **FGGP** | 55.09(↓0.21) | 5.66x (↑4.66x) | 1.45x (↑0.45x) |
| **PruneFL** | 52.34 (↓2.96) | 0.69x (↓0.31x) | 1.00x (↓0.00x) |
| **FedDIP** | 55.32 (↑0.02) | 0.48x (↓0.52x) | 0.59x (↓0.41x) |
| **EAGLES** | **56.89** **(↑1.59)** | **0.35x (↓0.65x)** | **0.47x (↓0.53x)** |
The results show that under Metis partitioning, EAGLES still exhibits superior performance. | Summary: EAGLES introduces a framework designed to reduce computational and communication costs in federated graph learning by jointly sparsifying both model parameters and graph structures. It employs client-consensus pruning to generate subnetworks at different sparsity levels and utilizes a mixture of experts for graph sparsification. This approach achieve substantial reductions in FLOPs and communication overhead across various datasets, all while preserving accuracy. Results show improved performance over baselines in node classification tasks.
Claims And Evidence: The claims are largely supported by experiments across six datasets (Cora, Pubmed, OGB benchmarks) and comparisons with 14 baselines. Evidence includes:
1. Table 1 shows EAGLES outperforms FedAvg/FedProx in accuracy (e.g., +1.32% on Pubmed) while reducing FLOPs (52%) and communication (63%).
2. Ablation studies (Table 2, Figure 5) validate the impact of sparsity levels.
3. Theoretical grounding via the Harmony Sparsification Principle (Eq. 3) aligns with empirical results.
Methods And Evaluation Criteria: 1. Parameter Sparsification: Dynamic threshold optimization with STE and consensus masking (Eq. 4-7) is novel and suitable for federated settings.
2. Graph Sparsification: Dual experts (GSE/GSyE) with hard concrete distribution (Eq. 13-17) effectively address structural heterogeneity.
Evaluation:
3. Metrics (Top-1 Accuracy, ROC-AUC, FLOPs, communication bytes) are standard and comprehensive.
Theoretical Claims: The theoretical claims are basically correct. However, what $W_{gate}$ refers to in eq (12) lacks the necessary explanation in the text.
Experimental Designs Or Analyses: The framework is validated across 6 datasets with diverse backbones (GCN, GraphSAGE) and compared against 14 baselines, including the state-of-the-art federated graph learning method (FedTAD) and pruning approaches like ACE-GLT.
Supplementary Material: There is no supplementary material.
Relation To Broader Scientific Literature: The contributions of the paper are related to the broader scientific literature of following areas.
1. FGL: Improves FedAvg/FedProx by addressing graph/parameter redundancy.
2. MoE: Adapts mixture-of-experts to graph sparsification (novel).
3. Pruning: Unifies model/graph pruning, unlike DSpar/ACE-GLT.
Essential References Not Discussed: This manuscript compares and discusses quite a few baseline methods.
Other Strengths And Weaknesses: Strengths:
1. The paper creatively bridges federated learning and graph sparsification, addressing both computational and structural challenges in FGL. This dual focus (parameter + graph sparsification) is novel and addresses a critical gap in federated graph learning literature.
2. The framework’s ability to handle large-scale graphs (e.g., ogbn-proteins with 132,534 nodes) demonstrates real-world applicability.
Weaknesses:
1. Experiments focus on homophilic graphs (e.g., Cora, OGB). Performance on heterophilic graphs (e.g., arXiv) remains unvalidated.
2. Whether the proposed method can speed up the experiments was not explored.
Other Comments Or Suggestions: It is suggested to supplement experiments on training time to further verify the efficiency of the method.
Questions For Authors: Can the authors provide additional ablation experiments regarding $\lambda_2$ and $\lambda_3$ in Eq (19)?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: # Dear Reviewer jagF
We sincerely appreciate your detailed review and invaluable feedback. In the response below, we provide a thorough reply to address your concerns and offer a clearer explanation of our method.
> ` W1: what $W_{gate}$ refers to in eq (12) lacks the necessary explanation in the text.`
$W\_{\text{gate}}$ is a learnable weight matrix in the Graph Synergy Expert (GSyE) module that performs gating. It projects the node feature matrix $X$ into a latent space for Hard Concrete sampling, determining which graph sparsification experts to activate. In essence, $W\_{\text{gate}}$ generates gating vectors $z$ that, after thresholding, indicate the significance of each edge in the final sparsified subgraph. We will introduce $W_{\text{gate}}$ in the revised manuscript.
> ` W2: Performance on heterophilic graphs (e.g., arXiv) remains unvalidated.`
We conducted experiments on heterophilic graphs using ogbn-arxiv-TA [1] from the HeTGB (Heterophilic Text-attributed Graph Benchmark):
| Methods | Top-1 Accuracy | Max Training FLOPS | Communication |
| :---------: | :------------: | :---------------------: | :--------------------: |
| **FedAvg** | 64.24 | 1x (1.78E10) | 1x (6.16E9) |
| **PruneFL** | 61.37 | 0.58x | 1.00x |
| **FedTAD** | 64.92 | 32.42x | 1.11x |
| **EAGLES** | **65.89** | **0.33x (↓0.68x**) | **0.48x** (↓**0.52x**) |
The results show that on heterophilic graphs using ogbn-arxiv-TA, EAGLES also demonstrates superiority.
[1]: Li, S.; Wu, Y.; Shi, C.; and Fang, Y. (2025). *HeTGB: A Comprehensive Benchmark for Heterophilic Text-Attributed Graphs*. arXiv preprint arXiv:2503.04822.
> ` W3 & S1: Whether the proposed method can speed up the experiments was not explored, and supplement experiments on training time to further verify the efficiency of the method.`
We measured the time required for clients to reach the target accuracy on the ogbn-arxiv dataset across different methods, and the accuracy achieved by different methods at the same number of epochs on the ogbn-proteins dataset. The results are presented below:
**TIME TO REACH TARGET ACCURACY**
| Methods | Time to reach 70% accuracy | Time to reach 80% accuracy | Time to reach 90% accuracy |
| :---------: | :------------------------: | :------------------------: | :------------------------: |
| **FedAvg** | 54.23 s | 223.42 s | 376.23 s |
| **FedTiny** | 35.28 s | 149.08 s | 281.23 s |
| **FedDIP** | 36.44 s | 155.68 s | 278.62 s |
| **PruneFL** | 40.52 s | 177.21 s | 319.53 s |
| **EAGLES** | **14.04 s** | **85.94 s** | **162.79 s** |
**ROC-AUC UNDER THE SAME EPOCH**
| Methods | EPOCH: 50 | EPOCH: 150 | EPOCH: 300 |
| :---------: | :-------: | :--------: | :--------: |
| **FedAvg** | 70.34 | 80.38 | 81.49 |
| **FedTiny** | 71.36 | 77.68 | 79.90 |
| **FedDIP** | 70.98 | 78.59 | 81.33 |
| **PruneFL** | 69.22 | 76.23 | 78.69 |
| **EAGLES** | **73.97** | **81.85** | **82.32** |
The results show that EAGLES can train the model to a target accuracy in a shorter amount of time, and also achieve higher performance at the same number of epochs. This further verifies that our proposed method can accelerate model training.
> `Q1: Can the authors provide additional ablation experiments regarding $λ_2$ and $λ_3$ in Eq(19)`
We conducted ablation experiments on $λ_2$ and $λ_3$ on the ogbn-arxiv dataset, and the results are presented below:
**Ogbn-arxiv** (fix $λ_3$ = 1e-6)
| dataset | $λ_2$=0.1 | $λ_2$=0.05 | $λ_2$=0.2 |
| :------------: | :----: | :-----------: | :-----------: |
| **Pubmed** | 86.97 | 86.23 (↓0.74) | 86.96 (↓0.01) |
| **photo** | 92.31 | 92.75 (↑0.44) | 91.85 (↓0.46) |
| **Ogbn-arxiv** | 65.37 | 64.92 (↓0.45) | 65.33 (↓0.04) |
**Ogbn-arxiv** (fix $λ_2$ = 0.1)
| dataset | $λ_3$=1e-6 | $λ_3$=1e-5 | $λ_3$=1e-7 |
| :------------: | :-----: | :-----------: | :-----------: |
| **Pubmed** | 86.97 | 84.22 (↓2.75) | 86.89 (↓0.08) |
| **photo** | 92.31 | 90.56 (↓1.75) | 92.15 (↓0.16) |
| **Ogbn-arxiv** | 65.37 | 64.38 (↓0.99) | 65.42 (↑0.05) |
Increasing $λ_2$ enforces stricter GSyE constraints, balancing expert contributions and enhancing subgraph homogeneity. In heterogeneous scenarios, raising $λ_2$ from 0.05 to 0.2 leads to a clear accuracy boost. Conversely, while a larger $λ_3$ speeds up sparse parameter identification, it may cause significant accuracy loss; a smaller $λ_3$ permits finer-grained sparsification.
---
Rebuttal Comment 1.1:
Comment: I have read the rebuttal and appreciate the authors' response. The additional experiments further validate the effectiveness of the proposed method. A minor suggestion is to include the explanation for weakness 2 in the paper if it has not already been added. I maintain my score and support the acceptance of the paper.
---
Reply to Comment 1.1.1:
Comment: ### Dear Reviewer jagF,
We sincerely appreciate your invaluable support for our research. Your insightful suggestions regarding the scalability and flexibility of EAGLES have significantly contributed to improving the depth and precision of our manuscript. It has been an honor to incorporate your comments and strengthen our work accordingly. Thank you once again for your time, expertise, and constructive review.
Best regards,
Authors | Summary: This work introduces EAGLES, a framework for Federated Graph Learning (FGL) that reduces computational demands while maintaining high performance. By unifying graph and model sparsification, it simplifies graph structures and prunes model parameters efficiently. EAGLES uses multi-criteria experts to sparsify graphs and integrates results using a synergy expert, ensuring better knowledge sharing across clients with diverse data.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: Yes
Experimental Designs Or Analyses: Yes
Supplementary Material: Yes
Relation To Broader Scientific Literature: Yes
Essential References Not Discussed: Yes
Other Strengths And Weaknesses: Strengths:
- EAGLES introduces a unified sparsification framework that simultaneously sparsifies graph structures and model parameters. It uses multi-criteria graph sparsification experts and a synergy expert to reduce graph size while preserving critical structural information.
- By addressing key challenges like data heterogeneity, computational inefficiency, and communication overhead, EAGLES provides a scalable and economical solution for Federated Graph Learning. Consensus-based parameter sparsification further ensures efficient pruning without iterative adjustments, addressing computational and communication overhead.
- The evaluation shows resilience under varying sparsification rates and client distributions, making it effective for large-scale federated graph learning.
Weaknesses:
- While the paper introduces the Graph Synergy Expert to integrate sparsified subgraphs from multiple experts, it would benefit from a more detailed step-by-step explanation of how the GSyE processes and combines the outputs of various sparsification experts.
For instance, describing how the hard concrete distribution is optimized and how the gating mechanism selects and integrates key structural information for each node would enhance clarity.
- The consensus-based parameter sparsification strategy is described briefly, but its implementation could use more depth. A detailed breakdown of how the dynamic masking thresholds are computed, how client-specific masks are aligned, and how the rollback pruning strategy ensures pruning stability would be valuable.
- Elaborating on how the trade-offs between structural similarity, computational efficiency, and model performance are balanced in the optimization process (Equation 3) would strengthen the framework.
Other Comments Or Suggestions: N.A
Questions For Authors: The paper introduces the Graph Synergy Expert (GSyE) to integrate sparsified subgraphs from multiple experts, but a more detailed, step-by-step explanation of its process would enhance clarity. Specifically, describing how the GSyE optimizes the hard concrete distribution and how the gating mechanism selects and integrates key structural information for each node would provide valuable insights. Including examples, pseudocode, or visualizations could further illustrate the functionality and importance of this component.
The consensus-based parameter sparsification strategy is briefly mentioned, but its implementation could benefit from greater depth. A detailed explanation of how dynamic masking thresholds are computed, how client-specific masks are aligned, and how the rollback pruning strategy ensures pruning stability would make the approach more comprehensible.
Additionally, elaborating on how the framework balances trade-offs between structural similarity, computational efficiency, and model performance in the optimization process (Equation 3) would offer a stronger understanding of its effectiveness.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We sincerely appreciate for taking the time to review our manuscript and hope our response will address your concerns and contribute to an improved score.
> ` W1: How the GSyE processes and combines the outputs of various sparsification experts.`
In our method, GSyE (Graph Synergy Expert) is used to fuse and refine different subgraphs generated by multiple GSE (Graph Sparsification Expert, aiming to alleviate *structure information overfitting* during the graph sparsification process. The specific steps are as follows:
1. Multiple GSE generate various versions of subgraphs based on different criteria (Eq. (11)).
2. For the target node *v*, its node features $\mathbf{X}$ are projected using a learnable matrix $\mathbf{W}_{gate}$ to obtain gating scores $\boldsymbol{z}$, which are then used to generate continuous approximations $\psi(\boldsymbol{z})$ of binary gates, thereby enabling gradient-based optimization (Eq. (13)).
3. The HardStep function (Eq. (14)) is applied to hard-threshold $\psi(\boldsymbol{z})$ to either 0 or 1, determining whether to activate the corresponding expert's recommendation for the edge $e_{ij}$.
4. For each edge $e_{ij}$, if at least one expert recommends retaining it, the edge is marked as a candidate edge; otherwise, it is pruned directly.
> `Q1: How the hard concrete distribution is optimized?`
The hard concrete distribution is applied to the raw gating scores $\boldsymbol{z}$ (Eq. (12)) to produce continuous probabilities $\psi(\boldsymbol{z})$. The HardStep function further binarizes $\psi(\boldsymbol{z})$ into discrete gates—0 and 1. During backpropagation, we employ the Straight-Through estimator [1] to approximate gradients and address the optimization problem.
[1]: Bengio, Y.; Léonard, N.; and Courville, A. (2013). *Estimating or propagating gradients through stochastic neurons for conditional computation*. arXiv preprint arXiv:1308.3432.
> `Q2: How dynamic masking thresholds are computed? + Q3: How client-specific masks are aligned?`
In consensus-informed parameter sparsification, dynamic masking thresholds are computed through a layer-wise adaptive process. For the $l$-th layer’s parameter matrix $W^{(l)}$, a threshold vector $\kappa\_{0}^{(l)}$ is dynamically optimized using straight-through estimators (STE) to bypass non-differentiability during backpropagation. Specifically, parameters $W^{(l)}$ are pruned if their absolute values fall below $\kappa\_{0}^{(l)}$, where thresholds are updated via a loss function (Eq. (6)) to maximize sparsity while maintaining performance. Client-specific masks are aligned through a consensus mechanism where overlapping “1”s in binary masks across clients form a unified sparse subnetwork, enabling parameter sharing and communication efficiency.
> `Q4: How the rollback pruning strategy ensures pruning stability?`
The rollback pruning strategy ensures pruning stability by designating, at each predefined pruning checkpoint (for example, every 10% increment in pruning rate), the highest-performing subnetwork within an acceptable accuracy range (±3%). Before moving on to a deeper level of pruning, the method reverts (rolls back) to this optimal subnetwork. This rollback step prevents the accumulation of errors from continuous pruning and mitigates sudden drops in performance, thereby maintaining overall model stability and ensuring effective deep pruning.
> `Q5: How the framework balances trade-offs between structural similarity, computational efficiency, and model performance in the optimization process (Equation 3)?`
Our weighted loss function combines a structural similarity term (enforcing alignment of sparse subnetworks via mask consensus), a computational cost penalty (promoting parameter sparsity to reduce FLOPs), and a task-specific performance loss (e.g., cross-entropy for accuracy). Hyperparameters $λ\_1$ and $λ\_2$ dynamically adjust the trade-offs: higher $λ\_1$ prioritizes mask consistency across clients, while $λ\_2$ controls sparsity-intensity. | null | null | null | null | null | null |
X-Hacking: The Threat of Misguided AutoML | Accept (poster) | Summary: The paper introduces the concept of X-Hacking, which refers to the practice of deliberately exploiting model multiplicity, where different ML models can have comparable performance, to select a model that has certain desirable explainability characteristics (like SHAP). The paper demonstrates the ability to select models based on explainable AI (XAI), discusses when this ability to select based on XAI is more feasible, and suggests some ways to make this selection more transparent.
Claims And Evidence: The authors convincingly demonstrate that varying SHAP explanations can be obtained through either “cherry-picking” from multiple similarly performing models or via a targeted optimization approach (“directed-search”). They effectively show that this flexibility arises due to correlated or redundant features.
However, the paper's analogy to p-hacking raises conceptual difficulties. Unlike p-values, which are fundamentally about statistical inference, SHAP explanations are inherently model-specific reflections of what each model genuinely uses in prediction. Selecting a model because it uses some characteristics rather than others (as accurately represented by SHAP) is not inherently problematic. For instance, if policy explicitly mandates avoiding certain protected characteristics (like criminal record), choosing an alternative model that does precisely that is compliance—not manipulation. SHAP explanations truthfully reflect model behavior and do not inherently mislead; thus, selecting among models based on desirable explanations isn't automatically problematic.
A useful analogy is found in recent fairness literature, where “model multiplicity” is positively leveraged to select models with similar predictive power but improved fairness metrics. There, selecting among multiple models is explicitly beneficial, as it enables analysts to choose ethically or legally preferable options. This illustrates clearly that model selection itself—even based on explanation—can legitimately fulfill regulatory or ethical goals, provided it substantively addresses regulatory concerns.
Thus, the concern raised by the authors about “X-hacking” needs careful delineation: it is not simply choosing among multiple valid models that is problematic, but specifically doing so to create a misleading impression of objectivity or robustness when reporting explanations. This distinction warrants greater emphasis and clarification by the authors.
Methods And Evaluation Criteria: The proposed evaluation approach generally makes sense for illustrating the feasibility of “X-hacking.” The authors' choice of real-world datasets is appropriate, and their use of both off-the-shelf AutoML and a custom Bayesian optimization approach to systematically explore model multiplicity is reasonable and insightful.
However, (to somewhat repeat the point made above) the evaluation criteria focus primarily on the speed and ease of finding models that yield desired SHAP explanations relative to predictive performance. A critical limitation is that the authors do not clearly justify why the selective search for particular SHAP values constitutes harmful “hacking,” rather than legitimate model selection. The evaluation would benefit from explicitly distinguishing harmful selective practices from legitimate uses of model multiplicity, particularly by providing clear normative criteria or guidelines for differentiating the two scenarios.
Theoretical Claims: The paper is primarily an experimental paper and does not cover theoretical claims.
Experimental Designs Or Analyses: The experiment design as reported in the paper is robust and convincing. I particularly appreciated the development of a custom AutoML solution for multi-objective optimization. The cherry-picking examples would be sufficient to conceptually demonstrate x-hacking but the directed-search example shows that this potentially can be done efficiently and at scale.
Supplementary Material: I briefly reviewed the supplemental material which provides more details on the design of the experiments and a greater breakdown of results by dataset. Authors appropriately chose what information should be in the main text and what should be in the appendix.
Relation To Broader Scientific Literature: The paper correctly situates itself within the model multiplicity and explainability literature. Although these themes have been explored extensively in the literature, the paper presents a novel and important approach and framework for considering the robustness of XAI in the context of searching for desirable explanations.
Essential References Not Discussed: I cannot think of specific citations that I would deem essential to have discussed.
However, I believe that the paper would benefit from considering some of the recent literature on model multiplicity and fairness. While not requiring a specific citation, this literature would add some richness to the paper. Here are some examples-
D-Hacking (https://dl.acm.org/doi/abs/10.1145/3630106.3658928)
The Legal Duty to Search for Less Discriminatory Algorithms (https://arxiv.org/abs/2406.06817)
Fundamental Limits in the Search for Less Discriminatory Algorithms—and How to Avoid Them (https://arxiv.org/abs/2412.18138)
Operationalizing the Search for Less Discriminatory Alternatives in Fair Lending (https://dl.acm.org/doi/abs/10.1145/3630106.3658912)
Other Strengths And Weaknesses: Overall, I thought this was a great paper making a very important contribution to the literature. I can see this paper being very influential and opening up a future line of research.
Other Comments Or Suggestions: see above
Questions For Authors: You convincingly demonstrate how easily AutoML tools can find models with desired SHAP explanations, but it's unclear precisely when selecting such a model becomes “manipulative” rather than beneficial or compliant (e.g., choosing models that intentionally exclude sensitive characteristics). Could you clarify what criteria or guidelines you envision for distinguishing harmful X-hacking from legitimate and beneficial model selection based on SHAP values? A clear response would strengthen the practical relevance and conceptual precision of your paper, particularly for readers concerned with regulatory or ethical compliance.
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: We thank the reviewer for their strong approval of the paper and the valuable feedback. Below we address the questions raised by the reviewer.
“**Selecting a model because it uses some characteristics rather than others (as accurately represented by SHAP) is not inherently problematic...**”:
Choosing a model x ∈ R(t, D) where R(t, D) is the Rashomon set because it meets certain ethical, policy or domain-driven requirements is not problematic. Even if x ∈ C(t, D) (models that change explanations), a confirmatory analysis is legitimate if it is reported transparently. By contrast, malicious X-hacking occurs when an actor withholds other valid models from R(t, D) and only presents a model from the set C(t, D) ∩ R(t, D) that supports a desired narrative, thereby concealing alternative explanations that might contradict the narrative. Also pointed out by reviewer **Se87**, we appreciate this distinction and intend to put it earlier in the paper (perhaps Section 1) and then later discuss more in Section 7 (Discussion)
“**Could you clarify what criteria or guidelines you envision for distinguishing harmful X-hacking from legitimate and beneficial model selection based on SHAP values?**”: Our vision is that effective countermeasures or additional requirements from a publication venue to make reporting more transparent can help identify or raise flags for X-hacking which may require further scrutiny. Specifically,
1. Transparent reporting of model choice, clearly mentioning that different models yield different explanations and clarification of why the chosen pipeline/model’s explanation is more appropriate.
2. Pre-specifying a research plan and adherence to it which can be later verified will limit the degrees of freedom that produced the reported model.
3. Justifying the choices of explanations in a selected model may help give valuable insights.
4. A well documented research journey which is open, reproducible, and consistent with methodological standards will help identify X-hacking.
5. Awareness about model multiplicity and X-hacking and the need for effective countermeasures in the research community.
While the above points focus more on what researchers reviewers and peers can look for and raise concerns based on the non-availability of information that threatens transparency, a full-fledged automatic detector for X-hacking is an ideal vision for further development. | Summary: In this work the authors propose the concept of “X-hacking,” where scientists or ML service providers leverage the multiplicity of ML systems to provide misleading explanations despite maintaining performance.
Claims And Evidence: The authors claim the potential for X-hacking in ML systems, especially AutoML pipelines where one can scalably provide candidate models with different explanation behavior. They provide thorough evidence for this case both in a post-hoc setting (generating many models with good accuracy, finding one with a desired explanation) and ad-hoc setting (jointly optimizing performance and explanation goals in an AutoML pipeline). Their empirical results tell a compelling narrative of the existence of manipulable explanations, potential ways they can manifest, and how this manipulation can be optimized for.
However, multiple times throughout the paper the authors hypothesize about adversaries and countermeasures in an abstract manner. They say (paraphrasing) an adversary “could also consider risk of getting caught as an objective,” or that detection could occur through pipeline analysis without any experimental exploration of these ideas. I appreciate the discussion and understand the need to discuss all potential risks and countermeasures, but these claims felt very abstract and took away from the rigor found elsewhere in the paper. I might consider moving some of this discussion to the appendix to improve the clarity and focus of your work, but this is a minor comment and I still found their discussion interesting.
Methods And Evaluation Criteria: The authors use a Bayesian optimization and random sampler for the ad-hoc setting of X-hacking. The method makes sense for their joint prediction and X-hacking objective formulation.
Theoretical Claims: The authors make no theoretical claims in the main body of the paper.
Experimental Designs Or Analyses: The author’s experiments are sound, specifically both their cherry picking/post-hoc setups (5.1) and directed search setups (5.2) although I am not very familiar with the optimization method used. I thought the analysis was good although I am not sure why the authors so heavily explored the benefits of Bayesian optimization over random optimizing in 5.2. Intuitively, optimizing for these objectives should be better/faster . However, I am more interested in knowing the gains an adversary would get over the cherry-picking method – could they get a desired X-hacking effect with fewer model training runs?
I had a question about the setup for 5.3 using simulated data. You include a dataset with correlated data so that you can shift SHAP weight around at will to manipulate explanations, correct? Why did you describe the features as colinear? Was the data perfectly dependent or just correlated? Also, in Figure 8, are each of the points in the graph a different model and the same feature? It should be clear that each point represents a choice of model, and so selecting points along a fixed horizontal line allows us to change feature dependencies in our explanation while maintaining performance.
Supplementary Material: I reviewed the author’s discussion of their multi-objective optimization pipeline.
Relation To Broader Scientific Literature: This work provides a great complement to literature on the Rashomon set and multiplicity by asking the opposite question of picking a model with a “bad” explanation. Additionally it builds nicely upon past work on the brittleness of explanations.
Essential References Not Discussed: The authors covered most important references in this work.
Other Strengths And Weaknesses: None
Other Comments Or Suggestions: None
Questions For Authors: In Fig. 3, are the features ordered by baseline importance? It might be good to mention that somewhere.
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: We thank the reviewer for their thorough and supportive comments. Furthermore, we will incorporate your suggestions to further strengthen the exposition and clarify details around our adversarial concepts, notation, and experimental setups. Specifically:
We agree that our discussions of adversarial objectives, risk tolerance, and detection were presented at a more conceptual level compared to the more rigorous empirical work. Our focus was to demonstrate and quantify the practicality of manipulating ML explanations, not to propose or experimentally validate a complete “risk model” of adversarial behaviour. In a revised version, we intend to shift parts of Section 6 (Detection and Prevention) to Section 7 (Discussion) to improve clarity.
“**… not sure why the authors heavily explored the benefits of Bayesian optimization over random optimization. I am more interested in the gains an adversary would get over the cherry-picking method — i.e., fewer runs to achieve a desired X-hacking effect.**”: We wanted to show that a malicious or benign user targeting explanations can exploit a more systematic search. We also wanted to show that given a greater time budget, random sampling will not on average perform better than Bayesian optimization in finding a defensible model. We explore this in an empirical manner and conclude that in our experiments, Bayesian optimization was 3 times faster at random sampling in the ad-hoc X-hacking setting (Section 5.4 (“Time to First Defensible Model”)).
“**… you describe the data as collinear. Was it perfectly dependent or just correlated? Also, in Figure 8, is each point a different model? It would help to clarify that each point is a distinct pipeline choice and that for a fixed performance we can shift feature dependencies as we like. In Fig. 3, are the features ordered by baseline importance? It should be mentioned explicitly.**”:
**Collinearity**: We used both moderate and high correlation scenarios, but not perfectly dependent features. Here we use “collinearity” referring to imperfect collinearity rather than perfect collinearity: i.e. the relationship is nearly but not exactly linear. We will clarify that the data is not perfectly dependent, but “redundant enough” to shift how SHAP values are allocated among correlated predictors without significantly affecting accuracy.
**Figure 8**: Yes, each dot on the plot corresponds to a different trained model/pipeline. In the revision, we will explicitly state that “each point represents one pipeline configuration with its resulting performance and SHAP score for that feature.”
**Figure 3**: Yes, the features in Figure 3 are shown in descending order of mean absolute SHAP under our baseline model. We will add a sentence in the caption or text noting that “features are ordered based on their baseline SHAP importance,” | Summary: The paper notes that AutoML pipelines allow training of multiple ML models, including sets of models with 'defensible' performance.
Thus, AutoML tools make it easier for authors to cherry-pick models to fit preconceived notions, as embodied by explainability metrics. The paper uses the Shapley value as its representative of these.
Examples show that the possibilities for doing so increase in the redundancy of the feature set: when e.g. $x_1$ and $x_2$ are highly correlated, models may give weight to either without compromising their predictive power, allowing either to have a large Shapley value (the standard collinearity problem of econometrics/statistics).
# update after rebuttal
Stands.
Claims And Evidence: The claims are assessed on 23 datasets, as well as some simulated data with known ground truth. This seems fine.
Methods And Evaluation Criteria: I have no concerns with the methods or evaluation criteria.
Theoretical Claims: The paper presents no theoretical results, as such.
Experimental Designs Or Analyses: I have not checked any code.
Supplementary Material: I have looked through the Appendices to support my reading of the article body.
Relation To Broader Scientific Literature: The paper is motivated by reference to $p$-hacking, from the statistics literature.
In the light of that literature, the present results are unsurprising: larger models offer more possibilities for misrepresentation - whether intentional or otherwise. (My favourite example remains Bennett et. al.'s fMRI analysis of Atlantic salmon, winning him an IgNobel.)
I _think_ that slightly different issues arise: in the case of $p$-hacking, the problems arise from scale and 'chance'; in $X$-hacking, they arise from collinearity. Is this correct? If not, what is a more careful intuition?
It could also be useful to connect this to Wang, Rudin et al.'s 2022 TimberTrek, a visualisation tool for exploring Rashomon sets.
Essential References Not Discussed: n/a
Other Strengths And Weaknesses: Overall, I found the paper's results unsurprising: more dimensions allow more misrepresentation. This said, there needs to be a paper of record clearly establishing this.
I would _like_ this to be that paper. To do so, though, the story and exposition need to be much tighter. For example, I would like to see:
1. the set of 'defensible' models more tightly related to the Rashomon set: if they're the same, let's not duplicate language.
1. a more careful consideration of what datasets are more/less vulnerable to $X$-hacking: how do you explain the ranking of datasets in Figure 2? Why does the ranking differ from that in Table 1?
1. it would be good to start the article with strong motivating examples 'in the wild' (p.8): at present, the concern is only plausible.
Other Comments Or Suggestions: 1. in econometrics, an increasingly common technique for addressing $p$-hacking is advance filing of research plans. How well does this handle $X$-hacking concerns?
1. the $\mathcal{O}$ notation is used twice, once for 'obviousness' (p.3) and once for the 'risk of getting caught' (p.4). Are these the same? Overall, this does not make sense, as we do not have a model of cheating/getting caught. The notation tends to be reserved for complexity measures, so is misleading.
1. other notation also seems inconsistent/confusing: e.g. $Q_D(m) = perf(m)$ and $Q_D(m) = \mathcal{I}(m)$. I recommend picking one or the other convention.
1. I don't think that the paper needs to be cast in terms of unscrupulous authors: authors under time pressure, inexperienced authors, etc. will all cherry pick, wittingly or otherwise.
1. cut the 'audacity' section: you don't develop the idea; it feels like a distraction.
1. a fun experiment to establish $X$-hacking risks could be to: generate a defensible set with e.g. each feature given the largest Shapley value; ask an LLM to write an abstract for a paper corresponding to each.
Questions For Authors: See above
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable feedback. The following addresses the reviewer’s questions and concerns.
**p-hacking and X-hacking**: Both p-hacking and X-hacking can arise from scale, chance and collinearity, which are interrelated rather than distinct. For instance, collinearity can cause regression estimates to become unstable, thus exacerbating p-value non-robustness. X-hacking, facilitated by AutoML, leverages model multiplicity and feature redundancy to align with desired explanations, also affected by these factors. X-hacking can be viewed as a generalization of p-hacking, extending the manipulation of statistical significance to the broader manipulation of model explanations; however the focus in our paper is on model-agnostic explanations, not including p-values, which tend to be considered only for a subset of models such as GLMs. Both phenomena can also occur unintentionally, highlighting the need for robust statistical practices and transparent reporting to mitigate these risks.
**TimberTrek**: As pointed out by Reviewer 9ZtB also, in a revision we will cite TimberTrek in Section 2 as an important work on visualising and understanding Rashomon sets.
**Rashomon set and defensible models**: The Rashomon set R is defined as the set of models that exhibit nearly equivalent predictive performance within an acceptable threshold $\epsilon$.
R = {$\{ m \in M | perf(m) \geq perf(m^*) - \epsilon \}$}
Given M as the set of all possible models, perf(m) as the predictive performance of model m, and m∗ as the best-performing model, the set of "defensible" models is equivalent to the Rashomon set R. This set allows for a performance decrease ϵ(m) that may vary by model, accommodating those considered 'acceptable' due to their adherence to standard practices in the domain of application/publication. Therefore, the set of defensible models is a Rashomon set: R(t, D) in the paper (section 4.1). Additionally, the set C(t, D) ∩ R(t, D) is the set of those defensible models that changed the explanations. We will make it clearer and keep notation for the set C(t, D) ∩ R(t, D) (Pg. 2 para. 1) where we mention finding defensible models which are the models from the set C(t, D) ∩ R(t, D), i.e., “defensible models that changed explanations”, to avoid confusion.
**Figure 2 and Table 1**: Figure 2 and Table 1 measure different aspects of the same set, C(t, D) ∩ R(t, D) defined in section 4.1. In figure 2, we see how many (the size) models in this set are found in a post-hoc manner by an off-the-shelf AutoML solution demonstrating that—with no special adjustments—AutoML is susceptible to X-hacking. On the other hand, in Table 1 we see how quickly the first member of this set is found by ad-hoc X-hacking. For a dataset the set C(t, D) ∩ R(t, D) may be large, yet a member of this set appears relatively late in the search process (longer time in Table 1). For another dataset C(t, D) ∩ R(t, D) may be small, but yields one member quickly during directed X-hacking (shorter time in Table 1).
**A motivating example**: Our following empirical analysis can be added as motivation after background section.
Studies have shown a strong relationship between gender and cardiac disease [[98](https://shorturl.at/qn04f), [99](https://shorturl.at/fTX5J), [100](https://shorturl.at/vP9fo), [101](https://shorturl.at/Y8a0Z)], but in an empirical experiment with our ad-hoc X-hacking, the importance of the feature gender was manipulated to drop to Rank 6 from a Rank of 1 in a baseline. The set C(t, D) ∩ R(t, D) had many candidates. First such candidate was found only in 24 seconds.
[This figure shows the results](https://imgur.com/TcEvbK9).
**Pre-registration and X-hacking**: We mention study pre-registration in Section 2 under p-hacking. Pre-registration is a valuable safeguard against X-hacking but might require more rigorous methodological detail. We will add this in Discussion.
**Concern regarding notation**: Obviousness and risk of getting caught are both represented by O. It is an arbitrary function in the current context. To avoid confusion with complexity notation, we will change O to another letter, say Z.
In Section 3 para. 3, we mention Q as any quality measure, which can be performance of a model perf(m) or an inferential summary of feature of interest I(m, x). It is only used to define a quality measure and later says (Eq. 1) that generally one optimises for a quality measure of performance during a model search: Q = perf(m).
**Balanced view of X-hacking**: In Section 3 para. 2 we mention that such hacking can be “deliberate or not”. However, explicitly mentioning it earlier in Section 1 para. 2 can set a broader view at possible reasons behind X-hacking. Specifically, we will add the phrase “unscrupulously, or through lack of time or experience”.
**Editing “audacity” section**: To maintain the focus of the reader, the subsection 3.3 will be removed to supplementary materials.
---
Rebuttal Comment 1.1:
Comment: Thanks!
Using your categories:
**p- and X-hacking**: thanks. I've lost track of whether I can see the revised version in ICML, but would like to see that.
** TimberTrek**: thanks.
**Rashomon sets**: oh good! I'd strongly encourage you to fully adopt the Rashomon terminology: maintaining terminological consistency helps maintain clarity in the field.
**Fig 2, Table 1**: thanks. I'd find valuable intuitions for _why_ $C(t, D) ∩ R(t, D)$ be large, but members of it slow to appear, and _vice versa_.
**Motivating example**: to clarify, by 'in the wild', I meant an example where you think that existing results have been X-hacked. The gender/cardiac example goes in a different direction: if we believe that the consensus in the existing published literature is correct (even though we can manipulate models to get other results), then it doesn't serve as an example of X-hacking breaking out into the wild.
**pre-registration**: thanks!
**notation**: as originally mentioned, you seem to wave your hands at an unspecified game-theoretical model. I saw this as muddying waters rather than adding value, so recommend cutting this as much as possible, to free space to properly exposit core points.
re: $perf$ and $\mathcal{I}$, I'd found it ugly and confusing to mix notational systems (is the latter a set?) for similar concepts.
**balanced view**: thank you; I'd recommend pruning all other references to motivation - for whatever reason, this can happen.
**audacity**: I didn't see the point of this material at all; I don't think that the appendices should be storage cupboards for questionable material; 'kill your darlings', editors say.
% do I need more?
---
Reply to Comment 1.1.1:
Comment: We address the reviewer's comments below.
**Rashomon sets**: to maintain clarity in the field, we will fully adopt the Rashomon terminology.
**Intuitions for size of C(t, D) ∩ R(t, D)**: intuitively, there can be a number of reasons for why C(t, D) ∩ R(t, D), be large, but members of it slow to appear, and vice versa.
1. Behaviour of the search algorithm: an optimiser may initially explore diverse hyperparameter settings only later narrowing on certain promising regions in a large pipeline search space given by an AutoML solution.
2. Complexity of the running pipeline: pipelines having complex multiple steps may slow down how rapidly one can iterate to find the one that belongs to C(t, D) ∩ R(t, D).
3. Size of the dataset: large number of features slows down the calculation of SHAP values as SHAP computations grow exponentially with feature count. Moreover, pipeline evaluation on a larger dataset may take more time.
4. Information redundancy among features: in a dataset with several correlated features the set C(t, D) ∩ R(t, D) may be large but the search might not systematically explore these dimensions until later.
**“In-the-wild" example**:
An ideal point of reference would be a secondary source, i.e. a published report levelling criticism against a study whose AI-derived explanations or insights were found not to be robust to their upstream modelling decisions, such as in a commentary article, a retraction or coverage on [RetractionWatch](https://retractionwatch.com/) . However, we were unable to find such an example that specifically highlights the use of either XAI or AutoML; of course many such examples exist for p-hacking and poor research more generally. Absent such a secondary source, the alternative is to find primary sources, i.e. examples of papers or preprints where we are able to determine, with independent access to the same dataset, the non-robustness of the published results, or where the authors inadvertently reveal the same through their reporting. However, a systematic search for such primary sources would be an involved process and certainly beyond the scope of the current paper. Such an audit or meta-analysis might be a good area for future research.
We demonstrate that there is both means and motive for X-hacking. For example, prior studies show how to deliberately alter a model’s behaviour to produce a preferred explanation [[1](https://doi.org/10.48550/arXiv.1911.02508)]. In contrast, X-hacking achieves a similar effect without modifying the underlying model, by systematically searching a large pipeline space for explanations that fit a given agenda. Whether pursued deliberately or not, X-hacking is relatively easy to perform, underscoring its practical feasibility—and the importance of awareness in the research community.
**Notation**: we will keep the notation to QD (m) = perf(m) and omit I(m, x) to be defined by Q.
**Audacity**: We agree with your suggestion and remove the ‘audacity’ section to streamline our argument and keep the paper focused. | Summary: The paper introduces X-hacking that manipulates XAI metrics by exploiting model multiplicity to find explanations supporting desired conclusions. Bayesian optimization helps to find models that support a desired explanation while maintaining acceptable accuracy, allowing for manipulation of SHAP values. Datasets with high feature redundancy are particularly vulnerable to X-hacking. The paper also discusses detection and prevention strategies, highlighting ethical concerns and the risks X-hacking poses.
## update after rebuttal
Thank you to the authors for their reply and willingness to further improve the manuscript. I would like to encourage the authors to incorporate a discussion on ethical issues related to the intent of model selection under multiplicity. While I still believe that the paper would greatly benefit from additional results based on non-feature-attribution explainability methods, I acknowledge that the empirical results for AutoML pipelines are extensive. I have re-read the paper and the authors’ reply and have adjusted my scores accordingly.
Claims And Evidence: Claim 1: Automated machine learning (AutoML) pipelines can be easily adapted to exploit model multiplicity at scale, making them vulnerable to X-hacking.
Claim 2: Bayesian optimization accelerates X-hacking for features susceptible to it, compared to random sampling.
Claim 3: The paper shows that some datasets are more vulnerable than the others to X-hacking (probably due to different level of multiplicity).
Methods And Evaluation Criteria: The methods presented in the paper effectively illustrate the concept of X-hacking and the role of AutoML. However, they can be time-consuming and likely will be hard to generalize to more complex models.
Theoretical Claims: The paper does not make theoretical claims.
Experimental Designs Or Analyses: The paper's experimental designs is generally sound for demonstrating X-hacking.
Supplementary Material: I read through Appendix.
Relation To Broader Scientific Literature: The core idea that model multiplicity allows selecting models with different feature importances (Shapley values) is not new. It's well known that different models in the Rashomon set can emphasize different features while achieving similar predictive accuracy (see Rudin Amazing Things Come From Having Many Good Models, 24). Therefore one can choose models based on these features. There are tools that visualize these models as well (Timbertrek, 22).
Essential References Not Discussed: The field has a lot of recent papers, and the authors did a great job referencing important papers. However, some literature on the Rashomon set is missing. See [Ganesh, The Curious Case of Arbitrariness in Machine Learning, 25] for reference.
Other Strengths And Weaknesses: **Strength**
The analysis of different features to understand their susceptibility to X-hacking is particularly interesting
**Weaknesses**
More nuanced view on X-hacking is needed, as it can easily be used as an assistive tool to align model with the domain experts intuition.
Only one XAI methos is used, which is feature importance-based metric, so it is not clean how X-hacking behaves for other XAI methods.
Other Comments Or Suggestions: NA
Questions For Authors: How does the paper results generalize to other explainability methods that are not feature importance based?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful feedback and valuable references.
We agree that the phenomenon of having many equally accurate, yet interpretively distinct, models is well established, and we cite several related papers (e.g., Fisher et al., 2019; Brunet et al., 2022) to acknowledge this. However, our paper contributes a new perspective by explicitly demonstrating how modern AutoML pipelines can be adapted (or misused) to exploit those known multiplicities at scale to manipulate XAI metrics (referred to as “X-hacking”). While multiplicity has certainly been studied, we highlight in the related work section how this is limited to certain model families. Our approach highlights automation and practicality—showing that even modest budgets given to AutoML systems can allow for systematically “cherry-picking” explanations to support a predefined narrative, all while preserving acceptable predictive performance. This practical demonstration of how easily multiplicity can be exploited via off-the-shelf or lightly customized AutoML solutions distinguishes our work from purely theoretical or smaller-scale explorations of Rashomon sets. We acknowledge the recent preprint, Ganesh (2025) and add a reference to this in Section 2 (Background).
“**More nuanced view on X-hacking is needed, as it can be used as an assistive tool to align the model with domain experts’ intuition.**”: We acknowledge that choosing models that align with intuitions of domain experts may not always raise ethical concerns. For example, when domain experts prefer interpretable patterns that reflect established science or policy. However, the distinction lies in intent and transparency. We appreciate this insight and intend to add in Section 7 (Discussion) that model selection for domain alignment may not always attribute to malpractice provided it is transparent and reported in good faith. While our paper emphasizes the opposite, an expanded discussion of both benign and malicious scenarios will enhance the paper’s balance.
“**Only one XAI method is used … it is not clear how X-hacking behaves for other XAI methods.**”: We worked with SHAP for two reasons: it is a model agnostic approach, and it is a popular metric, making it a strong representative for our demonstration. We show that quantitative XAI metrics like SHAP can be easily manipulated using off-the-shelf AutoML solutions, thus sufficiently establishing the core risk of X-hacking at scale. We believe that any post-hoc explanation method -- especially those susceptible to model multiplicity -- will face similar risks, however, systematic exploration of other XAI metrics is planned as future work.
“**The methods can be time-consuming and might be hard to generalize to more complex models.**”: We share the concern that scanning a massive analysis space can be computationally expensive, however, we show that current off-the-shelf AutoML solutions and their search spaces can be used to find “defensible” models even under modest resource budgets. For extremely large or complex models, the cost (computational resources and time) of repeated training might be higher, however, it is the choice of the researcher to allot higher budgets in the context of their research. Our current intention is to show that X-hacking can be easily and effectively done at scale, even with small budgets. | null | null | null | null | null | null |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.